[ { "msg_contents": "Hi hackers,\n\nI was looking into XLogWrite() and saw the below comment. It cannot really\ncircle back in wal buffers without needing to call pg_write() since next\npages wouldn't be contiguous in memory. So it needs to write whenever it\nreaches the end of wal buffers.\n\n /*\n> * Dump the set if this will be the last loop iteration, or if we are\n> * at the last page of the cache area (since the next page won't be\n> * contiguous in memory), or if we are at the end of the logfile\n> * segment.\n> */\n\n\nI think that we don't have the \"contiguous pages\" constraint when writing\nanymore as we can do vectored IO. It seems unnecessary to write just\nbecause XLogWrite() is at the end of wal buffers.\nAttached patch uses pg_pwritev() instead of pg_pwrite() and tries to write\npages in one call even if they're not contiguous in memory, until it\nreaches the page at startidx.\n\nAfter quickly experimenting the patch and comparing the number of write\ncalls, the patch's affect can be more visible when wal_buffers is quite low\nas it's more likely to circle back to the beginning. When wal_buffers is\nset to a decent amount, the patch only saves a few write calls. But I\nwouldn't expect any regression introduced by the patch (I may be wrong\nhere), so I thought it may be worth to consider.\n\nI appreciate any feedback on the proposed change. I'd also be glad to\nbenchmark the patch if you want to see how it performs in some specific\ncases since I've been struggling with coming up a good test case.\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 6 Aug 2024 12:35:54 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": true, "msg_subject": "Vectored IO in XLogWrite()" }, { "msg_contents": "On Tue, Aug 6, 2024 at 5:36 AM Melih Mutlu <[email protected]> wrote:\n> I think that we don't have the \"contiguous pages\" constraint when writing anymore as we can do vectored IO. It seems unnecessary to write just because XLogWrite() is at the end of wal buffers.\n> Attached patch uses pg_pwritev() instead of pg_pwrite() and tries to write pages in one call even if they're not contiguous in memory, until it reaches the page at startidx.\n\nHere are a few notes on this patch:\n\n- It's not pgindent-clean. In fact, it doesn't even pass git diff --check.\n\n- You added a new comment (/* Reaching the buffer... */) in the middle\nof a chunk of lines that were already covered by an existing comment\n(/* Dump the set ... */). This makes it look like the /* Dump the\nset... */ comment only covers the 3 lines of code that immediately\nfollow it rather than everything in the \"if\" statement. You could fix\nthis in a variety of ways, but in this case the easiest solution, to\nme, looks like just skipping the new comment. It seems like the point\nis pretty self-explanatory.\n\n- The patch removes the initialization of \"from\" but not the variable\nitself. You still increment the variable you haven't initialized.\n\n- I don't think the logic is correct after a partial write. Pre-patch,\n\"from\" advances while \"nleft\" goes down, but post-patch, what gets\nwritten is dependent on the contents of \"iov\" which is initialized\noutside the loop and never updated. Perhaps compute_remaining_iovec\nwould be useful here?\n\n- I assume this is probably a good idea in principle, because fewer\nsystem calls are presumably better than more. 
The impact will probably\nbe very small, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Aug 2024 13:42:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored IO in XLogWrite()" }, { "msg_contents": "Hi Robert,\n\nThanks for reviewing.\n\nRobert Haas <[email protected]>, 6 Ağu 2024 Sal, 20:43 tarihinde şunu\nyazdı:\n\n> On Tue, Aug 6, 2024 at 5:36 AM Melih Mutlu <[email protected]> wrote:\n> > I think that we don't have the \"contiguous pages\" constraint when\n> writing anymore as we can do vectored IO. It seems unnecessary to write\n> just because XLogWrite() is at the end of wal buffers.\n> > Attached patch uses pg_pwritev() instead of pg_pwrite() and tries to\n> write pages in one call even if they're not contiguous in memory, until it\n> reaches the page at startidx.\n>\n> Here are a few notes on this patch:\n>\n> - It's not pgindent-clean. In fact, it doesn't even pass git diff --check.\n>\n\nFixed.\n\n\n> - You added a new comment (/* Reaching the buffer... */) in the middle\n> of a chunk of lines that were already covered by an existing comment\n> (/* Dump the set ... */). This makes it look like the /* Dump the\n> set... */ comment only covers the 3 lines of code that immediately\n> follow it rather than everything in the \"if\" statement. You could fix\n> this in a variety of ways, but in this case the easiest solution, to\n> me, looks like just skipping the new comment. It seems like the point\n> is pretty self-explanatory.\n>\n\nRemoved the new comment. Only keeping the updated version of the /* Dump\nthe set... */ comment.\n\n\n> - The patch removes the initialization of \"from\" but not the variable\n> itself. You still increment the variable you haven't initialized.\n>\n> - I don't think the logic is correct after a partial write. Pre-patch,\n> \"from\" advances while \"nleft\" goes down, but post-patch, what gets\n> written is dependent on the contents of \"iov\" which is initialized\n> outside the loop and never updated. Perhaps compute_remaining_iovec\n> would be useful here?\n>\n\nYou're right. I should have thought about the partial write case. I now\nfixed it by looping and trying to write until compute_remaining_iovec()\nreturns 0.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 7 Aug 2024 01:30:17 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vectored IO in XLogWrite()" }, { "msg_contents": "On Tue, Aug 6, 2024 at 6:30 PM Melih Mutlu <[email protected]> wrote:\n> Fixed.\n\n+ iov[0].iov_base = XLogCtl->pages + startidx * (Size)\nXLOG_BLCKSZ;;\n\nDouble semicolon.\n\nAside from that, this looks correct now, so the next question is\nwhether we want it. To me, it seems like this isn't likely to buy very\nmuch, but it also doesn't really seem to have any kind of downside, so\nI'd be somewhat inclined to go ahead with it. On the other hand, one\ncould argue that it's better not to change working code without a good\nreason.\n\nI wondered whether the regression tests actually hit the iovcnt == 2\ncase, and it turns out that they do, rather frequently actually.\nMaking that case a FATAL causes ~260 regression test failure. However,\non larger systems, we'll often end up with wal_segment_size=16MB and\nwal_buffers=16MB and then it seems like we don't hit the iovcnt==2\ncase. 
Which I guess just reinforces the point that this is\ntheoretically better but practically not much different.\n\nAny other votes on what to do here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 13:25:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vectored IO in XLogWrite()" } ]
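The partial-write handling Robert points out is the subtle part of making XLogWrite() use vectored I/O: after pg_pwritev() reports a short write, the iovec array has to be advanced before retrying, which is what compute_remaining_iovec() exists for. The sketch below is not the patch from this thread — it is a self-contained illustration of that pattern using plain POSIX pwritev(), with invented names (write_wrapped, skip_transferred) and two iovec entries standing in for the tail of the WAL buffer ring plus the wrapped-around head.

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/uio.h>

/*
 * Advance an iovec array past 'transferred' bytes after a short write.
 * This mirrors what PostgreSQL's compute_remaining_iovec() does; it
 * returns the number of entries still left to write.
 */
static int
skip_transferred(struct iovec *iov, int iovcnt, size_t transferred)
{
    int     done = 0;

    /* Skip entries that were written completely. */
    while (done < iovcnt && transferred >= iov[done].iov_len)
    {
        transferred -= iov[done].iov_len;
        done++;
    }

    /* Shift the remaining entries to the front of the array. */
    for (int j = done; j < iovcnt; j++)
        iov[j - done] = iov[j];

    /* Trim the entry that was only partially written, if any. */
    if (iovcnt > done && transferred > 0)
    {
        iov[0].iov_base = (char *) iov[0].iov_base + transferred;
        iov[0].iov_len -= transferred;
    }
    return iovcnt - done;
}

/*
 * Write the tail of a ring buffer plus the wrapped-around head with as
 * few system calls as possible, retrying correctly after partial writes.
 */
static bool
write_wrapped(int fd, void *tail, size_t tail_len,
              void *head, size_t head_len, off_t offset)
{
    struct iovec iov[2];
    int     iovcnt = 0;

    if (tail_len > 0)
    {
        iov[iovcnt].iov_base = tail;
        iov[iovcnt].iov_len = tail_len;
        iovcnt++;
    }
    if (head_len > 0)
    {
        iov[iovcnt].iov_base = head;
        iov[iovcnt].iov_len = head_len;
        iovcnt++;
    }

    while (iovcnt > 0)
    {
        ssize_t written = pwritev(fd, iov, iovcnt, offset);

        if (written < 0)
        {
            if (errno == EINTR)
                continue;           /* retry the same request */
            return false;           /* caller reports the error */
        }
        offset += written;
        iovcnt = skip_transferred(iov, iovcnt, (size_t) written);
    }
    return true;
}

In XLogWrite() itself the buffers, lengths and file offset would come from the existing WAL bookkeeping, and the two-entry case only arises when a write crosses the end of wal_buffers — consistent with Robert's observation that the saving is real but small.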
[ { "msg_contents": "hashvalidate(), which validates the signatures of support functions for \nthe hash AM, contains several hardcoded exceptions. For example, \nhash/date_ops support function 1 is hashint4(), which would ordinarily \nfail validation because the function argument is int4, not date. But \nthis works internally because int4 and date are of the same size. There \nare several more exceptions like this that happen to work and were \nallowed historically but would now fail the function signature validation.\n\nAFAICT, these exceptions were just carried over from before the current \nindex AM API and validation functions were added. The code contains \ncomments like \"For the moment, fix it by having a list of allowed \ncases.\", so it probably wasn't meant as the ideal state.\n\nThis patch removes those exceptions by providing new support functions \nthat have the proper declared signatures. They internally share most of \nthe C code with the \"wrong\" functions they replace, so the behavior is \nstill the same.\n\nWith the exceptions gone, hashvalidate() is now simplified and relies \nfully on check_amproc_signature(), similar to other index AMs.\n\nI'm also fixing one case where a brin opclass used hashvarlena() for \nbytea, even though in that case, there is no function signature \nvalidation done, so it doesn't matter that much.\n\nNot done here, but maybe hashvarlena() and hashvarlenaextended() should \nbe removed from pg_proc.dat, since their use as opclass support \nfunctions is now dubious. They could continue to exist in the C code as \ninternal support functions.", "msg_date": "Tue, 6 Aug 2024 12:23:03 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Remove hardcoded hash opclass function signature exceptions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> hashvalidate(), which validates the signatures of support functions for \n> the hash AM, contains several hardcoded exceptions.\n> ...\n> This patch removes those exceptions by providing new support functions \n> that have the proper declared signatures. They internally share most of \n> the C code with the \"wrong\" functions they replace, so the behavior is \n> still the same.\n\n+1 for cleaning this up. A couple of minor nitpicks:\n\n* I don't really like the new control structure, or rather lack of\nstructure, in hashvalidate. In particular the uncommented\ns/break/continue/ changes look like typos. They aren't, but can't\nyou do this in a less confusing fashion? Or at least add comments\nlike \"continue not break because the code below the switch doesn't\napply to this case\".\n\n* Hand-picking OIDs as you did in pg_proc.dat is kind of passé now.\nI guess it's all right as long as nobody else does the same thing in\nthe near future, but ...\n\n> Not done here, but maybe hashvarlena() and hashvarlenaextended() should \n> be removed from pg_proc.dat, since their use as opclass support \n> functions is now dubious.\n\nI wish we could get rid of those, but according to\ncodesearch.debian.net, postgis and a couple of other extensions are\nrelying on them. 
If we remove them we'll break any convenient upgrade\npath for those extensions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Sep 2024 15:43:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove hardcoded hash opclass function signature exceptions" }, { "msg_contents": "On 06.09.24 21:43, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> hashvalidate(), which validates the signatures of support functions for\n>> the hash AM, contains several hardcoded exceptions.\n>> ...\n>> This patch removes those exceptions by providing new support functions\n>> that have the proper declared signatures. They internally share most of\n>> the C code with the \"wrong\" functions they replace, so the behavior is\n>> still the same.\n> \n> +1 for cleaning this up. A couple of minor nitpicks:\n> \n> * I don't really like the new control structure, or rather lack of\n> structure, in hashvalidate. In particular the uncommented\n> s/break/continue/ changes look like typos. They aren't, but can't\n> you do this in a less confusing fashion? Or at least add comments\n> like \"continue not break because the code below the switch doesn't\n> apply to this case\".\n\nOk, I cleaned that up a bit.\n\n> * Hand-picking OIDs as you did in pg_proc.dat is kind of passé now.\n> I guess it's all right as long as nobody else does the same thing in\n> the near future, but ...\n\nRenumbered with the suggested \"random\" numbers.\n\n>> Not done here, but maybe hashvarlena() and hashvarlenaextended() should\n>> be removed from pg_proc.dat, since their use as opclass support\n>> functions is now dubious.\n> \n> I wish we could get rid of those, but according to\n> codesearch.debian.net, postgis and a couple of other extensions are\n> relying on them. If we remove them we'll break any convenient upgrade\n> path for those extensions.\n\nThose are using the C function, which is ok. I was thinking about \nremoving the SQL function (from pg_proc.dat), because you can't use that \nfor much anymore. (You can't call it directly, and the hash AM will no \nlonger accept it.) I have done that in this patch version and added \nsome code comments around it.", "msg_date": "Mon, 9 Sep 2024 14:56:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove hardcoded hash opclass function signature exceptions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 06.09.24 21:43, Tom Lane wrote:\n>> * I don't really like the new control structure, or rather lack of\n>> structure, in hashvalidate. In particular the uncommented\n>> s/break/continue/ changes look like typos. They aren't, but can't\n>> you do this in a less confusing fashion? Or at least add comments\n>> like \"continue not break because the code below the switch doesn't\n>> apply to this case\".\n\n> Ok, I cleaned that up a bit.\n\nThat looks nicer. Thanks.\n\n>> I wish we could get rid of those, but according to\n>> codesearch.debian.net, postgis and a couple of other extensions are\n>> relying on them. If we remove them we'll break any convenient upgrade\n>> path for those extensions.\n\n> Those are using the C function, which is ok. I was thinking about \n> removing the SQL function (from pg_proc.dat), because you can't use that \n> for much anymore. (You can't call it directly, and the hash AM will no \n> longer accept it.) I have done that in this patch version and added \n> some code comments around it.\n\nNo, it isn't okay. 
What postgis (and the others) is doing is\nequivalent to\n\nregression=# create function myhash(bytea) returns int as 'hashvarlena' LANGUAGE 'internal' IMMUTABLE STRICT PARALLEL SAFE;\nCREATE FUNCTION\n\nAfter applying the v2 patch, I get\n\nregression=# create function myhash(bytea) returns int as 'hashvarlena' LANGUAGE 'internal' IMMUTABLE STRICT PARALLEL SAFE;\nERROR: there is no built-in function named \"hashvarlena\"\n\nThe reason is that the fmgr_builtins table is built from\npg_proc.dat, and only names appearing in it can be used as 'internal'\nfunction definitions. So you really can't remove the pg_proc entry.\n\nThe other thing that's made from pg_proc.dat is the list of extern\nfunction declarations in fmgrprotos.h. That's why you had to add\nthose cowboy declarations inside hashfunc.c, which are both ugly\nand not helpful for any external module that might wish to call those\nfunctions at the C level.\n\nOther than the business about removing those pg_proc entries,\nI think this is good to go.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Sep 2024 13:48:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove hardcoded hash opclass function signature exceptions" } ]
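For readers without the patch in front of them, a rough idea of what "new support functions that have the proper declared signatures" means in C terms: the existing hashint4() is little more than a call to hash_uint32(), so a wrapper whose declared argument type is date can share that code essentially for free. The function name hashdate and its exact shape below are guesses for illustration; Peter's actual patch may spell this differently.

#include "postgres.h"
#include "fmgr.h"
#include "common/hashfn.h"
#include "utils/date.h"

/*
 * Hypothetical support function with the signature hashvalidate() wants:
 * the declared argument type is date, not int4.  DateADT is a 32-bit
 * value internally, so it can simply reuse the int4 hashing code.
 */
Datum
hashdate(PG_FUNCTION_ARGS)
{
    return hash_uint32((uint32) PG_GETARG_DATEADT(0));
}

Once every opclass entry points at a function whose declared types match, hashvalidate() can rely on the generic check_amproc_signature() path instead of carrying a hand-maintained list of "wrong but compatible" signatures.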
[ { "msg_contents": "While browsing through all our global variables for the multithreading \neffort, I noticed that our MD5 implementation in src/common/md5.c uses a \nstatic buffer on big-endian systems, which makes it not thread-safe. \nThat's a bug because that function is also used in libpq.\n\nThis was introduced in commit b67b57a966, which replaced the old MD5 \nfallback implementation with the one from pgcrypto. The thread-safety \ndidn't matter for pgcrypto, but for libpq it does.\n\nThis only affects big-endian systems that are compiled without OpenSSL.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 6 Aug 2024 15:23:26 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Thread-unsafe MD5 on big-endian systems with no OpenSSL" }, { "msg_contents": "On Tue, Aug 6, 2024 at 8:23 AM Heikki Linnakangas <[email protected]> wrote:\n> While browsing through all our global variables for the multithreading\n> effort, I noticed that our MD5 implementation in src/common/md5.c uses a\n> static buffer on big-endian systems, which makes it not thread-safe.\n> That's a bug because that function is also used in libpq.\n>\n> This was introduced in commit b67b57a966, which replaced the old MD5\n> fallback implementation with the one from pgcrypto. The thread-safety\n> didn't matter for pgcrypto, but for libpq it does.\n>\n> This only affects big-endian systems that are compiled without OpenSSL.\n\nLGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Aug 2024 10:04:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-unsafe MD5 on big-endian systems with no OpenSSL" }, { "msg_contents": "\n> On Aug 6, 2024, at 23:05, Robert Haas <[email protected]> wrote:\n> On Tue, Aug 6, 2024 at 8:23 AM Heikki Linnakangas <[email protected]> wrote:\n>> \n>> This only affects big-endian systems that are compiled without OpenSSL.\n> \n> LGTM.\n\nNice catch, looks fine to me as well.\n--\nMichael\n\n\n", "msg_date": "Wed, 7 Aug 2024 00:11:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-unsafe MD5 on big-endian systems with no OpenSSL" }, { "msg_contents": "On 06/08/2024 18:11, Michael Paquier wrote:\n> \n>> On Aug 6, 2024, at 23:05, Robert Haas <[email protected]> wrote:\n>> On Tue, Aug 6, 2024 at 8:23 AM Heikki Linnakangas <[email protected]> wrote:\n>>>\n>>> This only affects big-endian systems that are compiled without OpenSSL.\n>>\n>> LGTM.\n> \n> Nice catch, looks fine to me as well.\n\nCommitted, thanks\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:51:36 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-unsafe MD5 on big-endian systems with no OpenSSL" } ]
[ { "msg_contents": "I have noticed that ALTER TABLE supports multiple column actions\n(ADD/DROP column etc), but does not support multiple column renaming.\nSee [1]\n\nHere is small example of what im talking about:\n\n```\ndb2=# create table tt();\nCREATE TABLE\n\n-- multiple column altering ok\ndb2=# alter table tt add column i int, add column j int;\nALTER TABLE\n\n-- single column renaming ok\ndb2=# alter table tt rename column i to i2;\nALTER TABLE\n-- multiple column renaming not allowed\ndb2=# alter table tt rename column i2 to i3, rename column j to j2;\nERROR: syntax error at or near \",\"\nLINE 1: alter table tt rename column i2 to i3, rename column j to j2...\n ^\ndb2=#\n```\n\nLooking closely on gram.y, the only reason for this is that RenameStmt\nis defined less flexible than alter_table_cmds (which is a list). All\nother core infrastructure is ready to allow $subj.\n\nSo is it worth a patch?\n\n\n[1] https://www.postgresql.org/docs/current/sql-altertable.html\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Tue, 6 Aug 2024 18:58:11 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "Support multi-column renaming?" }, { "msg_contents": "On 2024-Aug-06, Kirill Reshke wrote:\n\n> I have noticed that ALTER TABLE supports multiple column actions\n> (ADD/DROP column etc), but does not support multiple column renaming.\n\n> Looking closely on gram.y, the only reason for this is that RenameStmt\n> is defined less flexible than alter_table_cmds (which is a list). All\n> other core infrastructure is ready to allow $subj.\n> \n> So is it worth a patch?\n\nHmm, yeah, maybe it'd be better if ALTER TABLE RENAME is not part of\nRenameStmt but instead part of AlterTableStmt. Probably not a super\ntrivial code change, but it should be doable. The interactions with\ndifferent subcommands types in the same command should be considered\ncarefully (as well as with ALTER {VIEW,SEQUENCE,etc} RENAME, which I bet\nwe don't want changed due to the implications).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n\n", "msg_date": "Tue, 6 Aug 2024 10:45:48 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support multi-column renaming?" }, { "msg_contents": "Hi,\n\nOn 2024-08-06 10:45:48 -0400, Alvaro Herrera wrote:\n> On 2024-Aug-06, Kirill Reshke wrote:\n> \n> > I have noticed that ALTER TABLE supports multiple column actions\n> > (ADD/DROP column etc), but does not support multiple column renaming.\n> \n> > Looking closely on gram.y, the only reason for this is that RenameStmt\n> > is defined less flexible than alter_table_cmds (which is a list). All\n> > other core infrastructure is ready to allow $subj.\n> > \n> > So is it worth a patch?\n> \n> Hmm, yeah, maybe it'd be better if ALTER TABLE RENAME is not part of\n> RenameStmt but instead part of AlterTableStmt. Probably not a super\n> trivial code change, but it should be doable. The interactions with\n> different subcommands types in the same command should be considered\n> carefully (as well as with ALTER {VIEW,SEQUENCE,etc} RENAME, which I bet\n> we don't want changed due to the implications).\n\nI think you'd likely run into grammar ambiguity issues if you did it\nnaively. 
I think I looked at something related at some point in the past and\nconcluded that to avoid bison getting confused (afaict the grammar is still\nLALR(1), it's just that bison doesn't merge states well enough), we'd have to\ninvent a RENAME_TO_P and inject that \"manually\" in base_yylex().\n\nIIRC introducing RENAME_TO_P (as well as SET_SCHEMA_P, OWNER TO) did actually\nresult in a decent size reduction of the grammar.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Aug 2024 08:58:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support multi-column renaming?" } ]
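For readers who don't have gram.y open, the asymmetry Kirill describes looks roughly like this — a simplified sketch, not the verbatim grammar. The ALTER TABLE actions already accumulate into a List, which is why ADD/DROP and friends can be chained with commas, whereas the RENAME COLUMN form builds one RenameStmt node, so there is nothing to append a second rename to:

/* Simplified sketch of the relevant gram.y shapes, not verbatim rules. */

alter_table_cmds:            /* a List: commas can chain many actions */
      alter_table_cmd                          { $$ = list_make1($1); }
    | alter_table_cmds ',' alter_table_cmd     { $$ = lappend($1, $3); }
    ;

/* The rename form, by contrast, produces a single RenameStmt and stops: */
RenameStmt:
    ALTER TABLE relation_expr RENAME opt_column name TO name
        {
            RenameStmt *n = makeNode(RenameStmt);
            /* ... fill in relation, old column name, new name ... */
            $$ = (Node *) n;
        }
    ;

Andres's caution is that naively folding the rename form into alter_table_cmd trips up bison, which is why a merged RENAME_TO_P token injected from base_yylex() comes up as the likely prerequisite.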
[ { "msg_contents": "In a blog post [1], Bruce Momjian notes that expression indexes can\nhelp with planning even if they're not used directly. But the examples\ncited in that post are vague (i.e., they improve stats, but it's not\nclear how they could change plans), and Bruce's answer to a comment\n[2] suggests that this is not documented.\n\nIs there any more info on this mechanism? Specifically, if one has\nunused expression indexes (according to pg_stat_user_indexes), is it\nsafe to drop them? Or could they be providing statistics that\nmaterially affect query planning even though the indexes themselves\nare unused?\n\nIt looks like a very similar question was asked earlier this year on\npgsql-general [3], but got only a vague answer. Any background or tips\nfor where to look in the source regarding this behavior would be\ngreatly appreciated.\n\nThanks,\nMaciek\n\n[1]: https://momjian.us/main/blogs/pgblog/2017.html#February_20_2017\n[2]: https://momjian.us/main/comment_item.html?/main/blogs/pgblog.html/February_20_2017#comment-3174376969\n[3]: https://www.postgresql.org/message-id/flat/CAHg%3DPn%3DOZu7A3p%2B0Z-CDG4s2CHYe3UFQCTZp4RWGCEn2gmD35A%40mail.gmail.com\n\n\n", "msg_date": "Tue, 6 Aug 2024 13:06:08 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Unused expression indexes" }, { "msg_contents": "Maciek Sakrejda <[email protected]> writes:\n> In a blog post [1], Bruce Momjian notes that expression indexes can\n> help with planning even if they're not used directly. But the examples\n> cited in that post are vague (i.e., they improve stats, but it's not\n> clear how they could change plans), and Bruce's answer to a comment\n> [2] suggests that this is not documented.\n\n> Is there any more info on this mechanism? Specifically, if one has\n> unused expression indexes (according to pg_stat_user_indexes), is it\n> safe to drop them? Or could they be providing statistics that\n> materially affect query planning even though the indexes themselves\n> are unused?\n\nExpression indexes definitely can affect planning, because ANALYZE\ncollects stats on the values of those expressions. 
As a trivial\nexample,\n\nregression=# create table foo (x1 float8);\nCREATE TABLE\nregression=# insert into foo select 10 * random() from generate_series(1,10000);\nINSERT 0 10000\nregression=# analyze foo;\nANALYZE\nregression=# explain analyze select * from foo where sqrt(x1) < 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..195.00 rows=3333 width=8) (actual time=0.009..0.546 rows=1028 loops=1)\n Filter: (sqrt(x1) < '1'::double precision)\n Rows Removed by Filter: 8972\n Planning Time: 0.065 ms\n Execution Time: 0.572 ms\n(5 rows)\n\nThe planner has no info about the values of sqrt(x1), so you get a\ndefault estimate (one-third) of the selectivity of the WHERE clause.\nBut watch this:\n\nregression=# create index on foo (sqrt(x1));\nCREATE INDEX\nregression=# analyze foo;\nANALYZE\nregression=# explain analyze select * from foo where sqrt(x1) < 1;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=24.24..84.63 rows=1026 width=8) (actual time=0.078..0.229 rows=1028 loops=1)\n Recheck Cond: (sqrt(x1) < '1'::double precision)\n Heap Blocks: exact=45\n -> Bitmap Index Scan on foo_sqrt_idx (cost=0.00..23.98 rows=1026 width=0) (actual time=0.068..0.068 rows=1028 loops=1)\n Index Cond: (sqrt(x1) < '1'::double precision)\n Planning Time: 0.113 ms\n Execution Time: 0.259 ms\n(7 rows)\n\nNow there are stats about the values of sqrt(x1), allowing a far more\naccurate selectivity estimate to be made. In this particular example\nthere's no change of plan (it would have used the index anyway), but\ncertainly a different rowcount estimate can make a big difference.\n\nThis mechanism is quite ancient, and in principle it's now superseded\nby extended statistics. For example, I can drop this index and\ninstead do\n\nregression=# drop index foo_sqrt_idx;\nDROP INDEX\nregression=# create statistics foostats on sqrt(x1) from foo;\nCREATE STATISTICS\nregression=# analyze foo;\nANALYZE\nregression=# explain analyze select * from foo where sqrt(x1) < 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..195.00 rows=1026 width=8) (actual time=0.006..0.479 rows=1028 loops=1)\n Filter: (sqrt(x1) < '1'::double precision)\n Rows Removed by Filter: 8972\n Planning Time: 0.079 ms\n Execution Time: 0.503 ms\n(5 rows)\n\nSo the accurate rowcount estimate is still obtained in this example;\nand we're not incurring any index maintenance costs, only ANALYZE\ncosts that are going to be roughly the same either way.\n\nHowever, I am not certain that extended statistics are plugged into\nall the places where the older mechanism applies. 
Tomas Vondra might\nhave a better idea than I of where gaps remain in that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Aug 2024 16:25:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused expression indexes" }, { "msg_contents": "On Tue, Aug 6, 2024 at 1:25 PM Tom Lane <[email protected]> wrote:\n> The planner has no info about the values of sqrt(x1), so you get a\n> default estimate (one-third) of the selectivity of the WHERE clause.\n> But watch this:\n>\n> regression=# create index on foo (sqrt(x1));\n> CREATE INDEX\n> regression=# analyze foo;\n> ANALYZE\n> regression=# explain analyze select * from foo where sqrt(x1) < 1;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on foo (cost=24.24..84.63 rows=1026 width=8) (actual time=0.078..0.229 rows=1028 loops=1)\n> Recheck Cond: (sqrt(x1) < '1'::double precision)\n> Heap Blocks: exact=45\n> -> Bitmap Index Scan on foo_sqrt_idx (cost=0.00..23.98 rows=1026 width=0) (actual time=0.068..0.068 rows=1028 loops=1)\n> Index Cond: (sqrt(x1) < '1'::double precision)\n> Planning Time: 0.113 ms\n> Execution Time: 0.259 ms\n> (7 rows)\n>\n> Now there are stats about the values of sqrt(x1), allowing a far more\n> accurate selectivity estimate to be made. In this particular example\n> there's no change of plan (it would have used the index anyway), but\n> certainly a different rowcount estimate can make a big difference.\n\nThanks, but I was asking specifically about _unused_ indexes\n(according to pg_stat_user_indexes). Bruce's blog post showed how they\ncan still influence rowcount estimates, but can they do that (1) even\nif they don't end up being used by the query plan and (2) in a way\nthat leads to a different plan?\n\nBasically, if I have some unused expression indexes, is it safe to\ndrop them, or could they be used for planning optimizations even if\nthey are not used directly.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Tue, 6 Aug 2024 13:53:31 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unused expression indexes" }, { "msg_contents": "Maciek Sakrejda <[email protected]> writes:\n> Thanks, but I was asking specifically about _unused_ indexes\n> (according to pg_stat_user_indexes). Bruce's blog post showed how they\n> can still influence rowcount estimates, but can they do that (1) even\n> if they don't end up being used by the query plan and (2) in a way\n> that leads to a different plan?\n\nCertainly. This example was too simple to illustrate that point\nperhaps, but we would have arrived at the same rowcount estimate\nwhether it then chose to use the index or not. (You could prove\nthat with the same example by modifying the comparison constant\nto make the rowcount estimate high enough to discourage use of\nthe index.) In turn, a different rowcount estimate might change\nthe plan for subsequent joins or other processing. 
I didn't\nspend enough time on the example to show that, but it's surely\nnot hard to show.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Aug 2024 17:03:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused expression indexes" }, { "msg_contents": "On 8/6/24 22:25, Tom Lane wrote:\n> Maciek Sakrejda <[email protected]> writes:\n>> In a blog post [1], Bruce Momjian notes that expression indexes can\n>> help with planning even if they're not used directly. But the examples\n>> cited in that post are vague (i.e., they improve stats, but it's not\n>> clear how they could change plans), and Bruce's answer to a comment\n>> [2] suggests that this is not documented.\n> \n>> Is there any more info on this mechanism? Specifically, if one has\n>> unused expression indexes (according to pg_stat_user_indexes), is it\n>> safe to drop them? Or could they be providing statistics that\n>> materially affect query planning even though the indexes themselves\n>> are unused?\n> \n> Expression indexes definitely can affect planning, because ANALYZE\n> collects stats on the values of those expressions. As a trivial\n> example,\n> \n> regression=# create table foo (x1 float8);\n> CREATE TABLE\n> regression=# insert into foo select 10 * random() from generate_series(1,10000);\n> INSERT 0 10000\n> regression=# analyze foo;\n> ANALYZE\n> regression=# explain analyze select * from foo where sqrt(x1) < 1;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------\n> Seq Scan on foo (cost=0.00..195.00 rows=3333 width=8) (actual time=0.009..0.546 rows=1028 loops=1)\n> Filter: (sqrt(x1) < '1'::double precision)\n> Rows Removed by Filter: 8972\n> Planning Time: 0.065 ms\n> Execution Time: 0.572 ms\n> (5 rows)\n> \n> The planner has no info about the values of sqrt(x1), so you get a\n> default estimate (one-third) of the selectivity of the WHERE clause.\n> But watch this:\n> \n> regression=# create index on foo (sqrt(x1));\n> CREATE INDEX\n> regression=# analyze foo;\n> ANALYZE\n> regression=# explain analyze select * from foo where sqrt(x1) < 1;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on foo (cost=24.24..84.63 rows=1026 width=8) (actual time=0.078..0.229 rows=1028 loops=1)\n> Recheck Cond: (sqrt(x1) < '1'::double precision)\n> Heap Blocks: exact=45\n> -> Bitmap Index Scan on foo_sqrt_idx (cost=0.00..23.98 rows=1026 width=0) (actual time=0.068..0.068 rows=1028 loops=1)\n> Index Cond: (sqrt(x1) < '1'::double precision)\n> Planning Time: 0.113 ms\n> Execution Time: 0.259 ms\n> (7 rows)\n> \n> Now there are stats about the values of sqrt(x1), allowing a far more\n> accurate selectivity estimate to be made. In this particular example\n> there's no change of plan (it would have used the index anyway), but\n> certainly a different rowcount estimate can make a big difference.\n> \n> This mechanism is quite ancient, and in principle it's now superseded\n> by extended statistics. 
For example, I can drop this index and\n> instead do\n> \n> regression=# drop index foo_sqrt_idx;\n> DROP INDEX\n> regression=# create statistics foostats on sqrt(x1) from foo;\n> CREATE STATISTICS\n> regression=# analyze foo;\n> ANALYZE\n> regression=# explain analyze select * from foo where sqrt(x1) < 1;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------\n> Seq Scan on foo (cost=0.00..195.00 rows=1026 width=8) (actual time=0.006..0.479 rows=1028 loops=1)\n> Filter: (sqrt(x1) < '1'::double precision)\n> Rows Removed by Filter: 8972\n> Planning Time: 0.079 ms\n> Execution Time: 0.503 ms\n> (5 rows)\n> \n> So the accurate rowcount estimate is still obtained in this example;\n> and we're not incurring any index maintenance costs, only ANALYZE\n> costs that are going to be roughly the same either way.\n> \n> However, I am not certain that extended statistics are plugged into\n> all the places where the older mechanism applies. Tomas Vondra might\n> have a better idea than I of where gaps remain in that.\n> \n\nAFAIK it handles all / exactly the same cases. The magic happens in\nexamine_variable() in selfuncs.c, where we look for stats for simple\nVars, and then for stats for expressions. First we look at all indexes,\nand then (if there's no suitable index) at all extended stats.\n\nThere might be a place doing something ad hoc, but I can't think of any.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Tue, 6 Aug 2024 23:20:25 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused expression indexes" }, { "msg_contents": "Great, thank you both for the info.\n\n\n", "msg_date": "Wed, 7 Aug 2024 17:35:05 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unused expression indexes" } ]
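Putting Tom's two answers together gives a practical recipe for the original question: before dropping an expression index that pg_stat_user_indexes reports as unused, move its statistics to an extended-statistics object so row-count estimates that depended on it survive. This is just Tom's own example rearranged into a checklist; the object names come from his session and would need adapting.

-- 1. Find the expression an unused index covers.
SELECT indexrelid::regclass, pg_get_indexdef(indexrelid)
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- 2. Create equivalent extended statistics and analyze.
CREATE STATISTICS foostats ON sqrt(x1) FROM foo;
ANALYZE foo;

-- 3. Verify estimates still look right, then drop the index.
EXPLAIN ANALYZE SELECT * FROM foo WHERE sqrt(x1) < 1;
DROP INDEX foo_sqrt_idx;

If plans degrade after the DROP, recreating either the index or the statistics object and re-running ANALYZE restores the estimates.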
[ { "msg_contents": "After some previous work here:\n\nhttps://postgr.es/m/[email protected]\n\nwe are less dependent on setlocale(), but it's still not completely\ngone.\n\nsetlocale() counts as thread-unsafe, so it would be nice to eliminate\nit completely.\n\nThe obvious answer is uselocale(), which sets the locale only for the\ncalling thread, and takes precedence over whatever is set with\nsetlocale().\n\nBut there are a couple problems:\n\n1. I don't think it's supported on Windows.\n\n2. I don't see a good way to canonicalize a locale name, like in\ncheck_locale(), which uses the result of setlocale().\n\nThoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 06 Aug 2024 14:59:55 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Remaining dependency on setlocale()" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> But there are a couple problems:\n\n> 1. I don't think it's supported on Windows.\n\nCan't help with that, but surely Windows has some thread-safe way.\n\n> 2. I don't see a good way to canonicalize a locale name, like in\n> check_locale(), which uses the result of setlocale().\n\nWhat I can tell you about that is that check_locale's expectation\nthat setlocale does any useful canonicalization is mostly wishful\nthinking [1]. On a lot of platforms you just get the input string\nback again. If that's the only thing keeping us on setlocale,\nI think we could drop it. (Perhaps we should do some canonicalization\nof our own instead?)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Tue, 06 Aug 2024 18:23:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 7, 2024 at 10:23 AM Tom Lane <[email protected]> wrote:\n> Jeff Davis <[email protected]> writes:\n> > But there are a couple problems:\n>\n> > 1. I don't think it's supported on Windows.\n>\n> Can't help with that, but surely Windows has some thread-safe way.\n\nIt does. It's not exactly the same, instead there is a thing you can\ncall that puts setlocale() itself into a thread-local mode, but last\ntime I checked that mode was missing on MinGW so that's a bit of an\nobstacle.\n\nHow far can we get by using more _l() functions? For example, [1]\nshows a use of strftime() that I think can be converted to\nstrftime_l() so that it doesn't depend on setlocale(). Since POSIX\ndoesn't specify every obvious _l function, we might need to provide\nany missing wrappers that save/restore thread-locally with\nuselocale(). Windows doesn't have uselocale(), but it generally\ndoesn't need such wrappers because it does have most of the obvious\n_l() functions.\n\n> > 2. I don't see a good way to canonicalize a locale name, like in\n> > check_locale(), which uses the result of setlocale().\n>\n> What I can tell you about that is that check_locale's expectation\n> that setlocale does any useful canonicalization is mostly wishful\n> thinking [1]. On a lot of platforms you just get the input string\n> back again. If that's the only thing keeping us on setlocale,\n> I think we could drop it. 
(Perhaps we should do some canonicalization\n> of our own instead?)\n\n+1\n\nI know it does something on Windows (we know the EDB installer gives\nit strings like \"Language,Country\" and it converts them to\n\"Language_Country.Encoding\", see various threads about it all going\nwrong), but I'm not sure it does anything we actually want to\nencourage. I'm hoping we can gradually screw it down so that we only\nhave sane BCP 47 in the system on that OS, and I don't see why we\nwouldn't just use them verbatim.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJ%3Dca39Cg%3Dy%3DS89EaCYvvCF8NrZRO%3Duog-cnz0VzC6Kfg%40mail.gmail.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 19:07:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On 8/7/24 03:07, Thomas Munro wrote:\n> How far can we get by using more _l() functions? For example, [1]\n> shows a use of strftime() that I think can be converted to\n> strftime_l() so that it doesn't depend on setlocale(). Since POSIX\n> doesn't specify every obvious _l function, we might need to provide\n> any missing wrappers that save/restore thread-locally with\n> uselocale(). Windows doesn't have uselocale(), but it generally\n> doesn't need such wrappers because it does have most of the obvious\n> _l() functions.\n\nMost of the strtoX functions have an _l variant, but one to watch is \natoi, which is defined with a hardcoded call to strtol, at least with glibc:\n\n8<----------\n/* Convert a string to an int. */\nint\natoi (const char *nptr)\n{\n return (int) strtol (nptr, (char **) NULL, 10);\n}\n8<----------\n\nI guess in many/most places we use atoi we don't care, but maybe it \nmatters for some?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 09:42:32 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 7, 2024 at 9:42 AM Joe Conway <[email protected]> wrote:\n> I guess in many/most places we use atoi we don't care, but maybe it\n> matters for some?\n\nI think we should move in the direction of replacing atoi() calls with\nstrtol() and actually checking for errors. In many places where use\natoi(), it's unlikely that the string would be anything but an\ninteger, so error checks are arguably unnecessary. A backup label file\nisn't likely to say \"START TIMELINE: potaytoes\". On the other hand, if\nit did say that, I'd prefer to get an error about potaytoes than have\nit be treated as if it said \"START TIMELINE: 0\". And I've definitely\nfound missing error-checks over the years. For example, on pg14,\n\"pg_basebackup -Ft -Zmaximum -Dx\" works as if you specified \"-Z0\"\nbecause atoi(\"maximum\") == 0. 
If we make a practice of checking\ninteger conversions for errors everywhere, we might avoid some such\nsilliness.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 11:28:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, 2024-08-07 at 19:07 +1200, Thomas Munro wrote:\n> How far can we get by using more _l() functions?\n\nThere are a ton of calls to, for example, isspace(), used mostly for\nparsing.\n\nI wouldn't expect a lot of differences in behavior from locale to\nlocale, like might be the case with iswspace(), but behavior can be\ndifferent at least in theory.\n\nSo I guess we're stuck with setlocale()/uselocale() for a while, unless\nwe're able to move most of those call sites over to an ascii-only\nvariant.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 07 Aug 2024 10:16:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On 8/7/24 13:16, Jeff Davis wrote:\n> On Wed, 2024-08-07 at 19:07 +1200, Thomas Munro wrote:\n>> How far can we get by using more _l() functions?\n> \n> There are a ton of calls to, for example, isspace(), used mostly for\n> parsing.\n> \n> I wouldn't expect a lot of differences in behavior from locale to\n> locale, like might be the case with iswspace(), but behavior can be\n> different at least in theory.\n> \n> So I guess we're stuck with setlocale()/uselocale() for a while, unless\n> we're able to move most of those call sites over to an ascii-only\n> variant.\n\nFWIW I see all of these in glibc:\n\nisalnum_l, isalpha_l, isascii_l, isblank_l, iscntrl_l, isdigit_l, \nisgraph_l, islower_l, isprint_l, ispunct_l, isspace_l, isupper_l, \nisxdigit_l\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:28:45 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:29 PM Joe Conway <[email protected]> wrote:\n> FWIW I see all of these in glibc:\n>\n> isalnum_l, isalpha_l, isascii_l, isblank_l, iscntrl_l, isdigit_l,\n> isgraph_l, islower_l, isprint_l, ispunct_l, isspace_l, isupper_l,\n> isxdigit_l\n\nOn my MacBook (Ventura, 13.6.7), I see all of these except for isascii_l.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:18:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Thu, Aug 8, 2024 at 5:16 AM Jeff Davis <[email protected]> wrote:\n> There are a ton of calls to, for example, isspace(), used mostly for\n> parsing.\n>\n> I wouldn't expect a lot of differences in behavior from locale to\n> locale, like might be the case with iswspace(), but behavior can be\n> different at least in theory.\n>\n> So I guess we're stuck with setlocale()/uselocale() for a while, unless\n> we're able to move most of those call sites over to an ascii-only\n> variant.\n\nWe do know of a few isspace() calls that are already questionable[1]\n(should be scanner_isspace(), or something like that). 
It's not only\nweird that SELECT ROW('libertà!') is displayed with or without double\nquote depending (in theory) on your locale, it's also undefined\nbehaviour because we feed individual bytes of a multi-byte sequence to\nisspace(), so OSes disagree, and in practice we know that macOS and\nWindows think that the byte 0xa inside 'à' is a space while glibc and\nFreeBSD don't. Looking at the languages with many sequences\ncontaining 0xa0, I guess you'd probably need to be processing CJK text\nand cross-platform for the difference to become obvious (that was the\ncase for the problem report I analysed):\n\nfor i in range(1, 0xffff):\n if (i < 0xd800 or i > 0xdfff) and 0xa0 in chr(i).encode('UTF-8'):\n print(\"%04x: %s\" % (i, chr(i)))\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BHWA9awUW0%2BRV_gO9r1ABZwGoZxPztcJxPy8vMFSTbTfi4jig%40mail.gmail.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 08:52:41 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Thu, Aug 8, 2024 at 6:18 AM Robert Haas <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 1:29 PM Joe Conway <[email protected]> wrote:\n> > FWIW I see all of these in glibc:\n> >\n> > isalnum_l, isalpha_l, isascii_l, isblank_l, iscntrl_l, isdigit_l,\n> > isgraph_l, islower_l, isprint_l, ispunct_l, isspace_l, isupper_l,\n> > isxdigit_l\n>\n> On my MacBook (Ventura, 13.6.7), I see all of these except for isascii_l.\n\nThose (except isascii_l) are from POSIX 2008[1]. They were absorbed\nfrom \"Extended API Set Part 4\"[2], along with locale_t (that's why\nthere is a header <xlocale.h> on a couple of systems even though after\nabsorption they are supposed to be in <locale.h>). We already\ndecided that all computers have that stuff (commit 8d9a9f03), but the\nreality is a little messier than that... NetBSD hasn't implemented\nuselocale() yet[3], though it has a good set of _l functions. As\ndiscussed in [3], ECPG code is therefore currently broken in\nmultithreaded clients because it's falling back to a setlocale() path,\nand I think Windows+MinGW must be too (it lacks\nHAVE__CONFIGTHREADLOCALE), but those both have a good set of _l\nfunctions. In that thread I tried to figure out how to use _l\nfunctions to fix that problem, but ...\n\nThe issue there is that we have our own snprintf.c, that implicitly\nrequires LC_NUMERIC to be \"C\" (it is documented as always printing\nfloats a certain way ignoring locale and that's what the callers there\nwant in frontend and backend code, but in reality it punts to system\nsnprintf for floats, assuming that LC_NUMERIC is \"C\", which we\nconfigure early in backend startup, but frontend code has to do it for\nitself!). So we could use snprintf_l or strtod_l instead, but POSIX\nhasn't got those yet. Or we could use own own Ryu code (fairly\nspecific), but integrating Ryu into our snprintf.c (and correctly\nimplementing all the %... stuff?) sounds like quite a hard,\ndevil-in-the-details kind of an undertaking to me. Or maybe it's\neasy, I dunno. As for the _l functions, you could probably get away\nwith \"every computer has either uselocale() or snprintf_() (or\nstrtod_()?)\" and have two code paths in our snprintf.c. 
But then we'd\nalso need a place to track a locale_t for a long-lived newlocale(\"C\"),\nwhich was too messy in my latest attempt...\n\n[1] https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/functions/isspace.html\n[2] https://pubs.opengroup.org/onlinepubs/9699939499/toc.pdf\n[3] https://www.postgresql.org/message-id/flat/CWZBBRR6YA8D.8EHMDRGLCKCD%40neon.tech\n\n\n", "msg_date": "Thu, 8 Aug 2024 09:40:56 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, 2024-08-07 at 13:28 -0400, Joe Conway wrote:\n> FWIW I see all of these in glibc:\n> \n> isalnum_l, isalpha_l, isascii_l, isblank_l, iscntrl_l, isdigit_l, \n> isgraph_l,  islower_l, isprint_l, ispunct_l, isspace_l, isupper_l, \n> isxdigit_l\n\nMy point was just that there are a lot of those call sites (especially\nfor isspace()) in various parsers. It feels like a lot of code churn to\nchange all of them, when a lot of them seem to be intended for ascii\nanyway.\n\nAnd where do we get the locale_t structure from? We can create our own\nat database connection time, and supply it to each of those call sites,\nbut I'm not sure that's a huge advantage over just using uselocale().\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 07 Aug 2024 15:45:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On 8/8/24 12:45 AM, Jeff Davis wrote:\n> My point was just that there are a lot of those call sites (especially\n> for isspace()) in various parsers. It feels like a lot of code churn to\n> change all of them, when a lot of them seem to be intended for ascii\n> anyway.\n> \n> And where do we get the locale_t structure from? We can create our own\n> at database connection time, and supply it to each of those call sites,\n> but I'm not sure that's a huge advantage over just using uselocale().\n\nI am leaning towards that we should write our own pure ascii functions \nfor this. Since we do not support any non-ascii compatible encodings \nanyway I do not see the point in having locale support in most of these \ncall-sites.\n\nAndewas\n\n\n", "msg_date": "Fri, 9 Aug 2024 13:41:26 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Fri, 2024-08-09 at 13:41 +0200, Andreas Karlsson wrote:\n> I am leaning towards that we should write our own pure ascii\n> functions \n> for this.\n\nThat makes sense for a lot of call sites, but it could cause breakage\nif we aren't careful.\n\n> Since we do not support any non-ascii compatible encodings \n> anyway I do not see the point in having locale support in most of\n> these \n> call-sites.\n\nAn ascii-compatible encoding just means that the code points in the\nascii range are represented as ascii. 
I'm not clear on whether code\npoints in the ascii range can return different results for things like\nisspace(), but it sounds plausible -- toupper() can return different\nresults for 'i' in tr_TR.\n\nAlso, what about the values outside 128-255, which are still valid\ninput to isspace()?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 09 Aug 2024 11:24:03 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Tue Aug 6, 2024 at 5:00 PM CDT, Jeff Davis wrote:\n> After some previous work here:\n>\n> https://postgr.es/m/[email protected]\n>\n> we are less dependent on setlocale(), but it's still not completely\n> gone.\n>\n> setlocale() counts as thread-unsafe, so it would be nice to eliminate\n> it completely.\n>\n> The obvious answer is uselocale(), which sets the locale only for the\n> calling thread, and takes precedence over whatever is set with\n> setlocale().\n>\n> But there are a couple problems:\n>\n> 1. I don't think it's supported on Windows.\n>\n> 2. I don't see a good way to canonicalize a locale name, like in\n> check_locale(), which uses the result of setlocale().\n>\n> Thoughts?\n\nHey Jeff,\n\nSee this thread[0] for some work that I had previously done. Feel free \nto take it over, or we could collaborate.\n\n[0]: https://www.postgresql.org/message-id/[email protected]\n\n--\nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 09 Aug 2024 15:16:58 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "Hi,\n\nOn Fri, 2024-08-09 at 15:16 -0500, Tristan Partin wrote:\n> Hey Jeff,\n> \n> See this thread[0] for some work that I had previously done. Feel\n> free \n> to take it over, or we could collaborate.\n> \n> [0]:\n> https://www.postgresql.org/message-id/[email protected]\n\nSounds good, sorry I missed that.\n\nCan you please rebase and we can discuss in that thread?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 09 Aug 2024 13:48:20 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 7, 2024 at 7:07 PM Thomas Munro <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 10:23 AM Tom Lane <[email protected]> wrote:\n> > Jeff Davis <[email protected]> writes:\n> > > But there are a couple problems:\n> >\n> > > 1. I don't think it's supported on Windows.\n> >\n> > Can't help with that, but surely Windows has some thread-safe way.\n>\n> It does. It's not exactly the same, instead there is a thing you can\n> call that puts setlocale() itself into a thread-local mode, but last\n> time I checked that mode was missing on MinGW so that's a bit of an\n> obstacle.\n\nActually the MinGW situation might be better than that these days. I\nknow of three environments where we currently have to keep code\nworking on MinGW: build farm animal fairywren (msys2 compiler\ntoochain), CI's optional \"Windows - Server 2019, MinGW64 - Meson\"\ntask, and CI's \"CompilerWarnings\" task, in the \"mingw_cross_warning\"\nstep (which actually runs on Linux, and uses configure rather than\nmeson). All three environments show that they have\n_configthreadlocale. 
So could we could simply require it on Windows?\nThen it might be possible to write a replacement implementation of\nuselocale() that does a two-step dance with _configthreadlocale() and\nsetlocale(), restoring both afterwards if they changed. That's what\nECPG open-codes already.\n\nThe NetBSD situation is more vexing. I was trying to find out if\nsomeone is working on it and unfortunately it looks like there is a\nprincipled stand against adding it:\n\nhttps://mail-index.netbsd.org/tech-userlevel/2015/12/28/msg009546.html\nhttps://mail-index.netbsd.org/netbsd-users/2017/02/14/msg019352.html\n\nThey're right that we really just want to use \"C\" in some places, and\ntheir LC_C_LOCALE is a very useful system-provided value to be able to\npass into _l functions. It's a shame it's non-standard, because\nwithout it you have to allocate a locale_t for \"C\" and keep it\nsomewhere to feed to _l functions...\n\n\n", "msg_date": "Sat, 10 Aug 2024 09:42:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "I've posted a new attempt at ripping those ECPG setlocales() out on\nthe other thread that had the earlier version and discussion:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2BYv%2Bps%3DnS2T8SS1UDU%3DiySHSr4sGHYiYGkPTpZx6Ooww%40mail.gmail.com\n\n\n", "msg_date": "Sat, 10 Aug 2024 13:36:39 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Thu, Aug 8, 2024 at 5:16 AM Jeff Davis <[email protected]> wrote:\n> On Wed, 2024-08-07 at 19:07 +1200, Thomas Munro wrote:\n> > How far can we get by using more _l() functions?\n>\n> There are a ton of calls to, for example, isspace(), used mostly for\n> parsing.\n>\n> I wouldn't expect a lot of differences in behavior from locale to\n> locale, like might be the case with iswspace(), but behavior can be\n> different at least in theory.\n>\n> So I guess we're stuck with setlocale()/uselocale() for a while, unless\n> we're able to move most of those call sites over to an ascii-only\n> variant.\n\nHere are two more cases that I don't think I've seen discussed.\n\n1. The nl_langinfo() call in pg_get_encoding_from_locale(), can\nprobably be changed to nl_langinfo_l() (it is everywhere we currently\ncare about except Windows, which has a different already-thread-safe\nalternative; AIX seems to lack the _l version, but someone writing a\npatch to re-add support for that OS could supply the configure goo for\na uselocale() safe/restore implementation). One problem is that it\nhas callers that pass it NULL meaning the backend default, but we'd\nperhaps use LC_C_GLOBAL for now and have to think about where we get\nthe database default locale_t in the future.\n\n2. localeconv() is *doubly* non-thread-safe: it depends on the\ncurrent locale, and it also returns an object whose storage might be\nclobbered by any other call to localeconv(), setlocale, or even,\naccording to POSIX, uselocale() (!!!). I think that effectively\ncloses off that escape hatch. On some OSes (macOS, BSDs) you find\nlocaleconv_l() and then I think they give you a more workable\nlifetime: as long as the locale_t lives, which makes perfect sense. I\nam surprised that no one has invented localeconv_r() where you supply\nthe output storage, and you could wrap that in uselocale()\nsave/restore to deal with the other problem, or localeconv_r_l() or\nsomething. 
I can't understand why this is so bad. The glibc\ndocumentation calls it \"a masterpiece of poor design\". Ahh, so it\nseems like we need to delete our use of localeconf() completely,\nbecause we should be able to get all the information we need from\nnl_langinfo_l() instead:\n\nhttps://www.gnu.org/software/libc/manual/html_node/Locale-Information.html\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:24:01 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Mon, Aug 12, 2024 at 3:24 PM Thomas Munro <[email protected]> wrote:\n> 1. The nl_langinfo() call in pg_get_encoding_from_locale(), can\n> probably be changed to nl_langinfo_l() (it is everywhere we currently\n> care about except Windows, which has a different already-thread-safe\n> alternative ...\n\n... though if we wanted to replace all use of localeconv and struct\nlconv with nl_langinfo_l() calls, it's not totally obvious how to do\nthat on Windows. Its closest thing is GetLocaleInfoEx(), but that has\ncomplications: it takes wchar_t locale names, which we don't even have\nand can't access when we only have a locale_t, and it must look them\nup in some data structure every time, and it copies data out to the\ncaller as wchar_t so now you have two conversion problems and a\nstorage problem. If I understand correctly, the whole point of\nnl_langinfo_l(item, loc) is that it is supposed to be fast, it's\nreally just an array lookup, and item is just an index, and the result\nis supposed to be stable as long as loc hasn't been freed (and the\nthread hasn't exited). So you can use it without putting your own\ncaching in front of it. One idea I came up with which I haven't tried\nand it might turn out to be terrible, is that we could change our\ndefinition of locale_t on Windows. Currently it's a typedef to\nWindows' _locale_t, and we use it with a bunch of _XXX functions that\nwe access by macro to remove the underscore. Instead, we could make\nlocale_t a pointer to a struct of our own design in WIN32 builds,\nholding the native _locale_t and also an array full of all the values\nthat nl_langinfo_l() can return. We'd provide the standard enums,\nindexes into that array, in a fake POSIX-oid header <langinfo.h>.\nThen nl_langinfo_l(item, loc) could be implemented as\nloc->private_langinfo[item], and strcoll_l(.., loc) could be a static\ninline function that does _strcol_l(...,\nloc->private_windows_locale_t). These structs would be allocated and\nfreed with standard-looking newlocale() and freelocale(), so we could\nfinally stop using #ifdef WIN32-wrapped _create_locale() directly.\nThen everything would look more POSIX-y, nl_langinfo_l() could be used\ndirectly wherever we need fast access to that info, and we could, I\nthink, banish the awkward localeconv, right? I don't know if this all\nmakes total sense and haven't tried it, just spitballing here...\n\n\n", "msg_date": "Mon, 12 Aug 2024 16:53:17 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Mon, Aug 12, 2024 at 4:53 PM Thomas Munro <[email protected]> wrote:\n> ... though if we wanted to replace all use of localeconv and struct\n> lconv with nl_langinfo_l() calls,\n\nGah. 
I realised while trying the above that you can't really replace\nlocaleconv() with nl_langinfo_l() as the GNU documentation recommends,\nbecause some of the lconv fields we're using are missing from\nlanginfo.h in POSIX (only GNU added them all, that was a good idea).\n:-(\n\nNext idea:\n\nWindows: its localeconv() returns pointer to thread-local storage,\ngood, but it still needs setlocale(). So the options are: make our\nown lconv-populator function for Windows, using GetLocaleInfoEx(), or\ndo that _configthreadlocale() dance (possibly excluding some MinGW\nconfigurations from working)\nSystems that have localeconv_l(): use that\nPOSIX: use uselocale() and also put a big global lock around\nlocaleconv() call + accessing result (optionally skipping that on an\nOS-by-OS basis after confirming that its implementation doesn't really\nneed it)\n\nThe reason the uselocale() + localeconv() seems to require a Big Lock\n(by default at least) is that the uselocale() deals only with the\n\"current locale\" aspect, not the output buffer aspect. Clearly the\nstandard allows for it to be thread-local storage (that's why since\n2008 it says that after thread-exit you can't access the result, and I\nguess that's how it works on real systems (?)), but it also seems to\nallow for a single static buffer (that's why it says that it's not\nre-entrant, and any call to localeconv() might clobber it). That\nmight be OK in practice because we tend to cache that stuff, eg when\nassigning GUC lc_monetary (that cache would presumably become\nthread-local in the first phase of the multithreading plan), so the\nlocking shouldn't really hurt.\n\nThe reason we'd have to have three ways, and not just two, is again\nthat NetBSD declined to implement uselocale().\n\nI'll try this in a bit unless someone else has better ideas or plans\nfor this part... sorry for the drip-feeding.\n\n\n", "msg_date": "Tue, 13 Aug 2024 10:43:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:43 AM Thomas Munro <[email protected]> wrote:\n> I'll try this in a bit unless someone else has better ideas or plans\n> for this part... sorry for the drip-feeding.\n\nAnd done, see commitfest entry #5170.\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:47:46 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Sat, 2024-08-10 at 09:42 +1200, Thomas Munro wrote:\n> The NetBSD situation is more vexing.  I was trying to find out if\n> someone is working on it and unfortunately it looks like there is a\n> principled stand against adding it:\n> \n> https://mail-index.netbsd.org/tech-userlevel/2015/12/28/msg009546.html\n> https://mail-index.netbsd.org/netbsd-users/2017/02/14/msg019352.html\n\nThe objection seems to be very general: that uselocale() modifies the\nthread state and affects calls a long distance from uselocale(). I\ndon't disagree with the general sentiment. 
But in effect, that just\nprevents people migrating away from setlocale(), to which the same\nargument applies, and is additionally thread-unsafe.\n\nThe only alternative is to essentially ban the use of non-_l variants,\nwhich is fine I suppose, but causes a fair amount of code churn.\n\n> They're right that we really just want to use \"C\" in some places, and\n> their LC_C_LOCALE is a very useful system-provided value to be able\n> to\n> pass into _l functions.  It's a shame it's non-standard, because\n> without it you have to allocate a locale_t for \"C\" and keep it\n> somewhere to feed to _l functions...\n\nIf we're going to do that, why not just have ascii-only variants of our\nown? pg_ascii_isspace(...) is at least as readable as isspace_l(...,\nLC_C_LOCALE).\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 18:05:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 14, 2024 at 1:05 PM Jeff Davis <[email protected]> wrote:\n> The only alternative is to essentially ban the use of non-_l variants,\n> which is fine I suppose, but causes a fair amount of code churn.\n\nLet's zoom out a bit and consider some ways we could set up the\nprocess, threads and individual calls:\n\n1. The process global locale is always \"C\". If you ever call\nuselocale(), it can only be for short stretches, and you have to\nrestore it straight after; perhaps it is only ever used in replacement\n_l() functions for systems that lack them. You need to use _l()\nfunctions for all non-\"C\" locales. The current database default needs\nto be available as a variable (in future: thread-local variable, or\nreachable from one), so you can use it in _l() functions. The \"C\"\nlocale can be accessed implicitly with non-l() functions, or you could\nban those to reduce confusion and use foo_l(..., LC_GLOBAL_LOCALE) for\n\"C\". Or a name like PG_C_LOCALE, which, in backend code could be just\nLC_GLOBAL_LOCALE, while in frontend/library code it could be the\nsingleton mechanism I showed in CF#5166.\n\nXXX Note that nailing LC_ALL to \"C\" in the backend would extend our\nexisting policy for LC_NUMERIC to all categories. That's why we can\nuse strtod() in the backend and expect the radix character to be '.'.\nIt's interesting to contemplate the strtod() calls in CF#5166: they\nare code copied-and-pasted from backend and frontend; in the backend\nwe can use strtod() currently but in the frontend code I showed a\nchange to strtod_l(..., PG_C_LOCALE), in order to be able to delete\nsome ugly deeply nested uselocale()/setlocale() stuff of the exact\nsort that NetBSD hackers (and I) hate. It's obviously a bit of a code\nsmell that it's copied-and-pasted in the first place, and should\nreally share code. Supposing some of that stuff finishes up in\nsrc/common, then I think you'd want a strtod_l(..., PG_C_LOCALE) that\ncould be allowed to take advantage of the knowledge that the global\nlocale is \"C\" in the backend. Just thoughts...\n\n2. The process global locale is always \"C\". Each backend process (in\nfuture: thread) calls uselocale() to set the thread-local locale to\nthe database default, so it can keep using the non-_l() functions as a\nway to access the database default, and otherwise uses _l() functions\nif it wants something else (as we do already). 
The \"C\" locale can be\naccessed with foo_l(..., LC_GLOBAL_LOCALE) or PG_C_LOCALE etc.\n\nXXX This option is blocked by NetBSD's rejection of uselocale(). I\nguess if you really wanted to work around NetBSD's policy you could\nmake your own wrapper for all affected functions, translating foo() to\nfoo_l(..., pg_thread_current_locale), so you could write uselocale(),\nwhich is pretty much what every other libc does... But eughhh\n\n3. The process global locale is inherited from the system or can be\nset by the user however they want for the benefit of extensions, but\nwe never change it after startup or refer to it. Then we do the same\nas 1 or 2, except if we ever want \"C\" we'll need a locale_t for that,\nagain perhaps using the PC_C_LOCALE mechanism. Non-_l() functions are\neffectively useless except in cases where you really want to use the\nsystem's settings inherited from startup, eg for messages, so they'd\nmostly be banned.\n\nWhat else?\n\n> > They're right that we really just want to use \"C\" in some places, and\n> > their LC_C_LOCALE is a very useful system-provided value to be able\n> > to\n> > pass into _l functions. It's a shame it's non-standard, because\n> > without it you have to allocate a locale_t for \"C\" and keep it\n> > somewhere to feed to _l functions...\n>\n> If we're going to do that, why not just have ascii-only variants of our\n> own? pg_ascii_isspace(...) is at least as readable as isspace_l(...,\n> LC_C_LOCALE).\n\nYeah, I agree there are some easy things we should do that way. In\nfact we've already established that scanner_isspace() needs to be used\nin lots more places for that, even aside from thread-safety.\n\n\n", "msg_date": "Wed, 14 Aug 2024 14:31:20 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, 2024-08-14 at 14:31 +1200, Thomas Munro wrote:\n> 1.  The process global locale is always \"C\".  If you ever call\n> uselocale(), it can only be for short stretches, and you have to\n> restore it straight after; perhaps it is only ever used in\n> replacement\n> _l() functions for systems that lack them.  You need to use _l()\n> functions for all non-\"C\" locales.  The current database default\n> needs\n> to be available as a variable (in future: thread-local variable, or\n> reachable from one), so you can use it in _l() functions.  The \"C\"\n> locale can be accessed implicitly with non-l() functions, or you\n> could\n> ban those to reduce confusion and use foo_l(..., LC_GLOBAL_LOCALE)\n> for\n> \"C\".  Or a name like PG_C_LOCALE, which, in backend code could be\n> just\n> LC_GLOBAL_LOCALE, while in frontend/library code it could be the\n> singleton mechanism I showed in CF#5166.\n\n+1 to this approach. It makes things more consistent across platforms\nand avoids surprising dependencies on the global setting.\n\nWe'll have to be careful that each call site is either OK with C, or\nthat it gets changed to an _l() variant. We also have to be careful\nabout extensions.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 12:00:52 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Wed, Aug 7, 2024 at 7:07 PM Thomas Munro <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 10:23 AM Tom Lane <[email protected]> wrote:\n> > Jeff Davis <[email protected]> writes:\n> > > 2. 
I don't see a good way to canonicalize a locale name, like in\n> > > check_locale(), which uses the result of setlocale().\n> >\n> > What I can tell you about that is that check_locale's expectation\n> > that setlocale does any useful canonicalization is mostly wishful\n> > thinking [1]. On a lot of platforms you just get the input string\n> > back again. If that's the only thing keeping us on setlocale,\n> > I think we could drop it. (Perhaps we should do some canonicalization\n> > of our own instead?)\n>\n> +1\n>\n> I know it does something on Windows (we know the EDB installer gives\n> it strings like \"Language,Country\" and it converts them to\n> \"Language_Country.Encoding\", see various threads about it all going\n> wrong), but I'm not sure it does anything we actually want to\n> encourage. I'm hoping we can gradually screw it down so that we only\n> have sane BCP 47 in the system on that OS, and I don't see why we\n> wouldn't just use them verbatim.\n\nSome more thoughts on check_locale() and canonicalisation:\n\nI doubt the canonicalisation does anything useful on any Unix system,\nas they're basically just file names. In the case of glibc, the\nencoding part is munged before opening the file so it tolerates .utf8\nor .UTF-8 or .u---T----f------8 on input, but it still returns\nwhatever you gave it so the return value isn't cleaning the input or\nanything.\n\n\"\" is a problem however... the special value for \"native environment\"\nis returned as a real locale name, which we probably still need in\nplaces. We could change that to newlocale(\"\") + query instead, but\nthere is a portability pipeline problem getting the name out of it:\n\n1. POSIX only just added getlocalename_l() in 2024[1][2].\n2. Glibc has non-standard nl_langinfo_l(NL_LOCALE_NAME(category), loc).\n3. The <xlocale.h> systems (macOS/*BSD) have non-standard\nquerylocale(mask, loc).\n4. AFAIK there is no way to do it on pure POSIX 2008 systems.\n5. For Windows, there is a completely different thing to get the\nuser's default locale, see CF#3772.\n\nThe systems in category 4 would in practice be Solaris and (if it\ncomes back) AIX. 
Given that, we probably just can't go that way soon.\n\nSo I think the solution could perhaps be something like: in some early\nstartup phase before there are any threads, we nail down all the\nlocale categories to \"C\" (or whatever we decide on for the permanent\nglobal locale), and also query the \"\" categories and make a copy of\nthem in case anyone wants them later, and then never call setlocale()\nagain.\n\n[1] https://pubs.opengroup.org/onlinepubs/9799919799/functions/getlocalename_l.html\n[2] https://www.austingroupbugs.net/view.php?id=1220\n\n\n", "msg_date": "Thu, 15 Aug 2024 10:43:50 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Thu, 2024-08-15 at 10:43 +1200, Thomas Munro wrote:\n> So I think the solution could perhaps be something like: in some\n> early\n> startup phase before there are any threads, we nail down all the\n> locale categories to \"C\" (or whatever we decide on for the permanent\n> global locale), and also query the \"\" categories and make a copy of\n> them in case anyone wants them later, and then never call setlocale()\n> again.\n\n+1.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 16:00:39 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Thu, Aug 15, 2024 at 11:00 AM Jeff Davis <[email protected]> wrote:\n> On Thu, 2024-08-15 at 10:43 +1200, Thomas Munro wrote:\n> > So I think the solution could perhaps be something like: in some\n> > early\n> > startup phase before there are any threads, we nail down all the\n> > locale categories to \"C\" (or whatever we decide on for the permanent\n> > global locale), and also query the \"\" categories and make a copy of\n> > them in case anyone wants them later, and then never call setlocale()\n> > again.\n>\n> +1.\n\nWe currently nail down these categories:\n\n /* We keep these set to \"C\" always. See pg_locale.c for explanation. */\n init_locale(\"LC_MONETARY\", LC_MONETARY, \"C\");\n init_locale(\"LC_NUMERIC\", LC_NUMERIC, \"C\");\n init_locale(\"LC_TIME\", LC_TIME, \"C\");\n\nCF #5170 has patches to make it so that we stop changing them even\ntransiently, using locale_t interfaces to feed our caches of stuff\nneeded to work with those categories, so they really stay truly nailed\ndown.\n\nIt sounds like someone needs to investigate doing the same thing for\nthese two, from CheckMyDatabase():\n\n if (pg_perm_setlocale(LC_COLLATE, collate) == NULL)\n ereport(FATAL,\n (errmsg(\"database locale is incompatible with\noperating system\"),\n errdetail(\"The database was initialized with\nLC_COLLATE \\\"%s\\\", \"\n \" which is not recognized by setlocale().\", collate),\n errhint(\"Recreate the database with another locale or\ninstall the missing locale.\")));\n\n if (pg_perm_setlocale(LC_CTYPE, ctype) == NULL)\n ereport(FATAL,\n (errmsg(\"database locale is incompatible with\noperating system\"),\n errdetail(\"The database was initialized with LC_CTYPE \\\"%s\\\", \"\n \" which is not recognized by setlocale().\", ctype),\n errhint(\"Recreate the database with another locale or\ninstall the missing locale.\")));\n\nHow should that work? Maybe we could imagine something like\nMyDatabaseLocale, a locale_t with LC_COLLATE and LC_CTYPE categories\nset appropriately. 
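Hypothetically, something like this in CheckMyDatabase() (sketch only, using
the collate/ctype strings it already has in hand, with MyDatabaseLocale being
an imaginary global):

	locale_t	loc;

	/* categories not named in the mask come from the "C" locale */
	loc = newlocale(LC_CTYPE_MASK, ctype, (locale_t) 0);
	if (loc != (locale_t) 0)
		loc = newlocale(LC_COLLATE_MASK, collate, loc);
	if (loc == (locale_t) 0)
		ereport(FATAL,
				(errmsg("database locale is incompatible with operating system")));
	MyDatabaseLocale = loc;		/* lives for the rest of the backend */
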
Or should it be a pg_locale_t instead (if your\ndatabase default provider is ICU, then you don't even need a locale_t,\nright?).\n\nThen I think there is one quite gnarly category, from\nassign_locale_messages() (a GUC assignment function):\n\n (void) pg_perm_setlocale(LC_MESSAGES, newval);\n\nI have never really studied gettext(), but I know it was just\nstandardised in POSIX 2024, and the standardised interface has _l()\nvariants of all functions. Current implementations don't have them\nyet. Clearly we absolutely couldn't call pg_perm_setlocale() after\nearly startup -- but if gettext() is relying on the current locale to\naffect far away code, then maybe this is one place where we'd just\nhave to use uselocale(). Perhaps we could plan some transitional\nstrategy where NetBSD users lose the ability to change the GUC without\nrestarting the server and it has to be the same for all sessions, or\nsomething like that, until they produce either gettext_l() or\nuselocale(), but I haven't thought hard about this part at all yet...\n\n\n", "msg_date": "Thu, 15 Aug 2024 20:46:15 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On 15.08.24 00:43, Thomas Munro wrote:\n> \"\" is a problem however... the special value for \"native environment\"\n> is returned as a real locale name, which we probably still need in\n> places. We could change that to newlocale(\"\") + query instead, but\n\nWhere do we need that in the server?\n\nIt should just be initdb doing that and then initializing the server \nwith concrete values based on that.\n\nI guess technically some of these GUC settings default to the \nenvironment? But I think we could consider getting rid of that.\n\n\n", "msg_date": "Thu, 15 Aug 2024 15:25:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Fri, Aug 16, 2024 at 1:25 AM Peter Eisentraut <[email protected]> wrote:\n> On 15.08.24 00:43, Thomas Munro wrote:\n> > \"\" is a problem however... the special value for \"native environment\"\n> > is returned as a real locale name, which we probably still need in\n> > places. We could change that to newlocale(\"\") + query instead, but\n>\n> Where do we need that in the server?\n\nHmm. Yeah, right, the only way I've found so far to even reach that\ncode and that captures that result is:\n\ncreate database db2 locale = '';\n\nThats puts 'en_NZ.UTF-8' or whatever in pg_database. In contrast,\ncreate collation will accept '' but just store it verbatim, and the\nGUCs for changing time, monetary, numeric accept it too and keep it\nverbatim. We could simply ban '' in all user commands. I doubt\nthey're documented as acceptable values, once you get past initdb and\nhave a running system. Looking into that...\n\n> It should just be initdb doing that and then initializing the server\n> with concrete values based on that.\n\nRight.\n\n> I guess technically some of these GUC settings default to the\n> environment? 
But I think we could consider getting rid of that.\n\nYeah.\n\n\n", "msg_date": "Fri, 16 Aug 2024 09:09:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On Fri, Aug 16, 2024 at 9:09 AM Thomas Munro <[email protected]> wrote:\n> On Fri, Aug 16, 2024 at 1:25 AM Peter Eisentraut <[email protected]> wrote:\n> > It should just be initdb doing that and then initializing the server\n> > with concrete values based on that.\n>\n> Right.\n>\n> > I guess technically some of these GUC settings default to the\n> > environment? But I think we could consider getting rid of that.\n>\n> Yeah.\n\nSeems to make a lot of sense. I tried that out over in CF #5170.\n\n(In case it's not clear why I'm splitting discussion between threads:\nI was thinking of this thread as the one where we're discussing what\nneeds to be done, with other threads being spun off to become CF entry\nwith concrete patches. I realised re-reading some discussion that\nthat might not be obvious...)\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:12:33 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" }, { "msg_contents": "On 8/9/24 8:24 PM, Jeff Davis wrote:\n> On Fri, 2024-08-09 at 13:41 +0200, Andreas Karlsson wrote:\n>> I am leaning towards that we should write our own pure ascii\n>> functions\n>> for this.\n> \n> That makes sense for a lot of call sites, but it could cause breakage\n> if we aren't careful.\n> \n>> Since we do not support any non-ascii compatible encodings\n>> anyway I do not see the point in having locale support in most of\n>> these\n>> call-sites.\n> \n> An ascii-compatible encoding just means that the code points in the\n> ascii range are represented as ascii. I'm not clear on whether code\n> points in the ascii range can return different results for things like\n> isspace(), but it sounds plausible -- toupper() can return different\n> results for 'i' in tr_TR.\n> \n> Also, what about the values outside 128-255, which are still valid\n> input to isspace()?\n\nMy idea was that in a lot of those cases we only try to parse e.g. 0-9 \nas digits and always only . as the decimal separator so we should make \njust make that obvious by either using locale C or writing our own ascii \nonly functions. These strings are meant to be read by machines, not \nhumans, primarily.\n\nAndreas\n\n\n", "msg_date": "Wed, 28 Aug 2024 18:26:04 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining dependency on setlocale()" } ]
[ { "msg_contents": "Please find attached a patch to enable data checksums by default.\n\nCurrently, initdb only enables data checksums if passed the\n--data-checksums or -k argument. There was some hesitation years ago when\nthis feature was first added, leading to the current situation where the\ndefault is off. However, many years later, there is wide consensus that\nthis is an extraordinarily safe, desirable setting. Indeed, most (if not\nall) of the major commercial and open source Postgres systems currently\nturn this on by default. I posit you would be hard-pressed to find many\nsystems these days in which it has NOT been turned on. So basically we have\na de-facto standard, and I think it's time we flipped the switch to make it\non by default.\n\nThe patch is simple enough: literally flipping the boolean inside of\ninitdb.c, and adding a new argument '--no-data-checksums' for those\ninstances that truly want the old behavior. One place that still needs the\nold behavior is our internal tests for pg_checksums and pg_amcheck, so I\nadded a new argument to init() in PostgreSQL/Test/Cluster.pm to allow those\nto still pass their tests.\n\nThis is just the default - people are still more than welcome to turn it\noff with the new flag. The pg_checksums program is another option that\nactually argues for having the default \"on\", as switching to \"off\" once\ninitdb has been run is trivial.\n\nYes, I am aware of the previous discussions on this, but the world moves\nfast - wal compression is better than in the past, vacuum is better now,\nand data-checksums being on is such a complete default in the wild, it\nfeels weird and a disservice that we are not running all our tests like\nthat.\n\nCheers,\nGreg", "msg_date": "Tue, 6 Aug 2024 18:46:52 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Enable data checksums by default" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 06, 2024 at 06:46:52PM -0400, Greg Sabino Mullane wrote:\n> Please find attached a patch to enable data checksums by default.\n> \n> Currently, initdb only enables data checksums if passed the\n> --data-checksums or -k argument. There was some hesitation years ago when\n> this feature was first added, leading to the current situation where the\n> default is off. However, many years later, there is wide consensus that\n> this is an extraordinarily safe, desirable setting. Indeed, most (if not\n> all) of the major commercial and open source Postgres systems currently\n> turn this on by default. I posit you would be hard-pressed to find many\n> systems these days in which it has NOT been turned on. So basically we have\n> a de-facto standard, and I think it's time we flipped the switch to make it\n> on by default.\n\n[...]\n \n> Yes, I am aware of the previous discussions on this, but the world moves\n> fast - wal compression is better than in the past, vacuum is better now,\n> and data-checksums being on is such a complete default in the wild, it\n> feels weird and a disservice that we are not running all our tests like\n> that.\n\nI agree.\n\nSome review on the patch:\n\n> diff --git a/doc/src/sgml/ref/initdb.sgml b/doc/src/sgml/ref/initdb.sgml\n> index bdd613e77f..511f489d34 100644\n> --- a/doc/src/sgml/ref/initdb.sgml\n> +++ b/doc/src/sgml/ref/initdb.sgml\n> @@ -267,12 +267,14 @@ PostgreSQL documentation\n> <para>\n> Use checksums on data pages to help detect corruption by the\n> I/O system that would otherwise be silent. 
Enabling checksums\n> - may incur a noticeable performance penalty. If set, checksums\n> + may incur a small performance penalty. If set, checksums\n> are calculated for all objects, in all databases. All checksum\n\nI think the last time we dicussed this the consensus was that\ncomputational overhead of computing the checksums is pretty small for\nmost systems (so the above change seems warranted regardless of whether\nwe switch the default), but turning on wal_compression also turns on\nwal_log_hints, which can increase WAL by quite a lot. Maybe this is\ncovered elsewhere in the documentation (I just looked at the patch), but\nif not, it probably should be added here as a word of caution.\n\n> failures will be reported in the\n> <link linkend=\"monitoring-pg-stat-database-view\">\n> <structname>pg_stat_database</structname></link> view.\n> See <xref linkend=\"checksums\" /> for details.\n> + As of version 18, checksums are enabled by default. They can be\n> + disabled by use of <option>--no-data-checksums</option>.\n\nI think we usually do not mention when a feature was added/changed, do\nwe? So I'd just write \"(default: enabled)\" or whatever is the style of\nthe surrounding options.\n\n> </para>\n> </listitem>\n> </varlistentry>\n> diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c\n> index f00718a015..ce7d3e99e5 100644\n> --- a/src/bin/initdb/initdb.c\n> +++ b/src/bin/initdb/initdb.c\n> @@ -164,7 +164,7 @@ static bool noinstructions = false;\n> static bool do_sync = true;\n> static bool sync_only = false;\n> static bool show_setting = false;\n> -static bool data_checksums = false;\n> +static bool data_checksums = true;\n> static char *xlog_dir = NULL;\n> static int\twal_segment_size_mb = (DEFAULT_XLOG_SEG_SIZE) / (1024 * 1024);\n> static DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;\n> @@ -3121,6 +3121,7 @@ main(int argc, char *argv[])\n> \t\t{\"waldir\", required_argument, NULL, 'X'},\n> \t\t{\"wal-segsize\", required_argument, NULL, 12},\n> \t\t{\"data-checksums\", no_argument, NULL, 'k'},\n> +\t\t{\"no-data-checksums\", no_argument, NULL, 20},\n\nDoes it make sense to add -K (capital k) as a short-cut for this? I\nthink this is how we distinguish on/off for pg_dump (-t/-T etc.) but\nmaybe that is not wider project policy.\n\n\nMichael\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:43:41 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Wed, Aug 7, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:\n\n> I think the last time we dicussed this the consensus was that\n> computational overhead of computing the checksums is pretty small for\n> most systems (so the above change seems warranted regardless of whether\n> we switch the default), but turning on wal_compression also turns on\n> wal_log_hints, which can increase WAL by quite a lot. Maybe this is\n> covered elsewhere in the documentation (I just looked at the patch), but\n> if not, it probably should be added here as a word of caution.\n>\n\nYeah, that seems something beyond this patch? Certainly we should mention\nwal_compression in the release notes if the default changes. I mean, I feel\nwal_log_hints should probably default to on as well, but I've honestly\nnever really given it much thought because my fingers are trained to type\n\"initdb -k\". I've been using data checksums for roughly a decade now. 
I\nthink the only time I've NOT used checksums was when I was doing checksum\noverhead measurements, or hacking on the pg_checksums program.\n\n\n> I think we usually do not mention when a feature was added/changed, do\n> we? So I'd just write \"(default: enabled)\" or whatever is the style of\n> the surrounding options.\n>\n\n+1\n\n\n> > + {\"no-data-checksums\", no_argument, NULL, 20},\n>\n> Does it make sense to add -K (capital k) as a short-cut for this? I\n> think this is how we distinguish on/off for pg_dump (-t/-T etc.) but\n> maybe that is not wider project policy.\n>\n\nI'd rather not. Better to keep it explicit rather than some other weird\nletter that has no mnemonic value.\n\nCheers,\nGreg\n\nOn Wed, Aug 7, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:I think the last time we dicussed this the consensus was that\ncomputational overhead of computing the checksums is pretty small for\nmost systems (so the above change seems warranted regardless of whether\nwe switch the default), but turning on wal_compression also turns on\nwal_log_hints, which can increase WAL by quite a lot. Maybe this is\ncovered elsewhere in the documentation (I just looked at the patch), but\nif not, it probably should be added here as a word of caution.Yeah, that seems something beyond this patch? Certainly we should mention wal_compression in the release notes if the default changes. I mean, I feel wal_log_hints should probably default to on as well, but I've honestly never really given it much thought because my fingers are trained to type \"initdb -k\". I've been using data checksums for roughly a decade now. I think the only time I've NOT used checksums was when I was doing checksum overhead measurements, or hacking on the pg_checksums program. I think we usually do not mention when a feature was added/changed, do\nwe? So I'd just write \"(default: enabled)\" or whatever is the style of\nthe surrounding options.+1 > +             {\"no-data-checksums\", no_argument, NULL, 20},\n\nDoes it make sense to add -K (capital k) as a short-cut for this? I\nthink this is how we distinguish on/off for pg_dump (-t/-T etc.) but\nmaybe that is not wider project policy.I'd rather not. Better to keep it explicit rather than some other weird letter that has no mnemonic value.Cheers,Greg", "msg_date": "Wed, 7 Aug 2024 10:17:30 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Aug 7, 2024, at 23:18, Greg Sabino Mullane <[email protected]> wrote:On Wed, Aug 7, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:.\nDoes it make sense to add -K (capital k) as a short-cut for this? I\nthink this is how we distinguish on/off for pg_dump (-t/-T etc.) but\nmaybe that is not wider project policy.I'd rather not. Better to keep it explicit rather than some other weird letter that has no mnemonic value.Not sure to see the point of a short option while the long option is able to do the job in a clear enough way. Using one character to define a positive or negative is confusing, harder to parse. That’s just my take.Switching the default to use checksums makes sense to me. 
Even if there will be always an argument about efficiency, every uses of Postgres I’ve seen in the last 10 years enable data checksums to mitigate Postgres as a source of corruption.The patch should be split in more pieces: one for the initdb option, a second for the tap test option switching some tests to use it where it matters and a third patch to change the default. This would limit the damage should the default be reverted as the new options are useful on their own.--Michael", "msg_date": "Thu, 8 Aug 2024 09:59:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 07.08.24 00:46, Greg Sabino Mullane wrote:\n> Currently, initdb only enables data checksums if passed the \n> --data-checksums or -k argument. There was some hesitation years ago \n> when this feature was first added, leading to the current situation \n> where the default is off. However, many years later, there is wide \n> consensus that this is an extraordinarily safe, desirable setting. \n> Indeed, most (if not all) of the major commercial and open source \n> Postgres systems currently turn this on by default. I posit you would be \n> hard-pressed to find many systems these days in which it has NOT been \n> turned on. So basically we have a de-facto standard, and I think it's \n> time we flipped the switch to make it on by default.\n\nI'm sympathetic to this proposal, but I want to raise some concerns.\n\nMy understanding was that the reason for some hesitation about adopting \ndata checksums was the performance impact. Not the checksumming itself, \nbut the overhead from hint bit logging. The last time I looked into \nthat, you could get performance impacts on the order of 5% tps. Maybe \nthat's acceptable, and you of course can turn it off if you want the \nextra performance. But I think this should be discussed in this thread.\n\nAbout the claim that it's already the de-facto standard. Maybe that is \napproximately true for \"serious\" installations. But AFAICT, the popular \npackagings don't enable checksums by default, so there is likely a \nsignificant middle tier between \"just trying it out\" and serious \nproduction use that don't have it turned on.\n\nFor those uses, this change would render pg_upgrade useless for upgrades \nfrom an old instance with default settings to a new instance with \ndefault settings. And then users would either need to re-initdb with \nchecksums turned back off, or I suppose run pg_checksums on the old \ninstance before upgrading? This is significant additional complication. \n And packagers who have built abstractions on top of pg_upgrade (such \nas Debian pg_upgradecluster) would also need to implement something to \nmanage this somehow.\n\nSo I think we need to think through the upgrade experience a bit more. \nUnfortunately, pg_checksums hasn't gotten to the point that we were \nperhaps once hoping for that you could enable checksums on a live \nsystem. I'm thinking pg_upgrade could have a mode where it adds the \nchecksum during the upgrade as it copies the files (essentially a subset \nof pg_checksums). 
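In copy mode I'd expect the per-block work to be not much more than this --
hand-wavy sketch, with src_fd/dst_fd and proper error handling assumed, using
pg_checksum_page() the same way pg_checksums does:

	for (BlockNumber blkno = 0;; blkno++)
	{
		PGAlignedBlock buf;
		ssize_t		nread = read(src_fd, buf.data, BLCKSZ);

		if (nread == 0)
			break;				/* EOF */
		if (nread != BLCKSZ)
			pg_fatal("could not read block %u", blkno);	/* real code would retry short reads */

		/* new (all-zero) pages carry no checksum, same rule as pg_checksums */
		if (!PageIsNew((Page) buf.data))
			((PageHeader) buf.data)->pd_checksum = pg_checksum_page(buf.data, blkno);

		if (write(dst_fd, buf.data, BLCKSZ) != BLCKSZ)
			pg_fatal("could not write block %u", blkno);
	}
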
I think that would be useful for that middle tier of \nusers who just want a good default experience.\n\n\n\n", "msg_date": "Thu, 8 Aug 2024 12:11:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Thu, Aug 08, 2024 at 12:11:38PM +0200, Peter Eisentraut wrote:\n> So I think we need to think through the upgrade experience a bit more.\n> Unfortunately, pg_checksums hasn't gotten to the point that we were perhaps\n> once hoping for that you could enable checksums on a live system. I'm\n> thinking pg_upgrade could have a mode where it adds the checksum during the\n> upgrade as it copies the files (essentially a subset of pg_checksums). I\n> think that would be useful for that middle tier of users who just want a\n> good default experience.\n\nWell that, or, as a first less ambitious step, pg_upgrade could carry\nover the data_checksums setting from the old to the new instance by\nessentially disabling it via pg_checksums -d (which is fast) if it the\ncurrent default (off) is set on the old instance and the new instance\nwas created with the new onw (checksums on).\n\nProbably should include a warning or something in that case, though I\nguess a lot of users will read just past it. But at least they are not\nworse off than before.\n\n\nMichael\n\n\n", "msg_date": "Thu, 8 Aug 2024 14:54:55 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "> On 8 Aug 2024, at 12:11, Peter Eisentraut <[email protected]> wrote:\n\n> My understanding was that the reason for some hesitation about adopting data checksums was the performance impact. Not the checksumming itself, but the overhead from hint bit logging. The last time I looked into that, you could get performance impacts on the order of 5% tps. Maybe that's acceptable, and you of course can turn it off if you want the extra performance. But I think this should be discussed in this thread.\n\nThat's been my experience as well, the overhead of the checksumming is\nnegligible but the overhead in WAL can be (having hint bits WAL logged does\ncarry other benefits as well to be fair).\n\n> I think we need to think through the upgrade experience a bit more.\n\n+1\n\n> Unfortunately, pg_checksums hasn't gotten to the point that we were perhaps once hoping for that you could enable checksums on a live system. \n\nI don't recall there being any work done (or plans for) using pg_checksums on a\nlive system. Anyone interested in enabling checksums on a live cluster can\nhowever review the patch for that in:\n\n https://postgr.es/m/[email protected]\n\n> I'm thinking pg_upgrade could have a mode where it adds the checksum during the upgrade as it copies the files (essentially a subset of pg_checksums). I think that would be useful for that middle tier of users who just want a good default experience.\n\nAs a side-note, I implemented this in pg_upgrade at Greenplum (IIRC it was\nsubmitted to -hackers at the time as well) and it worked well with not a lot of\ncode.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:01:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "Thank you for the feedback. 
Please find attached three separate patches.\nOne to add a new flag to initdb (--no-data-checksums), one to adjust the\ntests to use this flag as needed, and the final to make the actual switch\nof the default value (along with tests and docs).\n\nCheers,\nGreg", "msg_date": "Thu, 8 Aug 2024 13:19:14 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Thu, Aug 8, 2024 at 6:11 AM Peter Eisentraut <[email protected]> wrote:\n> About the claim that it's already the de-facto standard. Maybe that is\n> approximately true for \"serious\" installations. But AFAICT, the popular\n> packagings don't enable checksums by default, so there is likely a\n> significant middle tier between \"just trying it out\" and serious\n> production use that don't have it turned on.\n\n+1.\n\n> I'm thinking pg_upgrade could have a mode where it adds the\n> checksum during the upgrade as it copies the files (essentially a subset\n> of pg_checksums). I think that would be useful for that middle tier of\n> users who just want a good default experience.\n\nThat would be very nice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 13:42:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 8/8/24 19:42, Robert Haas wrote:\n> On Thu, Aug 8, 2024 at 6:11 AM Peter Eisentraut <[email protected]> wrote:\n>> About the claim that it's already the de-facto standard. Maybe that is\n>> approximately true for \"serious\" installations. But AFAICT, the popular\n>> packagings don't enable checksums by default, so there is likely a\n>> significant middle tier between \"just trying it out\" and serious\n>> production use that don't have it turned on.\n> \n> +1.\n> \n>> I'm thinking pg_upgrade could have a mode where it adds the\n>> checksum during the upgrade as it copies the files (essentially a subset\n>> of pg_checksums). I think that would be useful for that middle tier of\n>> users who just want a good default experience.\n> \n> That would be very nice.\n> \n\nYeah, but it might also disable checksums on the new cluster, which\nwould work for link mode too. So we'd probably want multiple modes, one\nto enable checksums during file copy, one to disable checksums, and one\nto just fail for incompatible clusters.\n\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 8 Aug 2024 20:49:36 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Thu, Aug 8, 2024 at 6:11 AM Peter Eisentraut <[email protected]>\nwrote:\n\n\n> My understanding was that the reason for some hesitation about adopting\n> data checksums was the performance impact. Not the checksumming itself,\n> but the overhead from hint bit logging. The last time I looked into that,\n> you could get performance impacts on the order of 5% tps. Maybe that's\n> acceptable, and you of course can turn it off if you want the extra\n> performance. But I think this should be discussed in this thread.\n>\n\nFair enough. I think the performance impact is acceptable, as evidenced by\nthe large number of people that turn it on. 
And it is easy enough to turn\nit off again, either via --no-data-checksums or pg_checksums --disable.\nI've come across people who have regretted not throwing a -k into their\ninitial initdb, but have not yet come across someone who has the opposite\nregret. When I did some measurements some time ago, I found numbers much\nless than 5%, but of course it depends on a lot of factors.\n\nAbout the claim that it's already the de-facto standard. Maybe that is\n> approximately true for \"serious\" installations. But AFAICT, the popular\n> packagings don't enable checksums by default, so there is likely a\n> significant middle tier between \"just trying it out\" and serious\n> production use that don't have it turned on.\n>\n\nI would push back on that \"significant\" a good bit. The number of Postgres\ninstallations in the cloud is very likely to dwarf the total package\ninstallations. Maybe not 10 years ago, but now? Maybe someone from Amazon\ncan share some numbers. Not that we have any way to compare against package\ninstalls :) But anecdotally the number of people who mention RDS etc. on\nthe various fora has exploded.\n\n\n> For those uses, this change would render pg_upgrade useless for upgrades\n> from an old instance with default settings to a new instance with default\n> settings. And then users would either need to re-initdb with checksums\n> turned back off, or I suppose run pg_checksums on the old instance before\n> upgrading? This is significant additional complication.\n>\n\nMeh, re-running initdb with --no-data-checksums seems a fairly low hurdle.\n\n\n> And packagers who have built abstractions on top of pg_upgrade (such as\n> Debian pg_upgradecluster) would also need to implement something to manage\n> this somehow.\n>\n\nHow does it deal with clusters with checksums enabled now?\n\n\n> I'm thinking pg_upgrade could have a mode where it adds the checksum\n> during the upgrade as it copies the files (essentially a subset\n> of pg_checksums). I think that would be useful for that middle tier of\n> users who just want a good default experience.\n>\n\nHm...might be a bad experience if it forces a switch out of --link mode.\nPerhaps a warning at the end of pg_upgrade that suggests running\npg_checksums on your new cluster if you want to enable checksums?\n\nCheers,\nGreg\n\nOn Thu, Aug 8, 2024 at 6:11 AM Peter Eisentraut <[email protected]> wrote: My understanding was that the reason for some hesitation about adopting data checksums was the performance impact.  Not the checksumming itself, but the overhead from hint bit logging.  The last time I looked into that, you could get performance impacts on the order of 5% tps.  Maybe that's acceptable, and you of course can turn it off if you want the extra performance.  But I think this should be discussed in this thread.Fair enough. I think the performance impact is acceptable, as evidenced by the large number of people that turn it on. And it is easy enough to turn it off again, either via --no-data-checksums or pg_checksums --disable. I've come across people who have regretted not throwing a -k into their initial initdb, but have not yet come across someone who has the opposite regret. When I did some measurements some time ago, I found numbers much less than 5%, but of course it depends on a lot of factors.\nAbout the claim that it's already the de-facto standard.  Maybe that is approximately true for \"serious\" installations.  
But AFAICT, the popular packagings don't enable checksums by default, so there is likely a significant middle tier between \"just trying it out\" and serious \nproduction use that don't have it turned on.I would push back on that \"significant\" a good bit. The number of Postgres installations in the cloud is very likely to dwarf the total package installations. Maybe not 10 years ago, but now? Maybe someone from Amazon can share some numbers. Not that we have any way to compare against package installs :) But anecdotally the number of people who mention RDS etc. on the various fora has exploded. \nFor those uses, this change would render pg_upgrade useless for upgrades from an old instance with default settings to a new instance with default settings.  And then users would either need to re-initdb with checksums turned back off, or I suppose run pg_checksums on the old instance before upgrading?  This is significant additional complication. \nMeh, re-running initdb with --no-data-checksums seems a fairly low hurdle. And packagers who have built abstractions on top of pg_upgrade (such as Debian pg_upgradecluster) would also need to implement something to manage this somehow.How does it deal with clusters with checksums enabled now?  I'm thinking pg_upgrade could have a mode where it adds the checksum during the upgrade as it copies the files (essentially a subset \nof pg_checksums).  I think that would be useful for that middle tier of users who just want a good default experience.Hm...might be a bad experience if it forces a switch out of --link mode. Perhaps a warning at the end of pg_upgrade that suggests running pg_checksums on your new cluster if you want to enable checksums?Cheers,Greg", "msg_date": "Tue, 13 Aug 2024 10:41:44 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:42 AM Greg Sabino Mullane <[email protected]> wrote:\n> Fair enough. I think the performance impact is acceptable, as evidenced by the large number of people that turn it on. And it is easy enough to turn it off again, either via --no-data-checksums or pg_checksums --disable.\n> When I did some measurements some time ago, I found numbers much less than 5%, but of course it depends on a lot of factors.\n\nI think the bad case is when you have a write workload that is\nsignificantly bigger than shared_buffers but still small enough to fit\ncomfortably in the OS cache. When everything fits in shared_buffers,\nyou only need to write dirty buffers once per checkpoint cycle, so\nmaking it more expensive isn't necessarily a big deal. When you're\nconstantly going to disk, that's so expensive that you don't notice\nthe computational overhead. But when you're in that middle zone where\nyou keep evicting buffers from PG but not actually having to write\nthem down to the disk, then I think it's pretty noticeable.\n\n> I've come across people who have regretted not throwing a -k into their initial initdb, but have not yet come across someone who has the opposite regret.\n\nI don't think this is really a fair comparison, because everything\nbeing a little slower all the time is not something that people are\nlikely to \"regret\" in the same way that they regret it when a data\ncorruption issue goes undetected. An undetected data corruption issue\nis a single, very painful event that people are likely to notice,\nwhereas a small performance loss over time kind of blends into the\nbackground. 
You don't really regret that kind of thing in the same way\nthat you regret a bad event that happens at a particular moment in\ntime.\n\nAnd it's not like we have statistics anywhere that you can look at to\nsee how much CPU time you spent computing checksums, so if a user DOES\nhave a performance problem that would not have occurred if checksums\nhad been disabled, they'll probably never know it.\n\n>> For those uses, this change would render pg_upgrade useless for upgrades from an old instance with default settings to a new instance with default settings. And then users would either need to re-initdb with checksums turned back off, or I suppose run pg_checksums on the old instance before upgrading? This is significant additional complication.\n> Meh, re-running initdb with --no-data-checksums seems a fairly low hurdle.\n\nI tend to agree with that, but I would also like to see the sort of\nimprovements that Peter mentions. It's a lot less work to say \"let's\njust change the default\" and then get mad at anyone who disagrees than\nit is to do the engineering to make changing the default less of a\nproblem. But that kind of engineering really adds a lot of value\ncompared to just changing the default.\n\nNone of that is to say that I'm totally hostile to this change.\nChecksums don't actually prevent your data from getting corrupted, or\nlet you recover it after it does. They just tell you about the\nproblem, and very often you would have found out anyway. However, they\ndo have peace-of-mind value. If you've got checksums turned on, you\ncan verify your checksums regularly and see that they're OK, and\npeople like that. Whether that's worth the overhead for everyone, I'm\nnot quite sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 16:07:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 08.08.24 19:42, Robert Haas wrote:\n>> I'm thinking pg_upgrade could have a mode where it adds the\n>> checksum during the upgrade as it copies the files (essentially a subset\n>> of pg_checksums). I think that would be useful for that middle tier of\n>> users who just want a good default experience.\n> That would be very nice.\n\nHere is a demo patch for that. It turned out to be quite simple.\n\nI wrote above about a separate mode for that (like \n--copy-and-make-adjustments), but it was just as easy to stick it into \nthe existing --copy mode.\n\nIt would be useful to check what the performance overhead of this is \nversus a copy that does not have to make adjustments. I expect it's \nvery little.\n\nA drawback is that as written this does not work on Windows, because \nWindows uses a different code path in copyFile(). I don't know the \nreasons for that. 
But it would need to be figured out.", "msg_date": "Thu, 15 Aug 2024 08:38:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Wed, Aug 7, 2024 at 4:18 PM Greg Sabino Mullane <[email protected]> wrote:\n>\n> On Wed, Aug 7, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:\n>>\n>> I think the last time we dicussed this the consensus was that\n>> computational overhead of computing the checksums is pretty small for\n>> most systems (so the above change seems warranted regardless of whether\n>> we switch the default), but turning on wal_compression also turns on\n>> wal_log_hints, which can increase WAL by quite a lot. Maybe this is\n[..]\n>\n>\n> Yeah, that seems something beyond this patch? Certainly we should mention wal_compression in the release notes if the default changes. I mean, I feel wal_log_hints should probably default to on as well, but I've honestly never really given it much thought because my fingers are trained to type \"initdb -k\". I've been using data checksums for roughly a decade now. I think the only time I've NOT used checksums was when I was doing checksum overhead measurements, or hacking on the pg_checksums program.\n\nMaybe I don't understand something, but just to be clear:\nwal_compression (mentioned above) is not turning wal_log_hints on,\njust the wal_log_hints needs to be on when using data checksums\n(implicitly, by the XLogHintBitIsNeeded() macro). I suppose Michael\nwas thinking about the wal_log_hints earlier (?)\n\n-J.\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:49:04 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "Hi Greg and others\n\nOn Tue, Aug 13, 2024 at 4:42 PM Greg Sabino Mullane <[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 6:11 AM Peter Eisentraut <[email protected]> wrote:\n>\n>>\n>> My understanding was that the reason for some hesitation about adopting data checksums was the performance impact. Not the checksumming itself, but the overhead from hint bit logging. The last time I looked into that, you could get performance impacts on the order of 5% tps. Maybe that's acceptable, and you of course can turn it off if you want the extra performance. But I think this should be discussed in this thread.\n>\n>\n> Fair enough. I think the performance impact is acceptable, as evidenced by the large number of people that turn it on. And it is easy enough to turn it off again, either via --no-data-checksums or pg_checksums --disable. I've come across people who have regretted not throwing a -k into their initial initdb, but have not yet come across someone who has the opposite regret. When I did some measurements some time ago, I found numbers much less than 5%, but of course it depends on a lot of factors.\n\nSame here, and +1 to data_checksums=on by default for new installations.\n\nThe best public measurement of the impact was posted in [1] in 2019 by\nTomas to the best of my knowledge, where he explicitly mentioned the\nproblem with more WAL with hints/checksums: SATA disks (low IOPS). My\ntake: now we have 2024, and most people are using at least SSDs or\nslow-SATA (but in cloud they could just change the class of I/O if\nrequired to get IOPS to avoid too much throttling), therefore the\nprice of IOPS dropped significantly.\n\n>> About the claim that it's already the de-facto standard. 
Maybe that is approximately true for \"serious\" installations. But AFAICT, the popular packagings don't enable checksums by default, so there is likely a significant middle tier between \"just trying it out\" and serious\n>> production use that don't have it turned on.\n>\n>\n> I would push back on that \"significant\" a good bit. The number of Postgres installations in the cloud is very likely to dwarf the total package installations. Maybe not 10 years ago, but now? Maybe someone from Amazon can share some numbers. Not that we have any way to compare against package installs :) But anecdotally the number of people who mention RDS etc. on the various fora has exploded.\n\nSame here. If it helps the case the: 43% of all PostgreSQL DBs\ninvolved in any support case or incident in EDB within last year had\ndata_checksums=on (at least if they had collected the data using our )\n. That's a surprisingly high number (for something that's off by\ndefault), and it makes me think this is because plenty of customers\nare either managed by DBAs who care, or assisted by consultants when\ndeploying, or simply using TPAexec [2] which has this on by default.\n\nAnother thing is plenty of people run with wal_log_hints=on (without\ndata_checksums=off) just to have pg_rewind working. As this is a\nstrictly standby related tool it means they don't have WAL/network\nbandwidth problems, so the WAL rate is not that high in the wild to\ncause problems. I found 1 or 2 cases within last year where we would\nmention that high WAL generation was attributed to\nwal_log_hints=on/XLOG_FPI and they still didn't disable it apparently\n(we have plenty of cases related to too much WAL, but it's mostly due\nto other basic reasons)\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/20190330192543.GH4719%40development\n[2] - https://www.enterprisedb.com/docs/pgd/4/deployments/tpaexec/\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:49:26 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Thu, Aug 15, 2024 at 09:49:04AM +0200, Jakub Wartak wrote:\n> On Wed, Aug 7, 2024 at 4:18 PM Greg Sabino Mullane <[email protected]> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 4:43 AM Michael Banck <[email protected]> wrote:\n> >>\n> >> I think the last time we dicussed this the consensus was that\n> >> computational overhead of computing the checksums is pretty small for\n> >> most systems (so the above change seems warranted regardless of whether\n> >> we switch the default), but turning on wal_compression also turns on\n> >> wal_log_hints, which can increase WAL by quite a lot. Maybe this is\n> [..]\n> >\n> >\n> > Yeah, that seems something beyond this patch? Certainly we should\n> > mention wal_compression in the release notes if the default changes.\n> > I mean, I feel wal_log_hints should probably default to on as well,\n> > but I've honestly never really given it much thought because my\n> > fingers are trained to type \"initdb -k\". I've been using data\n> > checksums for roughly a decade now. I think the only time I've NOT\n> > used checksums was when I was doing checksum overhead measurements,\n> > or hacking on the pg_checksums program.\n> \n> Maybe I don't understand something, but just to be clear:\n> wal_compression (mentioned above) is not turning wal_log_hints on,\n> just the wal_log_hints needs to be on when using data checksums\n> (implicitly, by the XLogHintBitIsNeeded() macro). 
I suppose Michael\n> was thinking about the wal_log_hints earlier (?)\n\nUh, I am pretty sure I meant to say \"turning on data_checksums als turns\non wal_log_hints\", sorry about the confusion.\n\nI guess the connection is that if you turn on wal_lot_hints (either\ndirectly or via data_checksums) then the number FPIs goes up (possibly\nsignficantly), and enabling wal_compression could (partly) remedy that.\nBut I agree with Greg that such a discussion is probably out-of-scope\nfor this default change.\n\n\nMichael\n\n\n", "msg_date": "Thu, 15 Aug 2024 10:03:30 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "Hi all,\n\nOn Tue, Aug 13, 2024 at 10:08 PM Robert Haas <[email protected]> wrote:\n\n> And it's not like we have statistics anywhere that you can look at to\n> see how much CPU time you spent computing checksums, so if a user DOES\n> have a performance problem that would not have occurred if checksums\n> had been disabled, they'll probably never know it.\n\nIn worst case, per second and per-pid CPU time consumption could be\nquantified by having eBPF which is the standard on distros now\n(requires kernel headers and bpfcc-tools installed), e.g. here 7918\nwas PID doing pgbench-related -c 4 workload with checksum=on (sorry\nfor formatting, but I don't want to use HTML here):\n\n# funclatency-bpfcc --microseconds -i 1 -p 7918\n/usr/lib/postgresql/16/bin/postgres:pg_checksum_page\nTracing 1 functions for\n\"/usr/lib/postgresql/16/bin/postgres:pg_checksum_page\"... Hit Ctrl-C\nto end.\n\n usecs : count distribution\n 0 -> 1 : 0 | |\n 2 -> 3 : 238 |************* |\n 4 -> 7 : 714 |****************************************|\n 8 -> 15 : 2 | |\n 16 -> 31 : 5 | |\n 32 -> 63 : 0 | |\n 64 -> 127 : 1 | |\n 128 -> 255 : 0 | |\n 256 -> 511 : 1 | |\n 512 -> 1023 : 1 | |\n\navg = 6 usecs, total: 6617 usecs, count: 962\n\n\n usecs : count distribution\n 0 -> 1 : 0 | |\n 2 -> 3 : 241 |************* |\n 4 -> 7 : 706 |****************************************|\n 8 -> 15 : 11 | |\n 16 -> 31 : 10 | |\n 32 -> 63 : 1 | |\n\navg = 5 usecs, total: 5639 usecs, count: 969\n\n[..refreshes every 1s here..]\n\nSo the above can tell us e.g. that this pg_checksum_page() took 5639\nus out of 1s full sample time (and with 100% CPU pegged core so that's\ngives again ~5% CPU util per this routine; I'm ignoring the WAL/log\nhint impact for sure). One could also write a small script using\nbpftrace instead, too. Disassembly on Debian version and stock PGDG is\ntelling me it's ful SSE2 instruction-set, so that's nice and optimal\ntoo.\n\n> >> For those uses, this change would render pg_upgrade useless for upgrades from an old instance with default settings to a new instance with default settings. And then users would either need to re-initdb with checksums turned back off, or I suppose run pg_checksums on the old instance before upgrading? This is significant additional complication.\n> > Meh, re-running initdb with --no-data-checksums seems a fairly low hurdle.\n>\n> I tend to agree with that, but I would also like to see the sort of\n> improvements that Peter mentions.\n[..]\n> None of that is to say that I'm totally hostile to this change.\n[.,.]\n> Whether that's worth the overhead for everyone, I'm not quite sure.\n\nWithout data checksums there's a risk that someone receives silent-bit\ncorruption and no one will notice. Shouldn't integrity of data stand\nabove performance by default, in this case? 
(and performance could be\nopt-in, if someone really wants it).\n\n-J.\n\n\n", "msg_date": "Thu, 15 Aug 2024 14:02:04 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 15.08.24 08:38, Peter Eisentraut wrote:\n> On 08.08.24 19:42, Robert Haas wrote:\n>>> I'm thinking pg_upgrade could have a mode where it adds the\n>>> checksum during the upgrade as it copies the files (essentially a subset\n>>> of pg_checksums).  I think that would be useful for that middle tier of\n>>> users who just want a good default experience.\n>> That would be very nice.\n> \n> Here is a demo patch for that.  It turned out to be quite simple.\n> \n> I wrote above about a separate mode for that (like \n> --copy-and-make-adjustments), but it was just as easy to stick it into \n> the existing --copy mode.\n> \n> It would be useful to check what the performance overhead of this is \n> versus a copy that does not have to make adjustments.  I expect it's \n> very little.\n> \n> A drawback is that as written this does not work on Windows, because \n> Windows uses a different code path in copyFile().  I don't know the \n> reasons for that.  But it would need to be figured out.\n\nHere is an updated patch for this. I simplified the logic a bit and \nalso handle the case where the read() reads less than a round number of \nblocks. I did some performance testing. The overhead of computing the \nchecksums versus a straight --copy without checksum adjustments appears \nto be around 5% wall clock time, which seems ok to me. I also looked \naround the documentation to see if there is anything to update, but \ndidn't find anything.\n\nI think if we can work out what to do on Windows, this could be a useful \nlittle feature for facilitating $subject.", "msg_date": "Thu, 22 Aug 2024 08:11:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:11 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 15.08.24 08:38, Peter Eisentraut wrote:\n> > On 08.08.24 19:42, Robert Haas wrote:\n> >>> I'm thinking pg_upgrade could have a mode where it adds the\n> >>> checksum during the upgrade as it copies the files (essentially a subset\n> >>> of pg_checksums). I think that would be useful for that middle tier of\n> >>> users who just want a good default experience.\n> >> That would be very nice.\n> >\n> > Here is a demo patch for that. It turned out to be quite simple.\n> >\n> > I wrote above about a separate mode for that (like\n> > --copy-and-make-adjustments), but it was just as easy to stick it into\n> > the existing --copy mode.\n> >\n> > It would be useful to check what the performance overhead of this is\n> > versus a copy that does not have to make adjustments. I expect it's\n> > very little.\n> >\n> > A drawback is that as written this does not work on Windows, because\n> > Windows uses a different code path in copyFile(). I don't know the\n> > reasons for that. But it would need to be figured out.\n>\n> Here is an updated patch for this. I simplified the logic a bit and\n> also handle the case where the read() reads less than a round number of\n> blocks. I did some performance testing. The overhead of computing the\n> checksums versus a straight --copy without checksum adjustments appears\n> to be around 5% wall clock time, which seems ok to me. 
I also looked\n> around the documentation to see if there is anything to update, but\n> didn't find anything.\n>\n> I think if we can work out what to do on Windows, this could be a useful\n> little feature for facilitating $subject.\n\nMy take:\n1. I wonder if we should or should not by default calculate/enable the\nchecksums when doing pg_upgrade --copy from cluster with\nchecksums=off. Maybe we should error on that like we are doing now.\nThere might be still people want to have them off, but they would use\nthe proposed-new-defaults-of-initdb with checksums on blindly (so this\nshould be opt-in via some switch like with let's say\n--copy-and-enable-checksums; so the user is in full control).\n2. WIN32's copyFile() could then stay as it is, and then that new\n--copy-and-enable-checksums on WIN32 would have to fallback to classic\nloop.\n\n-J.\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:10:15 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 08.08.24 19:19, Greg Sabino Mullane wrote:\n> Thank you for the feedback. Please find attached three separate patches. \n> One to add a new flag to initdb (--no-data-checksums), one to adjust the \n> tests to use this flag as needed, and the final to make the actual \n> switch of the default value (along with tests and docs).\n\nI think we can get started with the initdb --no-data-checksums option.\n\nThe 0001 patch is missing documentation and --help output for this \noption. Also, some of the tests for the option that are in patch 0003 \nmight be better in patch 0001.\n\nSeparately, this\n\n- may incur a noticeable performance penalty. If set, checksums\n+ may incur a small performance penalty. If set, checksums\n\nshould perhaps be committed separately. I don't think the patch 0003 \nreally changes the performance penalty. ;-)\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 15:17:17 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "Rebased and reworked patches attached:\n\nv2-0001-Add-new-initdb-argument-no-data-checksums-to-force-checksums-off.patch\n- Adds --no-data-checksums to initdb.c, adjusts --help and sgml docs, adds\nsimple tests\n\nv2-0002-Allow-tests-to-force-checksums-off-when-calling-init.patch\n- Adjusts the Perl tests to use the new flag as needed\n\nv2-0003-Change-initdb-to-default-to-using-data-checksums.patch\n- Flips the boolean to true, adjusts the tests to match it, tweaks docs\n\nv2-0004-Tweak-docs-to-reduce-possible-impact-of-data-checksums.patch\n- Changes \"noticeable\" penalty to \"small\" penalty\n\nCheers,\nGreg", "msg_date": "Fri, 23 Aug 2024 09:55:58 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Fri, Aug 23, 2024 at 03:17:17PM +0200, Peter Eisentraut wrote:\n> On 08.08.24 19:19, Greg Sabino Mullane wrote:\n> > Thank you for the feedback. 
Please find attached three separate patches.\n> > One to add a new flag to initdb (--no-data-checksums), one to adjust the\n> > tests to use this flag as needed, and the final to make the actual\n> > switch of the default value (along with tests and docs).\n> \n> I think we can get started with the initdb --no-data-checksums option.\n> \n> The 0001 patch is missing documentation and --help output for this option.\n> Also, some of the tests for the option that are in patch 0003 might be\n> better in patch 0001.\n> \n> Separately, this\n> \n> - may incur a noticeable performance penalty. If set, checksums\n> + may incur a small performance penalty. If set, checksums\n> \n> should perhaps be committed separately. I don't think the patch 0003 really\n> changes the performance penalty. ;-)\n\nI think \"might\" would be more precise than \"may\" above.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:42:07 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "In general, +1 for $SUBJECT.\n\n printf(_(\" -k, --data-checksums use data page checksums\\n\"));\n+ printf(_(\" --no-data-checksums do not use data page checksums\\n\"));\n\nShould we error if both --data-checksum and --no-data-checksums are\nspecified? IIUC with 0001, we'll use whichever is specified last.\n\nnitpick: these 4 patches are small enough that they could likely be\ncombined and committed together.\n\nI think it's fair to say we should make the pg_upgrade experience nicer\nonce the default changes, but IMHO that needn't block actually changing the\ndefault.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:46:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Mon, Aug 26, 2024 at 3:46 PM Nathan Bossart <[email protected]>\nwrote:\n\n> Should we error if both --data-checksum and --no-data-checksums are\n> specified? IIUC with 0001, we'll use whichever is specified last.\n>\n\nHmmm, that is a good question. We have never (to my recollection) flipped a\ndefault quite like this before. I'm inclined to leave it as \"last one\nwins\", as I can see automated systems appending their desired selection to\nthe end of the arg list, and expecting it to work.\n\nnitpick: these 4 patches are small enough that they could likely be\n> combined and committed together.\n>\n\nThis was split per request upthread, which I do agree with.\n\nI think it's fair to say we should make the pg_upgrade experience nicer\n> once the default changes, but IMHO that needn't block actually changing the\n> default.\n>\n\n+1\n\nCheers,\nGreg\n\nOn Mon, Aug 26, 2024 at 3:46 PM Nathan Bossart <[email protected]> wrote:Should we error if both --data-checksum and --no-data-checksums are\nspecified?  IIUC with 0001, we'll use whichever is specified last.Hmmm, that is a good question. We have never (to my recollection) flipped a default quite like this before. 
I'm inclined to leave it as \"last one wins\", as I can see automated systems appending their desired selection to the end of the arg list, and expecting it to work.nitpick: these 4 patches are small enough that they could likely be combined and committed together.This was split per request upthread, which I do agree with.I think it's fair to say we should make the pg_upgrade experience nicer once the default changes, but IMHO that needn't block actually changing the default.+1Cheers,Greg", "msg_date": "Tue, 27 Aug 2024 09:44:39 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On 27.08.24 15:44, Greg Sabino Mullane wrote:\n> On Mon, Aug 26, 2024 at 3:46 PM Nathan Bossart <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Should we error if both --data-checksum and --no-data-checksums are\n> specified?  IIUC with 0001, we'll use whichever is specified last.\n> \n> \n> Hmmm, that is a good question. We have never (to my recollection) \n> flipped a default quite like this before. I'm inclined to leave it as \n> \"last one wins\", as I can see automated systems appending their desired \n> selection to the end of the arg list, and expecting it to work.\n\nYes, last option wins is the normal expected behavior.\n\n\n\n", "msg_date": "Tue, 27 Aug 2024 17:16:51 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" }, { "msg_contents": "On Tue, Aug 27, 2024 at 05:16:51PM +0200, Peter Eisentraut wrote:\n> On 27.08.24 15:44, Greg Sabino Mullane wrote:\n>> On Mon, Aug 26, 2024 at 3:46 PM Nathan Bossart <[email protected]\n>> <mailto:[email protected]>> wrote:\n>> \n>> Should we error if both --data-checksum and --no-data-checksums are\n>> specified?  IIUC with 0001, we'll use whichever is specified last.\n>> \n>> \n>> Hmmm, that is a good question. We have never (to my recollection)\n>> flipped a default quite like this before. I'm inclined to leave it as\n>> \"last one wins\", as I can see automated systems appending their desired\n>> selection to the end of the arg list, and expecting it to work.\n> \n> Yes, last option wins is the normal expected behavior.\n\nWFM\n\n001_verify_heapam fails with this patch set. I think you may need to use\n--no-data-checksums in that test, too. Otherwise, it looks pretty good to\nme.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 27 Aug 2024 10:26:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enable data checksums by default" } ]
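The "last option wins" behavior agreed on in the thread above is easy to picture with a small stand-alone sketch. This is not initdb's actual option handling; the option table, the bare value 1 for the long-only option, and the assumed default of checksums being on are illustrative only:

#include <getopt.h>
#include <stdbool.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	bool		data_checksums = true;	/* assumed new default */
	int			c;
	static const struct option long_options[] = {
		{"data-checksums", no_argument, NULL, 'k'},
		{"no-data-checksums", no_argument, NULL, 1},
		{NULL, 0, NULL, 0}
	};

	while ((c = getopt_long(argc, argv, "k", long_options, NULL)) != -1)
	{
		switch (c)
		{
			case 'k':
				data_checksums = true;	/* a later option simply overwrites */
				break;
			case 1:
				data_checksums = false;
				break;
			default:
				fprintf(stderr, "usage: %s [-k|--data-checksums] [--no-data-checksums]\n",
						argv[0]);
				return 1;
		}
	}

	printf("data checksums: %s\n", data_checksums ? "on" : "off");
	return 0;
}

With this shape, an automated wrapper that appends its preferred flag to the end of the argument list gets the behavior it expects, which matches the rationale given above for not raising an error when both flags appear.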
[ { "msg_contents": "Introduce hash_search_with_hash_value() function\n\nThis new function iterates hash entries with given hash values. This function\nis designed to avoid full sequential hash search in the syscache invalidation\ncallbacks.\n\nDiscussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\nAuthor: Teodor Sigaev\nReviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\nReviewed-by: Andrei Lepikhov\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d0f020037e19c33c74d683eb7e0c7cc5725294b4\n\nModified Files\n--------------\nsrc/backend/utils/hash/dynahash.c | 38 ++++++++++++++++++++++++++++++++++++++\nsrc/include/utils/hsearch.h | 5 +++++\n2 files changed, 43 insertions(+)", "msg_date": "Wed, 07 Aug 2024 04:08:11 +0000", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "Hi\n\nst 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <[email protected]>\nnapsal:\n\n> Introduce hash_search_with_hash_value() function\n>\n> This new function iterates hash entries with given hash values. This\n> function\n> is designed to avoid full sequential hash search in the syscache\n> invalidation\n> callbacks.\n>\n> Discussion:\n> https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n> Author: Teodor Sigaev\n> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\n> Reviewed-by: Andrei Lepikhov\n>\n\nI tried to use hash_seq_init_with_hash_value in session variables patch,\nbut it doesn't work there.\n\n<-->if (!sessionvars)\n<--><-->return;\n\nelog(NOTICE, \"%u\", hashvalue);\n\n\n<-->hash_seq_init(&status, sessionvars);\n\n<-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n<-->{\n<--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n<--><-->{\n<--><--><-->elog(NOTICE, \"FOUND OLD\");\n<--><--><-->svar->is_valid = false;\n<--><-->}\n<-->}\n\n\n\n<-->/*\n<--> * If the hashvalue is not specified, we have to recheck all currently\n<--> * used session variables. Since we can't tell the exact session\nvariable\n<--> * from its hashvalue, we have to iterate over all items in the hash\nbucket.\n<--> */\n<-->if (hashvalue == 0)\n<--><-->hash_seq_init(&status, sessionvars);\n<-->else\n<--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n\n<-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n<-->{\n<--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n\nelog(NOTICE, \"found\");\n\n<--><-->svar->is_valid = false;\n<--><-->needs_validation = true;\n<-->}\n}\n\nOld methods found an entry, but new not.\n\nWhat am I doing wrong?\n\nRegards\n\nPavel\n\n\n\n> Branch\n> ------\n> master\n>\n> Details\n> -------\n>\n> https://git.postgresql.org/pg/commitdiff/d0f020037e19c33c74d683eb7e0c7cc5725294b4\n>\n> Modified Files\n> --------------\n> src/backend/utils/hash/dynahash.c | 38\n> ++++++++++++++++++++++++++++++++++++++\n> src/include/utils/hsearch.h | 5 +++++\n> 2 files changed, 43 insertions(+)\n>\n>\n\nHist 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <[email protected]> napsal:Introduce hash_search_with_hash_value() function\n\nThis new function iterates hash entries with given hash values.  
This function\nis designed to avoid full sequential hash search in the syscache invalidation\ncallbacks.\n\nDiscussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\nAuthor: Teodor Sigaev\nReviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\nReviewed-by: Andrei LepikhovI tried to use hash_seq_init_with_hash_value in session variables patch, but it doesn't work there.<-->if (!sessionvars)<--><-->return;elog(NOTICE, \"%u\", hashvalue);<-->hash_seq_init(&status, sessionvars);<-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)<-->{<--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)<--><-->{<--><--><-->elog(NOTICE, \"FOUND OLD\");<--><--><-->svar->is_valid = false;<--><-->}<-->}<-->/*<--> * If the hashvalue is not specified, we have to recheck all currently<--> * used session variables.  Since we can't tell the exact session variable<--> * from its hashvalue, we have to iterate over all items in the hash bucket.<--> */<-->if (hashvalue == 0)<--><-->hash_seq_init(&status, sessionvars);<-->else<--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);<-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)<-->{<--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);elog(NOTICE, \"found\");<--><-->svar->is_valid = false;<--><-->needs_validation = true;<-->}}Old methods found an entry, but new not. What am I doing wrong?RegardsPavel \n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d0f020037e19c33c74d683eb7e0c7cc5725294b4\n\nModified Files\n--------------\nsrc/backend/utils/hash/dynahash.c | 38 ++++++++++++++++++++++++++++++++++++++\nsrc/include/utils/hsearch.h       |  5 +++++\n2 files changed, 43 insertions(+)", "msg_date": "Wed, 7 Aug 2024 08:34:21 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "Hi, Pavel!\n\nOn Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]> wrote:\n> st 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <[email protected]> napsal:\n>>\n>> Introduce hash_search_with_hash_value() function\n>>\n>> This new function iterates hash entries with given hash values. This function\n>> is designed to avoid full sequential hash search in the syscache invalidation\n>> callbacks.\n>>\n>> Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n>> Author: Teodor Sigaev\n>> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\n>> Reviewed-by: Andrei Lepikhov\n>\n>\n> I tried to use hash_seq_init_with_hash_value in session variables patch, but it doesn't work there.\n>\n> <-->if (!sessionvars)\n> <--><-->return;\n>\n> elog(NOTICE, \"%u\", hashvalue);\n>\n>\n> <-->hash_seq_init(&status, sessionvars);\n>\n> <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> <-->{\n> <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n> <--><-->{\n> <--><--><-->elog(NOTICE, \"FOUND OLD\");\n> <--><--><-->svar->is_valid = false;\n> <--><-->}\n> <-->}\n>\n>\n>\n> <-->/*\n> <--> * If the hashvalue is not specified, we have to recheck all currently\n> <--> * used session variables. 
Since we can't tell the exact session variable\n> <--> * from its hashvalue, we have to iterate over all items in the hash bucket.\n> <--> */\n> <-->if (hashvalue == 0)\n> <--><-->hash_seq_init(&status, sessionvars);\n> <-->else\n> <--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n>\n> <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> <-->{\n> <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n>\n> elog(NOTICE, \"found\");\n>\n> <--><-->svar->is_valid = false;\n> <--><-->needs_validation = true;\n> <-->}\n> }\n>\n> Old methods found an entry, but new not.\n>\n> What am I doing wrong?\n\nI'm trying to check this. Applying this patch [1], but got conflicts.\nCould you please, rebase the patch, so I can recheck the issue?\n\nLinks.\n1. https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 7 Aug 2024 11:52:13 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "st 7. 8. 2024 v 10:52 odesílatel Alexander Korotkov <[email protected]>\nnapsal:\n\n> Hi, Pavel!\n>\n> On Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]>\n> wrote:\n> > st 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <\n> [email protected]> napsal:\n> >>\n> >> Introduce hash_search_with_hash_value() function\n> >>\n> >> This new function iterates hash entries with given hash values. This\n> function\n> >> is designed to avoid full sequential hash search in the syscache\n> invalidation\n> >> callbacks.\n> >>\n> >> Discussion:\n> https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n> >> Author: Teodor Sigaev\n> >> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman\n> Zharkov\n> >> Reviewed-by: Andrei Lepikhov\n> >\n> >\n> > I tried to use hash_seq_init_with_hash_value in session variables patch,\n> but it doesn't work there.\n> >\n> > <-->if (!sessionvars)\n> > <--><-->return;\n> >\n> > elog(NOTICE, \"%u\", hashvalue);\n> >\n> >\n> > <-->hash_seq_init(&status, sessionvars);\n> >\n> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> > <-->{\n> > <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n> > <--><-->{\n> > <--><--><-->elog(NOTICE, \"FOUND OLD\");\n> > <--><--><-->svar->is_valid = false;\n> > <--><-->}\n> > <-->}\n> >\n> >\n> >\n> > <-->/*\n> > <--> * If the hashvalue is not specified, we have to recheck all\n> currently\n> > <--> * used session variables. Since we can't tell the exact session\n> variable\n> > <--> * from its hashvalue, we have to iterate over all items in the hash\n> bucket.\n> > <--> */\n> > <-->if (hashvalue == 0)\n> > <--><-->hash_seq_init(&status, sessionvars);\n> > <-->else\n> > <--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n> >\n> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> > <-->{\n> > <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n> >\n> > elog(NOTICE, \"found\");\n> >\n> > <--><-->svar->is_valid = false;\n> > <--><-->needs_validation = true;\n> > <-->}\n> > }\n> >\n> > Old methods found an entry, but new not.\n> >\n> > What am I doing wrong?\n>\n> I'm trying to check this. 
Applying this patch [1], but got conflicts.\n> Could you please, rebase the patch, so I can recheck the issue?\n>\n>\nI sent rebased patchset\n\nMessage-ID:\nCAFj8pRAskimJmB9Q8pHDa8YoLphVoZMH1xPeGBK8Eze=u+_hBQ@mail.gmail.com\n<https://www.postgresql.org/message-id/CAFj8pRAskimJmB9Q8pHDa8YoLphVoZMH1xPeGBK8Eze%3Du%2B_hBQ%40mail.gmail.com>\n\nRegards\n\nPavel\n\n\n> Links.\n> 1.\n> https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com\n>\n> ------\n> Regards,\n> Alexander Korotkov\n> Supabase\n>\n\nst 7. 8. 2024 v 10:52 odesílatel Alexander Korotkov <[email protected]> napsal:Hi, Pavel!\n\nOn Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]> wrote:\n> st 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <[email protected]> napsal:\n>>\n>> Introduce hash_search_with_hash_value() function\n>>\n>> This new function iterates hash entries with given hash values.  This function\n>> is designed to avoid full sequential hash search in the syscache invalidation\n>> callbacks.\n>>\n>> Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n>> Author: Teodor Sigaev\n>> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\n>> Reviewed-by: Andrei Lepikhov\n>\n>\n> I tried to use hash_seq_init_with_hash_value in session variables patch, but it doesn't work there.\n>\n> <-->if (!sessionvars)\n> <--><-->return;\n>\n> elog(NOTICE, \"%u\", hashvalue);\n>\n>\n> <-->hash_seq_init(&status, sessionvars);\n>\n> <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> <-->{\n> <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n> <--><-->{\n> <--><--><-->elog(NOTICE, \"FOUND OLD\");\n> <--><--><-->svar->is_valid = false;\n> <--><-->}\n> <-->}\n>\n>\n>\n> <-->/*\n> <--> * If the hashvalue is not specified, we have to recheck all currently\n> <--> * used session variables.  Since we can't tell the exact session variable\n> <--> * from its hashvalue, we have to iterate over all items in the hash bucket.\n> <--> */\n> <-->if (hashvalue == 0)\n> <--><-->hash_seq_init(&status, sessionvars);\n> <-->else\n> <--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n>\n> <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> <-->{\n> <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n>\n> elog(NOTICE, \"found\");\n>\n> <--><-->svar->is_valid = false;\n> <--><-->needs_validation = true;\n> <-->}\n> }\n>\n> Old methods found an entry, but new not.\n>\n> What am I doing wrong?\n\nI'm trying to check this.  Applying this patch [1], but got conflicts.\nCould you please, rebase the patch, so I can recheck the issue?\nI sent rebased patchset Message-ID:\nCAFj8pRAskimJmB9Q8pHDa8YoLphVoZMH1xPeGBK8Eze=u+_hBQ@mail.gmail.comRegardsPavel \nLinks.\n1. https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 7 Aug 2024 12:02:52 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:03 PM Pavel Stehule <[email protected]> wrote:\n>\n> st 7. 8. 2024 v 10:52 odesílatel Alexander Korotkov <[email protected]> napsal:\n>>\n>> Hi, Pavel!\n>>\n>> On Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]> wrote:\n>> > st 7. 8. 
2024 v 6:08 odesílatel Alexander Korotkov <[email protected]> napsal:\n>> >>\n>> >> Introduce hash_search_with_hash_value() function\n>> >>\n>> >> This new function iterates hash entries with given hash values. This function\n>> >> is designed to avoid full sequential hash search in the syscache invalidation\n>> >> callbacks.\n>> >>\n>> >> Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n>> >> Author: Teodor Sigaev\n>> >> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\n>> >> Reviewed-by: Andrei Lepikhov\n>> >\n>> >\n>> > I tried to use hash_seq_init_with_hash_value in session variables patch, but it doesn't work there.\n>> >\n>> > <-->if (!sessionvars)\n>> > <--><-->return;\n>> >\n>> > elog(NOTICE, \"%u\", hashvalue);\n>> >\n>> >\n>> > <-->hash_seq_init(&status, sessionvars);\n>> >\n>> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n>> > <-->{\n>> > <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n>> > <--><-->{\n>> > <--><--><-->elog(NOTICE, \"FOUND OLD\");\n>> > <--><--><-->svar->is_valid = false;\n>> > <--><-->}\n>> > <-->}\n>> >\n>> >\n>> >\n>> > <-->/*\n>> > <--> * If the hashvalue is not specified, we have to recheck all currently\n>> > <--> * used session variables. Since we can't tell the exact session variable\n>> > <--> * from its hashvalue, we have to iterate over all items in the hash bucket.\n>> > <--> */\n>> > <-->if (hashvalue == 0)\n>> > <--><-->hash_seq_init(&status, sessionvars);\n>> > <-->else\n>> > <--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n>> >\n>> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n>> > <-->{\n>> > <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n>> >\n>> > elog(NOTICE, \"found\");\n>> >\n>> > <--><-->svar->is_valid = false;\n>> > <--><-->needs_validation = true;\n>> > <-->}\n>> > }\n>> >\n>> > Old methods found an entry, but new not.\n>> >\n>> > What am I doing wrong?\n>>\n>> I'm trying to check this. Applying this patch [1], but got conflicts.\n>> Could you please, rebase the patch, so I can recheck the issue?\n>\n> I sent rebased patchset\n\n\nThank you.\nPlease see 40064a8ee1 takes special efforts to match HTAB hash\nfunction to syscache hash function. By default, their hash values\ndon't match and you can't simply use syscache hash value to search\nHTAB. This probably should be mentioned in the header comment of\nhash_seq_init_with_hash_value().\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:22:13 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "st 7. 8. 2024 v 12:22 odesílatel Alexander Korotkov <[email protected]>\nnapsal:\n\n> On Wed, Aug 7, 2024 at 1:03 PM Pavel Stehule <[email protected]>\n> wrote:\n> >\n> > st 7. 8. 2024 v 10:52 odesílatel Alexander Korotkov <\n> [email protected]> napsal:\n> >>\n> >> Hi, Pavel!\n> >>\n> >> On Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]>\n> wrote:\n> >> > st 7. 8. 
2024 v 6:08 odesílatel Alexander Korotkov <\n> [email protected]> napsal:\n> >> >>\n> >> >> Introduce hash_search_with_hash_value() function\n> >> >>\n> >> >> This new function iterates hash entries with given hash values.\n> This function\n> >> >> is designed to avoid full sequential hash search in the syscache\n> invalidation\n> >> >> callbacks.\n> >> >>\n> >> >> Discussion:\n> https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n> >> >> Author: Teodor Sigaev\n> >> >> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman\n> Zharkov\n> >> >> Reviewed-by: Andrei Lepikhov\n> >> >\n> >> >\n> >> > I tried to use hash_seq_init_with_hash_value in session variables\n> patch, but it doesn't work there.\n> >> >\n> >> > <-->if (!sessionvars)\n> >> > <--><-->return;\n> >> >\n> >> > elog(NOTICE, \"%u\", hashvalue);\n> >> >\n> >> >\n> >> > <-->hash_seq_init(&status, sessionvars);\n> >> >\n> >> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> >> > <-->{\n> >> > <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n> >> > <--><-->{\n> >> > <--><--><-->elog(NOTICE, \"FOUND OLD\");\n> >> > <--><--><-->svar->is_valid = false;\n> >> > <--><-->}\n> >> > <-->}\n> >> >\n> >> >\n> >> >\n> >> > <-->/*\n> >> > <--> * If the hashvalue is not specified, we have to recheck all\n> currently\n> >> > <--> * used session variables. Since we can't tell the exact session\n> variable\n> >> > <--> * from its hashvalue, we have to iterate over all items in the\n> hash bucket.\n> >> > <--> */\n> >> > <-->if (hashvalue == 0)\n> >> > <--><-->hash_seq_init(&status, sessionvars);\n> >> > <-->else\n> >> > <--><-->hash_seq_init_with_hash_value(&status, sessionvars,\n> hashvalue);\n> >> >\n> >> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n> >> > <-->{\n> >> > <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n> >> >\n> >> > elog(NOTICE, \"found\");\n> >> >\n> >> > <--><-->svar->is_valid = false;\n> >> > <--><-->needs_validation = true;\n> >> > <-->}\n> >> > }\n> >> >\n> >> > Old methods found an entry, but new not.\n> >> >\n> >> > What am I doing wrong?\n> >>\n> >> I'm trying to check this. Applying this patch [1], but got conflicts.\n> >> Could you please, rebase the patch, so I can recheck the issue?\n> >\n> > I sent rebased patchset\n>\n>\n> Thank you.\n> Please see 40064a8ee1 takes special efforts to match HTAB hash\n> function to syscache hash function. By default, their hash values\n> don't match and you can't simply use syscache hash value to search\n> HTAB. This probably should be mentioned in the header comment of\n> hash_seq_init_with_hash_value().\n>\n\nyes, enhancing doc should be great + maybe assert\n\nRegards\n\nPavel\n\n>\n> ------\n> Regards,\n> Alexander Korotkov\n> Supabase\n>\n\nst 7. 8. 2024 v 12:22 odesílatel Alexander Korotkov <[email protected]> napsal:On Wed, Aug 7, 2024 at 1:03 PM Pavel Stehule <[email protected]> wrote:\n>\n> st 7. 8. 2024 v 10:52 odesílatel Alexander Korotkov <[email protected]> napsal:\n>>\n>> Hi, Pavel!\n>>\n>> On Wed, Aug 7, 2024 at 9:35 AM Pavel Stehule <[email protected]> wrote:\n>> > st 7. 8. 2024 v 6:08 odesílatel Alexander Korotkov <[email protected]> napsal:\n>> >>\n>> >> Introduce hash_search_with_hash_value() function\n>> >>\n>> >> This new function iterates hash entries with given hash values.  
This function\n>> >> is designed to avoid full sequential hash search in the syscache invalidation\n>> >> callbacks.\n>> >>\n>> >> Discussion: https://postgr.es/m/5812a6e5-68ae-4d84-9d85-b443176966a1%40sigaev.ru\n>> >> Author: Teodor Sigaev\n>> >> Reviewed-by: Aleksander Alekseev, Tom Lane, Michael Paquier, Roman Zharkov\n>> >> Reviewed-by: Andrei Lepikhov\n>> >\n>> >\n>> > I tried to use hash_seq_init_with_hash_value in session variables patch, but it doesn't work there.\n>> >\n>> > <-->if (!sessionvars)\n>> > <--><-->return;\n>> >\n>> > elog(NOTICE, \"%u\", hashvalue);\n>> >\n>> >\n>> > <-->hash_seq_init(&status, sessionvars);\n>> >\n>> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n>> > <-->{\n>> > <--><-->if (hashvalue == 0 || svar->hashvalue == hashvalue)\n>> > <--><-->{\n>> > <--><--><-->elog(NOTICE, \"FOUND OLD\");\n>> > <--><--><-->svar->is_valid = false;\n>> > <--><-->}\n>> > <-->}\n>> >\n>> >\n>> >\n>> > <-->/*\n>> > <--> * If the hashvalue is not specified, we have to recheck all currently\n>> > <--> * used session variables.  Since we can't tell the exact session variable\n>> > <--> * from its hashvalue, we have to iterate over all items in the hash bucket.\n>> > <--> */\n>> > <-->if (hashvalue == 0)\n>> > <--><-->hash_seq_init(&status, sessionvars);\n>> > <-->else\n>> > <--><-->hash_seq_init_with_hash_value(&status, sessionvars, hashvalue);\n>> >\n>> > <-->while ((svar = (SVariable) hash_seq_search(&status)) != NULL)\n>> > <-->{\n>> > <--><-->Assert(hashvalue == 0 || svar->hashvalue == hashvalue);\n>> >\n>> > elog(NOTICE, \"found\");\n>> >\n>> > <--><-->svar->is_valid = false;\n>> > <--><-->needs_validation = true;\n>> > <-->}\n>> > }\n>> >\n>> > Old methods found an entry, but new not.\n>> >\n>> > What am I doing wrong?\n>>\n>> I'm trying to check this.  Applying this patch [1], but got conflicts.\n>> Could you please, rebase the patch, so I can recheck the issue?\n>\n> I sent rebased patchset\n\n\nThank you.\nPlease see 40064a8ee1 takes special efforts to match HTAB hash\nfunction to syscache hash function.  By default, their hash values\ndon't match and you can't simply use syscache hash value to search\nHTAB.  This probably should be mentioned in the header comment of\nhash_seq_init_with_hash_value().yes, enhancing doc should be great + maybe assertRegardsPavel\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 7 Aug 2024 12:34:11 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "You may have already realized this, but the name of the function the\npatch adds is not the same as the name that appears in the commit\nmessage.\n\n...Robert\n\n\n", "msg_date": "Wed, 7 Aug 2024 08:24:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 3:24 PM Robert Haas <[email protected]> wrote:\n> You may have already realized this, but the name of the function the\n> patch adds is not the same as the name that appears in the commit\n> message.\n\n:sigh:\nI didn't realize that before your message. 
That would be another item\nfor my checklist: ensure entities referenced from commit message and\ncomments didn't change their names.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 7 Aug 2024 18:55:03 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 11:55 AM Alexander Korotkov <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 3:24 PM Robert Haas <[email protected]> wrote:\n> > You may have already realized this, but the name of the function the\n> > patch adds is not the same as the name that appears in the commit\n> > message.\n>\n> :sigh:\n> I didn't realize that before your message. That would be another item\n> for my checklist: ensure entities referenced from commit message and\n> comments didn't change their names.\n\nI really wish there was some way to fix commit messages. I had a typo\nin mine today, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 12:30:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On 2024-Aug-07, Robert Haas wrote:\n\n> I really wish there was some way to fix commit messages. I had a typo\n> in mine today, too.\n\nWe could use git notes. The UI is a bit inconvenient (they have to be\npushed and pulled separately from commits), but they seem useful enough.\n\nhttps://initialcommit.com/blog/git-notes\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:08:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 07, 2024 at 01:08:35PM -0400, Alvaro Herrera wrote:\n> On 2024-Aug-07, Robert Haas wrote:\n>> I really wish there was some way to fix commit messages. I had a typo\n>> in mine today, too.\n> \n> We could use git notes. The UI is a bit inconvenient (they have to be\n> pushed and pulled separately from commits), but they seem useful enough.\n\nYeah, I spend a lot of time on commit messages because they're pretty much\nwritten in stone once pushed. I'd definitely use git notes to add errata,\nfollow-up commits that fixed/reverted things, etc.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 7 Aug 2024 12:15:19 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:15 PM Nathan Bossart <[email protected]> wrote:\n> > We could use git notes. The UI is a bit inconvenient (they have to be\n> > pushed and pulled separately from commits), but they seem useful enough.\n>\n> Yeah, I spend a lot of time on commit messages because they're pretty much\n> written in stone once pushed. 
I'd definitely use git notes to add errata,\n> follow-up commits that fixed/reverted things, etc.\n\nI think this could be a good idea, although it wouldn't really fix the\nproblem, because in the case of both this commit and the one I\nmentioned earlier today, all you could do with the note is point out\nthe earlier mistake. You couldn't actually fix it.\n\nAlso, for the notes to be useful, we'd probably need some conventions\nabout how we, as a project, want to use them. If everyone does\nsomething different, the result isn't likely to be all that great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:01:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 7:30 PM Robert Haas <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 11:55 AM Alexander Korotkov <[email protected]> wrote:\n> > On Wed, Aug 7, 2024 at 3:24 PM Robert Haas <[email protected]> wrote:\n> > > You may have already realized this, but the name of the function the\n> > > patch adds is not the same as the name that appears in the commit\n> > > message.\n> >\n> > :sigh:\n> > I didn't realize that before your message. That would be another item\n> > for my checklist: ensure entities referenced from commit message and\n> > comments didn't change their names.\n>\n> I really wish there was some way to fix commit messages. I had a typo\n> in mine today, too.\n\n+1,\nOne of the scariest things that happened to me was forgetting to\nmention reviewers or even authors. People don't get credit for their\nwork, and you can't fix that.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 7 Aug 2024 23:08:00 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 9:01 PM Robert Haas <[email protected]> wrote:\n> On Wed, Aug 7, 2024 at 1:15 PM Nathan Bossart <[email protected]> wrote:\n> > > We could use git notes. The UI is a bit inconvenient (they have to be\n> > > pushed and pulled separately from commits), but they seem useful enough.\n> >\n> > Yeah, I spend a lot of time on commit messages because they're pretty much\n> > written in stone once pushed. I'd definitely use git notes to add errata,\n> > follow-up commits that fixed/reverted things, etc.\n>\n> I think this could be a good idea, although it wouldn't really fix the\n> problem, because in the case of both this commit and the one I\n> mentioned earlier today, all you could do with the note is point out\n> the earlier mistake. You couldn't actually fix it.\n\nCorrect, but something looks better than nothing.\n\n> Also, for the notes to be useful, we'd probably need some conventions\n> about how we, as a project, want to use them. If everyone does\n> something different, the result isn't likely to be all that great.\n\n+1\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 7 Aug 2024 23:08:45 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:34 PM Pavel Stehule <[email protected]> wrote:\n> st 7. 8. 
2024 v 12:22 odesílatel Alexander Korotkov <[email protected]> napsal:\n>> Thank you.\n>> Please see 40064a8ee1 takes special efforts to match HTAB hash\n>> function to syscache hash function. By default, their hash values\n>> don't match and you can't simply use syscache hash value to search\n>> HTAB. This probably should be mentioned in the header comment of\n>> hash_seq_init_with_hash_value().\n>\n>\n> yes, enhancing doc should be great + maybe assert\n\nPlease check the patch, which adds a caveat to the function header\ncomment. I don't particularly like an assert here, because there\ncould be use-cases besides syscache callbacks, which could legally use\ndefault hash function.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Wed, 7 Aug 2024 23:25:12 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "st 7. 8. 2024 v 22:25 odesílatel Alexander Korotkov <[email protected]>\nnapsal:\n\n> On Wed, Aug 7, 2024 at 1:34 PM Pavel Stehule <[email protected]>\n> wrote:\n> > st 7. 8. 2024 v 12:22 odesílatel Alexander Korotkov <\n> [email protected]> napsal:\n> >> Thank you.\n> >> Please see 40064a8ee1 takes special efforts to match HTAB hash\n> >> function to syscache hash function. By default, their hash values\n> >> don't match and you can't simply use syscache hash value to search\n> >> HTAB. This probably should be mentioned in the header comment of\n> >> hash_seq_init_with_hash_value().\n> >\n> >\n> > yes, enhancing doc should be great + maybe assert\n>\n> Please check the patch, which adds a caveat to the function header\n> comment. I don't particularly like an assert here, because there\n> could be use-cases besides syscache callbacks, which could legally use\n> default hash function.\n>\n\nlooks well\n\nRegards\n\nPavel\n\n>\n> ------\n> Regards,\n> Alexander Korotkov\n> Supabase\n>\n\nst 7. 8. 2024 v 22:25 odesílatel Alexander Korotkov <[email protected]> napsal:On Wed, Aug 7, 2024 at 1:34 PM Pavel Stehule <[email protected]> wrote:\n> st 7. 8. 2024 v 12:22 odesílatel Alexander Korotkov <[email protected]> napsal:\n>> Thank you.\n>> Please see 40064a8ee1 takes special efforts to match HTAB hash\n>> function to syscache hash function.  By default, their hash values\n>> don't match and you can't simply use syscache hash value to search\n>> HTAB.  This probably should be mentioned in the header comment of\n>> hash_seq_init_with_hash_value().\n>\n>\n> yes, enhancing doc should be great + maybe assert\n\nPlease check the patch, which adds a caveat to the function header\ncomment.  I don't particularly like an assert here, because there\ncould be use-cases besides syscache callbacks, which could legally use\ndefault hash function.looks wellRegardsPavel \n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Thu, 8 Aug 2024 06:44:31 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Thu, Aug 8, 2024 at 7:45 AM Pavel Stehule <[email protected]> wrote:\n> st 7. 8. 2024 v 22:25 odesílatel Alexander Korotkov <[email protected]> napsal:\n>>\n>> On Wed, Aug 7, 2024 at 1:34 PM Pavel Stehule <[email protected]> wrote:\n>> > st 7. 8. 
2024 v 12:22 odesílatel Alexander Korotkov <[email protected]> napsal:\n>> >> Thank you.\n>> >> Please see 40064a8ee1 takes special efforts to match HTAB hash\n>> >> function to syscache hash function. By default, their hash values\n>> >> don't match and you can't simply use syscache hash value to search\n>> >> HTAB. This probably should be mentioned in the header comment of\n>> >> hash_seq_init_with_hash_value().\n>> >\n>> >\n>> > yes, enhancing doc should be great + maybe assert\n>>\n>> Please check the patch, which adds a caveat to the function header\n>> comment. I don't particularly like an assert here, because there\n>> could be use-cases besides syscache callbacks, which could legally use\n>> default hash function.\n>\n>\n> looks well\n\nThank you, pushed.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Thu, 8 Aug 2024 11:51:17 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Wed, Aug 07, 2024 at 02:01:30PM -0400, Robert Haas wrote:\n> Also, for the notes to be useful, we'd probably need some conventions\n> about how we, as a project, want to use them. If everyone does\n> something different, the result isn't likely to be all that great.\n\nWhat did you have in mind? Would it be sufficient to propose a template\nthat, once ratified, would be available in the committing checklist [0]?\n\n[0] https://wiki.postgresql.org/wiki/Committing_checklist\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 15 Aug 2024 16:01:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" }, { "msg_contents": "On Thu, Aug 15, 2024 at 5:02 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, Aug 07, 2024 at 02:01:30PM -0400, Robert Haas wrote:\n> > Also, for the notes to be useful, we'd probably need some conventions\n> > about how we, as a project, want to use them. If everyone does\n> > something different, the result isn't likely to be all that great.\n>\n> What did you have in mind? Would it be sufficient to propose a template\n> that, once ratified, would be available in the committing checklist [0]?\n>\n> [0] https://wiki.postgresql.org/wiki/Committing_checklist\n\nYes, that would suffice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:49:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Introduce hash_search_with_hash_value() function" } ]
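To make Alexander's point about matching hash functions concrete, here is a minimal sketch of a backend cache keyed by a pg_type OID. The entry layout, the names, and the choice of the TYPEOID syscache are assumptions for illustration, not code taken from any patch in this thread:

#include "postgres.h"

#include "utils/hsearch.h"
#include "utils/inval.h"
#include "utils/syscache.h"

typedef struct MyCacheEntry
{
	Oid			typid;			/* hash key */
	bool		is_valid;
} MyCacheEntry;

static HTAB *my_cache = NULL;

/*
 * HTAB hash function that deliberately returns the same value the TYPEOID
 * syscache invalidation callback will be handed for this key.
 */
static uint32
my_cache_syshash(const void *key, Size keysize)
{
	Oid			typid = *(const Oid *) key;

	Assert(keysize == sizeof(Oid));
	return GetSysCacheHashValue1(TYPEOID, ObjectIdGetDatum(typid));
}

/* Invalidation callback: scan only the matching bucket when possible. */
static void
my_cache_inval_cb(Datum arg, int cacheid, uint32 hashvalue)
{
	HASH_SEQ_STATUS status;
	MyCacheEntry *entry;

	if (my_cache == NULL)
		return;

	if (hashvalue == 0)
		hash_seq_init(&status, my_cache);
	else
		hash_seq_init_with_hash_value(&status, my_cache, hashvalue);

	while ((entry = (MyCacheEntry *) hash_seq_search(&status)) != NULL)
		entry->is_valid = false;
}

/* Called once, e.g. from _PG_init() or on first use. */
static void
my_cache_init(void)
{
	HASHCTL		ctl;

	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(MyCacheEntry);
	ctl.hash = my_cache_syshash;	/* without this, dynahash's default hash
									 * won't match syscache hash values */
	my_cache = hash_create("example syscache-keyed cache", 64, &ctl,
						   HASH_ELEM | HASH_FUNCTION);

	CacheRegisterSyscacheCallback(TYPEOID, my_cache_inval_cb, (Datum) 0);
}

The ctl.hash assignment is the part Pavel's hash table was missing: with dynahash's default hash function the stored hash values are unrelated to the syscache hash values, so hash_seq_init_with_hash_value() scans the wrong bucket and finds nothing, even though a full scan that compares a separately stored hashvalue field still works.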
[ { "msg_contents": "I think we could remove the TRACE_SORT macro.\n\nThe TRACE_SORT macro has guarded the availability of the trace_sort GUC\nsetting. But it has been enabled by default ever since it was \nintroduced in PostgreSQL 8.1, and there have been no reports that \nsomeone wanted to disable it. So I think we could just remove the macro \nto simplify things.", "msg_date": "Wed, 7 Aug 2024 08:56:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Remove TRACE_SORT macro?" }, { "msg_contents": "On 07/08/2024 09:56, Peter Eisentraut wrote:\n> I think we could remove the TRACE_SORT macro.\n> \n> The TRACE_SORT macro has guarded the availability of the trace_sort GUC\n> setting.  But it has been enabled by default ever since it was \n> introduced in PostgreSQL 8.1, and there have been no reports that \n> someone wanted to disable it.  So I think we could just remove the macro \n> to simplify things.\n\n+1, I don't see why anyone would want to build without TRACE_SORT.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:24:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove TRACE_SORT macro?" }, { "msg_contents": "On 07.08.24 09:24, Heikki Linnakangas wrote:\n> On 07/08/2024 09:56, Peter Eisentraut wrote:\n>> I think we could remove the TRACE_SORT macro.\n>>\n>> The TRACE_SORT macro has guarded the availability of the trace_sort GUC\n>> setting.  But it has been enabled by default ever since it was \n>> introduced in PostgreSQL 8.1, and there have been no reports that \n>> someone wanted to disable it.  So I think we could just remove the \n>> macro to simplify things.\n> \n> +1, I don't see why anyone would want to build without TRACE_SORT.\n\ncommitted\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 08:14:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove TRACE_SORT macro?" } ]
[ { "msg_contents": "Many resource managers set up a temporary memory context which is reset \nafter replaying the record. It seems a bit silly for each rmgr to do \nthat on their own, so I propose that we do it in a centralized fashion. \nThe attached patch creates one new temporary context and switches to it \nfor each rm_redo() call.\n\nI was afraid of the overhead of the MemoryContextReset between each WAL \nrecord since this is a very hot codepath, but it doesn't seem to be \nnoticeable. I used the attached scripts to benchmark it. \nredobench-setup.sh sets up a base backup and about 5 GB of WAL. The WAL \nconsists of just tiny logical decoding messages, no real page \nmodifications. The idea is that replaying that WAL should make any \nper-record overhead stand out as much as possible, since there's no real \nwork to do. Use redobench.sh to perform the replay. I am not seeing any \nmeasurable difference this patch, so I think we're good. But if we \nneeded to optimize, we could e.g. have an inlined fastpath version of \nMemoryContextReset for the common case that the context is empty, or \nonly reset it every 100 records or something.\n\nThis leaves no built-in rmgrs with any rm_startup or rm_clenaup hooks. \nExtensions might still use them, and they seem like they might be \nuseful, so I kept them.\n\nThere was no natural place to document this, so I added a brief \nexplanation of rm_redo in the RmgrData comment, and then tacked the \ninformation about the memory context there too. I also added a note in \n\"Custom WAL Resource Managers\" section of the docs to point out that \nthis changed in v18.\n\n(Why am I doing this now? I was browsing through all the global \nvariables for the multithreading work, and these \"opCtx\"s caught my eye. \nThis is in no way critical for multithreading though.)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 7 Aug 2024 14:24:39 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Call rm_redo in a temporary memory context" } ]
[ { "msg_contents": "It's bothered me for a long time that some of the shmem initialization \nfunctions have non-standard names. Most of them are called \nFoobarShmemSize() and FoobarShmemInit(), but there are a few exceptions:\n\nInitBufferPool\nInitLocks\nInitPredicateLocks\nCreateSharedProcArray\nCreateSharedBackendStatus\nCreateSharedInvalidationState\n\nI always have trouble remembering what exactly these functions do and \nwhen get called. But they are the same as all the FoobarShmemInit() \nfunctions, just named differently.\n\nThe attached patches rename them to follow the usual naming convention. \nThe InitLocks() function is refactored a bit more, splitting the \nper-backend initialization to a separate InitLockManagerAccess() \nfunction. That's why it's in a separate commit.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 7 Aug 2024 15:08:42 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Little cleanup of ShmemInit function names" }, { "msg_contents": "On 8/7/24 2:08 PM, Heikki Linnakangas wrote:\n> The attached patches rename them to follow the usual naming convention. \n> The InitLocks() function is refactored a bit more, splitting the \n> per-backend initialization to a separate InitLockManagerAccess() \n> function. That's why it's in a separate commit.\n\nI like the idea behind the patches. And they still apply and build.\n\nThe first patch is clean and well commented. I just have two minor nitpicks.\n\nSmall typo with the extra \"which\" which makes the sentence not flow \ncorrectly\n\n\"This is called from CreateSharedMemoryAndSemaphores(), which see for \nmore comments.\"\n\nOn the topic of minor language issues I think the comma below is redundant.\n\n\"In the normal postmaster case, the shared hash tables are created here.\"\n\nThe second patch is a simple renaming which reduces mental load by \nmaking the naming more consistent so I like it. Also since these \nfunctions are not really useful for any extension authors I do not see \nany harm in renaming them.\n\nAfter cleaning up the language of that comment I think these patches can \nbe committed.\n\nAndreas\n\n\n", "msg_date": "Wed, 28 Aug 2024 17:26:38 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "On 28/08/2024 18:26, Andreas Karlsson wrote:\n> On 8/7/24 2:08 PM, Heikki Linnakangas wrote:\n>> The attached patches rename them to follow the usual naming convention.\n>> The InitLocks() function is refactored a bit more, splitting the\n>> per-backend initialization to a separate InitLockManagerAccess()\n>> function. That's why it's in a separate commit.\n> \n> I like the idea behind the patches. And they still apply and build.\n> \n> The first patch is clean and well commented. I just have two minor nitpicks.\n> \n> Small typo with the extra \"which\" which makes the sentence not flow\n> correctly\n> \n> \"This is called from CreateSharedMemoryAndSemaphores(), which see for\n> more comments.\"\n\nHmm, I don't see the issue. It's an uncommon sentence structure, but it \nwas there before this patch, and it's correct AFAICS. 
If you grep for \n\"which see \", you'll find some more examples of that.\n\n> On the topic of minor language issues I think the comma below is redundant.\n> \n> \"In the normal postmaster case, the shared hash tables are created here.\"\n\nOn second thoughts, I rearranged the sentences in the paragraph a \nlittle, see attached.\n\n> The second patch is a simple renaming which reduces mental load by\n> making the naming more consistent so I like it. Also since these\n> functions are not really useful for any extension authors I do not see\n> any harm in renaming them.\n> \n> After cleaning up the language of that comment I think these patches can\n> be committed.\n\nThanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 28 Aug 2024 20:26:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "On 8/28/24 7:26 PM, Heikki Linnakangas wrote:\n> Hmm, I don't see the issue. It's an uncommon sentence structure, but it \n> was there before this patch, and it's correct AFAICS. If you grep for \n> \"which see \", you'll find some more examples of that.\n\nNot sure if it is correct or not but from some googling it seems to be a \ndirect translation of \"quod vide\". I think \"for which, see\" would likely \nbe more proper English but it is not my native language and we use \n\"which see\" elsewhere so we might as well be consistent and use \"which see\".\n\n>> On the topic of minor language issues I think the comma below is \n>> redundant.\n>>\n>> \"In the normal postmaster case, the shared hash tables are created here.\"\n> \n> On second thoughts, I rearranged the sentences in the paragraph a \n> little, see attached.\n\nLooks good!\n\nAndreas\n\n\n", "msg_date": "Wed, 28 Aug 2024 20:41:22 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 28/08/2024 18:26, Andreas Karlsson wrote:\n>> Small typo with the extra \"which\" which makes the sentence not flow\n>> correctly\n>> \"This is called from CreateSharedMemoryAndSemaphores(), which see for\n>> more comments.\"\n\n> Hmm, I don't see the issue. It's an uncommon sentence structure, but it \n> was there before this patch, and it's correct AFAICS. If you grep for \n> \"which see \", you'll find some more examples of that.\n\nI didn't check the git history for this particular line, but at least\nsome of those examples are mine. I'm pretty certain it's good English.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Aug 2024 14:44:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "On Wed, Aug 28, 2024 at 2:41 PM Andreas Karlsson <[email protected]> wrote:\n> Not sure if it is correct or not but from some googling it seems to be a\n> direct translation of \"quod vide\". I think \"for which, see\" would likely\n> be more proper English but it is not my native language and we use\n> \"which see\" elsewhere so we might as well be consistent and use \"which see\".\n\nIf somebody wrote \"for which, see\" in a patch I was reviewing, I would\ndefinitely complain about it.\n\nI wouldn't complain about \"which see\", but that's mostly because I\nknow Tom likes the expression. 
As a native English speaker, it sounds\nbasically grammatical to me, but it's an extremely uncommon usage. I\nprefer to phrase things in ways that are closer to how people actually\ntalk, partly because I know that we do have many people working on the\nproject who are not native speakers of English, and are thus more\nlikely to be tripped up by obscure usages.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 15:15:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "On 2024-Aug-28, Tom Lane wrote:\n\n> Heikki Linnakangas <[email protected]> writes:\n\n> > Hmm, I don't see the issue. It's an uncommon sentence structure, but it \n> > was there before this patch, and it's correct AFAICS. If you grep for \n> > \"which see \", you'll find some more examples of that.\n> \n> I didn't check the git history for this particular line, but at least\n> some of those examples are mine. I'm pretty certain it's good English.\n\nAs a non-native speaker, I'm always grateful for examples of unusual\ngrammatical constructs. They have given me many more opportunities for\ngrowth than if all comments were restricted to simple English. I have\nhad many a chance to visit english.stackexchange.net on account of\nsomething I read in pgsql lists or code comments.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 28 Aug 2024 21:06:45 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Little cleanup of ShmemInit function names" }, { "msg_contents": "On 29/08/2024 04:06, Alvaro Herrera wrote:\n> I have had many a chance to visit english.stackexchange.net on\n> account of something I read in pgsql lists or code comments.\nI see what you did there :-).\n\nCommitted, with \"which see\". Thanks everyone!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 29 Aug 2024 10:01:34 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Little cleanup of ShmemInit function names" } ]
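For readers who have not internalized the convention these renames converge on, the shape that every shared-memory subsystem now follows is the pair sketched below. "Foobar" and its shared struct are placeholders, exactly as in the discussion above; ShmemInitStruct() and add_size() are the real interfaces:

typedef struct FoobarSharedState
{
	int			something_shared;	/* whatever the subsystem needs */
} FoobarSharedState;

static FoobarSharedState *FoobarShared = NULL;

/* Called while the postmaster is sizing shared memory. */
Size
FoobarShmemSize(void)
{
	return add_size(0, sizeof(FoobarSharedState));
}

/* Called from CreateSharedMemoryAndSemaphores(): creates the struct in the
 * postmaster, merely attaches to it in EXEC_BACKEND children. */
void
FoobarShmemInit(void)
{
	bool		found;

	FoobarShared = (FoobarSharedState *)
		ShmemInitStruct("Foobar Data", FoobarShmemSize(), &found);

	if (!found)
		memset(FoobarShared, 0, sizeof(FoobarSharedState));
}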
[ { "msg_contents": "Hi,\n\nThe documentation states that postgres_fdw can be used with remote servers\nas far back as PostgreSQL 8.3.\nhttps://www.postgresql.org/docs/devel/postgres-fdw.html#POSTGRES-FDW-CROSS-VERSION-COMPATIBILITY\n\nHowever, when using PostgreSQL 9.4 or earlier as a remote server,\nINSERT ON CONFLICT on a foreign table fails because this feature\nis only supported in v9.5 and later. Should we add a note to\nthe documentation to clarify this limitation?\n\nFor example:\n\"Another limitation is that when executing INSERT statements with\nan ON CONFLICT DO NOTHING clause on a foreign table, the remote server\nmust be running PostgreSQL 9.5 or later, as earlier versions do not\nsupport this feature.\"\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 7 Aug 2024 21:32:06 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On Wed, Aug 7, 2024 at 9:32 PM Fujii Masao <[email protected]> wrote:\n> However, when using PostgreSQL 9.4 or earlier as a remote server,\n> INSERT ON CONFLICT on a foreign table fails because this feature\n> is only supported in v9.5 and later. Should we add a note to\n> the documentation to clarify this limitation?\n\n+1\n\n> For example:\n> \"Another limitation is that when executing INSERT statements with\n> an ON CONFLICT DO NOTHING clause on a foreign table, the remote server\n> must be running PostgreSQL 9.5 or later, as earlier versions do not\n> support this feature.\"\n\nWe already have this note in the documentation:\n\n“Note that postgres_fdw currently lacks support for INSERT statements\nwith an ON CONFLICT DO UPDATE clause. 
However, the ON CONFLICT DO\nNOTHING clause is supported, provided a unique index inference\nspecification is omitted.”\n\nSo yet another idea is to expand the latter part a little bit like:\n\nHowever, the ON CONFLICT DO NOTHING clause is supported, provided a\nunique index inference specification is omitted and the remote server\nis 9.5 or later.\n\nI just thought consolidating the information to one place would make\nthe documentation more readable.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 9 Aug 2024 17:49:41 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On 2024/08/09 17:49, Etsuro Fujita wrote:\n> I just thought consolidating the information to one place would make\n> the documentation more readable.\n\nYes, so I think that adding a note about the required remote server version\nto the cross-version compatibility section would be more intuitive.\nThis section already discusses the necessary server versions and limitations.\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 13 Aug 2024 12:15:01 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "Fujii Masao <[email protected]> writes:\n> Yes, so I think that adding a note about the required remote server version\n> to the cross-version compatibility section would be more intuitive.\n> This section already discusses the necessary server versions and limitations.\n> Patch attached.\n\nThis discussion tickles a concern I've had for awhile: do we really\nknow that modern postgres_fdw would work with an 8.3 server (never\nmind 8.1)? How many of us are in a position to test or debug such\na setup? The discussions we've had around old-version compatibility\nfor pg_dump and psql seem just as relevant here.\n\nIn short, I'm wondering if we should move up the goalposts and only\nclaim compatibility back to 9.2.\n\nIt'd be even better if we had some routine testing to verify that\nsuch cases work. I'm not volunteering to make that happen, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Aug 2024 23:49:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On Tue, Aug 13, 2024 at 12:15 PM Fujii Masao\n<[email protected]> wrote:\n> On 2024/08/09 17:49, Etsuro Fujita wrote:\n> > I just thought consolidating the information to one place would make\n> > the documentation more readable.\n>\n> Yes, so I think that adding a note about the required remote server version\n> to the cross-version compatibility section would be more intuitive.\n> This section already discusses the necessary server versions and limitations.\n> Patch attached.\n\nThanks for the patch!\n\nI noticed that we already have mentioned such a version limitation on\nthe analyze_sampling option outside that section (see the description\nfor that option in the “Cost Estimation Options” section). So my\nconcern is that it might be inconsistent to state only the INSERT\nlimitation there. 
Maybe I am too worried about that.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 14 Aug 2024 03:25:48 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On Tue, Aug 13, 2024 at 12:49 PM Tom Lane <[email protected]> wrote:\n> This discussion tickles a concern I've had for awhile: do we really\n> know that modern postgres_fdw would work with an 8.3 server (never\n> mind 8.1)? How many of us are in a position to test or debug such\n> a setup? The discussions we've had around old-version compatibility\n> for pg_dump and psql seem just as relevant here.\n>\n> In short, I'm wondering if we should move up the goalposts and only\n> claim compatibility back to 9.2.\n>\n> It'd be even better if we had some routine testing to verify that\n> such cases work. I'm not volunteering to make that happen, though.\n\n+1 to both. Unfortunately, I do not think I will have time for that, though.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 14 Aug 2024 03:35:33 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On Wed, Aug 14, 2024 at 03:25:48AM +0900, Etsuro Fujita wrote:\n> On Tue, Aug 13, 2024 at 12:15 PM Fujii Masao\n> <[email protected]> wrote:\n> > On 2024/08/09 17:49, Etsuro Fujita wrote:\n> > > I just thought consolidating the information to one place would make\n> > > the documentation more readable.\n> >\n> > Yes, so I think that adding a note about the required remote server version\n> > to the cross-version compatibility section would be more intuitive.\n> > This section already discusses the necessary server versions and limitations.\n> > Patch attached.\n> \n> Thanks for the patch!\n> \n> I noticed that we already have mentioned such a version limitation on\n> the analyze_sampling option outside that section (see the description\n> for that option in the “Cost Estimation Options” section). So my\n> concern is that it might be inconsistent to state only the INSERT\n> limitation there. Maybe I am too worried about that.\n\nI think it is an improvement, so applied to master. Thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 19 Aug 2024 19:55:06 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" }, { "msg_contents": "On Tue, Aug 20, 2024 at 8:55 AM Bruce Momjian <[email protected]> wrote:\n> I think it is an improvement, so applied to master. Thanks.\n\nThanks for taking care of this!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 27 Aug 2024 18:21:09 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross-version Compatibility of postgres_fdw" } ]
[ { "msg_contents": "Hi,\n When I use the 'pqmq' recently, I found some issues, just fix them.\n\n Allow the param 'dsm_segment *seg' to be NULL in function\n 'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created\n in shared memory instead of DSM.\n\n Add function 'pq_leave_shm_mq' to allow the process to go\n back to the previous pq environment.", "msg_date": "Thu, 8 Aug 2024 11:23:44 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "[patch] Imporve pqmq" }, { "msg_contents": "On Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <[email protected]> wrote:\n> When I use the 'pqmq' recently, I found some issues, just fix them.\n>\n> Allow the param 'dsm_segment *seg' to be NULL in function\n> 'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created\n> in shared memory instead of DSM.\n\nUnder what circumstances does this happen?\n\n> Add function 'pq_leave_shm_mq' to allow the process to go\n> back to the previous pq environment.\n\nIn the code as it currently exists, a parallel worker never has a\nconnected client, and it talks to a shm_mq instead. So there's no need\nfor this. If a backend needs to communicate with both a connected\nclient and also a shm_mq, it probably should not use pqmq but rather\ndecide explicitly which messages should be sent to the client and\nwhich to the shm_mq. Otherwise, it seems hard to avoid possible loss\nof protocol sync.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:24:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] Imporve pqmq" }, { "msg_contents": "Robert Haas <[email protected]>\n03:24 (6小时前)\n发送至 我、 pgsql-hackers\nOn Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <[email protected]> wrote:\n> When I use the 'pqmq' recently, I found some issues, just fix them.\n>\n> Allow the param 'dsm_segment *seg' to be NULL in function\n> 'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created\n> in shared memory instead of DSM.\n\n>. Under what circumstances does this happen?\n\n I just create a shm_mq in shared memory, compared with DSM, it is easier.\n And don't need to attach and detach to the DSM.\n This shm_mq in shared memory can meet my requirement, which is used\n in two different sessions, one session A dumps some information into\nanother\n session B through the shm_mq. Session B is actually a monitor session,\nuser can\n use it to monitor the state of slow queries, such as queries in session A.\n\n Yes, I can choose to use DSM in such situation. But I think it's better\nto let the 'pqmq'\n to support the shm_mq not in DSM.\n\n\n> Add function 'pq_leave_shm_mq' to allow the process to go\n> back to the previous pq environment.\n\n>. In the code as it currently exists, a parallel worker never has a\n>. connected client, and it talks to a shm_mq instead. So there's no need\n>. for this. If a backend needs to communicate with both a connected\n> client and also a shm_mq, it probably should not use pqmq but rather\n> decide explicitly which messages should be sent to the client and\n> which to the shm_mq. Otherwise, it seems hard to avoid possible loss\n> of protocol sync.\n\n As described above, session B will send a signal to session A, then\n session A handle the signal and send the message into the shm_mq.\n The message is sent by pq protocol. 
So session A will firstly call\n 'pq_redirect_to_shm_mq' and then call 'pq_leave_shm_mq' to\n continue to do its work.\n\nRobert Haas <[email protected]> 于2024年8月9日周五 03:24写道:\n\n> On Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <[email protected]>\n> wrote:\n> > When I use the 'pqmq' recently, I found some issues, just fix them.\n> >\n> > Allow the param 'dsm_segment *seg' to be NULL in function\n> > 'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created\n> > in shared memory instead of DSM.\n>\n> Under what circumstances does this happen?\n>\n> > Add function 'pq_leave_shm_mq' to allow the process to go\n> > back to the previous pq environment.\n>\n> In the code as it currently exists, a parallel worker never has a\n> connected client, and it talks to a shm_mq instead. So there's no need\n> for this. If a backend needs to communicate with both a connected\n> client and also a shm_mq, it probably should not use pqmq but rather\n> decide explicitly which messages should be sent to the client and\n> which to the shm_mq. Otherwise, it seems hard to avoid possible loss\n> of protocol sync.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nRobert Haas <[email protected]>03:24 (6小时前)发送至 我、 pgsql-hackersOn Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <[email protected]> wrote:>     When I use the 'pqmq' recently, I found some issues, just fix them.>>      Allow the param 'dsm_segment *seg' to be NULL in function>     'pq_redirect_to_shm_mq'. As sometimes the shm_mq is created>     in shared memory instead of DSM.>.    Under what circumstances does this happen?  I just create a shm_mq in shared memory, compared with DSM, it is easier.  And don't need to attach and detach to the DSM.  This shm_mq in shared memory can meet my requirement, which is used  in two different sessions, one session A dumps some information into another  session B through the shm_mq. Session B is actually a monitor session, user can  use it to monitor the state of slow queries, such as queries in session A.  Yes,  I can choose to use DSM in such situation. But I think it's better to let the 'pqmq'  to support the shm_mq not in DSM.>     Add function 'pq_leave_shm_mq' to allow the process to go>     back to the previous pq environment.>.    In the code as it currently exists, a parallel worker never has a>.    connected client, and it talks to a shm_mq instead. So there's no need>.    for this. If a backend needs to communicate with both a connected>     client and also a shm_mq, it probably should not use pqmq but rather>     decide explicitly which messages should be sent to the client and>     which to the shm_mq. Otherwise, it seems hard to avoid possible loss>     of protocol sync.    As described above, session B  will send a signal to session A, then     session A handle the signal and send the message into the shm_mq.   The message is sent by pq protocol. So session A will firstly call   'pq_redirect_to_shm_mq'  and then  call 'pq_leave_shm_mq' to    continue to do its work.Robert Haas <[email protected]> 于2024年8月9日周五 03:24写道:On Wed, Aug 7, 2024 at 11:24 PM Xiaoran Wang <[email protected]> wrote:\n>     When I use the 'pqmq' recently, I found some issues, just fix them.\n>\n>      Allow the param 'dsm_segment *seg' to be NULL in function\n>     'pq_redirect_to_shm_mq'. 
As sometimes the shm_mq is created\n>     in shared memory instead of DSM.\n\nUnder what circumstances does this happen?\n\n>     Add function 'pq_leave_shm_mq' to allow the process to go\n>     back to the previous pq environment.\n\nIn the code as it currently exists, a parallel worker never has a\nconnected client, and it talks to a shm_mq instead. So there's no need\nfor this. If a backend needs to communicate with both a connected\nclient and also a shm_mq, it probably should not use pqmq but rather\ndecide explicitly which messages should be sent to the client and\nwhich to the shm_mq. Otherwise, it seems hard to avoid possible loss\nof protocol sync.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 9 Aug 2024 10:27:35 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch] Imporve pqmq" }, { "msg_contents": "On Thu, Aug 8, 2024 at 10:27 PM Xiaoran Wang <[email protected]> wrote:\n> > Add function 'pq_leave_shm_mq' to allow the process to go\n> > back to the previous pq environment.\n>\n> >. In the code as it currently exists, a parallel worker never has a\n> >. connected client, and it talks to a shm_mq instead. So there's no need\n> >. for this. If a backend needs to communicate with both a connected\n> > client and also a shm_mq, it probably should not use pqmq but rather\n> > decide explicitly which messages should be sent to the client and\n> > which to the shm_mq. Otherwise, it seems hard to avoid possible loss\n> > of protocol sync.\n>\n> As described above, session B will send a signal to session A, then\n> session A handle the signal and send the message into the shm_mq.\n> The message is sent by pq protocol. So session A will firstly call\n> 'pq_redirect_to_shm_mq' and then call 'pq_leave_shm_mq' to\n> continue to do its work.\n\nIn this kind of use case, there is really no reason to use the libpq\nprotocol at all. You would be better off just using a shm_mq directly,\nand then you don't need this patch. See tqueue.c for an example of\nsuch a coding pattern.\n\nUsing pqmq is very error-prone here. In particular, if a backend\nunexpectedly hits an ERROR while the direct is in place, the error\nwill be sent to the other session rather than to the connected client.\nThis breaks wire protocol synchronization.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 12 Aug 2024 12:28:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] Imporve pqmq" }, { "msg_contents": "Robert Haas <[email protected]> 于2024年8月13日周二 00:28写道:\n\n> On Thu, Aug 8, 2024 at 10:27 PM Xiaoran Wang <[email protected]>\n> wrote:\n> > > Add function 'pq_leave_shm_mq' to allow the process to go\n> > > back to the previous pq environment.\n> >\n> > >. In the code as it currently exists, a parallel worker never has a\n> > >. connected client, and it talks to a shm_mq instead. So there's no\n> need\n> > >. for this. If a backend needs to communicate with both a connected\n> > > client and also a shm_mq, it probably should not use pqmq but\n> rather\n> > > decide explicitly which messages should be sent to the client and\n> > > which to the shm_mq. Otherwise, it seems hard to avoid possible\n> loss\n> > > of protocol sync.\n> >\n> > As described above, session B will send a signal to session A, then\n> > session A handle the signal and send the message into the shm_mq.\n> > The message is sent by pq protocol. 
So session A will firstly call\n> > 'pq_redirect_to_shm_mq' and then call 'pq_leave_shm_mq' to\n> > continue to do its work.\n>\n> In this kind of use case, there is really no reason to use the libpq\n> protocol at all. You would be better off just using a shm_mq directly,\n> and then you don't need this patch. See tqueue.c for an example of\n> such a coding pattern.\n>\n\nThanks for your reply and suggestion, I will look into that.\n\n>\n> Using pqmq is very error-prone here. In particular, if a backend\n> unexpectedly hits an ERROR while the direct is in place, the error\n> will be sent to the other session rather than to the connected client.\n> This breaks wire protocol synchronization.\n>\n\nYes, I found this problem too. Between the 'pq_beginmessage' and\n'pq_endmessage',\nany log should not be emitted to the client as it will be sent to the shm_mq\ninstead of client. Such as I sometimes set client_min_messages='debug1'\nin psql, then it will go totally wrong. It maybe better to firstly write\nthe 'msg'\ninto a StringInfoData, then send the 'msg' by libpq.\n\nI agree that it is not good way to communicate between tow sessions.\n\n\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n\n\n-- \nBest regards !\nXiaoran Wang\n\nRobert Haas <[email protected]> 于2024年8月13日周二 00:28写道:On Thu, Aug 8, 2024 at 10:27 PM Xiaoran Wang <[email protected]> wrote:\n> >     Add function 'pq_leave_shm_mq' to allow the process to go\n> >     back to the previous pq environment.\n>\n> >.    In the code as it currently exists, a parallel worker never has a\n> >.    connected client, and it talks to a shm_mq instead. So there's no need\n> >.    for this. If a backend needs to communicate with both a connected\n> >     client and also a shm_mq, it probably should not use pqmq but rather\n> >     decide explicitly which messages should be sent to the client and\n> >     which to the shm_mq. Otherwise, it seems hard to avoid possible loss\n> >     of protocol sync.\n>\n>     As described above, session B  will send a signal to session A, then\n>     session A handle the signal and send the message into the shm_mq.\n>    The message is sent by pq protocol. So session A will firstly call\n>    'pq_redirect_to_shm_mq'  and then  call 'pq_leave_shm_mq' to\n>    continue to do its work.\n\nIn this kind of use case, there is really no reason to use the libpq\nprotocol at all. You would be better off just using a shm_mq directly,\nand then you don't need this patch. See tqueue.c for an example of\nsuch a coding pattern.Thanks for your reply and suggestion, I will look into that.\n\nUsing pqmq is very error-prone here. In particular, if a backend\nunexpectedly hits an ERROR while the direct is in place, the error\nwill be sent to the other session rather than to the connected client.\nThis breaks wire protocol synchronization.Yes, I found this problem too. Between the 'pq_beginmessage' and 'pq_endmessage', any log should not be emitted to the client as it will be sent to the shm_mqinstead of client.  Such as I sometimes set client_min_messages='debug1'in psql, then it will go totally wrong. It maybe better to firstly write the 'msg'into a StringInfoData, then send the 'msg' by libpq.I agree that it is not good way to communicate between tow sessions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n-- Best regards !Xiaoran Wang", "msg_date": "Tue, 13 Aug 2024 15:38:55 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch] Imporve pqmq" } ]
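The tqueue.c-style alternative Robert points at, where both sessions talk to a shm_mq directly and nothing is redirected through libpq, looks roughly like the sketch below. The function names, the 16 kB queue size, and the assumption that the queue sits in a caller-provided piece of ordinary shared memory are all made up for illustration; shm_mq_create(), shm_mq_set_receiver()/shm_mq_set_sender(), shm_mq_attach(), shm_mq_receive() and shm_mq_send() are the real API. As far as I can tell, shm_mq_attach() uses its dsm_segment argument only to register a detach callback, so passing NULL for a queue in plain shared memory should work without any change to pqmq.

/* Monitoring session: create the queue in shared memory and read one report. */
static void
monitor_read_report(void *queue_space)		/* queue_space provided elsewhere */
{
	shm_mq	   *mq = shm_mq_create(queue_space, 16384);
	shm_mq_handle *mqh;
	Size		nbytes;
	void	   *data;

	shm_mq_set_receiver(mq, MyProc);
	mqh = shm_mq_attach(mq, NULL, NULL);	/* no dsm_segment, no bgworker handle */

	if (shm_mq_receive(mqh, &nbytes, &data, false) == SHM_MQ_SUCCESS)
		elog(LOG, "received %zu-byte report", nbytes);

	shm_mq_detach(mqh);
}

/* Monitored session: called after it has been signalled, ships one report. */
static void
send_report(shm_mq *mq, const char *report)
{
	shm_mq_handle *mqh;

	shm_mq_set_sender(mq, MyProc);
	mqh = shm_mq_attach(mq, NULL, NULL);
	(void) shm_mq_send(mqh, strlen(report) + 1, report, false, true);
	shm_mq_detach(mqh);
}

Compared with pq_redirect_to_shm_mq(), an ERROR raised on either side still goes to that session's own client, so the protocol-synchronization hazard described above does not arise.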
[ { "msg_contents": "Dear hackers,\n\nThis thread forks from [1]. Here can be used to discuss second item.\nBelow part contains the same statements written in [1], but I did copy-and-paste\njust in case. Attached patch is almost the same but bit modified based on the comment\nfrom Amit [2] - an unrelated change is removed.\n\nFound issue\n=====\nWhen the subscriber enables two-phase commit but doesn't set max_prepared_transaction >0\nand a transaction is prepared on the publisher, the apply worker reports an ERROR\non the subscriber. After that, the prepared transaction is not replayed, which\nmeans it's lost forever. Attached script can emulate the situation.\n\n--\nERROR: prepared transactions are disabled\nHINT: Set \"max_prepared_transactions\" to a nonzero value.\n--\n\nThe reason is that we advanced the origin progress when aborting the\ntransaction as well (RecordTransactionAbort->replorigin_session_advance). So,\nafter setting replorigin_session_origin_lsn, if any ERROR happens when preparing\nthe transaction, the transaction aborts which incorrectly advances the origin lsn.\n\nAn easiest fix is to reset session replication origin before calling the\nRecordTransactionAbort(). I think this can happen when 1) LogicalRepApplyLoop()\nraises an ERROR or 2) apply worker exits. Attached patch can fix the issue on HEAD.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5692FA4926754B91E9D7B5F0F5AA2%40TYAPR01MB5692.jpnprd01.prod.outlook.com\n[2]: https://www.postgresql.org/message-id/CAA4eK1L-r8OKGdBwC6AeXSibrjr9xKsg8LjGpX_PDR5Go-A9TA%40mail.gmail.com\n\nBest regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Thu, 8 Aug 2024 05:07:18 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "[bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Thu, Aug 8, 2024 at 10:37 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n...\n>\n> An easiest fix is to reset session replication origin before calling the\n> RecordTransactionAbort(). I think this can happen when 1) LogicalRepApplyLoop()\n> raises an ERROR or 2) apply worker exits. Attached patch can fix the issue on HEAD.\n>\n\nFew comments:\n=============\n*\n@@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n }\n PG_CATCH();\n {\n+ /*\n+ * Reset the origin data to prevent the advancement of origin progress\n+ * if the transaction failed to apply.\n+ */\n+ replorigin_session_origin = InvalidRepOriginId;\n+ replorigin_session_origin_lsn = InvalidXLogRecPtr;\n+ replorigin_session_origin_timestamp = 0;\n\nCan't we call replorigin_reset() instead here?\n\n*\n+ /*\n+ * Register a callback to reset the origin state before aborting the\n+ * transaction in ShutdownPostgres(). This is to prevent the advancement\n+ * of origin progress if the transaction failed to apply.\n+ */\n+ before_shmem_exit(replorigin_reset, (Datum) 0);\n\nI think we need this despite resetting the origin-related variables in\nPG_CATCH block to handle FATAL error cases, right? 
If so, can we use\nPG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Aug 2024 12:03:30 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Thu, Aug 8, 2024 at 12:03 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 10:37 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> ...\n> >\n> > An easiest fix is to reset session replication origin before calling the\n> > RecordTransactionAbort(). I think this can happen when 1) LogicalRepApplyLoop()\n> > raises an ERROR or 2) apply worker exits. Attached patch can fix the issue on HEAD.\n> >\n>\n> Few comments:\n> =============\n> *\n> @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n> }\n> PG_CATCH();\n> {\n> + /*\n> + * Reset the origin data to prevent the advancement of origin progress\n> + * if the transaction failed to apply.\n> + */\n> + replorigin_session_origin = InvalidRepOriginId;\n> + replorigin_session_origin_lsn = InvalidXLogRecPtr;\n> + replorigin_session_origin_timestamp = 0;\n>\n> Can't we call replorigin_reset() instead here?\n>\n> *\n> + /*\n> + * Register a callback to reset the origin state before aborting the\n> + * transaction in ShutdownPostgres(). This is to prevent the advancement\n> + * of origin progress if the transaction failed to apply.\n> + */\n> + before_shmem_exit(replorigin_reset, (Datum) 0);\n>\n> I think we need this despite resetting the origin-related variables in\n> PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n\n+1\n\nBasic tests work fine on this patch. Just thinking out loud,\nSetupApplyOrSyncWorker() is called for table-sync worker as well and\nIIUC tablesync worker does not deal with 2PC txns. So do we even need\nto register replorigin_reset() for tablesync worker as well? If we may\nhit such an issue in general, then perhaps we need it in table-sync\nworker otherwise not. It needs some investigation. Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:31:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Thursday, August 8, 2024 6:01 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Thu, Aug 8, 2024 at 12:03 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thu, Aug 8, 2024 at 10:37 AM Hayato Kuroda (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > ...\r\n> > >\r\n> > > An easiest fix is to reset session replication origin before calling\r\n> > > the RecordTransactionAbort(). 
I think this can happen when 1)\r\n> > > LogicalRepApplyLoop() raises an ERROR or 2) apply worker exits.\r\n> Attached patch can fix the issue on HEAD.\r\n> > >\r\n> >\r\n> > Few comments:\r\n> > =============\r\n> > *\r\n> > @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\r\n> > }\r\n> > PG_CATCH();\r\n> > {\r\n> > + /*\r\n> > + * Reset the origin data to prevent the advancement of origin\r\n> > + progress\r\n> > + * if the transaction failed to apply.\r\n> > + */\r\n> > + replorigin_session_origin = InvalidRepOriginId;\r\n> > + replorigin_session_origin_lsn = InvalidXLogRecPtr;\r\n> > + replorigin_session_origin_timestamp = 0;\r\n> >\r\n> > Can't we call replorigin_reset() instead here?\r\n> >\r\n> > *\r\n> > + /*\r\n> > + * Register a callback to reset the origin state before aborting the\r\n> > + * transaction in ShutdownPostgres(). This is to prevent the\r\n> > + advancement\r\n> > + * of origin progress if the transaction failed to apply.\r\n> > + */\r\n> > + before_shmem_exit(replorigin_reset, (Datum) 0);\r\n> >\r\n> > I think we need this despite resetting the origin-related variables in\r\n> > PG_CATCH block to handle FATAL error cases, right? If so, can we use\r\n> > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\r\n> \r\n> +1\r\n> \r\n> Basic tests work fine on this patch. Just thinking out loud,\r\n> SetupApplyOrSyncWorker() is called for table-sync worker as well and IIUC\r\n> tablesync worker does not deal with 2PC txns. So do we even need to register\r\n> replorigin_reset() for tablesync worker as well? If we may hit such an issue in\r\n> general, then perhaps we need it in table-sync worker otherwise not. It\r\n> needs some investigation. Thoughts?\r\n\r\nI think this is a general issue that can occur not only due to 2PC. IIUC, this\r\nproblem should arise if any ERROR arises after setting the\r\nreplorigin_session_origin_lsn but before the CommitTransactionCommand is\r\ncompleted. If so, I think we should register it for tablesync worker as well.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 8 Aug 2024 10:11:11 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Thu, Aug 8, 2024 at 3:41 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, August 8, 2024 6:01 PM shveta malik <[email protected]> wrote:\n> >\n> > On Thu, Aug 8, 2024 at 12:03 PM Amit Kapila <[email protected]>\n> > wrote:\n> > >\n> > > On Thu, Aug 8, 2024 at 10:37 AM Hayato Kuroda (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > ...\n> > > >\n> > > > An easiest fix is to reset session replication origin before calling\n> > > > the RecordTransactionAbort(). 
I think this can happen when 1)\n> > > > LogicalRepApplyLoop() raises an ERROR or 2) apply worker exits.\n> > Attached patch can fix the issue on HEAD.\n> > > >\n> > >\n> > > Few comments:\n> > > =============\n> > > *\n> > > @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n> > > }\n> > > PG_CATCH();\n> > > {\n> > > + /*\n> > > + * Reset the origin data to prevent the advancement of origin\n> > > + progress\n> > > + * if the transaction failed to apply.\n> > > + */\n> > > + replorigin_session_origin = InvalidRepOriginId;\n> > > + replorigin_session_origin_lsn = InvalidXLogRecPtr;\n> > > + replorigin_session_origin_timestamp = 0;\n> > >\n> > > Can't we call replorigin_reset() instead here?\n> > >\n> > > *\n> > > + /*\n> > > + * Register a callback to reset the origin state before aborting the\n> > > + * transaction in ShutdownPostgres(). This is to prevent the\n> > > + advancement\n> > > + * of origin progress if the transaction failed to apply.\n> > > + */\n> > > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> > >\n> > > I think we need this despite resetting the origin-related variables in\n> > > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n> >\n> > +1\n> >\n> > Basic tests work fine on this patch. Just thinking out loud,\n> > SetupApplyOrSyncWorker() is called for table-sync worker as well and IIUC\n> > tablesync worker does not deal with 2PC txns. So do we even need to register\n> > replorigin_reset() for tablesync worker as well? If we may hit such an issue in\n> > general, then perhaps we need it in table-sync worker otherwise not. It\n> > needs some investigation. Thoughts?\n>\n> I think this is a general issue that can occur not only due to 2PC. IIUC, this\n> problem should arise if any ERROR arises after setting the\n> replorigin_session_origin_lsn but before the CommitTransactionCommand is\n> completed. If so, I think we should register it for tablesync worker as well.\n>\n\nAs pointed out earlier, won't using PG_ENSURE_ERROR_CLEANUP() instead\nof PG_CATCH() be enough?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Aug 2024 18:08:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Thu, Aug 8, 2024 at 6:08 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 3:41 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Thursday, August 8, 2024 6:01 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 8, 2024 at 12:03 PM Amit Kapila <[email protected]>\n> > > wrote:\n> > > >\n> > > > On Thu, Aug 8, 2024 at 10:37 AM Hayato Kuroda (Fujitsu)\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > ...\n> > > > >\n> > > > > An easiest fix is to reset session replication origin before calling\n> > > > > the RecordTransactionAbort(). 
I think this can happen when 1)\n> > > > > LogicalRepApplyLoop() raises an ERROR or 2) apply worker exits.\n> > > Attached patch can fix the issue on HEAD.\n> > > > >\n> > > >\n> > > > Few comments:\n> > > > =============\n> > > > *\n> > > > @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n> > > > }\n> > > > PG_CATCH();\n> > > > {\n> > > > + /*\n> > > > + * Reset the origin data to prevent the advancement of origin\n> > > > + progress\n> > > > + * if the transaction failed to apply.\n> > > > + */\n> > > > + replorigin_session_origin = InvalidRepOriginId;\n> > > > + replorigin_session_origin_lsn = InvalidXLogRecPtr;\n> > > > + replorigin_session_origin_timestamp = 0;\n> > > >\n> > > > Can't we call replorigin_reset() instead here?\n> > > >\n> > > > *\n> > > > + /*\n> > > > + * Register a callback to reset the origin state before aborting the\n> > > > + * transaction in ShutdownPostgres(). This is to prevent the\n> > > > + advancement\n> > > > + * of origin progress if the transaction failed to apply.\n> > > > + */\n> > > > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> > > >\n> > > > I think we need this despite resetting the origin-related variables in\n> > > > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > > > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n> > >\n> > > +1\n> > >\n> > > Basic tests work fine on this patch. Just thinking out loud,\n> > > SetupApplyOrSyncWorker() is called for table-sync worker as well and IIUC\n> > > tablesync worker does not deal with 2PC txns. So do we even need to register\n> > > replorigin_reset() for tablesync worker as well? If we may hit such an issue in\n> > > general, then perhaps we need it in table-sync worker otherwise not. It\n> > > needs some investigation. Thoughts?\n> >\n> > I think this is a general issue that can occur not only due to 2PC. IIUC, this\n> > problem should arise if any ERROR arises after setting the\n> > replorigin_session_origin_lsn but before the CommitTransactionCommand is\n> > completed. If so, I think we should register it for tablesync worker as well.\n> >\n>\n> As pointed out earlier, won't using PG_ENSURE_ERROR_CLEANUP() instead\n> of PG_CATCH() be enough?\n\nYes, I think it should suffice. IIUC, we are going to change\n'replorigin_session_origin_lsn' only in start_apply() and not before\nthat, and thus ensuring its reset during any ERROR or FATAL in\nstart_apply() is good enough. And I guess we don't want this\norigin-reset to be called during regular shutdown, isn't it? But\nregistering it through before_shmem_exit() will make the\nreset-function to be called during normal shutdown as well.\nAnd to answer my previous question (as Hou-San also pointed out), we\ndo need it in table-sync worker as well. So a change in start_apply\nwill make sure the fix is valid for both apply and tablesync worker.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 9 Aug 2024 09:34:54 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Fri, Aug 9, 2024 at 9:35 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 8, 2024 at 6:08 PM Amit Kapila <[email protected]> wrote:\n> >\n> > >\n> > > I think this is a general issue that can occur not only due to 2PC. 
IIUC, this\n> > > problem should arise if any ERROR arises after setting the\n> > > replorigin_session_origin_lsn but before the CommitTransactionCommand is\n> > > completed. If so, I think we should register it for tablesync worker as well.\n> > >\n> >\n> > As pointed out earlier, won't using PG_ENSURE_ERROR_CLEANUP() instead\n> > of PG_CATCH() be enough?\n>\n> Yes, I think it should suffice. IIUC, we are going to change\n> 'replorigin_session_origin_lsn' only in start_apply() and not before\n> that, and thus ensuring its reset during any ERROR or FATAL in\n> start_apply() is good enough.\n>\n\nRight, I also think so.\n\n> And I guess we don't want this\n> origin-reset to be called during regular shutdown, isn't it?\n>\n\nAgreed. OTOH, there was no harm even if such a reset function is invoked.\n\n> But\n> registering it through before_shmem_exit() will make the\n> reset-function to be called during normal shutdown as well.\n>\n\nTrue and unless I am missing something we won't need it. I would like\nto hear the opinions of Hou-San and Kuroda-San on the same.\n\n> And to answer my previous question (as Hou-San also pointed out), we\n> do need it in table-sync worker as well. So a change in start_apply\n> will make sure the fix is valid for both apply and tablesync worker.\n>\n\nAs table-sync workers can also apply transactions after the initial\ncopy, we need it for table-sync during its apply phase.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Aug 2024 11:19:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "Dear Amit, Shveta, Hou,\r\n\r\nThanks for giving many comments! I've updated the patch.\r\n\r\n> @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\r\n> }\r\n> PG_CATCH();\r\n> {\r\n> + /*\r\n> + * Reset the origin data to prevent the advancement of origin progress\r\n> + * if the transaction failed to apply.\r\n> + */\r\n> + replorigin_session_origin = InvalidRepOriginId;\r\n> + replorigin_session_origin_lsn = InvalidXLogRecPtr;\r\n> + replorigin_session_origin_timestamp = 0;\r\n> \r\n> Can't we call replorigin_reset() instead here?\r\n\r\nI didn't use the function because arguments of calling function looked strange,\r\nbut ideally I can. Fixed.\r\n\r\n> + /*\r\n> + * Register a callback to reset the origin state before aborting the\r\n> + * transaction in ShutdownPostgres(). This is to prevent the advancement\r\n> + * of origin progress if the transaction failed to apply.\r\n> + */\r\n> + before_shmem_exit(replorigin_reset, (Datum) 0);\r\n> \r\n> I think we need this despite resetting the origin-related variables in\r\n> PG_CATCH block to handle FATAL error cases, right? If so, can we use\r\n> PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\r\n\r\nThere are two reasons to add a shmem-exit callback. One is to support a FATAL,\r\nanother one is to support the case that user does the shutdown request while\r\napplying changes. In this case, I think ShutdownPostgres() can be called so that\r\nthe session origin may advance.\r\n\r\nHowever, I think we cannot use PG_ENSURE_ERROR_CLEANUP()/PG_END_ENSURE_ERROR_CLEANUP\r\nmacros here. According to codes, it assumes that any before-shmem callbacks are\r\nnot registered within the block because the cleanup function is registered and canceled\r\nwithin the macro. 
LogicalRepApplyLoop() can register the function when\r\nit handles COMMIT PREPARED message so it breaks the rule.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Fri, 9 Aug 2024 09:09:34 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Fri, Aug 9, 2024 at 2:39 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit, Shveta, Hou,\n>\n> Thanks for giving many comments! I've updated the patch.\n>\n> > @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n> > }\n> > PG_CATCH();\n> > {\n> > + /*\n> > + * Reset the origin data to prevent the advancement of origin progress\n> > + * if the transaction failed to apply.\n> > + */\n> > + replorigin_session_origin = InvalidRepOriginId;\n> > + replorigin_session_origin_lsn = InvalidXLogRecPtr;\n> > + replorigin_session_origin_timestamp = 0;\n> >\n> > Can't we call replorigin_reset() instead here?\n>\n> I didn't use the function because arguments of calling function looked strange,\n> but ideally I can. Fixed.\n>\n> > + /*\n> > + * Register a callback to reset the origin state before aborting the\n> > + * transaction in ShutdownPostgres(). This is to prevent the advancement\n> > + * of origin progress if the transaction failed to apply.\n> > + */\n> > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> >\n> > I think we need this despite resetting the origin-related variables in\n> > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n>\n> There are two reasons to add a shmem-exit callback. One is to support a FATAL,\n> another one is to support the case that user does the shutdown request while\n> applying changes. In this case, I think ShutdownPostgres() can be called so that\n> the session origin may advance.\n\nAgree that we need the 'reset' during shutdown flow as well. Details at [1]\n\n> However, I think we cannot use PG_ENSURE_ERROR_CLEANUP()/PG_END_ENSURE_ERROR_CLEANUP\n> macros here. According to codes, it assumes that any before-shmem callbacks are\n> not registered within the block because the cleanup function is registered and canceled\n> within the macro. LogicalRepApplyLoop() can register the function when\n> it handles COMMIT PREPARED message so it breaks the rule.\n\nYes, on reanalyzing, we can not use PG_ENSURE_ERROR_CLEANUP in this\nflow due to the limitation of cancel_before_shmem_exit() that it can\ncancel only the last registered callback, while in our flow we have\nother callbacks also registered after we register our reset one.\n\n[1]\nShutdown analysis:\n\nI did a test where we make apply worker wait for say 0sec right after\nit updates 'replorigin_session_origin_lsn' in\napply_handle_prepare_internal() (say it at code-point1). During this\nwait, we triggered a subscriber shutdown.Under normal circumstances,\neverything works fine: after the wait, the apply worker processes the\nSIGTERM (via LogicalRepApplyLoop-->ProcessInterrupts()) only after the\nprepare phase is complete, meaning the PREPARE LSN is flushed, and the\norigin LSN is correctly advanced in EndPrepare() before the worker\nshuts down. 
But, if we insert a LOG statement between code-point1 and\nEndPrepare(), the apply worker processes the SIGTERM during the LOG\noperation, as errfinish() triggers CHECK_FOR_INTERRUPTS at the end,\nwhich causes the origin LSN to be incorrectly advanced during\nshutdown. And thus the subsequent COMMIT PREPARED on the publisher\nresults in ERROR on subscriber; as the 'PREPARE' is lost on the\nsubscriber and is not resent by the publisher. ERROR: prepared\ntransaction with identifier \"pg_gid_16403_757\" does not exist\n\nA similar problem can also occur without introducing any additional\nLOG statements, but by simply setting log_min_messages=debug5. This\ncauses the apply worker to output a few DEBUG messages upon receiving\na shutdown signal (after code-point1) before it reaches EndPrepare().\nAs a result, it ends up processing the SIGTERM (during logging)and\ninvoking AbortOutOfAnyTransaction(), which incorrectly advances the\norigin LSN.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:36:52 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Mon, Aug 12, 2024 at 3:36 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 9, 2024 at 2:39 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Amit, Shveta, Hou,\n> >\n> > Thanks for giving many comments! I've updated the patch.\n> >\n> > > @@ -4409,6 +4409,14 @@ start_apply(XLogRecPtr origin_startpos)\n> > > }\n> > > PG_CATCH();\n> > > {\n> > > + /*\n> > > + * Reset the origin data to prevent the advancement of origin progress\n> > > + * if the transaction failed to apply.\n> > > + */\n> > > + replorigin_session_origin = InvalidRepOriginId;\n> > > + replorigin_session_origin_lsn = InvalidXLogRecPtr;\n> > > + replorigin_session_origin_timestamp = 0;\n> > >\n> > > Can't we call replorigin_reset() instead here?\n> >\n> > I didn't use the function because arguments of calling function looked strange,\n> > but ideally I can. Fixed.\n> >\n> > > + /*\n> > > + * Register a callback to reset the origin state before aborting the\n> > > + * transaction in ShutdownPostgres(). This is to prevent the advancement\n> > > + * of origin progress if the transaction failed to apply.\n> > > + */\n> > > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> > >\n> > > I think we need this despite resetting the origin-related variables in\n> > > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n> >\n> > There are two reasons to add a shmem-exit callback. One is to support a FATAL,\n> > another one is to support the case that user does the shutdown request while\n> > applying changes. In this case, I think ShutdownPostgres() can be called so that\n> > the session origin may advance.\n>\n> Agree that we need the 'reset' during shutdown flow as well. Details at [1]\n>\n> > However, I think we cannot use PG_ENSURE_ERROR_CLEANUP()/PG_END_ENSURE_ERROR_CLEANUP\n> > macros here. According to codes, it assumes that any before-shmem callbacks are\n> > not registered within the block because the cleanup function is registered and canceled\n> > within the macro. 
LogicalRepApplyLoop() can register the function when\n> > it handles COMMIT PREPARED message so it breaks the rule.\n>\n> Yes, on reanalyzing, we can not use PG_ENSURE_ERROR_CLEANUP in this\n> flow due to the limitation of cancel_before_shmem_exit() that it can\n> cancel only the last registered callback, while in our flow we have\n> other callbacks also registered after we register our reset one.\n>\n> [1]\n> Shutdown analysis:\n>\n> I did a test where we make apply worker wait for say 0sec right after\n\nCorrection here: 0sec -->10sec\n\n> it updates 'replorigin_session_origin_lsn' in\n> apply_handle_prepare_internal() (say it at code-point1). During this\n> wait, we triggered a subscriber shutdown.Under normal circumstances,\n> everything works fine: after the wait, the apply worker processes the\n> SIGTERM (via LogicalRepApplyLoop-->ProcessInterrupts()) only after the\n> prepare phase is complete, meaning the PREPARE LSN is flushed, and the\n> origin LSN is correctly advanced in EndPrepare() before the worker\n> shuts down. But, if we insert a LOG statement between code-point1 and\n> EndPrepare(), the apply worker processes the SIGTERM during the LOG\n> operation, as errfinish() triggers CHECK_FOR_INTERRUPTS at the end,\n> which causes the origin LSN to be incorrectly advanced during\n> shutdown. And thus the subsequent COMMIT PREPARED on the publisher\n> results in ERROR on subscriber; as the 'PREPARE' is lost on the\n> subscriber and is not resent by the publisher. ERROR: prepared\n> transaction with identifier \"pg_gid_16403_757\" does not exist\n>\n> A similar problem can also occur without introducing any additional\n> LOG statements, but by simply setting log_min_messages=debug5. This\n> causes the apply worker to output a few DEBUG messages upon receiving\n> a shutdown signal (after code-point1) before it reaches EndPrepare().\n> As a result, it ends up processing the SIGTERM (during logging)and\n> invoking AbortOutOfAnyTransaction(), which incorrectly advances the\n> origin LSN.\n>\n> thanks\n> Shveta\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:40:03 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Mon, Aug 12, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 9, 2024 at 2:39 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> >\n> > > + /*\n> > > + * Register a callback to reset the origin state before aborting the\n> > > + * transaction in ShutdownPostgres(). This is to prevent the advancement\n> > > + * of origin progress if the transaction failed to apply.\n> > > + */\n> > > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> > >\n> > > I think we need this despite resetting the origin-related variables in\n> > > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n> >\n> > There are two reasons to add a shmem-exit callback. One is to support a FATAL,\n> > another one is to support the case that user does the shutdown request while\n> > applying changes. In this case, I think ShutdownPostgres() can be called so that\n> > the session origin may advance.\n>\n> Agree that we need the 'reset' during shutdown flow as well. Details at [1]\n>\n\nThanks for the detailed analysis. 
I agree with your analysis that we\nneed to reset the origin information for the shutdown path to avoid it\nbeing advanced incorrectly. However, the patch doesn't have sufficient\ncomments to explain why we need to reset it for both the ERROR and\nShutdown paths. Can we improve the comments in the patch?\n\nAlso, for the ERROR path, can we reset the origin information in\napply_error_callback()?\n\nBTW, this needs to be backpatched till 16 when it has been introduced\nby the parallel apply feature as part of commit 216a7848. So, can we\ntest this patch in back-branches as well?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Aug 2024 09:48:37 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Tue, Aug 13, 2024 at 9:48 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 12, 2024 at 3:37 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Aug 9, 2024 at 2:39 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > >\n> > > > + /*\n> > > > + * Register a callback to reset the origin state before aborting the\n> > > > + * transaction in ShutdownPostgres(). This is to prevent the advancement\n> > > > + * of origin progress if the transaction failed to apply.\n> > > > + */\n> > > > + before_shmem_exit(replorigin_reset, (Datum) 0);\n> > > >\n> > > > I think we need this despite resetting the origin-related variables in\n> > > > PG_CATCH block to handle FATAL error cases, right? If so, can we use\n> > > > PG_ENSURE_ERROR_CLEANUP() instead of PG_CATCH()?\n> > >\n> > > There are two reasons to add a shmem-exit callback. One is to support a FATAL,\n> > > another one is to support the case that user does the shutdown request while\n> > > applying changes. In this case, I think ShutdownPostgres() can be called so that\n> > > the session origin may advance.\n> >\n> > Agree that we need the 'reset' during shutdown flow as well. Details at [1]\n> >\n>\n> Thanks for the detailed analysis. I agree with your analysis that we\n> need to reset the origin information for the shutdown path to avoid it\n> being advanced incorrectly. However, the patch doesn't have sufficient\n> comments to explain why we need to reset it for both the ERROR and\n> Shutdown paths. Can we improve the comments in the patch?\n>\n> Also, for the ERROR path, can we reset the origin information in\n> apply_error_callback()?\n\nPlease find v4 attached. Addressed comments in that.\n\nManual testing done on v4:\n1) Error and Fatal case\n2) Shutdown after replorigin_session_origin_lsn was set in\napply_handle_prepare() and before EndPrepare was called.\n 2a) with log_min_messages=debug5. This will result in processing\nof shutdown signal by errfinish() before PREPARE is over.\n 2b) with default log_min_messages. This will result in processing\nof shutdown signal by LogicalRepApplyLoop() after ongoing PREPARE is\nover.\n\n>\n> BTW, this needs to be backpatched till 16 when it has been introduced\n> by the parallel apply feature as part of commit 216a7848. 
So, can we\n> test this patch in back-branches as well?\n>\n\nSure, will do next.\n\nthanks\nShveta", "msg_date": "Wed, 14 Aug 2024 10:25:56 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Tue, Aug 13, 2024 at 9:48 AM Amit Kapila <[email protected]> wrote:\n>\n> BTW, this needs to be backpatched till 16 when it has been introduced\n> by the parallel apply feature as part of commit 216a7848. So, can we\n> test this patch in back-branches as well?\n>\n\nI was able to reproduce the problem on REL_16_STABLE and REL_17_STABLE\nthrough both the flows (shutdown and apply-error). The patch v4 fixes\nthe issues on both.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 14 Aug 2024 14:53:25 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Wed, Aug 14, 2024 at 10:26 AM shveta malik <[email protected]> wrote:\n>\n> >\n> > Thanks for the detailed analysis. I agree with your analysis that we\n> > need to reset the origin information for the shutdown path to avoid it\n> > being advanced incorrectly. However, the patch doesn't have sufficient\n> > comments to explain why we need to reset it for both the ERROR and\n> > Shutdown paths. Can we improve the comments in the patch?\n> >\n> > Also, for the ERROR path, can we reset the origin information in\n> > apply_error_callback()?\n>\n> Please find v4 attached. Addressed comments in that.\n>\n\nThe patch looks mostly good to me. I have slightly changed a few of\nthe comments in the attached. What do you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 20 Aug 2024 11:36:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Tue, Aug 20, 2024 at 11:36 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Aug 14, 2024 at 10:26 AM shveta malik <[email protected]> wrote:\n> >\n> > >\n> > > Thanks for the detailed analysis. I agree with your analysis that we\n> > > need to reset the origin information for the shutdown path to avoid it\n> > > being advanced incorrectly. However, the patch doesn't have sufficient\n> > > comments to explain why we need to reset it for both the ERROR and\n> > > Shutdown paths. Can we improve the comments in the patch?\n> > >\n> > > Also, for the ERROR path, can we reset the origin information in\n> > > apply_error_callback()?\n> >\n> > Please find v4 attached. Addressed comments in that.\n> >\n>\n> The patch looks mostly good to me. I have slightly changed a few of\n> the comments in the attached. What do you think of the attached?\n>\n\nLooks good to me. 
Please find backported patches attached.\n\nthanks\nShveta", "msg_date": "Tue, 20 Aug 2024 14:00:53 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" }, { "msg_contents": "On Tue, Aug 20, 2024 at 2:01 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Aug 20, 2024 at 11:36 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Aug 14, 2024 at 10:26 AM shveta malik <[email protected]> wrote:\n> > >\n> > > >\n> > > > Thanks for the detailed analysis. I agree with your analysis that we\n> > > > need to reset the origin information for the shutdown path to avoid it\n> > > > being advanced incorrectly. However, the patch doesn't have sufficient\n> > > > comments to explain why we need to reset it for both the ERROR and\n> > > > Shutdown paths. Can we improve the comments in the patch?\n> > > >\n> > > > Also, for the ERROR path, can we reset the origin information in\n> > > > apply_error_callback()?\n> > >\n> > > Please find v4 attached. Addressed comments in that.\n> > >\n> >\n> > The patch looks mostly good to me. I have slightly changed a few of\n> > the comments in the attached. What do you think of the attached?\n> >\n>\n> Looks good to me. Please find backported patches attached.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Aug 2024 15:27:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [bug fix] prepared transaction might be lost when\n max_prepared_transactions is zero on the subscriber" } ]
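The fix discussed in the thread above reduces to one small pattern: clear the session origin state on every abnormal exit path so that a failed or interrupted PREPARE cannot advance the origin LSN. A minimal sketch, assuming the usual before_shmem_exit() callback signature (an illustration of the idea, not necessarily the committed code verbatim):

    /*
     * Reset the session origin state so that an aborted apply (ERROR,
     * FATAL, or shutdown) cannot advance the origin progress.
     */
    static void
    replorigin_reset(int code, Datum arg)
    {
        replorigin_session_origin = InvalidRepOriginId;
        replorigin_session_origin_lsn = InvalidXLogRecPtr;
        replorigin_session_origin_timestamp = 0;
    }

    /* registered once in the apply worker, covering FATAL and shutdown paths */
    before_shmem_exit(replorigin_reset, (Datum) 0);

Per the review comments above, the same reset is also performed from apply_error_callback() for the plain ERROR path.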
[ { "msg_contents": "When systable_beginscan() and systable_beginscan_ordered() choose an \nindex scan, they remap the attribute numbers in the passed-in scan keys \nto the attribute numbers of the index, and then write those remapped \nattribute numbers back into the scan key passed by the caller. This \nsecond part is surprising and gratuitous. It means that a scan key \ncannot safely be used more than once (but it might sometimes work, \ndepending on circumstances). Also, there is no value in providing these \nremapped attribute numbers back to the caller, since they can't do \nanything with that.\n\nI propose to fix that by making a copy of the scan keys passed by the \ncaller and make the modifications there.\n\nIn order to prove to myself that there are no other cases where \ncaller-provided scan keys are modified, I went through and \nconst-qualified all the APIs. This works out correctly. Several levels \ndown in the stack, the access methods make their own copy of the scan \nkeys that they store in their scan descriptors, and they use those in \nnon-const-clean ways, but that's ok, that's their business. As far as \nthe top-level callers are concerned, they can rely on their scan keys to \nbe const after this.\n\nI'm not proposing this second patch for committing at this time, since \nthat would modify the public access method APIs in an incompatible way. \nI've made a proposal of a similar nature in [0]. At some point, it \nmight be worth batching these and other changes together and make the \nchange. I might come back to that later.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/14c31f4a-0347-0805-dce8-93a9072c05a5%40eisentraut.org\n\nWhile researching how the scan keys get copied around, I noticed that \nthe index access methods all use memmove() to make the above-mentioned \ncopy into their own scan descriptor. This is fine, but memmove() is \nusually only used when something special is going on that would prevent \nmemcpy() from working, which is not the case there. So to avoid the \nconfusion for future readers, I changed those to memcpy(). I suspect \nthat this code has been copied between the different index AM over time. \n (The nbtree version of this code is literally unchanged since July 1996.)", "msg_date": "Thu, 8 Aug 2024 08:46:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "On Thu, Aug 8, 2024 at 2:46 AM Peter Eisentraut <[email protected]> wrote:\n> When systable_beginscan() and systable_beginscan_ordered() choose an\n> index scan, they remap the attribute numbers in the passed-in scan keys\n> to the attribute numbers of the index, and then write those remapped\n> attribute numbers back into the scan key passed by the caller. This\n> second part is surprising and gratuitous. It means that a scan key\n> cannot safely be used more than once (but it might sometimes work,\n> depending on circumstances). 
Also, there is no value in providing these\n> remapped attribute numbers back to the caller, since they can't do\n> anything with that.\n>\n> I propose to fix that by making a copy of the scan keys passed by the\n> caller and make the modifications there.\n\nThis does have the disadvantage of adding more palloc overhead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:18:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Aug 8, 2024 at 2:46 AM Peter Eisentraut <[email protected]> wrote:\n>> I propose to fix that by making a copy of the scan keys passed by the\n>> caller and make the modifications there.\n\n> This does have the disadvantage of adding more palloc overhead.\n\nIt seems hard to believe that one more palloc + memcpy is going to be\nnoticeable in the context of an index scan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Aug 2024 15:28:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "On Thu, Aug 08, 2024 at 08:46:35AM +0200, Peter Eisentraut wrote:\n> When systable_beginscan() and systable_beginscan_ordered() choose an index\n> scan, they remap the attribute numbers in the passed-in scan keys to the\n> attribute numbers of the index, and then write those remapped attribute\n> numbers back into the scan key passed by the caller. This second part is\n> surprising and gratuitous. It means that a scan key cannot safely be used\n> more than once (but it might sometimes work, depending on circumstances).\n> Also, there is no value in providing these remapped attribute numbers back\n> to the caller, since they can't do anything with that.\n> \n> I propose to fix that by making a copy of the scan keys passed by the caller\n> and make the modifications there.\n\nNo objection, but this would obsolete at least some of these comments (the\ncatcache.c ones if nothing else):\n\n$ git grep -in scankey | grep -i copy\nsrc/backend/access/gist/gistscan.c:257: * Copy consistent support function to ScanKey structure instead\nsrc/backend/access/gist/gistscan.c:330: * Copy distance support function to ScanKey structure instead of\nsrc/backend/access/nbtree/nbtutils.c:281: ScanKey arrayKeyData; /* modified copy of scan->keyData */\nsrc/backend/access/nbtree/nbtutils.c:3303: * We copy the appropriate indoption value into the scankey sk_flags\nsrc/backend/access/nbtree/nbtutils.c:3318: * It's a bit ugly to modify the caller's copy of the scankey but in practice\nsrc/backend/access/spgist/spgscan.c:385: /* copy scankeys into local storage */\nsrc/backend/utils/cache/catcache.c:1474: * Ok, need to make a lookup in the relation, copy the scankey and\nsrc/backend/utils/cache/catcache.c:1794: * Ok, need to make a lookup in the relation, copy the scankey and\nsrc/backend/utils/cache/relfilenumbermap.c:192: /* copy scankey to local copy, it will be modified during the scan */\n\n> In order to prove to myself that there are no other cases where\n> caller-provided scan keys are modified, I went through and const-qualified\n> all the APIs. This works out correctly. 
Several levels down in the stack,\n> the access methods make their own copy of the scan keys that they store in\n> their scan descriptors, and they use those in non-const-clean ways, but\n> that's ok, that's their business. As far as the top-level callers are\n> concerned, they can rely on their scan keys to be const after this.\n\nWe do still have code mutating IndexScanDescData.keyData, but that's fine. We\nmust be copying function args to form IndexScanDescData.keyData, or else your\nproof patch would have noticed an assignment of const to non-const.\n\n\n", "msg_date": "Thu, 8 Aug 2024 21:05:46 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Thu, Aug 08, 2024 at 08:46:35AM +0200, Peter Eisentraut wrote:\n>> I propose to fix that by making a copy of the scan keys passed by the caller\n>> and make the modifications there.\n\n> No objection, but this would obsolete at least some of these comments (the\n> catcache.c ones if nothing else):\n\nThat ties into something I forgot to ask: aren't there any copy steps\nor other overhead that we could remove, given this new API constraint?\nThat would help address Robert's concern.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Aug 2024 00:55:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "On 09.08.24 06:55, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n>> On Thu, Aug 08, 2024 at 08:46:35AM +0200, Peter Eisentraut wrote:\n>>> I propose to fix that by making a copy of the scan keys passed by the caller\n>>> and make the modifications there.\n> \n>> No objection, but this would obsolete at least some of these comments (the\n>> catcache.c ones if nothing else):\n> \n> That ties into something I forgot to ask: aren't there any copy steps\n> or other overhead that we could remove, given this new API constraint?\n> That would help address Robert's concern.\n\nI added two more patches to the series here.\n\nFirst (or 0004), some additional cleanup for code that had to workaround \nsystable_beginscan() overwriting the scan keys, along the lines \nsuggested by Noah.\n\nSecond (or 0005), an alternative to palloc is to make the converted scan \nkeys a normal local variable. Then it's just a question of whether a \nsmaller palloc is preferred over an over-allocated local variable. 
I \nthink I still prefer the palloc version, but it doesn't matter much \neither way I think.", "msg_date": "Mon, 12 Aug 2024 09:44:02 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" }, { "msg_contents": "On 12.08.24 09:44, Peter Eisentraut wrote:\n> On 09.08.24 06:55, Tom Lane wrote:\n>> Noah Misch <[email protected]> writes:\n>>> On Thu, Aug 08, 2024 at 08:46:35AM +0200, Peter Eisentraut wrote:\n>>>> I propose to fix that by making a copy of the scan keys passed by \n>>>> the caller\n>>>> and make the modifications there.\n>>\n>>> No objection, but this would obsolete at least some of these comments \n>>> (the\n>>> catcache.c ones if nothing else):\n>>\n>> That ties into something I forgot to ask: aren't there any copy steps\n>> or other overhead that we could remove, given this new API constraint?\n>> That would help address Robert's concern.\n> \n> I added two more patches to the series here.\n> \n> First (or 0004), some additional cleanup for code that had to workaround \n> systable_beginscan() overwriting the scan keys, along the lines \n> suggested by Noah.\n> \n> Second (or 0005), an alternative to palloc is to make the converted scan \n> keys a normal local variable.  Then it's just a question of whether a \n> smaller palloc is preferred over an over-allocated local variable.  I \n> think I still prefer the palloc version, but it doesn't matter much \n> either way I think.\n\nLooks like the discussion here has settled, so I plan to go head with \npatches\n\n[PATCH v2 1/5] Don't overwrite scan key in systable_beginscan()\n[PATCH v2 3/5] Replace gratuitous memmove() with memcpy()\n[PATCH v2 4/5] Update some code that handled systable_beginscan()\n overwriting scan key\n\n(folding patch 4 into patch 1)\n\n\n\n", "msg_date": "Thu, 5 Sep 2024 17:43:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't overwrite scan key in systable_beginscan()" } ]
[ { "msg_contents": "Hi,\n\nI am working on using the read stream in autoprewarm. I observed a ~10%\nperformance gain with this change. The patch is attached.\n\nThe downside of the read stream approach is that a new read stream\nobject needs to be created for each database, relation and fork. I was\nwondering if this would cause a regression but it did not (at least\nbased on the results of my testing). Another downside could be the\ncode getting complicated.\n\nFor the testing,\n- I created 50 databases, each with 50 tables, and the\nsize of each table is 520KB.\n - patched: 51157 ms\n - master: 56769 ms\n- I created 5 databases, each with 1 table, and the size\nof each table is 3GB.\n - patched: 32679 ms\n - master: 36706 ms\n\nI put a debugging message with timing information in the\nautoprewarm_database_main() function, then ran autoprewarm 100 times\n(by restarting the server) and cleared the OS cache before each\nrestart. Also, I ensured that the block numbers of the buffers returned\nby the read stream API are correct. I am not sure if that much\ntesting is enough for this kind of change.\n\nAny feedback would be appreciated.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 8 Aug 2024 10:32:16 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Using read stream in autoprewarm" } ]
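For readers who have not used the read stream API referred to above, the general shape is a block-number callback plus a consume loop, roughly as sketched below. The struct and the way block numbers are fed are assumptions made for illustration, and the read_stream_* signatures are recalled from the v17-era API rather than taken from the patch:

    /* hypothetical callback state; the actual patch's struct may differ */
    typedef struct apw_read_stream_data
    {
        BlockInfoRecord *block_info;    /* sorted dump entries */
        int              pos;           /* next entry to hand out */
        int              last;          /* first entry past this fork */
    } apw_read_stream_data;

    static BlockNumber
    apw_read_stream_next_block(ReadStream *stream,
                               void *callback_private_data,
                               void *per_buffer_data)
    {
        apw_read_stream_data *p = callback_private_data;

        if (p->pos < p->last)
            return p->block_info[p->pos++].blocknum;
        return InvalidBlockNumber;      /* end of this stream */
    }

    /* ... in the per-fork prewarm loop ... */
    stream = read_stream_begin_relation(READ_STREAM_FULL, NULL, rel, forknum,
                                        apw_read_stream_next_block,
                                        &stream_data, 0);
    while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
        ReleaseBuffer(buf);
    read_stream_end(stream);

Creating one such stream per database/relation/fork is the extra setup cost the message above mentions as a potential downside.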
[ { "msg_contents": "Hello PostgreSQL Hackers,\n\nI am submitting a patch to add hooks for the functions\npg_total_relation_size and pg_indexes_size. These hooks allow for custom\nbehaviour to be injected into these functions, which can be useful for\nextensions and other custom PostgreSQL modifications.\n\nPatch details:\n\n - Adds pg_total_relation_size_hook and pg_indexes_size_hook\n - Modifies pg_total_relation_size and pg_indexes_size to call these\n hooks if they are set\n - Adds necessary type definitions and extern declarations\n\nThis feature is useful because it allows for more flexible and customizable\nbehaviour in relation size calculations, which can be particularly valuable\nfor extensions that need to account for additional storage outside of the\nstandard PostgreSQL mechanisms.\n\nThe patch is attached.\n\nThank you for considering this patch. I look forward to your feedback.\n\nKind regards,\nAbdoulaye Ba", "msg_date": "Thu, 8 Aug 2024 14:18:14 +0200", "msg_from": "Abdoulaye Ba <[email protected]>", "msg_from_op": true, "msg_subject": "PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "On 8/8/24 14:18, Abdoulaye Ba wrote:\n> Hello PostgreSQL Hackers,\n> \n> I am submitting a patch to add hooks for the functions\n> pg_total_relation_size and pg_indexes_size. These hooks allow for custom\n> behaviour to be injected into these functions, which can be useful for\n> extensions and other custom PostgreSQL modifications.\n> \n> Patch details: \n> \n> * Adds pg_total_relation_size_hook and pg_indexes_size_hook \n> * Modifies pg_total_relation_size and pg_indexes_size to call these\n> hooks if they are set \n> * Adds necessary type definitions and extern declarations\n> \n> This feature is useful because it allows for more flexible and\n> customizable behaviour in relation size calculations, which can be\n> particularly valuable for extensions that need to account for additional\n> storage outside of the standard PostgreSQL mechanisms.\n> \n> The patch is attached. \n> \n> Thank you for considering this patch. I look forward to your feedback.\n> \n\nHi,\n\nThanks for the patch. A couple comments:\n\n1) Unfortunately, it doesn't compile - it fails with errors like this:\n\nIn file included from ../../../../src/include/access/tableam.h:25,\n from detoast.c:18:\n../../../../src/include/utils/rel.h:711:51: error: unknown type name\n‘FunctionCallInfo’\n 711 | typedef Datum\n(*Pg_total_relation_size_hook_type)(FunctionCallInfo fcinfo);\n\nwhich happens because rel.h references FunctionCallInfo without\nincluding fmgr.h. I wonder how you tested the patch ...\n\n2) Also, I'm not sure why you have the \"Pg_\" at the beginning.\n\n3) I'm not sure why the patch first changes the return to add +1 and\nthen undoes that:\n\n PG_RETURN_INT64(size + 1);\n\nThat seems quite unnecessary. I wonder how you created the patch, seems\nyou just concatenated multiple patches.\n\n4) The patch has a mix of tabs and spaces. We don't do that.\n\n5) It would be really helpful if you could explain the motivation for\nthis. Not just \"make this customizable\" but what exactly you want to do\nin the hooks and why (presumably you have an extension).\n\n6) Isn't it a bit strange the patch modifies pg_total_relation_size()\nbut not pg_relation_size() or pg_table_size()? 
Am I missing something?\n\n7) You should add the patch to the next commitfest (2024-09) at\n\n https://commitfest.postgresql.org\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 8 Aug 2024 23:29:12 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "On Thu, 8 Aug 2024 at 23:29, Tomas Vondra <[email protected]> wrote:\n\n> On 8/8/24 14:18, Abdoulaye Ba wrote:\n> > Hello PostgreSQL Hackers,\n> >\n> > I am submitting a patch to add hooks for the functions\n> > pg_total_relation_size and pg_indexes_size. These hooks allow for custom\n> > behaviour to be injected into these functions, which can be useful for\n> > extensions and other custom PostgreSQL modifications.\n> >\n> > Patch details:\n> >\n> > * Adds pg_total_relation_size_hook and pg_indexes_size_hook\n> > * Modifies pg_total_relation_size and pg_indexes_size to call these\n> > hooks if they are set\n> > * Adds necessary type definitions and extern declarations\n> >\n> > This feature is useful because it allows for more flexible and\n> > customizable behaviour in relation size calculations, which can be\n> > particularly valuable for extensions that need to account for additional\n> > storage outside of the standard PostgreSQL mechanisms.\n> >\n> > The patch is attached.\n> >\n> > Thank you for considering this patch. I look forward to your feedback.\n> >\n>\n> Hi,\n>\n> Thanks for the patch. A couple comments:\n>\n> 1) Unfortunately, it doesn't compile - it fails with errors like this:\n>\n> In file included from ../../../../src/include/access/tableam.h:25,\n> from detoast.c:18:\n> ../../../../src/include/utils/rel.h:711:51: error: unknown type name\n> ‘FunctionCallInfo’\n> 711 | typedef Datum\n> (*Pg_total_relation_size_hook_type)(FunctionCallInfo fcinfo);\n>\n> which happens because rel.h references FunctionCallInfo without\n> including fmgr.h. I wonder how you tested the patch ...\n>\n> 2) Also, I'm not sure why you have the \"Pg_\" at the beginning.\n>\n> 3) I'm not sure why the patch first changes the return to add +1 and\n> then undoes that:\n>\n> PG_RETURN_INT64(size + 1);\n>\n> That seems quite unnecessary. I wonder how you created the patch, seems\n> you just concatenated multiple patches.\n>\n> 4) The patch has a mix of tabs and spaces. We don't do that.\n>\n> 5) It would be really helpful if you could explain the motivation for\n> this. Not just \"make this customizable\" but what exactly you want to do\n> in the hooks and why (presumably you have an extension).\n>\n> 6) Isn't it a bit strange the patch modifies pg_total_relation_size()\n> but not pg_relation_size() or pg_table_size()? 
Am I missing something?\n>\n> 7) You should add the patch to the next commitfest (2024-09) at\n>\n> https://commitfest.postgresql.org\n>\n>\n> regards\n>\n>\n>\n\n> Hi all,\n>\nHere's my follow-up based on the feedback received:\n>\n> 1.\n>\n> *Compilation Issue:*\n> I didn’t encounter any errors when compiling on my machine, but I’ll\n> review the environment and ensure fmgr.h is included where necessary\n> to avoid the issue.\n> 2.\n>\n> *Prefix \"Pg_\":*\n> I’ll remove the \"Pg_\" prefix as I see now that it’s unnecessary.\n> 3.\n>\n> *Return Value Change:*\n> The +1 in the return value was part of a test that I forgot to remove.\n> I’ll clean that up in the next version of the patch.\n> 4.\n>\n> *Tabs and Spaces:*\n> I’ll correct the indentation to use tabs consistently, as per the\n> project’s coding standards.\n> 5.\n>\n> *Motivation:*\n> The hooks are intended to add support for calculating the Tantivy\n> index size, in line with the need outlined in this issue\n> <https://github.com/paradedb/paradedb/issues/1061>. This will allow us\n> to integrate additional index sizes into PostgreSQL’s built-in size\n> functions.\n> 6.\n>\n> *Additional Hooks:*\n> I’ll add hooks for pg_relation_size() and pg_table_size() for\n> consistency.\n>\n> I’ll make these changes and resubmit the patch soon. Thanks again for your\n> guidance!\n>\n> Best regards,\n>\n\nOn Thu, 8 Aug 2024 at 23:29, Tomas Vondra <[email protected]> wrote:On 8/8/24 14:18, Abdoulaye Ba wrote:\n> Hello PostgreSQL Hackers,\n> \n> I am submitting a patch to add hooks for the functions\n> pg_total_relation_size and pg_indexes_size. These hooks allow for custom\n> behaviour to be injected into these functions, which can be useful for\n> extensions and other custom PostgreSQL modifications.\n> \n> Patch details: \n> \n>   * Adds pg_total_relation_size_hook and pg_indexes_size_hook \n>   * Modifies pg_total_relation_size and pg_indexes_size to call these\n>     hooks if they are set \n>   * Adds necessary type definitions and extern declarations\n> \n> This feature is useful because it allows for more flexible and\n> customizable behaviour in relation size calculations, which can be\n> particularly valuable for extensions that need to account for additional\n> storage outside of the standard PostgreSQL mechanisms.\n> \n> The patch is attached. \n> \n> Thank you for considering this patch. I look forward to your feedback.\n> \n\nHi,\n\nThanks for the patch. A couple comments:\n\n1) Unfortunately, it doesn't compile - it fails with errors like this:\n\nIn file included from ../../../../src/include/access/tableam.h:25,\n                 from detoast.c:18:\n../../../../src/include/utils/rel.h:711:51: error: unknown type name\n‘FunctionCallInfo’\n  711 | typedef Datum\n(*Pg_total_relation_size_hook_type)(FunctionCallInfo fcinfo);\n\nwhich happens because rel.h references FunctionCallInfo without\nincluding fmgr.h. I wonder how you tested the patch ...\n\n2) Also, I'm not sure why you have the \"Pg_\" at the beginning.\n\n3) I'm not sure why the patch first changes the return to add +1 and\nthen undoes that:\n\n   PG_RETURN_INT64(size + 1);\n\nThat seems quite unnecessary. I wonder how you created the patch, seems\nyou just concatenated multiple patches.\n\n4) The patch has a mix of tabs and spaces. We don't do that.\n\n5) It would be really helpful if you could explain the motivation for\nthis. 
Not just \"make this customizable\" but what exactly you want to do\nin the hooks and why (presumably you have an extension).\n\n6) Isn't it a bit strange the patch modifies pg_total_relation_size()\nbut not pg_relation_size() or pg_table_size()? Am I missing something?\n\n7) You should add the patch to the next commitfest (2024-09) at\n\n   https://commitfest.postgresql.org\n\n\nregards\n  Hi all, Here's my follow-up based on the feedback received:Compilation Issue:I didn’t encounter any errors when compiling on my machine, but I’ll review the environment and ensure fmgr.h is included where necessary to avoid the issue.Prefix \"Pg_\":I’ll remove the \"Pg_\" prefix as I see now that it’s unnecessary.Return Value Change:The +1 in the return value was part of a test that I forgot to remove. I’ll clean that up in the next version of the patch.Tabs and Spaces:I’ll correct the indentation to use tabs consistently, as per the project’s coding standards.Motivation:The hooks are intended to add support for calculating the Tantivy index size, in line with the need outlined in this issue. This will allow us to integrate additional index sizes into PostgreSQL’s built-in size functions.Additional Hooks:I’ll add hooks for pg_relation_size() and pg_table_size() for consistency.I’ll make these changes and resubmit the patch soon. Thanks again for your guidance!Best regards,", "msg_date": "Fri, 9 Aug 2024 00:20:41 +0200", "msg_from": "Abdoulaye Ba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "On 8/8/24 2:18 PM, Abdoulaye Ba wrote:\n> I am submitting a patch to add hooks for the functions \n> pg_total_relation_size and pg_indexes_size. These hooks allow for custom \n> behaviour to be injected into these functions, which can be useful for \n> extensions and other custom PostgreSQL modifications.\n\nWhat kind of extensions do you imagine would use this hook? If it is for \ncustom index AMs I think adding this to the index AM interface would \nmake more sense than just adding a generic callback hook.\n\nAndreas\n\n\n\n", "msg_date": "Fri, 9 Aug 2024 18:10:27 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "On Fri, 9 Aug 2024 at 18:10, Andreas Karlsson <[email protected]> wrote:\n\n> On 8/8/24 2:18 PM, Abdoulaye Ba wrote:\n> > I am submitting a patch to add hooks for the functions\n> > pg_total_relation_size and pg_indexes_size. These hooks allow for custom\n> > behaviour to be injected into these functions, which can be useful for\n> > extensions and other custom PostgreSQL modifications.\n>\n> What kind of extensions do you imagine would use this hook? If it is for\n> custom index AMs I think adding this to the index AM interface would\n> make more sense than just adding a generic callback hook.\n>\n> Andreas\n>\n\n\n> The primary use case for this hook is to allow extensions to account for\n> additional storage mechanisms that are not covered by the default\n> PostgreSQL relation size calculations. For instance, in our project, we are\n> working with an external indexing system (Tantivy) that maintains\n> additional data structures outside the standard PostgreSQL storage. 
This\n> hook allows us to include the size of these additional structures in the\n> total relation size calculations.\n>\n> While I understand your suggestion about custom index AMs, the intent\n> behind this hook is broader. It is not limited to custom index types but\n> can also be used for other forms of external storage that need to be\n> accounted for in relation size calculations. This is why a generic callback\n> hook was chosen over extending the index AM interface.\n>\n> However, if there is a consensus that such a hook would be better suited\n> within the index AM interface for cases involving custom index storage, I'm\n> open to discussing this further and exploring how it could be integrated\n> more tightly with the existing PostgreSQL AM framework.\n>\n\nOn Fri, 9 Aug 2024 at 18:10, Andreas Karlsson <[email protected]> wrote:On 8/8/24 2:18 PM, Abdoulaye Ba wrote:\n> I am submitting a patch to add hooks for the functions \n> pg_total_relation_size and pg_indexes_size. These hooks allow for custom \n> behaviour to be injected into these functions, which can be useful for \n> extensions and other custom PostgreSQL modifications.\n\nWhat kind of extensions do you imagine would use this hook? If it is for \ncustom index AMs I think adding this to the index AM interface would \nmake more sense than just adding a generic callback hook.\n\nAndreas The primary use case for this hook is to allow extensions to account for additional storage mechanisms that are not covered by the default PostgreSQL relation size calculations. For instance, in our project, we are working with an external indexing system (Tantivy) that maintains additional data structures outside the standard PostgreSQL storage. This hook allows us to include the size of these additional structures in the total relation size calculations.While I understand your suggestion about custom index AMs, the intent behind this hook is broader. It is not limited to custom index types but can also be used for other forms of external storage that need to be accounted for in relation size calculations. This is why a generic callback hook was chosen over extending the index AM interface.However, if there is a consensus that such a hook would be better suited within the index AM interface for cases involving custom index storage, I'm open to discussing this further and exploring how it could be integrated more tightly with the existing PostgreSQL AM framework.", "msg_date": "Fri, 9 Aug 2024 18:59:37 +0200", "msg_from": "Abdoulaye Ba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "On 8/9/24 6:59 PM, Abdoulaye Ba wrote:> The primary use case for \nthis hook is to allow extensions to account\n> for additional storage mechanisms that are not covered by the\n> default PostgreSQL relation size calculations. For instance, in our\n> project, we are working with an external indexing system (Tantivy)\n> that maintains additional data structures outside the standard\n> PostgreSQL storage. This hook allows us to include the size of these\n> additional structures in the total relation size calculations.\n> \n> While I understand your suggestion about custom index AMs, the\n> intent behind this hook is broader. It is not limited to custom\n> index types but can also be used for other forms of external storage\n> that need to be accounted for in relation size calculations. 
This is\n> why a generic callback hook was chosen over extending the index AM\n> interface.\n> \n> However, if there is a consensus that such a hook would be better\n> suited within the index AM interface for cases involving custom\n> index storage, I'm open to discussing this further and exploring how\n> it could be integrated more tightly with the existing PostgreSQL AM\n> framework.\n\nYeah, I strongly suspected it was ParadeDB. :)\n\nI am only one developer but I really do not like solving this with a \nhook, instead I think the proper solution is to integrate this properly \nwith custom AMs and storage managers. I think we should do it properly \nor not at all.\n\nAndreas\n\n\n", "msg_date": "Wed, 28 Aug 2024 17:53:36 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" }, { "msg_contents": "\n\nOn 8/28/24 17:53, Andreas Karlsson wrote:\n> On 8/9/24 6:59 PM, Abdoulaye Ba wrote:>     The primary use case for\n> this hook is to allow extensions to account\n>>     for additional storage mechanisms that are not covered by the\n>>     default PostgreSQL relation size calculations. For instance, in our\n>>     project, we are working with an external indexing system (Tantivy)\n>>     that maintains additional data structures outside the standard\n>>     PostgreSQL storage. This hook allows us to include the size of these\n>>     additional structures in the total relation size calculations.\n>>\n>>     While I understand your suggestion about custom index AMs, the\n>>     intent behind this hook is broader. It is not limited to custom\n>>     index types but can also be used for other forms of external storage\n>>     that need to be accounted for in relation size calculations. This is\n>>     why a generic callback hook was chosen over extending the index AM\n>>     interface.\n>>\n>>     However, if there is a consensus that such a hook would be better\n>>     suited within the index AM interface for cases involving custom\n>>     index storage, I'm open to discussing this further and exploring how\n>>     it could be integrated more tightly with the existing PostgreSQL AM\n>>     framework.\n> \n> Yeah, I strongly suspected it was ParadeDB. :)\n> \n> I am only one developer but I really do not like solving this with a\n> hook, instead I think the proper solution is to integrate this properly\n> with custom AMs and storage managers. I think we should do it properly\n> or not at all.\n> \n\nNot sure. I'd agree if the index was something that could be implemented\nthrough index AM - then that'd be the way to go. It might require some\nimprovements to the index AM to use the correct index size, haven't checked.\n\nBut it seems pg_search (which AFAIK is what paradedb uses to integrate\ntantivy indexes) uses the term \"index\" for something very different. I'm\nnot sure that's something that could be conveniently implemented as\nindex AM, but I haven't checked. But that just raises the question why\nshould that be included in pg_total_relation_size and pg_indexes_size at\nall, if it's not what we'd call an index.\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Wed, 28 Aug 2024 20:01:59 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add hooks for pg_total_relation_size and pg_indexes_size" } ]
[ { "msg_contents": "I noticed that the \"WAL reliability\" documentation says that we use CRC-32\nfor WAL records and two-phase state files, but we've actually used CRC-32C\nsince v9.5 (commit 5028f22). I've attached a short patch to fix this that\nI think should be back-patched to all supported versions.\n\nI've attached a second patch that standardizes how we refer to these kinds\nof algorithms in our docs. Specifically, it adds dashes (e.g., \"CRC-32C\"\ninstead of \"CRC32C\"). Wikipedia uses this style pretty consistently [0]\n[1] [2], and so I think we should, too. I don't think this one needs to be\nback-patched.\n\nThoughts?\n\n[0] https://en.wikipedia.org/wiki/SHA-1\n[1] https://en.wikipedia.org/wiki/SHA-2\n[2] https://en.wikipedia.org/wiki/Cyclic_redundancy_check\n\n-- \nnathan", "msg_date": "Thu, 8 Aug 2024 12:51:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "fix CRC algorithm in WAL reliability docs" }, { "msg_contents": "On Thu, Aug 8, 2024 at 1:51 PM Nathan Bossart <[email protected]> wrote:\n> I noticed that the \"WAL reliability\" documentation says that we use CRC-32\n> for WAL records and two-phase state files, but we've actually used CRC-32C\n> since v9.5 (commit 5028f22). I've attached a short patch to fix this that\n> I think should be back-patched to all supported versions.\n>\n> I've attached a second patch that standardizes how we refer to these kinds\n> of algorithms in our docs. Specifically, it adds dashes (e.g., \"CRC-32C\"\n> instead of \"CRC32C\"). Wikipedia uses this style pretty consistently [0]\n> [1] [2], and so I think we should, too. I don't think this one needs to be\n> back-patched.\n>\n> Thoughts?\n\nLGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:13:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fix CRC algorithm in WAL reliability docs" }, { "msg_contents": "On Thu, Aug 08, 2024 at 03:13:50PM -0400, Robert Haas wrote:\n> LGTM.\n\nCommitted, thanks.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 9 Aug 2024 13:17:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fix CRC algorithm in WAL reliability docs" } ]
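For reference, the algorithm the corrected wording points to is the CRC-32C implementation behind the pg_crc32c macros. Schematically, and simplified from the real record-assembly code, a WAL record's checksum is computed roughly like this (record_data and record_data_len stand in for the assembled record payload):

    pg_crc32c   rdata_crc;

    INIT_CRC32C(rdata_crc);
    COMP_CRC32C(rdata_crc, record_data, record_data_len);      /* payload */
    COMP_CRC32C(rdata_crc, rechdr, offsetof(XLogRecord, xl_crc));
    FIN_CRC32C(rdata_crc);
    rechdr->xl_crc = rdata_crc;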
[ { "msg_contents": "The collation cache, which maps collation oids to pg_locale_t objects,\nhas a few longstanding issues:\n\n1. Using TopMemoryContext too much, with potential for leaks on error\npaths.\n\n2. Creating collators (UCollator for ICU or locale_t for libc) that can\nleak on some error paths. For instance, the following will leak the\nresult of newlocale():\n\n create collation c2 (provider=libc,\n locale='C.utf8', version='bogus');\n\n3. Not invalidating the cache. Collations don't change in a way that\nmatters during the lifetime of a backend, so the main problem is DROP\nCOLLATION. That can leave dangling entries in the cache until the\nbackend exits, and perhaps be a problem if there's OID wraparound.\n\nThe patches make use of resource owners for problems #2 and #3. There\naren't a lot of examples where resource owners are used in this way, so\nI'd appreciate feedback on whether this is a reasonable thing to do or\nnot. Does it feel over-engineered? We can solve these problems by\nrearranging the code to avoid problem #2 and iterating through the hash\ntable for problem #3, but using resource owners felt cleaner to me.\n\nThese problems exist in all supported branches, but the fixes are\nsomewhat invasive so I'm not inclined to backport them unless someone\nthinks the problems are serious enough.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 08 Aug 2024 12:24:28 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "[18] Fix a few issues with the collation cache" }, { "msg_contents": "On Thu, 2024-08-08 at 12:24 -0700, Jeff Davis wrote:\n> The collation cache, which maps collation oids to pg_locale_t\n> objects,\n> has a few longstanding issues:\n\nHere's a patch set v2.\n\nI changed it so that the pg_locale_t itself a resource kind, rather\nthan having separate locale_t and UCollator resource kinds. That\nrequires a bit more care to make sure that the pg_locale_t can be\ninitialized without leaking the locale_t or UCollator, but worked out\nto be simpler overall.\n\nA potential benefit of these changes is that, for eventual support of\nmulti-lib or an extension hook, improvements in the API here will make\nthings less error-prone.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 14 Aug 2024 16:30:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [18] Fix a few issues with the collation cache" }, { "msg_contents": "On Wed, 2024-08-14 at 16:30 -0700, Jeff Davis wrote:\n> On Thu, 2024-08-08 at 12:24 -0700, Jeff Davis wrote:\n> > The collation cache, which maps collation oids to pg_locale_t\n> > objects,\n> > has a few longstanding issues:\n> \n> Here's a patch set v2.\n\nUpdated and rebased.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 20 Sep 2024 17:28:48 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [18] Fix a few issues with the collation cache" } ]
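The resource-owner mechanism referred to above uses the extensible resource owner API. A minimal sketch of "making the pg_locale_t itself a resource kind" might look like the following, where locale_resowner_desc is assumed to be a ResourceOwnerDesc whose release callback closes the underlying UCollator or locale_t, and create_pg_locale() is a hypothetical constructor (names and placement here are assumptions, not the patch itself):

    /* while constructing a pg_locale_t that is not yet in the collation cache */
    ResourceOwnerEnlarge(CurrentResourceOwner);
    locale = create_pg_locale(collid);      /* may open a UCollator / locale_t */
    ResourceOwnerRemember(CurrentResourceOwner,
                          PointerGetDatum(locale), &locale_resowner_desc);

    /* ... on ERROR before this point, the resource owner releases it ... */

    /* once the cache entry owns the object, stop tracking it */
    ResourceOwnerForget(CurrentResourceOwner,
                        PointerGetDatum(locale), &locale_resowner_desc);

This addresses the leak shown with the CREATE COLLATION example above, while invalidating cache entries on DROP COLLATION is handled separately in the patch set.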
[ { "msg_contents": "In dbcommands.c function createdb(), there are several sets of very \nsimilar local variable names, such as \"downer\" and \"dbowner\", which is \nvery confusing and error-prone. The first set are the DefElem nodes \nfrom the parser, the second set are the local variables with the values \nextracted from them. This patch renames the former to \"ownerEl\" and so \non, similar to collationcmds.c and typecmds.c, to improve clarity.", "msg_date": "Fri, 9 Aug 2024 09:21:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Variable renaming in dbcommands.c" }, { "msg_contents": "> On 9 Aug 2024, at 09:21, Peter Eisentraut <[email protected]> wrote:\n\n> In dbcommands.c function createdb(), there are several sets of very similar local variable names, such as \"downer\" and \"dbowner\", which is very confusing and error-prone. The first set are the DefElem nodes from the parser, the second set are the local variables with the values extracted from them. This patch renames the former to \"ownerEl\" and so on, similar to collationcmds.c and typecmds.c, to improve clarity.\n\nNo objections, patch LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 9 Aug 2024 09:43:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Variable renaming in dbcommands.c" }, { "msg_contents": "On 09.08.24 09:43, Daniel Gustafsson wrote:\n>> On 9 Aug 2024, at 09:21, Peter Eisentraut <[email protected]> wrote:\n> \n>> In dbcommands.c function createdb(), there are several sets of very similar local variable names, such as \"downer\" and \"dbowner\", which is very confusing and error-prone. The first set are the DefElem nodes from the parser, the second set are the local variables with the values extracted from them. This patch renames the former to \"ownerEl\" and so on, similar to collationcmds.c and typecmds.c, to improve clarity.\n> \n> No objections, patch LGTM.\n\ncommitted\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 07:14:26 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Variable renaming in dbcommands.c" } ]
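To make the rename concrete: createdb() keeps one set of variables for the parser's DefElem nodes and another for the values extracted from them, and the patch gives the DefElem set the "El" suffix used in collationcmds.c and typecmds.c. An abridged illustration (only two of the many options shown; the extracted-value variables keep their names):

    DefElem    *ownerEl = NULL;             /* was: downer */
    DefElem    *tablespacenameEl = NULL;    /* was: dtablespacename */
    char       *dbowner = NULL;             /* extracted value, unchanged */

    foreach(option, parameters)
    {
        DefElem    *defel = (DefElem *) lfirst(option);

        if (strcmp(defel->defname, "owner") == 0)
            ownerEl = defel;
        else if (strcmp(defel->defname, "tablespace") == 0)
            tablespacenameEl = defel;
        /* ... remaining options ... */
    }

    if (ownerEl && ownerEl->arg)
        dbowner = defGetString(ownerEl);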
[ { "msg_contents": "Hi,\n\nHolger Jacobs complained in pgsql-admin that in v17, if you have the ICU\ndevelopment libraries installed but not pkg-config, you get a somewhat\nunhelpful error message about ICU not being present:\n\n|checking for pkg-config... no\n|checking whether to build with ICU support... yes\n|checking for icu-uc icu-i18n... no\n|configure: error: ICU library not found\n|If you have ICU already installed, see config.log for details on the\n|failure. It is possible the compiler isn't looking in the proper directory.\n|Use --without-icu to disable ICU support.\n\nThe attached patch improves things to that:\n\n|checking for pkg-config... no\n|checking whether to build with ICU support... yes\n|configure: error: ICU library not found\n|The ICU library could not be found because pkg-config is not available, see\n|config.log for details on the failure. If ICU is installed, the variables\n|ICU_CFLAGS and ICU_LIBS can be set explicitly in this case, or use\n|--without-icu to disable ICU support.\n\n\nMichael\n\n\n", "msg_date": "Fri, 9 Aug 2024 10:16:28 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "Meh, forgot the attachment. Also, this should be backpatched to v17 if\naccepted.\n\n\nMichael", "msg_date": "Fri, 9 Aug 2024 10:24:00 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "On 09/08/2024 11:16, Michael Banck wrote:\n> Hi,\n> \n> Holger Jacobs complained in pgsql-admin that in v17, if you have the ICU\n> development libraries installed but not pkg-config, you get a somewhat\n> unhelpful error message about ICU not being present:\n> \n> |checking for pkg-config... no\n> |checking whether to build with ICU support... yes\n> |checking for icu-uc icu-i18n... no\n> |configure: error: ICU library not found\n> |If you have ICU already installed, see config.log for details on the\n> |failure. It is possible the compiler isn't looking in the proper directory.\n> |Use --without-icu to disable ICU support.\n> \n> The attached patch improves things to that:\n> \n> |checking for pkg-config... no\n> |checking whether to build with ICU support... yes\n> |configure: error: ICU library not found\n> |The ICU library could not be found because pkg-config is not available, see\n> |config.log for details on the failure. If ICU is installed, the variables\n> |ICU_CFLAGS and ICU_LIBS can be set explicitly in this case, or use\n> |--without-icu to disable ICU support.\n\nHmm, if that's a good change, shouldn't we do it for all libraries that \nwe try to find with pkg-config?\n\nI'm surprised the pkg.m4 module doesn't provide a nice error message \nalready if pkg-config is not found. I can see some messages like that in \npkg.m4. 
Why are they not printed?\n\n> Also, this should be backpatched to v17 if accepted.\nDid something change here in v17?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 9 Aug 2024 11:59:12 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "Hi,\n\nadding Jeff to CC as he changed the way ICU configure detection was done\nin fcb21b3.\n\nOn Fri, Aug 09, 2024 at 11:59:12AM +0300, Heikki Linnakangas wrote:\n> On 09/08/2024 11:16, Michael Banck wrote:\n> > Hi,\n> > \n> > Holger Jacobs complained in pgsql-admin that in v17, if you have the ICU\n> > development libraries installed but not pkg-config, you get a somewhat\n> > unhelpful error message about ICU not being present:\n> > \n> > |checking for pkg-config... no\n> > |checking whether to build with ICU support... yes\n> > |checking for icu-uc icu-i18n... no\n> > |configure: error: ICU library not found\n> > |If you have ICU already installed, see config.log for details on the\n> > |failure. It is possible the compiler isn't looking in the proper directory.\n> > |Use --without-icu to disable ICU support.\n> > \n> > The attached patch improves things to that:\n> > \n> > |checking for pkg-config... no\n> > |checking whether to build with ICU support... yes\n> > |configure: error: ICU library not found\n> > |The ICU library could not be found because pkg-config is not available, see\n> > |config.log for details on the failure. If ICU is installed, the variables\n> > |ICU_CFLAGS and ICU_LIBS can be set explicitly in this case, or use\n> > |--without-icu to disable ICU support.\n> \n> Hmm, if that's a good change, shouldn't we do it for all libraries that we\n> try to find with pkg-config?\n\nHrm, probably. I think the main difference is that libicu is checked by\ndefault (actually since v16, see below), but the others are not, so\nmaybe it is less of a problem there? \n\nSo I had a further look and the only other pkg-config checks seem to be\nxml2, lz4 and zstd. From those, lz4 and zstd do not have a custom\nAC_MSG_ERROR so there you get something more helpful like this:\n\n|checking for pkg-config... no\n[...]\n|checking whether to build with LZ4 support... yes\n|checking for liblz4... no\n|configure: error: in `/home/mbanck/repos/postgres/postgresql/build':\n|configure: error: The pkg-config script could not be found or is too old. Make sure it\n|is in your PATH or set the PKG_CONFIG environment variable to the full\n|path to pkg-config.\n|\n|Alternatively, you may set the environment variables LZ4_CFLAGS\n|and LZ4_LIBS to avoid the need to call pkg-config.\n|See the pkg-config man page for more details.\n|\n|To get pkg-config, see <http://pkg-config.freedesktop.org/>.\n|See `config.log' for more details\n\nThe XML check sets the error as no-op because there is xml2-config which\nis usually used:\n\n| if test -z \"$XML2_CONFIG\" -a -n \"$PKG_CONFIG\"; then\n| PKG_CHECK_MODULES(XML2, [libxml-2.0 >= 2.6.23],\n| [have_libxml2_pkg_config=yes], [# do nothing])\n[...]\n|if test \"$with_libxml\" = yes ; then\n| AC_CHECK_LIB(xml2, xmlSaveToBuffer, [], [AC_MSG_ERROR([library 'xml2' (version >= 2.6.23) is required for XML support])])\n|fi\n\nSo if both pkg-config and libxml2-dev are missing this results in:\n\n|checking for pkg-config... no\n[...]\n|checking whether to build with XML support... yes\n|checking for xml2-config... no\n[...]\n|checking for xmlSaveToBuffer in -lxml2... 
no\n|configure: error: library 'xml2' (version >= 2.6.23) is required for XML support\n\nWhereas if only pkg-config is missing, configure goes through fine:\n\n|checking for pkg-config... no\n[...]\n|checking whether to build with XML support... yes\n|checking for xml2-config... /usr/bin/xml2-config\n[...]\n|checking for xmlSaveToBuffer in -lxml2... yes\n\nSo to summarize, I think the others are fine.\n\n> I'm surprised the pkg.m4 module doesn't provide a nice error message already\n> if pkg-config is not found. I can see some messages like that in pkg.m4. Why\n> are they not printed?\n> \n> > Also, this should be backpatched to v17 if accepted.\n> Did something change here in v17?\n\nI was mistaken, things changed in v16 when ICU was checked for by\ndefault and the explicit error message was added. Before, ICU behaved\nlike LZ4/ZST now, i.e. if you added --with-icu explicitly you would get\nthe error about pkg-config not being there.\n\nSo maybe the better change is to just remove the explicit error message\nagain and depend on PKG_CHECK_MODULES erroring out helpfully? The\ndownside would be that the hint about specifying --without-icu to get\naround this would disappear, but I think somebody building from source\ncan figure that out more easily than the subtle issue that pkg-config is\nnot installed. This would lead to this:\n\n|checking for pkg-config... no\n|checking whether to build with ICU support... yes\n|checking for icu-uc icu-i18n... no\n|configure: error: in `/home/mbanck/repos/postgres/postgresql/build':\n|configure: error: The pkg-config script could not be found or is too old. Make sure it\n|is in your PATH or set the PKG_CONFIG environment variable to the full\n|path to pkg-config.\n|\n|Alternatively, you may set the environment variables ICU_CFLAGS\n|and ICU_LIBS to avoid the need to call pkg-config.\n|See the pkg-config man page for more details.\n|\n|To get pkg-config, see <http://pkg-config.freedesktop.org/>.\n|See `config.log' for more details\n\nI have attached a new patch for that.\n\nAdditionally, going forward, v18+ could just mandate pkg-config to be\ninstalled, but I would assume this is not something we would want to\nchange in the back branches.\n\n(I also haven't looked how Meson is handling this)\n\n\nMichael", "msg_date": "Fri, 9 Aug 2024 11:44:04 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "On 09.08.24 10:59, Heikki Linnakangas wrote:\n> On 09/08/2024 11:16, Michael Banck wrote:\n>> Hi,\n>>\n>> Holger Jacobs complained in pgsql-admin that in v17, if you have the ICU\n>> development libraries installed but not pkg-config, you get a somewhat\n>> unhelpful error message about ICU not being present:\n>>\n>> |checking for pkg-config... no\n>> |checking whether to build with ICU support... yes\n>> |checking for icu-uc icu-i18n... no\n>> |configure: error: ICU library not found\n>> |If you have ICU already installed, see config.log for details on the\n>> |failure.  It is possible the compiler isn't looking in the proper \n>> directory.\n>> |Use --without-icu to disable ICU support.\n>>\n>> The attached patch improves things to that:\n>>\n>> |checking for pkg-config... no\n>> |checking whether to build with ICU support... yes\n>> |configure: error: ICU library not found\n>> |The ICU library could not be found because pkg-config is not \n>> available, see\n>> |config.log for details on the failure.  
If ICU is installed, the \n>> variables\n>> |ICU_CFLAGS and ICU_LIBS can be set explicitly in this case, or use\n>> |--without-icu to disable ICU support.\n> \n> Hmm, if that's a good change, shouldn't we do it for all libraries that \n> we try to find with pkg-config?\n> \n> I'm surprised the pkg.m4 module doesn't provide a nice error message \n> already if pkg-config is not found. I can see some messages like that in \n> pkg.m4. Why are they not printed?\n\nBecause we override it with our own message. If we don't supply our own \nmessage, we get the built-in ones. Might be worth trying whether the \nbuilt-in ones are better? (But they won't know about higher-level \noptions like \"--without-icu\", so they won't be able to give suggestions \nlike that.)\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 22:13:27 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "On Fri, 2024-08-09 at 11:44 +0200, Michael Banck wrote:\n> So maybe the better change is to just remove the explicit error\n> message\n> again and depend on PKG_CHECK_MODULES erroring out helpfully?  The\n> downside would be that the hint about specifying --without-icu to get\n> around this would disappear, but I think somebody building from\n> source\n> can figure that out more easily than the subtle issue that pkg-config\n> is\n> not installed.\n\nThat looks good to me. Does anyone have a different opinion? If not,\nI'll go ahead and commit (but not backport) this change.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 18:05:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 14, 2024 at 06:05:19PM -0700, Jeff Davis wrote:\n> That looks good to me. Does anyone have a different opinion? If not,\n> I'll go ahead and commit (but not backport) this change.\n\nWhat is the rationale not to backpatch this? The error message changes,\nbut we do not translate configure output, so is this a problem/project\npolicy to never change configure output in the back-branches?\n\nIf the change is too invasive, would something like the initial patch I\nsuggested (i.e., in the absense of pkg-config, add a more targetted\nerror message) be acceptable for the back-branches?\n\n\nMichael\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:20:28 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" }, { "msg_contents": "On 15.08.24 09:20, Michael Banck wrote:\n> On Wed, Aug 14, 2024 at 06:05:19PM -0700, Jeff Davis wrote:\n>> That looks good to me. Does anyone have a different opinion? If not,\n>> I'll go ahead and commit (but not backport) this change.\n> \n> What is the rationale not to backpatch this? The error message changes,\n> but we do not translate configure output, so is this a problem/project\n> policy to never change configure output in the back-branches?\n> \n> If the change is too invasive, would something like the initial patch I\n> suggested (i.e., in the absense of pkg-config, add a more targetted\n> error message) be acceptable for the back-branches?\n\nBut it's not just changing an error message, it's changing the logic \nthat leads to the error message. Have we really thought through all the \ncombinations here? I don't know. 
So committing for master and then \nseeing if there is any further feedback seems most prudent.\n\n(I'm not endorsing either patch version here, just commenting on the \nprocess.)\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 15:22:00 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve error message for ICU libraries if pkg-config is absent" } ]
[ { "msg_contents": "Hello everyone,\n\nThe ALT Linux Team has recently initiated a focused testing effort on PostgreSQL, with an emphasis on its security aspects.\n\nAs part of this effort, we conducted static analysis using Svace, which raised some questions regarding the use of the Assert() function.\nWe were unable to find clear answers in the documentation or previous discussions and would greatly appreciate your insights,\nas these will influence how we approach future patch submissions:\n\n1. General purpose of Assert() function:\n- Is the primary purpose of the Assert() function to:\n - Inform developers of the assumptions made in the code,\n - Facilitate testing of new builds with assertions enabled,\n - Accelerate development by temporarily placing assertions where defensive code may later be required,\n - Or does it serve some other purpose?\n\n2. Assertions and defensive code:\nWe have observed patches where assertions were added to defensive code (e.g., [1]) and where defensive code was added to assertions.\nIs it generally advisable and encouraged to incorporate defensive code into assertions' assumptions?\n\nFor instance, we encountered cases where a null pointer dereference occurs immediately after an assertion that the pointer is not null.\nIn assertion-enabled builds, this results in an assertion failure, while in non-assert builds, it leads to a dereference, which we aim to avoid.\nHowever, it is unclear to us whether the use of an assertion in such cases indicates that this flaw is known and will not be addressed specifically,\nor if additional protective code should be introduced.\n\nA clearer understanding of whether assertions should signal potential failure points that might later be rewritten or protected would assist us\nin quickly developing fixes for such cases, particularly where we believe the issue could occur in practice.\n\n3. Approach to fixing flaws:\nHow should we generally fix the flaws - by adding protection code, or by adding assertions?\nI previously submitted two patches and encountered some confusion based on the feedback: in one instance,\nI added protective code but was asked whether an assertion would suffice [2],\nand in another, I added an assertion but was questioned on its necessity, given that it would cause a failure regardless, which I agreed with [3].\n\n\nYour guidance on these matters would be invaluable in helping us align our patch submissions with the community’s best practices.\n\nThank you in advance for your time and assistance.\n\n1. https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=30aaab26e52144097a1a5bbb0bb66ea1ebc0cb81\n2. https://www.postgresql.org/message-id/e22df993-cdb4-4d0a-b629-42211ebed582%40altlinux.org\n3. https://www.postgresql.org/message-id/6d0323c3-3f5d-4137-af73-98a5ab90e77c%40altlinux.org\n\n-- \nBest regards,\nAlexander Kuznetsov\n\n\n", "msg_date": "Fri, 9 Aug 2024 13:01:41 +0300", "msg_from": "Alexander Kuznetsov <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL's approach to assertion usage: seeking best practices" }, { "msg_contents": "Hi Alexander,\n\n> As part of this effort, we conducted static analysis using Svace, which raised some questions regarding the use of the Assert() function.\n> We were unable to find clear answers in the documentation or previous discussions and would greatly appreciate your insights,\n> as these will influence how we approach future patch submissions:\n>\n> 1. 
General purpose of Assert() function:\n> - Is the primary purpose of the Assert() function to:\n> - Inform developers of the assumptions made in the code,\n> - Facilitate testing of new builds with assertions enabled,\n> - Accelerate development by temporarily placing assertions where defensive code may later be required,\n> - Or does it serve some other purpose?\n>\n> 2. Assertions and defensive code:\n> [...]\n>\n> 3. Approach to fixing flaws:\n> [...]\n\nAssert() describes an invariant - e.g. x should never be NULL here.\nIt's present only in the debug builds. It has two purposes. 1. Check\nthe invariants during the execution of the tests and usage of a debug\nbuild. 2. Serve as a description of the invariant for the developers.\nAdding Asserts() is usually a good idea unless the invariant is too\nexpensive to check and/or complicated to read/understand.\n\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 12 Aug 2024 13:55:04 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL's approach to assertion usage: seeking best practices" } ]
[ { "msg_contents": "Hey all,\n\nA lot of calendaring software includes the ability to subscribe to \na calendar. For instance, in my personal life, I subscribe to a US \nHolidays calendar[0]. I wanted to see if anyone else thought it might be \na good idea to host an ICS file on postgresql.org that people could \nsubscribe to. Someone does a good job of updating the roadmap[1]. Seems \nlike we could put this same data into an ICS for subscription purposes. \nThen I can get notifications via my calendar about upcoming releases. It \nmight also be good to add the dates when a release goes out of support.\n\nI can see this being useful for many types of people including:\n\n- Packagers (though they also have their mailing list)\n- Users (knowing when they should start preparing for an upgrade)\n- Providers (knowing when to prepare their stack for upgrades)\n- News outlets (when to launch/prepare articles or benchmarks)\n\nWithout the ICS file, one would have to come to the roadmap site and \nmanually enter events into their calendar, and repeatedly come check the \nsite to see if dates were moved or when new dates are added.\n\nWhat would it take to facilitate this if others would be interested?\n\n[0]: https://www.officeholidays.com/ics-clean/usa\n[1]: https://www.postgresql.org/developer/roadmap/\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 09 Aug 2024 11:27:08 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Subscription to Postgres Releases via ICS" }, { "msg_contents": "Sounds like a good idea. You probably want to ask on pgsql-www. I imagine\nit would have to be coded against this:\n\nhttps://git.postgresql.org/gitweb/?p=pgweb.git\n\nA working patch would probably be nice to get things started. but I\nrecognize it's a little bit of chicken-and-egg.\n\nCheers,\nGreg\n\nSounds like a good idea. You probably want to ask on pgsql-www. I imagine it would have to be coded against this:https://git.postgresql.org/gitweb/?p=pgweb.gitA working patch would probably be nice to get things started. but I recognize it's a little bit of chicken-and-egg.Cheers,Greg", "msg_date": "Tue, 13 Aug 2024 10:14:47 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subscription to Postgres Releases via ICS" }, { "msg_contents": "On Tue Aug 13, 2024 at 9:15 AM CDT, Greg Sabino Mullane wrote:\n> Sounds like a good idea. You probably want to ask on pgsql-www. I imagine\n> it would have to be coded against this:\n>\n> https://git.postgresql.org/gitweb/?p=pgweb.git\n>\n> A working patch would probably be nice to get things started. but I\n> recognize it's a little bit of chicken-and-egg.\n\nThanks Greg. I'll send another email on that list and try to see what \nI can do regarding a patch!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:33:42 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Subscription to Postgres Releases via ICS" } ]
[ { "msg_contents": "I came across this misbehavior:\n\nregression=# create or replace function foo() returns text as\n$$select current_setting('role')$$ language sql\nparallel safe set role = postgres;\nCREATE FUNCTION\nregression=# select foo();\n foo \n----------\n postgres\n(1 row)\n\nregression=# set debug_parallel_query to 1;\nSET\nregression=# select foo();\nERROR: cannot set parameters during a parallel operation\nCONTEXT: parallel worker\n\nWhat is failing is the attempt to update the \"is_superuser\" GUC\nas a side-effect of setting \"role\". That's coming from here:\n\n /*\n * GUC_ACTION_SAVE changes are acceptable during a parallel operation,\n * because the current worker will also pop the change. We're probably\n * dealing with a function having a proconfig entry. Only the function's\n * body should observe the change, and peer workers do not share in the\n * execution of a function call started by this worker.\n *\n * Other changes might need to affect other workers, so forbid them.\n */\n if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE)\n // throw error\n\nSince we're using GUC_ACTION_SET to set \"is_superuser\", this spits up.\n\nThe simplest fix would be to hack this test to allow the action anyway\nwhen context == PGC_INTERNAL, excusing that as \"assume the caller\nknows what it's doing\". That feels pretty grotty though. Perhaps\na cleaner way would be to move this check to some higher code level,\nbut I'm not sure where would be a good place.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Aug 2024 14:43:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "On Fri, Aug 09, 2024 at 02:43:59PM -0400, Tom Lane wrote:\n> The simplest fix would be to hack this test to allow the action anyway\n> when context == PGC_INTERNAL, excusing that as \"assume the caller\n> knows what it's doing\". That feels pretty grotty though. Perhaps\n> a cleaner way would be to move this check to some higher code level,\n> but I'm not sure where would be a good place.\n\n From a couple of quick tests, it looks like setting\n\"current_role_is_superuser\" directly works. That's still grotty, but at\nleast the grottiness would be localized and not require broad assumptions\nabout callers knowing what they're doing when using PGC_INTERNAL. I\nwouldn't be surprised if there are other problems with this approach, too.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 9 Aug 2024 14:26:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Aug 09, 2024 at 02:43:59PM -0400, Tom Lane wrote:\n>> The simplest fix would be to hack this test to allow the action anyway\n>> when context == PGC_INTERNAL, excusing that as \"assume the caller\n>> knows what it's doing\". That feels pretty grotty though. Perhaps\n>> a cleaner way would be to move this check to some higher code level,\n>> but I'm not sure where would be a good place.\n\n> From a couple of quick tests, it looks like setting\n> \"current_role_is_superuser\" directly works.\n\nYeah, I had been thinking along the same lines. Here's a draft\npatch. 
(Still needs some attention to nearby comments, and I can't\navoid the impression that the miscinit.c code in this area could\nuse refactoring.)\n\nA problem with this is that it couldn't readily be back-patched\nfurther than v14, since we didn't have ReportChangedGUCOptions\nbefore that. Maybe that doesn't matter; given the lack of\nprevious complaints, maybe we only need to fix this in HEAD.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 09 Aug 2024 16:04:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "On Fri, Aug 09, 2024 at 04:04:15PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> From a couple of quick tests, it looks like setting\n>> \"current_role_is_superuser\" directly works.\n> \n> Yeah, I had been thinking along the same lines. Here's a draft\n> patch. (Still needs some attention to nearby comments, and I can't\n> avoid the impression that the miscinit.c code in this area could\n> use refactoring.)\n\nHm. That's a bit more code than I expected.\n\n> A problem with this is that it couldn't readily be back-patched\n> further than v14, since we didn't have ReportChangedGUCOptions\n> before that. Maybe that doesn't matter; given the lack of\n> previous complaints, maybe we only need to fix this in HEAD.\n\nAnother option might be to introduce a new GUC flag or source for anything\nwe want to bypass the check (perhaps with the stipulation that it must also\nbe marked PGC_INTERNAL). I think a new flag would require moving the\nparallel check down a stanza, but that seems fine. A new source would\nallow us to limit the damage to specific SetConfigOption() call-sites, but\nI haven't thought through that idea fully.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 9 Aug 2024 15:30:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Aug 09, 2024 at 04:04:15PM -0400, Tom Lane wrote:\n>> Yeah, I had been thinking along the same lines. Here's a draft\n>> patch. (Still needs some attention to nearby comments, and I can't\n>> avoid the impression that the miscinit.c code in this area could\n>> use refactoring.)\n\n> Hm. That's a bit more code than I expected.\n\nYeah. I can see a couple of points of attraction in this, but\nthey're not strong:\n\n* Fewer cycles involved in setting session_authorization or role.\nBut nobody has really complained that those are slow.\n\n* Gets us out from any other gotchas that may exist or be added\nin the SetConfigOption code path, not just this one point.\nThis is mostly hypothetical, and a regression test case or two\nwould likely catch any new problems anyway.\n\n> Another option might be to introduce a new GUC flag or source for anything\n> we want to bypass the check (perhaps with the stipulation that it must also\n> be marked PGC_INTERNAL).\n\nA new GUC flag seems like a promising approach, and better than\ngiving a blanket exemption to PGC_INTERNAL. 
At least for\nis_superuser, there's no visible value in restricting which\nSetConfigOption calls can change it; they'd all need the escape\nhatch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Aug 2024 16:42:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "I wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Another option might be to introduce a new GUC flag or source for anything\n>> we want to bypass the check (perhaps with the stipulation that it must also\n>> be marked PGC_INTERNAL).\n\n> A new GUC flag seems like a promising approach, and better than\n> giving a blanket exemption to PGC_INTERNAL. At least for\n> is_superuser, there's no visible value in restricting which\n> SetConfigOption calls can change it; they'd all need the escape\n> hatch.\n\nHere's a draft patch to fix it with a flag, now with regression tests.\n\nAlso, now that the error depends on which parameter we're talking\nabout, I thought it best to include the parameter name in the error\nand to re-word it to be more like all the other can't-set-this-now\nerrors just below it. I'm half tempted to change the errcode and\nset_config_option return value to match the others too, ie\nERRCODE_CANT_CHANGE_RUNTIME_PARAM and \"return 0\" not -1.\nI don't think the existing choices here are very well thought\nthrough, and they're certainly inconsistent with a lot of\notherwise-similar-seeming refusals in set_config_option.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 09 Aug 2024 18:50:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "On Fri, Aug 09, 2024 at 06:50:14PM -0400, Tom Lane wrote:\n> Here's a draft patch to fix it with a flag, now with regression tests.\n\nLooks reasonable.\n\n> Also, now that the error depends on which parameter we're talking\n> about, I thought it best to include the parameter name in the error\n> and to re-word it to be more like all the other can't-set-this-now\n> errors just below it. I'm half tempted to change the errcode and\n> set_config_option return value to match the others too, ie\n> ERRCODE_CANT_CHANGE_RUNTIME_PARAM and \"return 0\" not -1.\n> I don't think the existing choices here are very well thought\n> through, and they're certainly inconsistent with a lot of\n> otherwise-similar-seeming refusals in set_config_option.\n\nThis comment for set_config_option() leads me to think we should be\nreturning -1 instead of 0 in many more places in set_config_with_handle():\n\n * Return value:\n * +1: the value is valid and was successfully applied.\n * 0: the name or value is invalid (but see below).\n * -1: the value was not applied because of context, priority, or changeVal.\n\nBut I haven't thought through it, either. 
At this point, maybe the comment\nis wrong...\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 10 Aug 2024 09:26:38 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Aug 09, 2024 at 06:50:14PM -0400, Tom Lane wrote:\n>> Also, now that the error depends on which parameter we're talking\n>> about, I thought it best to include the parameter name in the error\n>> and to re-word it to be more like all the other can't-set-this-now\n>> errors just below it. I'm half tempted to change the errcode and\n>> set_config_option return value to match the others too, ie\n>> ERRCODE_CANT_CHANGE_RUNTIME_PARAM and \"return 0\" not -1.\n\n> This comment for set_config_option() leads me to think we should be\n> returning -1 instead of 0 in many more places in set_config_with_handle():\n\n> * Return value:\n> * +1: the value is valid and was successfully applied.\n> * 0: the name or value is invalid (but see below).\n> * -1: the value was not applied because of context, priority, or changeVal.\n\n> But I haven't thought through it, either. At this point, maybe the comment\n> is wrong...\n\nI poked through all the call sites. The only one that makes a\ndistinction between 0 and -1 is ProcessConfigFileInternal(),\nand what it thinks is:\n\n else if (scres == 0)\n {\n error = true;\n item->errmsg = pstrdup(\"setting could not be applied\");\n ConfFileWithError = item->filename;\n }\n else\n {\n /* no error, but variable's active value was not changed */\n item->applied = true;\n }\n\nNow, I don't believe that ProcessConfigFileInternal is ever executed\nwhile IsInParallelMode, so it appears that no caller really cares\nabout which return code this case would return. However, if you\nlook through set_config_with_handle the general pattern is that\nwe \"return 0\" after any ereport call (either one directly in that\nfunction, or one in a called function). Getting to those of course\nimplies that elevel is too low to throw an error; but we did think\nthere was an error condition. We \"return -1\" in cases where we didn't\nereport anything. So I am still of the opinion that the -1 usage here\nis inconsistent, even if it happens to not make a difference today.\n\nYeah, the header comment could stand to be improved to make this\nclearer. I think there are more conditions being checked now than\nexisted when the comment was written. But the para right below the\nbit you quoted is pretty clear that \"return 0\" is associated with\nan ereport.\n\nMaybe\n\n * Return value:\n * +1: the value is valid and was successfully applied.\n * 0: the name or value is invalid, or it's invalid to try to set\n * this GUC now; but elevel was less than ERROR (see below).\n * -1: no error detected, but the value was not applied, either\n * because changeVal is false or there is some overriding value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Aug 2024 12:57:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "On Sat, Aug 10, 2024 at 12:57:36PM -0400, Tom Lane wrote:\n> Yeah, the header comment could stand to be improved to make this\n> clearer. I think there are more conditions being checked now than\n> existed when the comment was written. 
But the para right below the\n> bit you quoted is pretty clear that \"return 0\" is associated with\n> an ereport.\n\nAh, sorry, ENOCOFFEE.\n\n> Maybe\n> \n> * Return value:\n> * +1: the value is valid and was successfully applied.\n> * 0: the name or value is invalid, or it's invalid to try to set\n> * this GUC now; but elevel was less than ERROR (see below).\n> * -1: no error detected, but the value was not applied, either\n> * because changeVal is false or there is some overriding value.\n\nNevertheless, I think this is an improvement.\n\nRegarding returning 0 instead of -1 for the parallel case, I think that\nfollows. While doing some additional research, I noticed this return value\nwas just added in December (commit 059de3c [0]). Before that, it\napparently assumed that elevel >= ERROR. With that and your analysis of\nthe call sites, it seems highly unlikely that changing it will cause any\nproblems.\n\nFor the errcode, I do see that we pretty consistently use\nERRCODE_INVALID_TRANSACTION_STATE for \"can't do thing during a parallel\noperation.\" In fact, it looks like all but one use is for parallel errors.\nI don't have any particular qualms about changing it to\nERRCODE_CANT_CHANGE_RUNTIME_PARAM in set_config_with_handle(), but I\nthought that was interesting.\n\n[0] https://postgr.es/m/2089235.1703617353%40sss.pgh.pa.us\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 10 Aug 2024 13:32:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Regarding returning 0 instead of -1 for the parallel case, I think that\n> follows. While doing some additional research, I noticed this return value\n> was just added in December (commit 059de3c [0]). Before that, it\n> apparently assumed that elevel >= ERROR. With that and your analysis of\n> the call sites, it seems highly unlikely that changing it will cause any\n> problems.\n\nHah ... so the failure to think clearly about which value to use\nwas mine :-(.\n\n> For the errcode, I do see that we pretty consistently use\n> ERRCODE_INVALID_TRANSACTION_STATE for \"can't do thing during a parallel\n> operation.\" In fact, it looks like all but one use is for parallel errors.\n\nOK, I'll leave that alone but will change the return code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Aug 2024 14:38:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is_superuser versus set_config_option's parallelism check" } ]
[ { "msg_contents": "Hi Hackers,\n\nOften in a debugger I've wanted to way to print Datums, in particular non-trivial ones like range \ntypes. This came up a lot when I was working on multiranges, and I've wished for it lately while \nworking on UPDATE/DELETE FOR PORTION OF. But all the obvious approaches are inlined functions or \npreprocessor macros, so they aren't available. Usually I wind up giving up on gdb and adding elog \nstatements. Once or twice I've copy/pasted from the three or four levels of nested macros to make \ngdb do what I wanted, but it's a pain.\n\nI assumed printing a Datum was easy, and I was the only one who didn't know how to do it. But \nperhaps not. The conversation about print.c [1] made me think I should propose a way to make it \neasier. This function takes a Datum and the appropriate out function, and returns a char *. So you \ncan do this:\n\n(gdb) call format_datum(range_out, $1)\n$2 = 0x59162692d938 \"[1,4)\"\n\nI assume a patch like this doesn't need documentation. Does it need a test? Anything else?\n\n[1] https://www.postgresql.org/message-id/flat/7d023c20-6679-44bd-b5f7-44106659bd5a%40eisentraut.org\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Fri, 9 Aug 2024 16:04:45 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "format_datum debugging function" }, { "msg_contents": "Hi Paul,\n\n> [...] This function takes a Datum and the appropriate out function, and returns a char *. So you\n> can do this:\n>\n> (gdb) call format_datum(range_out, $1)\n> $2 = 0x59162692d938 \"[1,4)\"\n>\n> I assume a patch like this doesn't need documentation. Does it need a test? Anything else?\n\nI think you forgot to attach the patch. Or is it just a proposal?\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 12 Aug 2024 14:32:42 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "On 8/12/24 04:32, Aleksander Alekseev wrote:\n>> [...] This function takes a Datum and the appropriate out function, and returns a char *. So you\n>> can do this:\n>>\n>> (gdb) call format_datum(range_out, $1)\n>> $2 = 0x59162692d938 \"[1,4)\"\n>>\n>> I assume a patch like this doesn't need documentation. Does it need a test? Anything else?\n> \n> I think you forgot to attach the patch. Or is it just a proposal?\n\nSorry, patch attached here.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Mon, 12 Aug 2024 14:15:23 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "On Mon, 12 Aug 2024 at 23:15, Paul Jungwirth\n<[email protected]> wrote:\n> On 8/12/24 04:32, Aleksander Alekseev wrote:\n> >> (gdb) call format_datum(range_out, $1)\n> >> $2 = 0x59162692d938 \"[1,4)\"\n> >>\n> >> I assume a patch like this doesn't need documentation. Does it need a test? Anything else?\n> >\n> > I think you forgot to attach the patch. Or is it just a proposal?\n>\n> Sorry, patch attached here.\n\n+1 for the idea. And the code looks trivial enough. I think this\nshould also contain a print_datum function too though.\n\n\n", "msg_date": "Mon, 12 Aug 2024 23:30:32 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "On 12.08.24 23:15, Paul Jungwirth wrote:\n> On 8/12/24 04:32, Aleksander Alekseev wrote:\n>>> [...] 
This function takes a Datum and the appropriate out function, \n>>> and returns a char *. So you\n>>> can do this:\n>>>\n>>> (gdb) call format_datum(range_out, $1)\n>>> $2 = 0x59162692d938 \"[1,4)\"\n>>>\n>>> I assume a patch like this doesn't need documentation. Does it need a \n>>> test? Anything else?\n>>\n>> I think you forgot to attach the patch. Or is it just a proposal?\n> \n> Sorry, patch attached here.\n\nI don't think it's safe to call output functions at arbitrary points \nfrom a debugger. But if you're okay with that during development, say, \nthen I think you could just call OidOutputFunctionCall(F_RANGE_OUT, $1)?\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 11:16:01 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "On 8/14/24 02:16, Peter Eisentraut wrote:\n> On 12.08.24 23:15, Paul Jungwirth wrote:\n>> On 8/12/24 04:32, Aleksander Alekseev wrote:\n>>>> [...] This function takes a Datum and the appropriate out function, and returns a char *. So you\n>>>> can do this:\n>>>>\n>>>> (gdb) call format_datum(range_out, $1)\n>>>> $2 = 0x59162692d938 \"[1,4)\"\n>>>>\n>>>> I assume a patch like this doesn't need documentation. Does it need a test? Anything else?\n>>>\n>>> I think you forgot to attach the patch. Or is it just a proposal?\n>>\n>> Sorry, patch attached here.\n> \n> I don't think it's safe to call output functions at arbitrary points from a debugger.  But if you're \n> okay with that during development, say, then I think you could just call \n> OidOutputFunctionCall(F_RANGE_OUT, $1)?\n\nI assumed it wasn't safe everywhere (e.g. there is potentially a TOAST lookup), but for debugging a \npatch that's okay with me.\n\nAre you doing something to get macro expansion? 
I've never gotten my gdb to see #defines, although \nin theory this configure line should do it, right?:\n\n./configure 'CFLAGS=-ggdb -Og -g3 -fno-omit-frame-pointer' --enable-tap-tests --enable-cassert \n--enable-debug --prefix=${HOME}/local\n\nI also tried -gdwarf and -gdwarf-4 and -gdwarf-5 (all still with -Og -g3).\n\nIf it makes a difference, I'm attaching to a process:\n\npaul@tal:~/src/postgresql$ gdb -p 1175735\nGNU gdb (Ubuntu 12.1-0ubuntu1~22.04.2) 12.1\nCopyright (C) 2022 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law.\nType \"show copying\" and \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<https://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n <http://www.gnu.org/software/gdb/documentation/>.\n\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\".\nAttaching to process 1175735\nReading symbols from /home/paul/local/bin/postgres...\nReading symbols from /lib/x86_64-linux-gnu/libz.so.1...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libz.so.1)\nReading symbols from /lib/x86_64-linux-gnu/libm.so.6...\nReading symbols from /usr/lib/debug/.build-id/a5/08ec5d8bf12fb7fd08204e0f87518e5cd0b102.debug...\nReading symbols from /lib/x86_64-linux-gnu/libicui18n.so.70...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libicui18n.so.70)\nReading symbols from /lib/x86_64-linux-gnu/libicuuc.so.70...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libicuuc.so.70)\nReading symbols from /lib/x86_64-linux-gnu/libc.so.6...\nReading symbols from /usr/lib/debug/.build-id/49/0fef8403240c91833978d494d39e537409b92e.debug...\nReading symbols from /lib64/ld-linux-x86-64.so.2...\nReading symbols from /usr/lib/debug/.build-id/41/86944c50f8a32b47d74931e3f512b811813b64.debug...\nReading symbols from /lib/x86_64-linux-gnu/libstdc++.so.6...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libstdc++.so.6)\nReading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libgcc_s.so.1)\nReading symbols from /lib/x86_64-linux-gnu/libicudata.so.70...\n(No debugging symbols found in /lib/x86_64-linux-gnu/libicudata.so.70)\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n0x000079878e325dea in epoll_wait (epfd=7, events=0x573f0e475820, maxevents=1, timeout=-1) at \n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.\n(gdb) b range_in\nBreakpoint 1 at 0x573f0d9b5042: file rangetypes.c, line 91.\n(gdb) c\nContinuing.\n\nBreakpoint 1, range_in (fcinfo=0x7fff6d8fea90) at rangetypes.c:91\n91 {\n(gdb) fin\nRun till exit from #0 range_in (fcinfo=0x7fff6d8fea90) at rangetypes.c:91\n0x0000573f0da8c87e in InputFunctionCall (flinfo=0x7fff6d8feb20, str=0x573f0e47a258 \"[3,5)\", \ntypioparam=3904, typmod=-1) at fmgr.c:1547\n1547 result = FunctionCallInvoke(fcinfo);\nValue returned is $1 = 95928334988456\n(gdb) call OidOutputFunctionCall(F_RANGE_OUT, $1)\nNo symbol \"F_RANGE_OUT\" in current context.\n\nIf I know the oid, then this works:\n\n(gdb) call OidOutputFunctionCall(3835, $1)\n$2 = 0x5f9be16ca4d8 
\"[3,5)\"\n\nThat is a big improvement, but still a little annoying.\n\nThanks,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Wed, 14 Aug 2024 08:46:44 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "On 14.08.24 17:46, Paul Jungwirth wrote:\n> On 8/14/24 02:16, Peter Eisentraut wrote:\n>> On 12.08.24 23:15, Paul Jungwirth wrote:\n>>> On 8/12/24 04:32, Aleksander Alekseev wrote:\n>>>>> [...] This function takes a Datum and the appropriate out function, \n>>>>> and returns a char *. So you\n>>>>> can do this:\n>>>>>\n>>>>> (gdb) call format_datum(range_out, $1)\n>>>>> $2 = 0x59162692d938 \"[1,4)\"\n>>>>>\n>>>>> I assume a patch like this doesn't need documentation. Does it need \n>>>>> a test? Anything else?\n>>>>\n>>>> I think you forgot to attach the patch. Or is it just a proposal?\n>>>\n>>> Sorry, patch attached here.\n>>\n>> I don't think it's safe to call output functions at arbitrary points \n>> from a debugger.  But if you're okay with that during development, \n>> say, then I think you could just call \n>> OidOutputFunctionCall(F_RANGE_OUT, $1)?\n> \n> I assumed it wasn't safe everywhere (e.g. there is potentially a TOAST \n> lookup), but for debugging a patch that's okay with me.\n> \n> Are you doing something to get macro expansion? I've never gotten my gdb \n> to see #defines, although in theory this configure line should do it, \n> right?:\n\nOh I see, you don't have the F_* constants available then. Maybe just \nput in the OID manually then?\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:50:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: format_datum debugging function" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 14.08.24 17:46, Paul Jungwirth wrote:\n>> Are you doing something to get macro expansion? I've never gotten my gdb \n>> to see #defines, although in theory this configure line should do it, \n>> right?:\n\n> Oh I see, you don't have the F_* constants available then. Maybe just \n> put in the OID manually then?\n\nThat's pretty illegible and error-prone. I agree that writing the\noutput function's C name is noticeably better. However, I would\ncall the result DirectOutputFunctionCall and put it near\nOidOutputFunctionCall in fmgr.c. It's not like we don't already\nhave a naming convention and many instances of this.\n\n(Also, now that I look at the code, I wonder why it looks so\nlittle like any of the existing DirectFunctionCall functions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Aug 2024 10:05:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: format_datum debugging function" } ]
[ { "msg_contents": "I was playing with a static analyzer security scanner and it flagged a \ntime-of-check-time-of-use violation in libpq. I was going to propose a \nfix for this on -hackers, since you probably can't do anything \ninteresting with this, but then I figured I'd better check here first.\n\nlibpq checks the permissions of the password file before opening it and \nrejects it if the permissions are more permissive than 0600. It does \nthis in two separate steps, first stat(), then fopen(), which is what \ntriggers the static analyzer. The standard fix for this kind of thing \nis to open the file first and then use fstat() on the file handle for \nthe permission check.\n\nNote that libpq doesn't check who the owner of the file is or what \ndirectory it is in. The location of the password file can be changed \nfrom its default via libpq connection settings. So it seems to me that \nif the location of the password file were a world-writable directory, an \n\"attacker\" could supply a dummy file with 0600 permissions for the \nstat() call and then swap it out for a world-readable file with \npasswords for the fopen() call, both files owned by the attacker. And \nso they could have the libpq application try out passwords on their \nbehalf. Obviously, this requires a number of unlikely circumstances, \nincluding the ability to repeatedly exploit the gap between the stat() \nand fopen() call. But it seems formally incorrect, so it seems good to \nfix it, at least to make the code a better example.\n\nThoughts?", "msg_date": "Sat, 10 Aug 2024 09:10:16 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "libpq minor TOCTOU violation" }, { "msg_contents": "Hi,\n\n> I was playing with a static analyzer security scanner and it flagged a\n> time-of-check-time-of-use violation in libpq. I was going to propose a\n> fix for this on -hackers, since you probably can't do anything\n> interesting with this, but then I figured I'd better check here first.\n>\n> libpq checks the permissions of the password file before opening it and\n> rejects it if the permissions are more permissive than 0600. It does\n> this in two separate steps, first stat(), then fopen(), which is what\n> triggers the static analyzer. The standard fix for this kind of thing\n> is to open the file first and then use fstat() on the file handle for\n> the permission check.\n>\n> Note that libpq doesn't check who the owner of the file is or what\n> directory it is in. The location of the password file can be changed\n> from its default via libpq connection settings. So it seems to me that\n> if the location of the password file were a world-writable directory, an\n> \"attacker\" could supply a dummy file with 0600 permissions for the\n> stat() call and then swap it out for a world-readable file with\n> passwords for the fopen() call, both files owned by the attacker. And\n> so they could have the libpq application try out passwords on their\n> behalf. Obviously, this requires a number of unlikely circumstances,\n> including the ability to repeatedly exploit the gap between the stat()\n> and fopen() call. 
But it seems formally incorrect, so it seems good to\n> fix it, at least to make the code a better example.\n\nNot entirely sure about the presence of a serious security issue but\nsilencing a static analyzer sounds like a good idea, especially since\nthe fix is simple.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:34:52 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq minor TOCTOU violation" }, { "msg_contents": "On 8/10/24 9:10 AM, Peter Eisentraut wrote:\n> Thoughts?\n\nI like it. Not because of the security issue but mainly because it is \nmore correct to do it this way. Plus the old code running stat() on \nWindows also made little sense.\n\nI think this simple fix can be committed.\n\nAndreas\n\n\n", "msg_date": "Wed, 14 Aug 2024 03:12:32 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq minor TOCTOU violation" }, { "msg_contents": "On 14.08.24 03:12, Andreas Karlsson wrote:\n> On 8/10/24 9:10 AM, Peter Eisentraut wrote:\n>> Thoughts?\n> \n> I like it. Not because of the security issue but mainly because it is \n> more correct to do it this way. Plus the old code running stat() on \n> Windows also made little sense.\n> \n> I think this simple fix can be committed.\n\ncommitted\n\n\n\n", "msg_date": "Fri, 16 Aug 2024 06:48:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq minor TOCTOU violation" } ]
[ { "msg_contents": "Hi hackers,\n\nLogical replication apply worker by default switches off asynchronous \ncommit. Cite from documentation of subscription parameters:\n\n```\n\n|synchronous_commit|(|enum|)<https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-SYNCHRONOUS-COMMIT>\n\n The value of this parameter overrides thesynchronous_commit\n <https://www.postgresql.org/docs/devel/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT>setting\n within this subscription's apply worker processes. The default value\n is|off|.\n It is safe to use|off|for logical replication: If the subscriber\n loses transactions because of missing synchronization, the data will\n be sent again from the publisher.\n\n```\n\nSo subscriber can confirm transaction which are not persisted. But \nconsider a PostgreSQL HA setup with:\n\n * primary node\n * (cold) standby node streaming WAL from the primary\n * synchronous replication enabled, so that you get zero data loss if\n the primary dies\n * the primary/standby cluster is a subscriber to a remote PostgreSQL\n server\n\nIt can happen that:\n\n * the primary streams some transactions from the remote PostgreSQL,\n with logical replication\n * the primary crashes. Failover to the standby happens\n * the standby tries to stream the transactions from the subscriber.\n But some transactions are missed, because the primary had already\n reported a higher flush LSN.\n\n\nI wonder if such scenario is considered as an \"expected behavior\" or \n\"bug\" by community?\nIt seems to be quite easily fixed (see attached patch).\n\nSo should we take in account sync replication in LR apply worker or not?\n\nThanks to Heikki Linnakangas <[email protected]> for describing this \nscenario and Arseny Sher <[email protected]> for providing the patch.", "msg_date": "Sat, 10 Aug 2024 15:25:04 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "asynchronous commit&synchronous replication" }, { "msg_contents": "\n\n> On 10 Aug 2024, at 17:25, Konstantin Knizhnik <[email protected]> wrote:\n> \n> So should we take in account sync replication in LR apply worker or not?\n\nThere was some relevant discussion of this topic on PGCon2020 Unconference [0].\nMy recollection is that it would be nice to have LR slot setting akin to synchronous_standby_names which describes what kind of durability guarantees should be met by streamed data.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://wiki.postgresql.org/wiki/PgCon_2020_Developer_Unconference/Edge_cases_of_synchronous_replication_in_HA_solutions\n\n", "msg_date": "Sat, 10 Aug 2024 18:03:21 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: asynchronous commit&synchronous replication" } ]
[ { "msg_contents": "Hi, hackers! If you look at the code in the src/backend/executor/spi.c file,\nyou will see the SPI_connect function familiar to many there, which\ninternally simply calls SPI_connect_ext. The return type is int, at the end\nit says return SPI_OK_CONNECT;\nIt confuses me that nothing but OK, judging by the code, can return.(I\nunderstand that earlier, before 1833f1a1, it could also return\nSPI_ERROR_CONNECT). Therefore, I suggest making the returned value void\ninstead of int and not checking the returned value. What do you think about\nthis?\nBest Regards, Stepan Neretin.", "msg_date": "Sat, 10 Aug 2024 16:55:46 +0300", "msg_from": "Stepan <[email protected]>", "msg_from_op": true, "msg_subject": "SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "Regarding checking the return value of these functions, I would also like\nto add that somewhere in the code before my patch, the value is checked, and\nsomewhere it is not. I removed the check everywhere and it became the same\nstyle.\n\nRegarding checking the return value of these functions, I would also like to add that somewhere in the code before my patch, the value is checked, and somewhere it is not. I removed the check everywhere and it became the same style.", "msg_date": "Sat, 10 Aug 2024 17:00:02 +0300", "msg_from": "Stepan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "Stepan <[email protected]> writes:\n> Hi, hackers! If you look at the code in the src/backend/executor/spi.c file,\n> you will see the SPI_connect function familiar to many there, which\n> internally simply calls SPI_connect_ext. The return type is int, at the end\n> it says return SPI_OK_CONNECT;\n> It confuses me that nothing but OK, judging by the code, can return.(I\n> understand that earlier, before 1833f1a1, it could also return\n> SPI_ERROR_CONNECT). Therefore, I suggest making the returned value void\n> instead of int and not checking the returned value. What do you think about\n> this?\n\nThat would break a lot of code (much of it not under our control) to\nlittle purpose; it would also foreclose the option to return to using\nSPI_ERROR_CONNECT someday.\n\nWe go to a lot of effort to keep the SPI API as stable as we can\nacross major versions, so I don't see why we'd just randomly make\nan API-breaking change like this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Aug 2024 10:12:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "> >That would break a lot of code (much of it not under our control) to\n> little purpose; it would also foreclose the option to return to using\n> SPI_ERROR_CONNECT someday.\n>\n\nAgree, it makes sense.\nThe only question left is, is there any logic in why in some places its\nreturn value of these functions is checked, and in some not? Can I add\nchecks everywhere?\nBest Regards, Stepan Neretin.\n\n\n>That would break a lot of code (much of it not under our control) to\nlittle purpose; it would also foreclose the option to return to using\nSPI_ERROR_CONNECT someday. Agree, it makes sense. The only question left is, is there any logic in why in some places its return value of these functions is checked, and in some not? 
Can I add checks everywhere?Best Regards, Stepan Neretin.", "msg_date": "Sat, 10 Aug 2024 17:45:01 +0300", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "On Saturday, August 10, 2024, Tom Lane <[email protected]> wrote:\n\n> Stepan <[email protected]> writes:\n> > Hi, hackers! If you look at the code in the src/backend/executor/spi.c\n> file,\n> > you will see the SPI_connect function familiar to many there, which\n> > internally simply calls SPI_connect_ext. The return type is int, at the\n> end\n> > it says return SPI_OK_CONNECT;\n> > It confuses me that nothing but OK, judging by the code, can return.(I\n> > understand that earlier, before 1833f1a1, it could also return\n> > SPI_ERROR_CONNECT). Therefore, I suggest making the returned value void\n> > instead of int and not checking the returned value. What do you think\n> about\n> > this?\n>\n> That would break a lot of code (much of it not under our control) to\n> little purpose; it would also foreclose the option to return to using\n> SPI_ERROR_CONNECT someday.\n>\n\nI suggest we document it as deprecated and insist any future attempt to\nimplement a return-on-error connection function define a completely new\nfunction.\n\nDavid J.\n\nOn Saturday, August 10, 2024, Tom Lane <[email protected]> wrote:Stepan <[email protected]> writes:\n> Hi, hackers! If you look at the code in the src/backend/executor/spi.c file,\n> you will see the SPI_connect function familiar to many there, which\n> internally simply calls SPI_connect_ext. The return type is int, at the end\n> it says return SPI_OK_CONNECT;\n> It confuses me that nothing but OK, judging by the code, can return.(I\n> understand that earlier, before 1833f1a1, it could also return\n> SPI_ERROR_CONNECT). Therefore, I suggest making the returned value void\n> instead of int and not checking the returned value. What do you think about\n> this?\n\nThat would break a lot of code (much of it not under our control) to\nlittle purpose; it would also foreclose the option to return to using\nSPI_ERROR_CONNECT someday.\nI suggest we document it as deprecated and insist any future attempt to implement a return-on-error connection function define a completely new function.David J.", "msg_date": "Sat, 10 Aug 2024 08:18:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Saturday, August 10, 2024, Tom Lane <[email protected]> wrote:\n>> That would break a lot of code (much of it not under our control) to\n>> little purpose; it would also foreclose the option to return to using\n>> SPI_ERROR_CONNECT someday.\n\n> I suggest we document it as deprecated and insist any future attempt to\n> implement a return-on-error connection function define a completely new\n> function.\n\nTrue; we're kind of in an intermediate place right now where certain\ncall sites aren't bothering to check the return code, but it's hard\nto argue that they're really wrong --- and more to the point,\nre-introducing use of SPI_ERROR_CONNECT would break them. I don't\nknow if that usage pattern has propagated outside Postgres core,\nbut it might've. 
Perhaps it would be better to update the docs to\nsay that the only return value is SPI_OK_CONNECT and all failure\ncases are reported via elog/ereport.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Aug 2024 12:29:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "On Sat, Aug 10, 2024 at 9:29 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Saturday, August 10, 2024, Tom Lane <[email protected]> wrote:\n> >> That would break a lot of code (much of it not under our control) to\n> >> little purpose; it would also foreclose the option to return to using\n> >> SPI_ERROR_CONNECT someday.\n>\n> > I suggest we document it as deprecated and insist any future attempt to\n> > implement a return-on-error connection function define a completely new\n> > function.\n>\n> I don't\n> know if that usage pattern has propagated outside Postgres core,\n> but it might've.\n\n\nI'd give it decent odds since our example usage doesn't include the test.\n\nhttps://www.postgresql.org/docs/current/spi-examples.html\n\n /* Convert given text object to a C string */\n command = text_to_cstring(PG_GETARG_TEXT_PP(0));\n cnt = PG_GETARG_INT32(1);\n\n SPI_connect();\n\n ret = SPI_exec(command, cnt);\n\n proc = SPI_processed;\n\nDavid J.\n\n", "msg_date": "Sat, 10 Aug 2024 10:25:29 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "> I'd give it decent odds since our example usage doesn't include the test.\n> https://www.postgresql.org/docs/current/spi-examples.html\n\nhttps://www.postgresql.org/docs/devel/trigger-example.html\n<https://www.postgresql.org/docs/current/spi-examples.html>\n\n> /* connect to SPI manager */\n\n> if ((ret = SPI_connect()) < 0)\n> elog(ERROR, \"trigf (fired %s): SPI_connect returned %d\", when, ret);\n\n\nin this page check include in the example.\nI think we need to give examples of one kind. 
If there is no void in\nthe code, then there should be checks everywhere (at least in the\ndocumentation).\n\n What do you think about the attached patch?\n\nBest Regards, Stepan Neretin.", "msg_date": "Sat, 10 Aug 2024 21:34:43 +0300", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "On 10.08.24 16:12, Tom Lane wrote:\n> Stepan <[email protected]> writes:\n>> Hi, hackers! If you look at the code in the src/backend/executor/spi.c file,\n>> you will see the SPI_connect function familiar to many there, which\n>> internally simply calls SPI_connect_ext. The return type is int, at the end\n>> it says return SPI_OK_CONNECT;\n>> It confuses me that nothing but OK, judging by the code, can return.(I\n>> understand that earlier, before 1833f1a1, it could also return\n>> SPI_ERROR_CONNECT). Therefore, I suggest making the returned value void\n>> instead of int and not checking the returned value. What do you think about\n>> this?\n> \n> That would break a lot of code (much of it not under our control) to\n> little purpose; it would also foreclose the option to return to using\n> SPI_ERROR_CONNECT someday.\n> \n> We go to a lot of effort to keep the SPI API as stable as we can\n> across major versions, so I don't see why we'd just randomly make\n> an API-breaking change like this.\n\nHere is a previous discussion: \nhttps://www.postgresql.org/message-id/flat/1356682025.20017.4.camel%40vanquo.pezone.net\n\nI like the idea that we would keep the API but convert most errors to \nexceptions.\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 14:39:04 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 10.08.24 16:12, Tom Lane wrote:\n>> We go to a lot of effort to keep the SPI API as stable as we can\n>> across major versions, so I don't see why we'd just randomly make\n>> an API-breaking change like this.\n\n> Here is a previous discussion: \n> https://www.postgresql.org/message-id/flat/1356682025.20017.4.camel%40vanquo.pezone.net\n\n> I like the idea that we would keep the API but convert most errors to \n> exceptions.\n\nAfter thinking about this for awhile, one reason that it's practical\nto change it today for SPI_connect is that SPI_connect has not\nreturned anything except SPI_OK_CONNECT since v10. So if we tell\nextension authors they don't need to check the result, it's unlikely\nthat that will cause any new code they write to get used with PG\nversions where it would be wrong. I fear that we'd need a similar\nmulti-year journey to get to a place where we could deprecate checking\nthe result of any other SPI function.\n\nNonetheless, there seems no reason not to do it now for SPI_connect.\nSo attached is a patch that documents the result value as vestigial\nand removes the calling-code checks in our own code, but doesn't\ntouch SPI_connect[_ext] itself. 
This combines portions of Stepan's\ntwo patches with some additional work (mostly, that he'd missed fixing\nany of contrib/).\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 05 Sep 2024 17:13:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "> So if we tell extension authors they don't need to check the result, it's\nunlikely\n> that that will cause any new code they write to get used with PG\n> versions where it would be wrong.\nYes, I concur.\n\n> This combines portions of Stepan's\n> two patches with some additional work (mostly, that he'd missed fixing\n> any of contrib/).\nThank you for the feedback! I believe the patch looks satisfactory. Should\nwe await a review? The changes seem straightforward to me.\n\n> So if we tell extension authors they don't need to check the result, it's unlikely\n> that that will cause any new code they write to get used with PG\n> versions where it would be wrong.Yes, I concur.> This combines portions of Stepan's\n> two patches with some additional work (mostly, that he'd missed fixing\n> any of contrib/).Thank you for the feedback! I believe the patch looks satisfactory. Should we await a review? The changes seem straightforward to me.", "msg_date": "Sun, 8 Sep 2024 13:35:00 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "Stepan Neretin <[email protected]> writes:\n>> This combines portions of Stepan's\n>> two patches with some additional work (mostly, that he'd missed fixing\n>> any of contrib/).\n\n> Thank you for the feedback! I believe the patch looks satisfactory. Should\n> we await a review? The changes seem straightforward to me.\n\nI too think it's good to go. If no one complains or asks for\nmore time to review, I will push it Monday or so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Sep 2024 03:31:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "I wrote:\n> I too think it's good to go. If no one complains or asks for\n> more time to review, I will push it Monday or so.\n\nAnd done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Sep 2024 12:19:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" }, { "msg_contents": "On Mon, Sep 9, 2024 at 12:20 PM Tom Lane <[email protected]> wrote:\n> I wrote:\n> > I too think it's good to go. If no one complains or asks for\n> > more time to review, I will push it Monday or so.\n>\n> And done.\n\nI didn't see this thread until after the commit had already happened,\nbut a belated +1 for this and any other cruft removal we can do in\nSPI-land.\n\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 12:29:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SPI_connect, SPI_connect_ext return type" } ]
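For illustration only (an annotation, not part of the committed patch or of any message above): a minimal C-language function written the way the thread settles on, i.e. calling SPI_connect() without testing its vestigial return value, since SPI_OK_CONNECT is the only value it can return and all failures are raised via elog/ereport. The function name, argument handling and query execution below are assumptions made for the sketch, not code taken from the patch.

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"
#include "utils/builtins.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(spi_rowcount_example);

/* Run the SQL passed as text and return the number of processed rows. */
Datum
spi_rowcount_example(PG_FUNCTION_ARGS)
{
    char       *command = text_to_cstring(PG_GETARG_TEXT_PP(0));
    uint64      proc;
    int         ret;

    SPI_connect();              /* no result check needed any more */

    ret = SPI_exec(command, 0);
    if (ret < 0)                /* execution failures still get checked */
        elog(ERROR, "SPI_exec returned %d", ret);

    proc = SPI_processed;
    SPI_finish();
    pfree(command);

    PG_RETURN_INT64((int64) proc);
}

A matching (hypothetical) SQL declaration for the sketch would be: CREATE FUNCTION spi_rowcount_example(text) RETURNS bigint AS 'MODULE_PATHNAME' LANGUAGE C STRICT;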
[ { "msg_contents": "Hi PostgreSQL Community,\n\nI have a scenario where I am working with two functions: one in SQL and\nanother in C, where the SQL function is a wrapper around C function. Here’s\nan example:\n\nCREATE OR REPLACE FUNCTION my_func(IN input text)RETURNS BIGINT AS $$DECLARE\n result BIGINT;BEGIN\n SELECT col2 INTO result FROM my_func_extended(input);\n RETURN result;END;\n$$ LANGUAGE plpgsql;\nCREATE OR REPLACE FUNCTION my_func_extended(\n IN input text,\n OUT col1 text,\n OUT col2 BIGINT\n)RETURNS SETOF recordAS 'MODULE_PATHNAME', 'my_func_extended'LANGUAGE\nC STRICT PARALLEL SAFE;\n\nI need to prevent direct execution of my_func_extended from psql while\nstill allowing it to be called from within the wrapper function my_func.\n\nI’m considering the following options:\n\n 1. Using GRANT/REVOKE in SQL to manage permissions.\n 2. Adding a check in the C function to allow execution only if my_func\n is in the call stack (previous parent or something), and otherwise throwing\n an error.\n\nIs there an existing approach to achieve this, or would you recommend a\nspecific solution?\n\nBest regards,\nAyush Vatsa\nAWS\n\nHi PostgreSQL Community,\nI have a scenario where I am working with two functions: one in SQL and another in C, where the SQL function is a wrapper around C function. Here’s an example:\nCREATE OR REPLACE FUNCTION my_func(IN input text)\nRETURNS BIGINT AS $$\nDECLARE\n result BIGINT;\nBEGIN\n SELECT col2 INTO result FROM my_func_extended(input);\n RETURN result;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION my_func_extended(\n IN input text,\n OUT col1 text,\n OUT col2 BIGINT\n)\nRETURNS SETOF record\nAS 'MODULE_PATHNAME', 'my_func_extended'\nLANGUAGE C STRICT PARALLEL SAFE;\n\nI need to prevent direct execution of my_func_extended from psql while still allowing it to be called from within the wrapper function my_func. \nI’m considering the following options:\n\nUsing GRANT/REVOKE in SQL to manage permissions.\nAdding a check in the C function to allow execution only if my_func is in the call stack (previous parent or something), and otherwise throwing an error.\n\nIs there an existing approach to achieve this, or would you recommend a specific solution?\nBest regards,Ayush VatsaAWS", "msg_date": "Sun, 11 Aug 2024 12:53:23 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "Hi\n\nne 11. 8. 2024 v 9:23 odesílatel Ayush Vatsa <[email protected]>\nnapsal:\n\n> Hi PostgreSQL Community,\n>\n> I have a scenario where I am working with two functions: one in SQL and\n> another in C, where the SQL function is a wrapper around C function. Here’s\n> an example:\n>\n> CREATE OR REPLACE FUNCTION my_func(IN input text)RETURNS BIGINT AS $$DECLARE\n> result BIGINT;BEGIN\n> SELECT col2 INTO result FROM my_func_extended(input);\n> RETURN result;END;\n> $$ LANGUAGE plpgsql;\n> CREATE OR REPLACE FUNCTION my_func_extended(\n> IN input text,\n> OUT col1 text,\n> OUT col2 BIGINT\n> )RETURNS SETOF recordAS 'MODULE_PATHNAME', 'my_func_extended'LANGUAGE C STRICT PARALLEL SAFE;\n>\n> I need to prevent direct execution of my_func_extended from psql while\n> still allowing it to be called from within the wrapper function my_func.\n>\n> I’m considering the following options:\n>\n> 1. Using GRANT/REVOKE in SQL to manage permissions.\n> 2. 
Adding a check in the C function to allow execution only if my_func\n> is in the call stack (previous parent or something), and otherwise throwing\n> an error.\n>\n> Is there an existing approach to achieve this, or would you recommend a\n> specific solution?\n>\nYou can use fmgr hook, and hold some variable as gate if your function\nmy_func_extended can be called\n\nhttps://pgpedia.info/f/fmgr_hook.html\n\nWith this option, the execution of my_func_extended will be faster, but all\nother execution will be little bit slower (due overhead of hook). But the\ncode probably will be more simpler than processing callback stack.\n\nplpgsql_check uses fmgr hook, and it is working well - just there can be\nsome surprises, when the hook is activated in different order against\nfunction's execution, and then the FHET_END can be executed without related\nFHET_START.\n\nRegards\n\nPavel\n\n\n> Best regards,\n> Ayush Vatsa\n> AWS\n>\n\nHine 11. 8. 2024 v 9:23 odesílatel Ayush Vatsa <[email protected]> napsal:Hi PostgreSQL Community,\nI have a scenario where I am working with two functions: one in SQL and another in C, where the SQL function is a wrapper around C function. Here’s an example:\nCREATE OR REPLACE FUNCTION my_func(IN input text)\nRETURNS BIGINT AS $$\nDECLARE\n result BIGINT;\nBEGIN\n SELECT col2 INTO result FROM my_func_extended(input);\n RETURN result;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION my_func_extended(\n IN input text,\n OUT col1 text,\n OUT col2 BIGINT\n)\nRETURNS SETOF record\nAS 'MODULE_PATHNAME', 'my_func_extended'\nLANGUAGE C STRICT PARALLEL SAFE;\n\nI need to prevent direct execution of my_func_extended from psql while still allowing it to be called from within the wrapper function my_func. \nI’m considering the following options:\n\nUsing GRANT/REVOKE in SQL to manage permissions.\nAdding a check in the C function to allow execution only if my_func is in the call stack (previous parent or something), and otherwise throwing an error.\n\nIs there an existing approach to achieve this, or would you recommend a specific solution?You can use fmgr hook, and hold some variable as gate if your function my_func_extended can be calledhttps://pgpedia.info/f/fmgr_hook.htmlWith this option, the execution of my_func_extended will be faster, but all other execution will be little bit slower (due overhead of hook). But the code probably will be more simpler than processing callback stack.plpgsql_check uses fmgr hook, and it is working well - just there can be some surprises, when the hook is activated in different order against function's execution, and then the FHET_END can be executed without related FHET_START. RegardsPavel \nBest regards,Ayush VatsaAWS", "msg_date": "Sun, 11 Aug 2024 11:41:11 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "On 11/08/2024 12:41, Pavel Stehule wrote:\n> ne 11. 8. 2024 v 9:23 odesílatel Ayush Vatsa <[email protected] \n> <mailto:[email protected]>> napsal:\n> \n> Hi PostgreSQL Community,\n> \n> I have a scenario where I am working with two functions: one in SQL\n> and another in C, where the SQL function is a wrapper around C\n> function. 
Here’s an example:\n> \n> |CREATE OR REPLACE FUNCTION my_func(IN input text) RETURNS BIGINT AS\n> $$ DECLARE result BIGINT; BEGIN SELECT col2 INTO result FROM\n> my_func_extended(input); RETURN result; END; $$ LANGUAGE plpgsql;\n> CREATE OR REPLACE FUNCTION my_func_extended( IN input text, OUT col1\n> text, OUT col2 BIGINT ) RETURNS SETOF record AS 'MODULE_PATHNAME',\n> 'my_func_extended' LANGUAGE C STRICT PARALLEL SAFE; |\n> \n> I need to prevent direct execution of |my_func_extended| from psql\n> while still allowing it to be called from within the wrapper\n> function |my_func|.\n> \n> I’m considering the following options:\n> \n> 1. Using GRANT/REVOKE in SQL to manage permissions.\n> 2. Adding a check in the C function to allow execution only if\n> |my_func| is in the call stack (previous parent or something),\n> and otherwise throwing an error.\n> \n> Is there an existing approach to achieve this, or would you\n> recommend a specific solution?\n> \n> You can use fmgr hook, and hold some variable as gate if your function \n> my_func_extended can be called\n> \n> https://pgpedia.info/f/fmgr_hook.html \n> <https://pgpedia.info/f/fmgr_hook.html>\n> \n> With this option, the execution of my_func_extended will be faster, but \n> all other execution will be little bit slower (due overhead of hook). \n> But the code probably will be more simpler than processing callback stack.\n> \n> plpgsql_check uses fmgr hook, and it is working well - just there can be \n> some surprises, when the hook is activated in different order against \n> function's execution, and then the FHET_END can be executed without \n> related FHET_START.\n\nSounds complicated. I would go with the GRANT approach. Make my_func() a \nSECURITY DEFINER function, and revoke access to my_func_extended() for \nall other roles.\n\nAnother option to consider is to not expose my_func_extended() at the \nSQL level in the first place, and rewrite my_func() in C. Dunno how \ncomplicated the logic in my_func() is, if that makes sense.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sun, 11 Aug 2024 15:08:26 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "ne 11. 8. 2024 v 14:08 odesílatel Heikki Linnakangas <[email protected]>\nnapsal:\n\n> On 11/08/2024 12:41, Pavel Stehule wrote:\n> > ne 11. 8. 2024 v 9:23 odesílatel Ayush Vatsa <[email protected]\n> > <mailto:[email protected]>> napsal:\n> >\n> > Hi PostgreSQL Community,\n> >\n> > I have a scenario where I am working with two functions: one in SQL\n> > and another in C, where the SQL function is a wrapper around C\n> > function. Here’s an example:\n> >\n> > |CREATE OR REPLACE FUNCTION my_func(IN input text) RETURNS BIGINT AS\n> > $$ DECLARE result BIGINT; BEGIN SELECT col2 INTO result FROM\n> > my_func_extended(input); RETURN result; END; $$ LANGUAGE plpgsql;\n> > CREATE OR REPLACE FUNCTION my_func_extended( IN input text, OUT col1\n> > text, OUT col2 BIGINT ) RETURNS SETOF record AS 'MODULE_PATHNAME',\n> > 'my_func_extended' LANGUAGE C STRICT PARALLEL SAFE; |\n> >\n> > I need to prevent direct execution of |my_func_extended| from psql\n> > while still allowing it to be called from within the wrapper\n> > function |my_func|.\n> >\n> > I’m considering the following options:\n> >\n> > 1. Using GRANT/REVOKE in SQL to manage permissions.\n> > 2. 
Adding a check in the C function to allow execution only if\n> > |my_func| is in the call stack (previous parent or something),\n> > and otherwise throwing an error.\n> >\n> > Is there an existing approach to achieve this, or would you\n> > recommend a specific solution?\n> >\n> > You can use fmgr hook, and hold some variable as gate if your function\n> > my_func_extended can be called\n> >\n> > https://pgpedia.info/f/fmgr_hook.html\n> > <https://pgpedia.info/f/fmgr_hook.html>\n> >\n> > With this option, the execution of my_func_extended will be faster, but\n> > all other execution will be little bit slower (due overhead of hook).\n> > But the code probably will be more simpler than processing callback\n> stack.\n> >\n> > plpgsql_check uses fmgr hook, and it is working well - just there can be\n> > some surprises, when the hook is activated in different order against\n> > function's execution, and then the FHET_END can be executed without\n> > related FHET_START.\n>\n> Sounds complicated. I would go with the GRANT approach. Make my_func() a\n> SECURITY DEFINER function, and revoke access to my_func_extended() for\n> all other roles.\n>\n> Another option to consider is to not expose my_func_extended() at the\n> SQL level in the first place, and rewrite my_func() in C. Dunno how\n> complicated the logic in my_func() is, if that makes sense.\n>\n\n+1\n\nThe SPI API is not difficult, and this looks like best option\n\nRegards\n\nPavel\n\n\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n\nne 11. 8. 2024 v 14:08 odesílatel Heikki Linnakangas <[email protected]> napsal:On 11/08/2024 12:41, Pavel Stehule wrote:\n> ne 11. 8. 2024 v 9:23 odesílatel Ayush Vatsa <[email protected] \n> <mailto:[email protected]>> napsal:\n> \n>     Hi PostgreSQL Community,\n> \n>     I have a scenario where I am working with two functions: one in SQL\n>     and another in C, where the SQL function is a wrapper around C\n>     function. Here’s an example:\n> \n>     |CREATE OR REPLACE FUNCTION my_func(IN input text) RETURNS BIGINT AS\n>     $$ DECLARE result BIGINT; BEGIN SELECT col2 INTO result FROM\n>     my_func_extended(input); RETURN result; END; $$ LANGUAGE plpgsql;\n>     CREATE OR REPLACE FUNCTION my_func_extended( IN input text, OUT col1\n>     text, OUT col2 BIGINT ) RETURNS SETOF record AS 'MODULE_PATHNAME',\n>     'my_func_extended' LANGUAGE C STRICT PARALLEL SAFE; |\n> \n>     I need to prevent direct execution of |my_func_extended| from psql\n>     while still allowing it to be called from within the wrapper\n>     function |my_func|.\n> \n>     I’m considering the following options:\n> \n>      1. Using GRANT/REVOKE in SQL to manage permissions.\n>      2. Adding a check in the C function to allow execution only if\n>         |my_func| is in the call stack (previous parent or something),\n>         and otherwise throwing an error.\n> \n>     Is there an existing approach to achieve this, or would you\n>     recommend a specific solution?\n> \n> You can use fmgr hook, and hold some variable as gate if your function \n> my_func_extended can be called\n> \n> https://pgpedia.info/f/fmgr_hook.html \n> <https://pgpedia.info/f/fmgr_hook.html>\n> \n> With this option, the execution of my_func_extended will be faster, but \n> all other execution will be little bit slower (due overhead of hook). 
\n> But the code probably will be more simpler than processing callback stack.\n> \n> plpgsql_check uses fmgr hook, and it is working well - just there can be \n> some surprises, when the hook is activated in different order against \n> function's execution, and then the FHET_END can be executed without \n> related FHET_START.\n\nSounds complicated. I would go with the GRANT approach. Make my_func() a \nSECURITY DEFINER function, and revoke access to my_func_extended() for \nall other roles.\n\nAnother option to consider is to not expose my_func_extended() at the \nSQL level in the first place, and rewrite my_func() in C. Dunno how \ncomplicated the logic in my_func() is, if that makes sense.+1 The SPI API is not difficult, and this looks like best optionRegardsPavel\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sun, 11 Aug 2024 14:11:59 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "Thanks for the responses.\n\n> I would go with the GRANT approach. Make my_func() a\nSECURITY DEFINER function, and revoke access to my_func_extended() for\nall other roles.\nThis sounds reasonable, and can be one of the options.\n\n> Dunno how\ncomplicated the logic in my_func() is, if that makes sense.\nActually my_func_extended already exists hence I don't want\nto touch its C definition, nor wanted to duplicate the logic.\n\n>The SPI API is not difficult, and this looks like best option\nSorry didn't understand this part, are you suggesting I can have called\nmy_func_extended() through SPI inside my_func(), but didnt that also\nrequired\nmy_func_extended() declaration present in SQL ? And If that is present then\nanyone can call my_func_extended() directly.\n\nRegards\nAyush\nAWS\n\nThanks for the responses.> I would go with the GRANT approach. Make my_func() aSECURITY DEFINER function, and revoke access to my_func_extended() forall other roles.This sounds reasonable, and can be one of the options.> Dunno howcomplicated the logic in my_func() is, if that makes sense.Actually my_func_extended already exists hence I don't want to touch its C definition, nor wanted to duplicate the logic.>The SPI API is not difficult, and this looks like best optionSorry didn't understand this part, are you suggesting I can have called my_func_extended() through SPI inside my_func(), but didnt that also required my_func_extended() declaration present in SQL ? And If that is present thenanyone can call my_func_extended() directly.RegardsAyushAWS", "msg_date": "Sun, 11 Aug 2024 19:04:41 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "ne 11. 8. 2024 v 15:34 odesílatel Ayush Vatsa <[email protected]>\nnapsal:\n\n> Thanks for the responses.\n>\n> > I would go with the GRANT approach. 
Make my_func() a\n> SECURITY DEFINER function, and revoke access to my_func_extended() for\n> all other roles.\n> This sounds reasonable, and can be one of the options.\n>\n> > Dunno how\n> complicated the logic in my_func() is, if that makes sense.\n> Actually my_func_extended already exists hence I don't want\n> to touch its C definition, nor wanted to duplicate the logic.\n>\n> >The SPI API is not difficult, and this looks like best option\n> Sorry didn't understand this part, are you suggesting I can have called\n> my_func_extended() through SPI inside my_func(), but didnt that also\n> required\n> my_func_extended() declaration present in SQL ? And If that is present then\n> anyone can call my_func_extended() directly.\n>\n\nno, my proposal is write your my_func in C - like Heikki proposes, then\nmy_func_extended should not be visible from SQL, and then you don't need to\nsolve this issue.\n\n\n\n>\n> Regards\n> Ayush\n> AWS\n>\n\nne 11. 8. 2024 v 15:34 odesílatel Ayush Vatsa <[email protected]> napsal:Thanks for the responses.> I would go with the GRANT approach. Make my_func() aSECURITY DEFINER function, and revoke access to my_func_extended() forall other roles.This sounds reasonable, and can be one of the options.> Dunno howcomplicated the logic in my_func() is, if that makes sense.Actually my_func_extended already exists hence I don't want to touch its C definition, nor wanted to duplicate the logic.>The SPI API is not difficult, and this looks like best optionSorry didn't understand this part, are you suggesting I can have called my_func_extended() through SPI inside my_func(), but didnt that also required my_func_extended() declaration present in SQL ? And If that is present thenanyone can call my_func_extended() directly.no, my proposal is write your my_func in C - like Heikki proposes, then my_func_extended should not be visible from SQL, and then you don't need to solve this issue. RegardsAyushAWS", "msg_date": "Sun, 11 Aug 2024 15:43:47 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "Understood, Thanks for the help.\n\nRegards\nAyush\n\n>\n\nUnderstood, Thanks for the help.RegardsAyush", "msg_date": "Sun, 11 Aug 2024 19:26:48 +0530", "msg_from": "Ayush Vatsa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Sounds complicated. I would go with the GRANT approach. Make my_func() a \n> SECURITY DEFINER function, and revoke access to my_func_extended() for \n> all other roles.\n\n+1\n\n> Another option to consider is to not expose my_func_extended() at the \n> SQL level in the first place, and rewrite my_func() in C. Dunno how \n> complicated the logic in my_func() is, if that makes sense.\n\nAnother way to think about that is \"push down into C the part of\nmy_func() that you feel is necessary to make my_func_extended()\nsafely callable\". Personally I'd probably change my_func_extended()\nitself to do that, but if you feel a need to leave it alone, you\ncould write a C wrapper function. Anyway my point is you might\nnot have to move *all* of my_func()'s functionality into C. 
Think\nabout what it is exactly that makes you feel it's unsafe to call\nmy_func_extended() directly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Aug 2024 11:29:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restricting Direct Access to a C Function in PostgreSQL" } ]
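As an annotation to the thread above (not one of its messages): a sketch of the GRANT/REVOKE route Heikki suggested and Tom seconded, spelled out against the function signatures from the example. The role name app_user is an assumption, and it assumes both functions have the same owner so the SECURITY DEFINER wrapper can still reach the revoked function.

REVOKE EXECUTE ON FUNCTION my_func_extended(text) FROM PUBLIC;

ALTER FUNCTION my_func(text) SECURITY DEFINER;
ALTER FUNCTION my_func(text) SET search_path = pg_catalog, pg_temp;  -- usual hardening for SECURITY DEFINER

GRANT EXECUTE ON FUNCTION my_func(text) TO app_user;

-- app_user can now call my_func('...') as before, while a direct
-- SELECT * FROM my_func_extended('...') fails with a permission-denied error.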
[ { "msg_contents": "Hi,\n\nI was testing creating a rule that uses RETURNING and noticed a difference\nbetween the extended query protocol and the simple query protocol. In the\nformer, RETURNING is ignored (at least in my case) and the latter it is\nrespected:\n\nCREATE table test (id bigint, deleted boolean);\n\n\nCREATE RULE soft_delete AS ON DELETE TO test DO INSTEAD (UPDATE test set\ndeleted = true WHERE id = old.id RETURNING old.*);\n\n\nINSERT INTO test values (1, false);\n\n\n# extended protocol result\n\npostgres=# DELETE FROM test WHERE id = $1 RETURNING * \\bind 1 \\g\n\nDELETE 0\n\n\n# simple protocol result\n\npostgres=# DELETE FROM test WHERE id = 1 RETURNING *;\n\n id | deleted\n\n----+---------\n\n 1 | t\n\n(1 row)\n\n\nDELETE 0\n\n\nI was wondering if this is something that is just fundamentally not\nexpected to work or if it might be able to work without jeopardizing\ncritical parts of Postgres. If the latter I was interested in digging\nthrough the code and seeing if I could figure it out.\n\n\nNote that I work on a driver/client for Postgres and the example above came\nfrom a user. I'm not sure if it's the best way to do what they want but\ntheir question sparked my interest in the general behaviour of returning\nfrom rules with the extended query protocol.\n\n\nThanks\n\nHi,I was testing creating a rule that uses RETURNING and noticed a difference between the extended query protocol and the simple query protocol. In the former, RETURNING is ignored (at least in my case) and the latter it is respected:\nCREATE table test (id bigint, deleted boolean);\nCREATE RULE soft_delete AS ON DELETE TO test DO INSTEAD (UPDATE test set deleted = true WHERE id = old.id RETURNING old.*);\nINSERT INTO test values (1, false);# extended protocol resultpostgres=# DELETE FROM test WHERE id = $1 RETURNING * \\bind 1 \\g\nDELETE 0# simple protocol resultpostgres=# DELETE FROM test WHERE id = 1 RETURNING *; id | deleted ----+---------  1 | t(1 row)\nDELETE 0I was wondering if this is something that is just fundamentally not expected to work or if it might be able to work without jeopardizing critical parts of Postgres. If the latter I was interested in digging through the code and seeing if I could figure it out.Note that I work on a driver/client for Postgres and the example above came from a user. I'm not sure if it's the best way to do what they want but their question sparked my interest in the general behaviour of returning from rules with the extended query protocol.  Thanks", "msg_date": "Sun, 11 Aug 2024 07:29:25 -0400", "msg_from": "Greg Rychlewski <[email protected]>", "msg_from_op": true, "msg_subject": "Returning from a rule with extended query protocol" }, { "msg_contents": "On Sun, 11 Aug 2024 at 13:29, Greg Rychlewski <[email protected]> wrote:\n> I was testing creating a rule that uses RETURNING and noticed a difference between the extended query protocol and the simple query protocol. In the former, RETURNING is ignored (at least in my case) and the latter it is respected:\n\nThat seems like a bug to me. The simple and extended protocol should\nreturn the same data for the same query. I'm guessing CREATE RULE\nisn't often enough for this difference to be noticed earlier. 
So yeah\nplease dig through the code and submit a patch to fix this.\n\n\n", "msg_date": "Sun, 11 Aug 2024 22:15:41 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Returning from a rule with extended query protocol" }, { "msg_contents": "Greg Rychlewski <[email protected]> writes:\n> I was testing creating a rule that uses RETURNING and noticed a difference\n> between the extended query protocol and the simple query protocol. In the\n> former, RETURNING is ignored (at least in my case) and the latter it is\n> respected:\n\nI think this might be the same issue recently discussed here:\n\nhttps://www.postgresql.org/message-id/flat/1df84daa-7d0d-e8cc-4762-85523e45e5e7%40mailbox.org\n\nThat discussion was leaning towards the idea that the cost-benefit\nof fixing this isn't attractive and we should just document the\ndiscrepancy. However, with two reports now, maybe we should rethink.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Aug 2024 17:54:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Returning from a rule with extended query protocol" }, { "msg_contents": "On Sun, 11 Aug 2024 at 23:54, Tom Lane <[email protected]> wrote:\n> I think this might be the same issue recently discussed here:\n>\n> https://www.postgresql.org/message-id/flat/1df84daa-7d0d-e8cc-4762-85523e45e5e7%40mailbox.org\n\nYeah that's definitely the same issue.\n\n> That discussion was leaning towards the idea that the cost-benefit\n> of fixing this isn't attractive and we should just document the\n> discrepancy. However, with two reports now, maybe we should rethink.\n\nI think it's interesting that both reports use rules in the same way,\ni.e. to implement soft-deletes. That indeed seems like a pretty good\nusecase for them. And since pretty much every serious client library\nuses the extended query protocol, this breaks that usecase. But if\nit's hard to fix, I'm indeed not sure if it's worth the effort. If we\ndon't we should definitely document it though, at CREATE RULE and in\nthe protocol spec.\n\n\n", "msg_date": "Mon, 12 Aug 2024 23:26:02 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Returning from a rule with extended query protocol" } ]
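As an annotation to the thread above (not part of it): the same discrepancy seen from libpq, which is roughly how a driver such as the reporter's runs into it — PQexec() uses the simple query protocol while PQexecParams() uses the extended protocol. The sketch reuses the test table and soft_delete rule from the first message, assumes a local "dbname=postgres" connection, and omits error handling.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");   /* assumed conninfo */
    const char *params[1] = {"1"};
    PGresult   *res;

    /* simple protocol: the rule's RETURNING row comes back */
    res = PQexec(conn, "DELETE FROM test WHERE id = 1 RETURNING *");
    printf("simple protocol:   %d row(s)\n", PQntuples(res));
    PQclear(res);

    /* extended protocol: zero rows, matching the report above */
    res = PQexecParams(conn,
                       "DELETE FROM test WHERE id = $1 RETURNING *",
                       1, NULL, params, NULL, NULL, 0);
    printf("extended protocol: %d row(s)\n", PQntuples(res));
    PQclear(res);

    PQfinish(conn);
    return 0;
}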
[ { "msg_contents": "Hi,\r\n\r\n\r\n4, 5 ===\r\n\r\n\r\n&gt; if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\r\n&gt;&nbsp; &nbsp; &nbsp;(SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT &amp;&amp; info != XLOG_HEAP_INPLACE) ||\r\n&gt;&nbsp; &nbsp; &nbsp;ctx-&gt;fast_forward)\r\n&gt;&nbsp; &nbsp; &nbsp;return;\r\n\r\n\r\n\r\nI think during fast forward, we also need handle the xlog that marks a transaction\r\nas&nbsp;catalog modifying, or the snapshot might lose some transactions?\r\n\r\n\r\n&gt; That way we'd still rely on what's being done in the XLOG_HEAP_INPLACE case\r\n\r\n\r\n+\t\tif (SnapBuildCurrentState(builder) &gt;= SNAPBUILD_BUILDING_SNAPSHOT)\r\n+\t\t{\r\n+\t\t\t/* Currently only XLOG_HEAP_INPLACE means a catalog modifying */\r\n+\t\t\tif (info == XLOG_HEAP_INPLACE &amp;&amp; TransactionIdIsValid(xid))\r\n+\t\t\t\tReorderBufferXidSetCatalogChanges(ctx-&gt;reorder, xid, buf-&gt;origptr);\r\n+\t\t}\r\n\r\n\r\n\r\nWe only call&nbsp;ReorderBufferXidSetCatalogChanges() for the xlog that marks a transaction as&nbsp;catalog\r\nmodifying, and we don't care about the other steps being done in the xlog, so I think the current\r\napproach is ok.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nHi,4, 5 ===> if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||>     (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT && info != XLOG_HEAP_INPLACE) ||>     ctx->fast_forward)>     return;I think during fast forward, we also need handle the xlog that marks a transactionas catalog modifying, or the snapshot might lose some transactions?> That way we'd still rely on what's being done in the XLOG_HEAP_INPLACE case+ if (SnapBuildCurrentState(builder) >= SNAPBUILD_BUILDING_SNAPSHOT)+ {+ /* Currently only XLOG_HEAP_INPLACE means a catalog modifying */+ if (info == XLOG_HEAP_INPLACE && TransactionIdIsValid(xid))+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);+ }We only call ReorderBufferXidSetCatalogChanges() for the xlog that marks a transaction as catalogmodifying, and we don't care about the other steps being done in the xlog, so I think the currentapproach is ok.--Regards,ChangAo Chen", "msg_date": "Mon, 12 Aug 2024 16:34:25 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 12, 2024 at 04:34:25PM +0800, cca5507 wrote:\n> Hi,\n> \n> \n> 4, 5 ===\n> \n> \n> &gt; if (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n> &gt;&nbsp; &nbsp; &nbsp;(SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT &amp;&amp; info != XLOG_HEAP_INPLACE) ||\n> &gt;&nbsp; &nbsp; &nbsp;ctx-&gt;fast_forward)\n> &gt;&nbsp; &nbsp; &nbsp;return;\n> \n> \n> \n> I think during fast forward, we also need handle the xlog that marks a transaction\n> as&nbsp;catalog modifying, or the snapshot might lose some transactions?\n\nI think it's fine to skip during fast forward as we are not generating logical\nchanges. 
It's done that way in master, in your proposal and in my \"if\" proposals.\nNote that my proposals related to the if conditions are for heap2_decode and\nheap_decode (not xact_decode).\n\n> &gt; That way we'd still rely on what's being done in the XLOG_HEAP_INPLACE case\n> \n> \n> +\t\tif (SnapBuildCurrentState(builder) &gt;= SNAPBUILD_BUILDING_SNAPSHOT)\n> +\t\t{\n> +\t\t\t/* Currently only XLOG_HEAP_INPLACE means a catalog modifying */\n> +\t\t\tif (info == XLOG_HEAP_INPLACE &amp;&amp; TransactionIdIsValid(xid))\n> +\t\t\t\tReorderBufferXidSetCatalogChanges(ctx-&gt;reorder, xid, buf-&gt;origptr);\n> +\t\t}\n> \n> \n> \n> We only call&nbsp;ReorderBufferXidSetCatalogChanges() for the xlog that marks a transaction as&nbsp;catalog\n> modifying, and we don't care about the other steps being done in the xlog, so I think the current\n> approach is ok.\n\nYeah, I think your proposal does not do anything wrong. I just prefer to put\neverything in a single if condition (as per my proposal) so that we can jump\ndirectly in the appropriate case. I think that way the code is easier to maintain\ninstead of having to deal with the same info values in distinct places.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Aug 2024 10:35:44 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\r\n\r\nThanks for the comments!\r\n\r\n\r\n- I think it's fine to skip during fast forward as we are not generating logical\r\n- changes. It's done that way in master, in your proposal and in my \"if\" proposals.\r\n- Note that my proposals related to the if conditions are for heap2_decode and\r\n- heap_decode (not xact_decode).\r\n\r\n\r\n\r\nBut note that in xact_decode(), case&nbsp;XLOG_XACT_INVALIDATIONS, we call\r\nReorderBufferXidSetCatalogChanges() even if we are fast-forwarding, it might\r\nbe better to be consistent.\r\n\r\n\r\nIn addition, we don't decode anything during fast forward, but the snapshot might\r\nserialize to disk. If we skip calling ReorderBufferXidSetCatalogChanges(), the snapshot\r\nmay be wrong on disk.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nHi,Thanks for the comments!- I think it's fine to skip during fast forward as we are not generating logical- changes. It's done that way in master, in your proposal and in my \"if\" proposals.- Note that my proposals related to the if conditions are for heap2_decode and- heap_decode (not xact_decode).But note that in xact_decode(), case XLOG_XACT_INVALIDATIONS, we callReorderBufferXidSetCatalogChanges() even if we are fast-forwarding, it mightbe better to be consistent.In addition, we don't decode anything during fast forward, but the snapshot mightserialize to disk. 
If we skip calling ReorderBufferXidSetCatalogChanges(), the snapshotmay be wrong on disk.--Regards,ChangAo Chen", "msg_date": "Mon, 12 Aug 2024 19:38:53 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\r\n\r\nI refactor the code and fix the git apply warning&nbsp;according to [1].\r\n\r\n\r\nHere are the new version patches.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\r\n\r\n\r\n\r\n[1]&nbsp;https://www.postgresql.org/message-id/Zrmh7X8jYCbFYXjH%40ip-10-97-1-34.eu-west-3.compute.internal", "msg_date": "Tue, 13 Aug 2024 12:23:04 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 13, 2024 at 12:23:04PM +0800, cca5507 wrote:\n> Hi,\n> \n> I refactor the code and fix the git apply warning&nbsp;according to [1].\n> \n> \n> Here are the new version patches.\n\nThanks!\n\n1 ===\n\n+ /* True if the xlog marks the transaction as containing catalog changes */\n+ bool set_catalog_changes = (info == XLOG_HEAP2_NEW_CID);\n\n if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n ctx->fast_forward)\n+ {\n+ /*\n+ * If the transaction contains catalog changes, we need mark it in\n+ * reorder buffer before return as the snapshot only tracks catalog\n+ * modifying transactions. The transaction before BUILDING_SNAPSHOT\n+ * won't be tracked anyway(see SnapBuildCommitTxn), so skip it.\n+ */\n+ if (set_catalog_changes && TransactionIdIsValid(xid) &&\n+ SnapBuildCurrentState(builder) >= SNAPBUILD_BUILDING_SNAPSHOT)\n+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n+\n return;\n+ }\n\nI still prefer to replace the above with:\n\nif (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT && info != XLOG_HEAP2_NEW_CID) ||\n ctx->fast_forward)\n return;\n\nLet's see what others think.\n\n2 ===\n\n+ /* True if the xlog marks the transaction as containing catalog changes */\n+ bool set_catalog_changes = (info == XLOG_HEAP_INPLACE);\n\n if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n ctx->fast_forward)\n+ {\n+ /*\n+ * If the transaction contains catalog changes, we need mark it in\n+ * reorder buffer before return as the snapshot only tracks catalog\n+ * modifying transactions. The transaction before BUILDING_SNAPSHOT\n+ * won't be tracked anyway(see SnapBuildCommitTxn), so skip it.\n+ */\n+ if (set_catalog_changes && TransactionIdIsValid(xid) &&\n+ SnapBuildCurrentState(builder) >= SNAPBUILD_BUILDING_SNAPSHOT)\n+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n+\n return;\n+ }\n\nI still prefer to replace the above with:\n\nif (SnapBuildCurrentState(builder) < SNAPBUILD_BUILDING_SNAPSHOT ||\n (SnapBuildCurrentState(builder) == SNAPBUILD_BUILDING_SNAPSHOT && info != XLOG_HEAP_INPLACE) ||\n ctx->fast_forward)\n return;\n\nLet's see what others think.\n\n3 ===\n\nI re-read your comments in [0] and it looks like you've concern about\nthe 2 \"if\" I'm proposing above and the fast forward handling. 
Is that the case\nor is your fast forward concern unrelated to my proposals?\n\n\n\nNot sure what happened but it looks like your reply in [0] is not part of the\ninitial thread [1], but created a new thread instead, making the whole\nconversation difficult to follow.\n\n[0]: https://www.postgresql.org/message-id/tencent_8DEC9842690A9B6AFD52D4659EF0700E9409%40qq.com\n[1]: https://www.postgresql.org/message-id/flat/tencent_6AAF072A7623A11A85C0B5FD290232467808%40qq.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 06:19:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" } ]
[ { "msg_contents": "Hello,\n\nI believe the topic on the link below\nhttps://www.postgresql.org/message-id/flat/98760523-a0d2-8705-38e3-c4602ecf2448%402ndquadrant.com\nshould be re-discussed, as more than 6 years passed from the last message.\nLet me summarize the topic first and ask the real question: Can declared\ncursors be developed that support parallelism?\n\n[1] Tomas Vondra stated that cursors generally do not benefit from parallel\nquery. There are two main reasons why parallelism is disabled for\nuser-defined cursors (or queries that might get suspended):\n(a) We can't predict what will happen while the query is suspended (and the\ntransaction is still in \"parallel mode\"), e.g. the user might run arbitrary\nDML which is not allowed.\n(b) If the cursor gets suspended, the parallel workers would be still\nassigned to it and could not be used for anything else.\nHe proposed to add a new cursor option (PARALLEL), which would allow\nparallel plans for that particular user-defined cursor. He also attached\nthe first version of the patch and indicated that the patch is experimental.\n\n[2] Robert Haas stated that the proposed patch is a small portion of what\nshould be done to cover all the holes.\n\n[3] Tomas Vondra stated that it should be checked at planning time if the\ntransaction is in parallel mode and the query contains unsafe/restricted\nfunctions.\n\n[4] Robert Haas said;\nThe main points I want to make clearly understood is the current design\nrelies on (1) functions being labeled correctly and (2) other dangerous\ncode paths being unreachable because there's nothing that runs between\nEnterParallelMode and ExitParallelMode which could invoke them, except by\ncalling a mislabeled function. Your patch expands the vulnerability\nsurface from \"executor code that can be reached without calling a\nmislabeled function\" to \"any code that can be reached by typing an SQL\ncommand\". Just rejecting any queries that are parallel-unsafe probably\ncloses a good chunk of the holes, but that still leaves a lot of code\nthat's never been run in parallel mode before potentially now running in\nparallel mode. Error handling might be a problem, too: what happens if a\nparallel worker is killed while the query is suspended?\nHe also proposed to write a test patch that keeps parallel mode active at\nall times except when running a query that contains something\nparallel-unsafe.\n\n[5] Tomas Vondra proposed a simpler option - what if we only allow fetches\nfrom the PARALLEL cursor while the cursor is open?\n BEGIN;\n ...\n DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;\n FETCH 1000 FROM x;\n FETCH 1000 FROM x;\n FETCH 1000 FROM x;\n CLOSE x;\n ...\n COMMIT;\nbut adding any other command between the OPEN/CLOSE commands would fail.\nThat should close all the holes with parallel-unsafe stuff, right?\nOf course, this won't solve the issue with error handling / killing\nsuspended workers (which didn't occur to me before as a possible issue at\nall, so that's for pointing that out). 
But that's a significantly more\nlimited issue to fix than all the parallel-unsafe bits.\n\n[6] Robert Haas asked some questions about the suggestion that Tomas stated\nand also made a comment about the materialization issue that Craig stated.\nHe also said; If you're running a query like \"SELECT * FROM bigtable\", you\ndon't need parallel query in the first place, because a single backend is\nquite capable of sending back the rows as fast as a client can read them.\nIf you're running a query like \"SELECT * FROM bigtable WHERE <highly\nselective predicate>\" then that's a good use case for parallel query, but\nthen materializing it isn't that bad because the result set is a lot\nsmaller than the original table.\n\n[7] Simon Riggs said;\nAt present, one major use of Cursors is in postgres_fdw. In that usage, the\nquery executes and returns all the rows. No other side execution is\npossible. How do we make parallel query work for Cursors, if not by Tomas'\nproposal? If a parallel cursor is requested, we could simply prevent other\nintermediate commands other than FETCH (next).\n\n[8] Robert Haas proposed providing some kind of infrastructure for workers\nto detach from a parallel query. Let's say the leader is either (a)\nsuspending execution of the query, because it's a cursor, or (b) not able\nto absorb rows as fast as workers are generating them.\nIn the former situation, we'd like to get rid of all workers; in the latter\nsituation, some workers. In the former situation, getting all workers to\nshut down cleanly would let us exit parallel mode (and perhaps re-enter it\nlater when we resume execution of the query). In the latter situation, we\ncould avoid wasting workers on queries where the leader can't keep up so\nthat those worker slots are available to other queries that can benefit\nfrom them.\nHe stated that there were some problems finding a point at which tuples\ncould be safely stopped from being returned.\nHe considered two approaches for handling parallel workers when a query is\nsuspended:\n(a) Keep Workers Running: This would involve keeping workers active during\nsuspension, but it requires complex state synchronization and could tie up\nresources for long periods.\n(b) Restrict Backend Activity: Imposing strict limits on actions while a\nparallel cursor is open (e.g., only allowing FETCH) could prevent some\nissues, but it's highly restrictive and doesn't address all problems, such\nas potential conflicts in transaction state and lock management. Both\napproaches are challenging but potentially feasible with enough effort.\n\n[9] Robert Haas stated if you restrict operations to only fetching from the\ncursor and ensure that both protocol messages and SQL commands are locked\ndown, with errors automatically killing the workers, this approach might\nwork. However, challenges remain, such as handling errors when the leader\nis idle, since reporting these errors is tricky due to protocol\nlimitations. Additionally, if fetching from the cursor is the only action\nallowed, using `PQsetSingleRowMode` instead of a cursor might be a simpler\nsolution.\n\n[10] Robert Haas stated parallel mode is designed to handle errors during\nquery execution only. Extending parallel mode beyond a query's context\nintroduces significant risks, including crashes or incorrect results.\nRestricting post-query operations to a very limited set might work, but\nallowing arbitrary PL/pgSQL code execution is too broad and risky.\n\nThis is the summary of the topic. 
Let me ask the question again: Can\ndeclared cursors be developed that support parallelism?\n\n[1]\nhttps://www.postgresql.org/message-id/98760523-a0d2-8705-38e3-c4602ecf2448%402ndquadrant.com\n[2]\nhttps://www.postgresql.org/message-id/CA%2BTgmoaJVMbQbHn3i_Uzz_vSGsbDsavYkPV4Tqz%3DUbg%2Bv8%2BLUQ%40mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/dfe4adec-ad9b-723c-d501-1eedc96837a3%402ndquadrant.com\n[4]\nhttps://www.postgresql.org/message-id/CA%2BTgmobcUxTPLbjFL4PQa_07RC5wdEr-N_Zi%2BEOGuJ8Ui1Vy8Q%40mail.gmail.com\n[5]\nhttps://www.postgresql.org/message-id/8bdb6684-09d7-f799-0a6a-362cdc251b31%402ndquadrant.com\n[6]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYWoqDUViwPTk-rOJbGG8aqEVAQWBgauVbYaUxznSA%3D%3Dg%40mail.gmail.com\n[7]\nhttps://www.postgresql.org/message-id/CANP8%2BjJoAitOY5uM6L8xYjPHK-RjJtmsZ5TZKGrH8m54U9sfzA%40mail.gmail.com\n[7-more]\nhttps://www.postgresql.org/message-id/CANP8%2BjJBHDu%2BkgH2Jy34zzx9C5x61LF7JkWODPaRKuXMhD0vKw%40mail.gmail.com\n[8]\nhttps://www.postgresql.org/message-id/CA%2BTgmobXWEqRUr0k1RS%3Dx4NaAbi%2B9R-0z%2Bpp%2BVEuLYKorjbbYg%40mail.gmail.com\n[9]\nhttps://www.postgresql.org/message-id/CA%2BTgmoaBYa_WDSFhZA6sPYVnBWH_vwqpXcy3iA93oqaecj2p7A%40mail.gmail.com\n[10]\nhttps://www.postgresql.org/message-id/CA%2BTgmoZfwTc8DcoNQ_V2%3DUbmY%3DsVuPg%2B%2BMF6eNaWDVjaBczwrQ%40mail.gmail.com\n\nRegards,\n\n--\nMeftun Cincioglu\n\nHello,I believe the topic on the link belowhttps://www.postgresql.org/message-id/flat/98760523-a0d2-8705-38e3-c4602ecf2448%402ndquadrant.comshould be re-discussed, as more than 6 years passed from the last message. Let me summarize the topic first and ask the real question: Can declared cursors be developed that support parallelism?[1] Tomas Vondra stated that cursors generally do not benefit from parallel query. There are two main reasons why parallelism is disabled for user-defined cursors (or queries that might get suspended):(a) We can't predict what will happen while the query is suspended (and the transaction is still in \"parallel mode\"), e.g. the user might run arbitrary DML which is not allowed.(b) If the cursor gets suspended, the parallel workers would be still assigned to it and could not be used for anything else.He proposed to add a new cursor option (PARALLEL), which would allow parallel plans for that particular user-defined cursor. He also attached the first version of the patch and indicated that the patch is experimental.[2] Robert Haas stated that the proposed patch is a small portion of what should be done to cover all the holes.[3] Tomas Vondra stated that it should be checked at planning time if the transaction is in parallel mode and the query contains unsafe/restricted functions.[4] Robert Haas said;The main points I want to make clearly understood is the current design relies on (1) functions being labeled correctly and (2) other dangerous code paths being unreachable because there's nothing that runs between EnterParallelMode and ExitParallelMode which could invoke them, except by calling a mislabeled function.  Your patch expands the vulnerability surface from \"executor code that can be reached without calling a mislabeled function\" to \"any code that can be reached by typing an SQL command\". Just rejecting any queries that are parallel-unsafe probably closes a good chunk of the holes, but that still leaves a lot of code that's never been run in parallel mode before potentially now running in parallel mode. 
Error handling might be a problem, too: what happens if a parallel worker is killed while the query is suspended?He also proposed to write a test patch that keeps parallel mode active at all times except when running a query that contains something parallel-unsafe.[5] Tomas Vondra proposed a simpler option - what if we only allow fetches from the PARALLEL cursor while the cursor is open?    BEGIN;    ...    DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;    FETCH 1000 FROM x;    FETCH 1000 FROM x;    FETCH 1000 FROM x;    CLOSE x;    ...    COMMIT;but adding any other command between the OPEN/CLOSE commands would fail. That should close all the holes with parallel-unsafe stuff, right?Of course, this won't solve the issue with error handling / killing suspended workers (which didn't occur to me before as a possible issue at all, so that's for pointing that out). But that's a significantly more limited issue to fix than all the parallel-unsafe bits.[6] Robert Haas asked some questions about the suggestion that Tomas stated and also made a comment about the materialization issue that Craig stated. He also said; If you're running a query like \"SELECT * FROM bigtable\", you don't need parallel query in the first place, because a single backend is quite capable of sending back the rows as fast as a client can read them.  If you're running a query like \"SELECT * FROM bigtable WHERE <highly selective predicate>\" then that's a good use case for parallel query, but then materializing it isn't that bad because the result set is a lot smaller than the original table.[7] Simon Riggs said;At present, one major use of Cursors is in postgres_fdw. In that usage, the query executes and returns all the rows. No other side execution is possible. How do we make parallel query work for Cursors, if not by Tomas' proposal? If a parallel cursor is requested, we could simply prevent other intermediate commands other than FETCH (next).[8] Robert Haas proposed providing some kind of infrastructure for workers to detach from a parallel query. Let's say the leader is either (a) suspending execution of the query, because it's a cursor, or (b) not able to absorb rows as fast as workers are generating them.In the former situation, we'd like to get rid of all workers; in the latter situation, some workers. In the former situation, getting all workers to shut down cleanly would let us exit parallel mode (and perhaps re-enter it later when we resume execution of the query). In the latter situation, we could avoid wasting workers on queries where the leader can't keep up so that those worker slots are available to other queries that can benefit from them.He stated that there were some problems finding a point at which tuples could be safely stopped from being returned.He considered two approaches for handling parallel workers when a query is suspended:(a) Keep Workers Running: This would involve keeping workers active during suspension, but it requires complex state synchronization and could tie up resources for long periods.(b) Restrict Backend Activity: Imposing strict limits on actions while a parallel cursor is open (e.g., only allowing FETCH) could prevent some issues, but it's highly restrictive and doesn't address all problems, such as potential conflicts in transaction state and lock management. 
Both approaches are challenging but potentially feasible with enough effort.[9] Robert Haas stated if you restrict operations to only fetching from the cursor and ensure that both protocol messages and SQL commands are locked down, with errors automatically killing the workers, this approach might work. However, challenges remain, such as handling errors when the leader is idle, since reporting these errors is tricky due to protocol limitations. Additionally, if fetching from the cursor is the only action allowed, using `PQsetSingleRowMode` instead of a cursor might be a simpler solution.[10] Robert Haas stated parallel mode is designed to handle errors during query execution only. Extending parallel mode beyond a query's context introduces significant risks, including crashes or incorrect results. Restricting post-query operations to a very limited set might work, but allowing arbitrary PL/pgSQL code execution is too broad and risky.This is the summary of the topic. Let me ask the question again: Can declared cursors be developed that support parallelism?[1] https://www.postgresql.org/message-id/98760523-a0d2-8705-38e3-c4602ecf2448%402ndquadrant.com[2] https://www.postgresql.org/message-id/CA%2BTgmoaJVMbQbHn3i_Uzz_vSGsbDsavYkPV4Tqz%3DUbg%2Bv8%2BLUQ%40mail.gmail.com[3] https://www.postgresql.org/message-id/dfe4adec-ad9b-723c-d501-1eedc96837a3%402ndquadrant.com[4] https://www.postgresql.org/message-id/CA%2BTgmobcUxTPLbjFL4PQa_07RC5wdEr-N_Zi%2BEOGuJ8Ui1Vy8Q%40mail.gmail.com[5] https://www.postgresql.org/message-id/8bdb6684-09d7-f799-0a6a-362cdc251b31%402ndquadrant.com[6] https://www.postgresql.org/message-id/CA%2BTgmoYWoqDUViwPTk-rOJbGG8aqEVAQWBgauVbYaUxznSA%3D%3Dg%40mail.gmail.com[7] https://www.postgresql.org/message-id/CANP8%2BjJoAitOY5uM6L8xYjPHK-RjJtmsZ5TZKGrH8m54U9sfzA%40mail.gmail.com[7-more] https://www.postgresql.org/message-id/CANP8%2BjJBHDu%2BkgH2Jy34zzx9C5x61LF7JkWODPaRKuXMhD0vKw%40mail.gmail.com[8] https://www.postgresql.org/message-id/CA%2BTgmobXWEqRUr0k1RS%3Dx4NaAbi%2B9R-0z%2Bpp%2BVEuLYKorjbbYg%40mail.gmail.com[9] https://www.postgresql.org/message-id/CA%2BTgmoaBYa_WDSFhZA6sPYVnBWH_vwqpXcy3iA93oqaecj2p7A%40mail.gmail.com[10] https://www.postgresql.org/message-id/CA%2BTgmoZfwTc8DcoNQ_V2%3DUbmY%3DsVuPg%2B%2BMF6eNaWDVjaBczwrQ%40mail.gmail.comRegards,--Meftun Cincioglu", "msg_date": "Mon, 12 Aug 2024 11:55:17 +0300", "msg_from": "=?UTF-8?Q?Meftun_Cincio=C4=9Flu?= <[email protected]>", "msg_from_op": true, "msg_subject": "Enabling parallel execution for cursors" } ]
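As an annotation to the summarised thread (not part of it): the status quo that motivates the proposal can be observed with EXPLAIN — the same query may get a parallel plan when run directly, but is planned without parallel workers once it sits behind a user-declared cursor. Table big_t and column val are hypothetical.

-- Plain query: the planner may choose a Gather / parallel plan.
EXPLAIN (COSTS OFF) SELECT count(*) FROM big_t WHERE val > 100;

-- Same query behind a cursor: no parallel plan is considered, which is
-- the limitation the PARALLEL cursor option discussed above would lift.
BEGIN;
EXPLAIN (COSTS OFF) DECLARE c CURSOR FOR SELECT count(*) FROM big_t WHERE val > 100;
COMMIT;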
[ { "msg_contents": "Hey. Currently synchronous_commit is by default disabled for logical \napply worker on the\nground that reported flush_lsn includes only locally flushed data so slot\n(publisher) preserves everything higher than this, and so in case of \nsubscriber\nrestart no data is lost. However, imagine that subscriber is made highly\navailable by standby to which synchronous replication is enabled. Then \nreported\nflush_lsn is ignorant of this synchronous replication progress, and in \ncase of\nfailover data loss may occur if subscriber managed to ack flush_lsn ahead of\nsyncrep. Moreover, it is almost silent due to this\n\n     else if (start_lsn < slot->data.confirmed_flush)\n     {\n         /*\n          * It might seem like we should error out in this case, but it's\n          * pretty common for a client to acknowledge a LSN it doesn't \nhave to\n          * do anything for, and thus didn't store persistently, because the\n          * xlog records didn't result in anything relevant for logical\n          * decoding. Clients have to be able to do that to support \nsynchronous\n          * replication.\n          *\n          * Starting at a different LSN than requested might not catch \ncertain\n          * kinds of client errors; so the client may wish to check that\n          * confirmed_flush_lsn matches its expectations.\n          */\n         elog(LOG, \"%X/%X has been already streamed, forwarding to %X/%X\",\n              LSN_FORMAT_ARGS(start_lsn),\n              LSN_FORMAT_ARGS(slot->data.confirmed_flush));\n\n         start_lsn = slot->data.confirmed_flush;\n     }\n\n\nin logical.c\n\nAttached draft patch fixes this by taking into account syncrep progress in\n\nworker reporting.\n\n\n-- cheers, arseny", "msg_date": "Mon, 12 Aug 2024 12:41:55 +0300", "msg_from": "Arseny Sher <[email protected]>", "msg_from_op": true, "msg_subject": "Taking into account syncrep position in flush_lsn reported by apply\n worker" }, { "msg_contents": "Sorry for the poor formatting of the message above, this should be better:\n\nHey. Currently synchronous_commit is disabled for logical apply worker\non the ground that reported flush_lsn includes only locally flushed data\nso slot (publisher) preserves everything higher than this, and so in\ncase of subscriber restart no data is lost. However, imagine that\nsubscriber is made highly available by standby to which synchronous\nreplication is enabled. Then reported flush_lsn is ignorant of this\nsynchronous replication progress, and in case of failover data loss may\noccur if subscriber managed to ack flush_lsn ahead of syncrep. Moreover,\nit is silent due to this\n\n\telse if (start_lsn < slot->data.confirmed_flush)\n\t{\n\t\t/*\n\t\t * It might seem like we should error out in this case, but it's\n\t\t * pretty common for a client to acknowledge a LSN it doesn't have to\n\t\t * do anything for, and thus didn't store persistently, because the\n\t\t * xlog records didn't result in anything relevant for logical\n\t\t * decoding. 
Clients have to be able to do that to support synchronous\n\t\t * replication.\n\t\t *\n\t\t * Starting at a different LSN than requested might not catch certain\n\t\t * kinds of client errors; so the client may wish to check that\n\t\t * confirmed_flush_lsn matches its expectations.\n\t\t */\n\t\telog(LOG, \"%X/%X has been already streamed, forwarding to %X/%X\",\n\t\t\t LSN_FORMAT_ARGS(start_lsn),\n\t\t\t LSN_FORMAT_ARGS(slot->data.confirmed_flush));\n\n\t\tstart_lsn = slot->data.confirmed_flush;\n\t}\n\n\nin logical.c\n\nAttached draft patch fixes this by taking into account syncrep progress\nin worker reporting.", "msg_date": "Mon, 12 Aug 2024 13:13:19 +0300", "msg_from": "Arseny Sher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "On Mon, Aug 12, 2024 at 3:43 PM Arseny Sher <[email protected]> wrote:\n>\n> Sorry for the poor formatting of the message above, this should be better:\n>\n> Hey. Currently synchronous_commit is disabled for logical apply worker\n> on the ground that reported flush_lsn includes only locally flushed data\n> so slot (publisher) preserves everything higher than this, and so in\n> case of subscriber restart no data is lost. However, imagine that\n> subscriber is made highly available by standby to which synchronous\n> replication is enabled. Then reported flush_lsn is ignorant of this\n> synchronous replication progress, and in case of failover data loss may\n> occur if subscriber managed to ack flush_lsn ahead of syncrep.\n>\n\nWon't the same can be achieved by enabling the synchronous_commit\nparameter for a subscription?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Aug 2024 09:05:48 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "\n\nOn 8/13/24 06:35, Amit Kapila wrote:\n> On Mon, Aug 12, 2024 at 3:43 PM Arseny Sher <[email protected]> wrote:\n>>\n>> Sorry for the poor formatting of the message above, this should be better:\n>>\n>> Hey. Currently synchronous_commit is disabled for logical apply worker\n>> on the ground that reported flush_lsn includes only locally flushed data\n>> so slot (publisher) preserves everything higher than this, and so in\n>> case of subscriber restart no data is lost. However, imagine that\n>> subscriber is made highly available by standby to which synchronous\n>> replication is enabled. Then reported flush_lsn is ignorant of this\n>> synchronous replication progress, and in case of failover data loss may\n>> occur if subscriber managed to ack flush_lsn ahead of syncrep.\n>>\n> \n> Won't the same can be achieved by enabling the synchronous_commit\n> parameter for a subscription?\n\nNope, because it would force WAL flush and wait for replication to the\nstandby in the apply worker, slowing down it. 
The logic missing\ncurrently is not to wait for the synchronous commit, but still mind its\nprogress in the flush_lsn reporting.\n\n-- cheers, arseny\n\n\n", "msg_date": "Wed, 14 Aug 2024 16:54:52 +0300", "msg_from": "Arseny Sher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "On 14/08/2024 16:54, Arseny Sher wrote:\n> On 8/13/24 06:35, Amit Kapila wrote:\n>> On Mon, Aug 12, 2024 at 3:43 PM Arseny Sher <[email protected]> wrote:\n>>>\n>>> Sorry for the poor formatting of the message above, this should be \n>>> better:\n>>>\n>>> Hey. Currently synchronous_commit is disabled for logical apply worker\n>>> on the ground that reported flush_lsn includes only locally flushed data\n>>> so slot (publisher) preserves everything higher than this, and so in\n>>> case of subscriber restart no data is lost. However, imagine that\n>>> subscriber is made highly available by standby to which synchronous\n>>> replication is enabled. Then reported flush_lsn is ignorant of this\n>>> synchronous replication progress, and in case of failover data loss may\n>>> occur if subscriber managed to ack flush_lsn ahead of syncrep.\n>>\n>> Won't the same can be achieved by enabling the synchronous_commit\n>> parameter for a subscription?\n> \n> Nope, because it would force WAL flush and wait for replication to the\n> standby in the apply worker, slowing down it. The logic missing\n> currently is not to wait for the synchronous commit, but still mind its\n> progress in the flush_lsn reporting.\n\nI think this patch makes sense. I'm not sure we've actually made any \npromises on it, but it feels wrong that the slot's LSN might be advanced \npast the LSN that's been has been acknowledged by the replica, if \nsynchronous replication is configured. I see little downside in making \nthat promise.\n\n> +\t/*\n> +\t * If synchronous replication is configured, take into account its position.\n> +\t */\n> +\tif (SyncRepStandbyNames != NULL && SyncRepStandbyNames[0] != '\\0')\n> +\t{\n> +\t\tLWLockAcquire(SyncRepLock, LW_SHARED);\n> +\t\tlocal_flush = Min(local_flush, WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH]);\n> +\t\tLWLockRelease(SyncRepLock);\n> +\t}\n> +\n\nShould probably use the SyncStandbysDefined() macro here. Or check \nWalSndCtl->sync_standbys_defined like SyncRepWaitForLSN() does; not sure \nwhich would be more appropriate here.\n\nShould the synchronous_commit setting also affect this?\n\nPlease also check if the docs need to be updated, or if a paragraph \nshould be added somewhere on this behavior.\n\nA TAP test case would be nice. Not sure how complicated it will be, but \nif not too complicated, it'd be nice to include it in check-world.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 23:55:12 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "On Wed, Aug 21, 2024 at 2:25 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 14/08/2024 16:54, Arseny Sher wrote:\n> > On 8/13/24 06:35, Amit Kapila wrote:\n> >> On Mon, Aug 12, 2024 at 3:43 PM Arseny Sher <[email protected]> wrote:\n> >>>\n> >>> Sorry for the poor formatting of the message above, this should be\n> >>> better:\n> >>>\n> >>> Hey. 
Currently synchronous_commit is disabled for logical apply worker\n> >>> on the ground that reported flush_lsn includes only locally flushed data\n> >>> so slot (publisher) preserves everything higher than this, and so in\n> >>> case of subscriber restart no data is lost. However, imagine that\n> >>> subscriber is made highly available by standby to which synchronous\n> >>> replication is enabled. Then reported flush_lsn is ignorant of this\n> >>> synchronous replication progress, and in case of failover data loss may\n> >>> occur if subscriber managed to ack flush_lsn ahead of syncrep.\n> >>\n> >> Won't the same can be achieved by enabling the synchronous_commit\n> >> parameter for a subscription?\n> >\n> > Nope, because it would force WAL flush and wait for replication to the\n> > standby in the apply worker, slowing down it. The logic missing\n> > currently is not to wait for the synchronous commit, but still mind its\n> > progress in the flush_lsn reporting.\n>\n> I think this patch makes sense. I'm not sure we've actually made any\n> promises on it, but it feels wrong that the slot's LSN might be advanced\n> past the LSN that's been has been acknowledged by the replica, if\n> synchronous replication is configured. I see little downside in making\n> that promise.\n>\n\nOne possible downside of such a promise could be that the publisher\nmay slow down for sync replication because it has to wait for all the\nconfigured sync_standbys of subscribers to acknowledge the LSN. I\ndon't know how many applications can be impacted due to this if we do\nit by default but if we feel there won't be any such cases or they\nwill be in the minority then it is okay to proceed with this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Aug 2024 11:55:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "On 21/08/2024 09:25, Amit Kapila wrote:\n> On Wed, Aug 21, 2024 at 2:25 AM Heikki Linnakangas <[email protected]> wrote:\n>>\n>> On 14/08/2024 16:54, Arseny Sher wrote:\n>>> On 8/13/24 06:35, Amit Kapila wrote:\n>>>> On Mon, Aug 12, 2024 at 3:43 PM Arseny Sher <[email protected]> wrote:\n>>>>>\n>>>>> Sorry for the poor formatting of the message above, this should be\n>>>>> better:\n>>>>>\n>>>>> Hey. Currently synchronous_commit is disabled for logical apply worker\n>>>>> on the ground that reported flush_lsn includes only locally flushed data\n>>>>> so slot (publisher) preserves everything higher than this, and so in\n>>>>> case of subscriber restart no data is lost. However, imagine that\n>>>>> subscriber is made highly available by standby to which synchronous\n>>>>> replication is enabled. Then reported flush_lsn is ignorant of this\n>>>>> synchronous replication progress, and in case of failover data loss may\n>>>>> occur if subscriber managed to ack flush_lsn ahead of syncrep.\n>>>>\n>>>> Won't the same can be achieved by enabling the synchronous_commit\n>>>> parameter for a subscription?\n>>>\n>>> Nope, because it would force WAL flush and wait for replication to the\n>>> standby in the apply worker, slowing down it. The logic missing\n>>> currently is not to wait for the synchronous commit, but still mind its\n>>> progress in the flush_lsn reporting.\n>>\n>> I think this patch makes sense. 
I'm not sure we've actually made any\n>> promises on it, but it feels wrong that the slot's LSN might be advanced\n>> past the LSN that's been has been acknowledged by the replica, if\n>> synchronous replication is configured. I see little downside in making\n>> that promise.\n> \n> One possible downside of such a promise could be that the publisher\n> may slow down for sync replication because it has to wait for all the\n> configured sync_standbys of subscribers to acknowledge the LSN. I\n> don't know how many applications can be impacted due to this if we do\n> it by default but if we feel there won't be any such cases or they\n> will be in the minority then it is okay to proceed with this.\n\nIt only slows down updating the flush LSN on the publisher, which is \nupdated quite lazily anyway.\n\nA more serious scenario is if the sync replica crashes or is not \nresponding at all. In that case, the flush LSN on the publisher cannot \nadvance, and WAL starts to accumulate. However, if a sync replica is not \nresponding, that's very painful for the (subscribing) server anyway: all \ncommits will hang waiting for the replica. Holding back the flush LSN on \nthe publisher seems like a minor problem compared to that.\n\nIt would be good to have some kind of an escape hatch though. If you get \ninto that situation, is there a way to advance the publisher's flush LSN \neven though the synchronous replica has crashed? You can temporarily \nturn off synchronous replication on the subscriber. That will release \nany COMMITs on the server too. In theory you might not want that, but in \npractice stuck COMMITs are so painful that if you are taking manual \naction, you probably do want to release them as well.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:10:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" }, { "msg_contents": "On Wed, Aug 21, 2024 at 12:40 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 21/08/2024 09:25, Amit Kapila wrote:\n> >>\n> >> I think this patch makes sense. I'm not sure we've actually made any\n> >> promises on it, but it feels wrong that the slot's LSN might be advanced\n> >> past the LSN that's been has been acknowledged by the replica, if\n> >> synchronous replication is configured. I see little downside in making\n> >> that promise.\n> >\n> > One possible downside of such a promise could be that the publisher\n> > may slow down for sync replication because it has to wait for all the\n> > configured sync_standbys of subscribers to acknowledge the LSN. I\n> > don't know how many applications can be impacted due to this if we do\n> > it by default but if we feel there won't be any such cases or they\n> > will be in the minority then it is okay to proceed with this.\n>\n> It only slows down updating the flush LSN on the publisher, which is\n> updated quite lazily anyway.\n>\n\nBut doesn't that also mean that if the logical subscriber is\nconfigured in synchronous_standby_names, then the publisher's\ntransactions also need to wait for such an update? We do update it\nlazily but as soon as the operation is applied to the subscriber the\ntransaction on publisher will be released, however, IIUC the same\nwon't be true after the patch.\n\n> A more serious scenario is if the sync replica crashes or is not\n> responding at all. 
In that case, the flush LSN on the publisher cannot\n> advance, and WAL starts to accumulate. However, if a sync replica is not\n> responding, that's very painful for the (subscribing) server anyway: all\n> commits will hang waiting for the replica. Holding back the flush LSN on\n> the publisher seems like a minor problem compared to that.\n>\n\nYeah, but as per my understanding that also means holding all the\nactive work/transactions on the publisher which doesn't sound to be a\nminor problem.\n\n> It would be good to have some kind of an escape hatch though. If you get\n> into that situation, is there a way to advance the publisher's flush LSN\n> even though the synchronous replica has crashed? You can temporarily\n> turn off synchronous replication on the subscriber. That will release\n> any COMMITs on the server too. In theory you might not want that, but in\n> practice stuck COMMITs are so painful that if you are taking manual\n> action, you probably do want to release them as well.\n>\n\nThis will work in the scenario you mentioned.\n\nIf the above understanding is correct and you agree that it is not a\ngood idea to hold back transactions on the publisher then we can think\nof a new subscription that allows the apply worker to wait for\nsynchronous replicas.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Aug 2024 14:29:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Taking into account syncrep position in flush_lsn reported by\n apply worker" } ]
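The fragment quoted in Heikki's review is the core of the proposed change: clamp the flush position the apply worker reports to whatever the synchronous standbys have already confirmed. Folding in his suggestion to test WalSndCtl->sync_standbys_defined rather than parsing SyncRepStandbyNames, a sketch looks roughly like this (its placement inside the worker's feedback code, and the handling of an invalid or unset sync-rep LSN, are left out):

    /*
     * Never report a flush position beyond what the configured synchronous
     * standbys have confirmed, so the publisher's slot cannot advance past
     * data that would be lost if the subscriber fails over.
     */
    if (WalSndCtl->sync_standbys_defined)
    {
        LWLockAcquire(SyncRepLock, LW_SHARED);
        local_flush = Min(local_flush, WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH]);
        LWLockRelease(SyncRepLock);
    }

Whether this should also be gated on the worker's synchronous_commit setting, as Heikki asks, is still an open question in the thread.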
[ { "msg_contents": "hi.\n{\n /*\n * For UPDATE/DELETE result relations, the attribute number of the row\n * identity junk attribute in the source plan's output tuples\n */\n AttrNumber ri_RowIdAttNo;\n\n /* Projection to generate new tuple in an INSERT/UPDATE */\n ProjectionInfo *ri_projectNew;\n\n /* arrays of stored generated columns expr states, for INSERT and UPDATE */\n ExprState **ri_GeneratedExprsI;\n ExprState **ri_GeneratedExprsU;\n}\nfor the struct ResultRelInfo, i've checked the above fields.\n\nI think first ri_RowIdAttNo applies to MERGE also. so the comments may\nnot be correct?\nOther files comments are fine.\n\n\nsee:\nExecInitModifyTable\n /*\n * For UPDATE/DELETE/MERGE, find the appropriate junk attr now, either\n * a 'ctid' or 'wholerow' attribute depending on relkind. For foreign\n * tables, the FDW might have created additional junk attr(s), but\n * those are no concern of ours.\n */\n if (operation == CMD_UPDATE || operation == CMD_DELETE ||\n operation == CMD_MERGE)\n\n\n", "msg_date": "Mon, 12 Aug 2024 18:03:22 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "minor comments issue in ResultRelInfo src/include/nodes/execnodes.h" }, { "msg_contents": "On Mon, 12 Aug 2024 at 22:03, jian he <[email protected]> wrote:\n> AttrNumber ri_RowIdAttNo;\n>\n> /* arrays of stored generated columns expr states, for INSERT and UPDATE */\n> ExprState **ri_GeneratedExprsI;\n> ExprState **ri_GeneratedExprsU;\n> }\n> for the struct ResultRelInfo, i've checked the above fields.\n>\n> I think first ri_RowIdAttNo applies to MERGE also. so the comments may\n> not be correct?\n\nYeah, ri_RowIdAttNo is used for MERGE. We should fix that comment.\n\n> Other files comments are fine.\n\nI'd say ri_GeneratedExprsI and ri_GeneratedExprsU are also used for\nMERGE and the comment for those is also outdated. See:\n\nExecMergeMatched -> ExecUpdateAct -> ExecUpdatePrepareSlot ->\nExecComputeStoredGenerated(..., CMD_UPDATE)\nExecMergeNotMatched -> ExecInsert -> ExecComputeStoredGenerated(..., CMD_INSERT)\n\nDavid\n\n\n", "msg_date": "Mon, 12 Aug 2024 22:32:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor comments issue in ResultRelInfo\n src/include/nodes/execnodes.h" }, { "msg_contents": "On Mon, 12 Aug 2024 at 22:32, David Rowley <[email protected]> wrote:\n>\n> On Mon, 12 Aug 2024 at 22:03, jian he <[email protected]> wrote:\n> > I think first ri_RowIdAttNo applies to MERGE also. so the comments may\n> > not be correct?\n>\n> Yeah, ri_RowIdAttNo is used for MERGE. We should fix that comment.\n>\n> > Other files comments are fine.\n>\n> I'd say ri_GeneratedExprsI and ri_GeneratedExprsU are also used for\n> MERGE and the comment for those is also outdated. See:\n\nI've pushed a patch to fix these. Thanks for the report.\n\nDavid\n\n\n", "msg_date": "Mon, 12 Aug 2024 23:42:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor comments issue in ResultRelInfo\n src/include/nodes/execnodes.h" } ]
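For the execnodes.h comments discussed above, the fix is essentially to mention MERGE alongside the commands already listed. A plausible corrected wording (the committed text may differ slightly) is:

    /*
     * For UPDATE/DELETE/MERGE result relations, the attribute number of the
     * row identity junk attribute in the source plan's output tuples
     */
    AttrNumber  ri_RowIdAttNo;

    /* Projection to generate new tuple in an INSERT/UPDATE */
    ProjectionInfo *ri_projectNew;

    /*
     * Arrays of stored generated columns ExprStates, for INSERT/UPDATE and
     * for the INSERT/UPDATE actions of MERGE.
     */
    ExprState **ri_GeneratedExprsI;     /* INSERT, and MERGE ... INSERT */
    ExprState **ri_GeneratedExprsU;     /* UPDATE, and MERGE ... UPDATE */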
[ { "msg_contents": "Hi hackers,\n\nI'd like to submit a patch that improves the estimated rows for queries \ncontaining (Var op Var) clauses by applying extended MCV statistics.\n\n*New functions:*\n\n * mcv_clauselist_selectivity_var_op_var() - calculates the selectivity\n for (Var op Var) clauses.\n * is_opclause_var_op_var() - Checks whether a clause is of the (Var op\n Var) form.\n\n*Implementation Details:*\n\n * A new 'if' statement was added to the 'clause_selectivity_ext()'\n function to handle (Var op Var) clauses. This allows the process to\n locate matching MCV extended statistics and calculate selectivity\n using the newly introduced function.\n * Additionally, I added 'if' statement\n in statext_is_compatible_clause_internal() function to determine\n which columns are included in the clause, find matching extended\n statistics, and then calculate selectivity through the new function.\n I did the same in mcv_get_match_bitmap() to check what values are\n true for (Var op Var).\n * To support this, I created a new enum type to differentiate between\n OR/AND and (Var op Var) clauses.\n\n*Examples:*\n\ncreate table t (a int, b int);\ninsert into t select mod(i,10), mod(i,10)+1 from \ngenerate_series(1,100000) s(i);\nanalyze t;\nexplain select * from t where a < b;\n`\n     Estimated:   33333\n     Actual:       100000\n\nexplain select * from t where a > b;\n`\n     Estimated:   33333\n     Actual:       100000\n\ncreate statistics s (mcv) on a,b from t;\nanalyze t;\nexplain select * from t where a < b;\n`\n     Estimated without patch:  33333\n     Estimated with patch:     100000\n     Actual:                             100000\n\nexplain select * from t where a > b;\n`\n     Estimated without patch:  33333\n     Estimated with patch:     100000\n     Actual:                             100000\n\n\nIf you want to see more examples, see regress tests in the patch.\n\n*Previous thread:*\n\nThis feature was originally developed two years ago in [1], and at that \ntime, the approach was almost the same. My implementation uses dedicated \nfunctions and 'if' statements directly for better readability and \nmaintainability. Additionally, there was a bug in the previous approach \nthat has been resolved with my patch. Here’s an example of the bug and \nits fix:\n\nCREATE TABLE foo (a int, b int);\nINSERT INTO foo SELECT x/10+1, x FROM generate_series(1,10000) g(x);\nANALYZE foo;\nEXPLAIN ANALYZE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);\n`\n     Estimated:   18\n     Actual:           9\n\nCREATE STATISTICS foo_s (mcv) ON a,b FROM foo;\nANALYZE foo;\nEXPLAIN ANALYZE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);\n`\n     Estimated previous patch:  18\n     Estimated current patch:      9\n     Actual:                                  9\n\n\n[1]: \nhttps://www.postgresql.org/message-id/flat/9e0a12e0-c05f-b193-ed3d-fe88bc1e5fe1%40enterprisedb.com\n\nI look forward to any feedback or suggestions from the community.\n\nBest regars,\nIlia Evdokimov\nTantor Labs LLC.", "msg_date": "Mon, 12 Aug 2024 13:42:07 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "Another issue mentioned in [1] involves cases where the clause is in the \nform (A op A). In my view, this isn't related to the current patch, as \nit can be addressed by rewriting the clause, similar to transforming A = \nA into A IS NOT NULL. 
This adjustment would result in more accurate \nestimation.\n\n[1]: \nhttps://www.postgresql.org/message-id/7C0F91B5-8A43-428B-8D31-556458720305%40enterprisedb.com\n\nIlia Evdokimov,\nTantor Labs LLC.\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 13:59:24 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "Hi! I think your work is important)\n\nI started reviewing it and want to suggest some changes to better code: \nI think we should consider the case where the expression is not neither \nan OpExpr and VarOpVar expression.\n\nHave you tested this code with any benchmarks?\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 12 Aug 2024 14:44:05 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "On 8/12/24 13:44, Alena Rybakina wrote:\n> Hi! I think your work is important)\n> \n\nI agree, and I'm grateful someone picked up the original patch. I'll try\nto help to keep it moving forward. If the thread gets stuck, feel free\nto ping me to take a look.\n\n> I started reviewing it and want to suggest some changes to better code:\n> I think we should consider the case where the expression is not neither\n> an OpExpr and VarOpVar expression.\n> \n\nDo you have some specific type of clauses in mind? Most of the extended\nstatistics only really handles this type of clauses, so I'm not sure\nit's feasible to extend that - at least not in this patch.\n\n> Have you tested this code with any benchmarks?\n>\nFWIW I think we need to test two things - that it (a) improves the\nestimates and (b) does not have significant overhead.\n\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 12 Aug 2024 13:53:32 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "On 12.8.24 14:53, Tomas Vondra wrote:\n\n> I agree, and I'm grateful someone picked up the original patch. I'll try\n> to help to keep it moving forward. If the thread gets stuck, feel free\n> to ping me to take a look.\nGood. Thank you!\n>> I started reviewing it and want to suggest some changes to better code:\n>> I think we should consider the case where the expression is not neither\n>> an OpExpr and VarOpVar expression.\n>>\n> Do you have some specific type of clauses in mind? Most of the extended\n> statistics only really handles this type of clauses, so I'm not sure\n> it's feasible to extend that - at least not in this patch.\n\nI agree with Alena that we need to consider the following clauses: (Expr \nop Var), (Var op Expr) and (Expr op Expr). And we need to return false \nin these cases because we did it before my patch in\n\n         /* Check if the expression has the right shape */\n         if (!examine_opclause_args(expr->args, &clause_expr, NULL, NULL))\n             return false;\n\nIn is_opclause_var_op_var() function it is really useless local Node \n*expr_left, *expr_right variables. However, we can't assign them NULL at \nthe begin because if I passed not-null pointers I have to return the \nvalues. 
Otherwise remain them NULL.\n\nNevertheless, thank you for review, Alena.\n\n>> Have you tested this code with any benchmarks?\n>>\n> FWIW I think we need to test two things - that it (a) improves the\n> estimates and (b) does not have significant overhead.\nYes, but only TPC-B. And the performance did not drop. In general, it'd \nbe better to do more tests and those listed by Tomas with new attached \npatch.", "msg_date": "Mon, 12 Aug 2024 18:57:19 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "On 8/12/24 17:57, Ilia Evdokimov wrote:\n> On 12.8.24 14:53, Tomas Vondra wrote:\n> \n>> I agree, and I'm grateful someone picked up the original patch. I'll try\n>> to help to keep it moving forward. If the thread gets stuck, feel free\n>> to ping me to take a look.\n> Good. Thank you!\n>>> I started reviewing it and want to suggest some changes to better code:\n>>> I think we should consider the case where the expression is not neither\n>>> an OpExpr and VarOpVar expression.\n>>>\n>> Do you have some specific type of clauses in mind? Most of the extended\n>> statistics only really handles this type of clauses, so I'm not sure\n>> it's feasible to extend that - at least not in this patch.\n> \n> I agree with Alena that we need to consider the following clauses: (Expr\n> op Var), (Var op Expr) and (Expr op Expr). And we need to return false\n> in these cases because we did it before my patch in\n> \n>         /* Check if the expression has the right shape */\n>         if (!examine_opclause_args(expr->args, &clause_expr, NULL, NULL))\n>             return false;\n> \n> In is_opclause_var_op_var() function it is really useless local Node\n> *expr_left, *expr_right variables. However, we can't assign them NULL at\n> the begin because if I passed not-null pointers I have to return the\n> values. Otherwise remain them NULL.\n> \n> Nevertheless, thank you for review, Alena.\n> \n\nAh, right. I agree we should handle clauses with expressions.\n\nI don't recall why I wrote is_opclause_var_op_var() like this, but I\nbelieve this was before we allowed extended statistics on expressions\n(which was added in 2021, the patch is from 2020). I don't see why it\ncould not return expressions, but I haven't tried.\n\n>>> Have you tested this code with any benchmarks?\n>>>\n>> FWIW I think we need to test two things - that it (a) improves the\n>> estimates and (b) does not have significant overhead.\n> Yes, but only TPC-B. And the performance did not drop. In general, it'd\n> be better to do more tests and those listed by Tomas with new attached\n> patch.\n\nIs TPC-B really interesting/useful for this patch? The queries are super\nsimple, with only a single clause (so it may not even get to the code\nhandling extended statistics). Did you create any extended stats?\n\nI think you'll need to construct a custom test, with queries that have\nmultiple (var op var) clauses, extended stats created, etc. And\nbenchmark that.\n\nFWIW I don't think it makes sense to benchmark the query execution - if\nthe estimate improves, it's possible to get arbitrary speedup, but\nthat's expected and mostly mostly irrelevant I think.\n\nWhat I'd focus on is benchmarking just the query planning - we need the\noverhead to be negligible (or at least small) so that it does not hurt\npeople with already good plans.\n\nBTW can you elaborate why you are interested in this patch? 
Do you just\nthink it's interesting/useful, or do you have a workload where it would\nactually help? I'm asking because me being uncertain how beneficial this\nis in practice (not just being nice in theory) was one of the reasons\nwhy I didn't do more work on this in 2021.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 12 Aug 2024 18:25:34 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "On 12.8.24 19:25, Tomas Vondra wrote:\n\n> Is TPC-B really interesting/useful for this patch? The queries are super\n> simple, with only a single clause (so it may not even get to the code\n> handling extended statistics). Did you create any extended stats?\n\nNo, it's not the case. I simply wanted to verify that other queries are \nnot slowed down after applying my patch.\n>\n> I think you'll need to construct a custom test, with queries that have\n> multiple (var op var) clauses, extended stats created, etc. And\n> benchmark that.\n\n\nI used the test generator from a previous thread [1] and ran it with \n|default_statistics_target = 1000| to achieve more accurate estimates \nfor 3000 rows. It would also be beneficial to run tests with 10,000 and \n100,000 rows for a broader perspective. I've attached both the Python \ntest and the results, including the data. Here’s a breakdown of the issues:\n\n 1. (A op A) Clause: Before applying my patch, there were poor estimates\n for expressions like |(A op A)|. Currently, we only have correct\n estimates for the |(A = A)| clause, which transforms into |A IS NOT\n NULL|. Should I address this in this thread? I believe we should\n extend the same correction to clauses like |(A != A)|, |(A < A)|,\n and similar conditions. However, this issue is not for current thread.\n 2. AND Clauses: The estimates for AND clauses were inaccurate before my\n patch. I noticed code segments where I could add something specific\n for the |(Var op Var)| clause, but I'm unsure if I'm missing\n anything crucial. If my understanding is incorrect, I'd appreciate\n any guidance or corrections.\n\n> FWIW I don't think it makes sense to benchmark the query execution - if\n> the estimate improves, it's possible to get arbitrary speedup, but\n> that's expected and mostly mostly irrelevant I think.\n>\n> What I'd focus on is benchmarking just the query planning - we need the\n> overhead to be negligible (or at least small) so that it does not hurt\n> people with already good plans.\n>\n> BTW can you elaborate why you are interested in this patch? Do you just\n> think it's interesting/useful, or do you have a workload where it would\n> actually help? I'm asking because me being uncertain how beneficial this\n> is in practice (not just being nice in theory) was one of the reasons\n> why I didn't do more work on this in 2021.\n\nI have two reasons for pursuing this. Firstly, I've encountered some of \nthese queries in practice, although they are quite rare. While it might \nbe easy to dismiss these cases due to their infrequency, I believe that \nwe shouldn't overlook the opportunity to develop better handling for \nthem, regardless of how seldom they occur.\n\nSecondly, I see that you're working on improving estimates for JOIN \nclauses in thread [2]. 
I believe that enhancing estimates for these rare \ncases could also benefit future work on JOIN queries, particularly those \nwith multiple |ON (T1.column = T2.column)| conditions, which are \nessentially |(Var op Var)| clauses. My idea is to start with non-JOIN \nqueries, and then apply the same approach to improve JOIN estimates. Of \ncourse, I might be wrong, but I think this approach has potential.\n\n\n[1]: \nhttps://www.postgresql.org/message-id/ecc0b08a-518d-7ad6-17ed-a5e962fc4f5f%40enterprisedb.com\n\n[2]: \nhttps://www.postgresql.org/message-id/flat/c8c0ff31-3a8a-7562-bbd3-78b2ec65f16c%40enterprisedb.com\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Mon, 19 Aug 2024 18:30:15 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "On 12.8.24 19:25, Tomas Vondra wrote:\n\n> Is TPC-B really interesting/useful for this patch? The queries are super\n> simple, with only a single clause (so it may not even get to the code\n> handling extended statistics). Did you create any extended stats?\n\nNo, it's not the case. I simply wanted to verify that other queries are \nnot slowed down after applying my patch.\n>\n> I think you'll need to construct a custom test, with queries that have\n> multiple (var op var) clauses, extended stats created, etc. And\n> benchmark that.\n\n\nI used the test generator from a previous thread [1] and ran it with \n|default_statistics_target = 1000| to achieve more accurate estimates \nfor 3000 rows. It would also be beneficial to run tests with 10,000 and \n100,000 rows for a broader perspective. I've attached the python test. \nHere’s a breakdown of the issues:\n\n 1. (A op A) Clause: Before applying my patch, there were poor estimates\n for expressions like |(A op A)|. Currently, we only have correct\n estimates for the |(A = A)| clause, which transforms into |A IS NOT\n NULL|. Should I address this in this thread? I believe we should\n extend the same correction to clauses like |(A != A)|, |(A < A)|,\n and similar conditions. However, this issue is not for current thread.\n 2. AND Clauses: The estimates for AND clauses were inaccurate before my\n patch. I noticed code segments where I could add something specific\n for the |(Var op Var)| clause, but I'm unsure if I'm missing\n anything crucial. If my understanding is incorrect, I'd appreciate\n any guidance or corrections.\n\n> FWIW I don't think it makes sense to benchmark the query execution - if\n> the estimate improves, it's possible to get arbitrary speedup, but\n> that's expected and mostly mostly irrelevant I think.\n>\n> What I'd focus on is benchmarking just the query planning - we need the\n> overhead to be negligible (or at least small) so that it does not hurt\n> people with already good plans.\n>\n> BTW can you elaborate why you are interested in this patch? Do you just\n> think it's interesting/useful, or do you have a workload where it would\n> actually help? I'm asking because me being uncertain how beneficial this\n> is in practice (not just being nice in theory) was one of the reasons\n> why I didn't do more work on this in 2021.\n>\n>\n> regards\n>\n\nI have two reasons for pursuing this. Firstly, I've encountered some of \nthese queries in practice, although they are quite rare. 
While it might \nbe easy to dismiss these cases due to their infrequency, I believe that \nwe shouldn't overlook the opportunity to develop better handling for \nthem, regardless of how seldom they occur.\n\nSecondly, I see that you're working on improving estimates for JOIN \nclauses in thread [2]. I believe that enhancing estimates for these rare \ncases could also benefit future work on JOIN queries, particularly those \nwith multiple |ON (T1.column = T2.column)| conditions, which are \nessentially |(Var op Var)| clauses. My idea is to start with non-JOIN \nqueries, and then apply the same approach to improve JOIN estimates. Of \ncourse, I might be wrong, but I think this approach has potential.\n\n\nP.S. If I sent this mail twice I'm sorry. I wanted to sent results of \nthe test, and it was not sent to hackers because of big size of attached \nfile. Now I sent only test.\n\n[1]: \nhttps://www.postgresql.org/message-id/ecc0b08a-518d-7ad6-17ed-a5e962fc4f5f%40enterprisedb.com\n\n[2]: \nhttps://www.postgresql.org/message-id/flat/c8c0ff31-3a8a-7562-bbd3-78b2ec65f16c%40enterprisedb.com \n\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Mon, 19 Aug 2024 19:04:40 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" }, { "msg_contents": "Hi everyone,\n\nI've taken a closer look at the patch and realized that we don't need \nthe new function 'mcv_clause_selectivity_var_op_var()' we can use \n'mcv_clause_selectivity()' instead.\n\nI'm attaching the updated patch and test generator.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Mon, 9 Sep 2024 13:43:02 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add support for (Var op Var) clause in extended MCV statistics" } ]
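Several messages above come back to the shape test the patch needs: an OpExpr whose two arguments are plain Vars, with the (Expr op Var), (Var op Expr) and (Expr op Expr) forms rejected, at least for now. A rough sketch of that check is below; the function name follows the patch description, but the body is illustrative only and skips RelabelType stripping, varno checks and the statistics-compatibility tests a real implementation needs.

    /*
     * Does the clause have the shape (Var op Var)?  If so, optionally return
     * the two Vars; anything else is rejected.
     */
    static bool
    is_opclause_var_op_var(Node *clause, Node **left, Node **right)
    {
        OpExpr     *expr;

        if (!is_opclause(clause))
            return false;

        expr = (OpExpr *) clause;
        if (list_length(expr->args) != 2)
            return false;

        if (!IsA(linitial(expr->args), Var) ||
            !IsA(lsecond(expr->args), Var))
            return false;

        if (left)
            *left = (Node *) linitial(expr->args);
        if (right)
            *right = (Node *) lsecond(expr->args);

        return true;
    }

Per Tomas's review, the other half of the work is benchmarking planning time alone (not execution), since the concern is overhead for queries whose plans were already fine.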
[ { "msg_contents": "Hi all,\n\nAttaching a patch to add remaining cached and loaded stats as mentioned \nin commit f68cd847fa40ead44a786b9c34aff9ccc048004b message. Existing TAP \ntests were updated to handle new stats. This patch has been tested on \nHEAD using \"make check-world\" after enabling injection points via \n\"--enable-injection-points\".\n\n-- \nKind Regards,\nYogesh Sharma", "msg_date": "Mon, 12 Aug 2024 10:25:13 -0400", "msg_from": "Yogesh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Injection Points remaining stats" }, { "msg_contents": "On Mon, Aug 12, 2024 at 10:25:13AM -0400, Yogesh Sharma wrote:\n> Attaching a patch to add remaining cached and loaded stats as mentioned in\n> commit f68cd847fa40ead44a786b9c34aff9ccc048004b message. Existing TAP tests\n> were updated to handle new stats. This patch has been tested on HEAD using\n> \"make check-world\" after enabling injection points via\n> \"--enable-injection-points\".\n\nThanks a lot for the patch. I should have tackled that in\nf68cd847fa40 but I've just lacked a combination of time and energy\nwhile the original commit was already enough.\n\nThe code indentation was a bit incorrect, and I think that we should\nalso have tests to stress that the insertion of the new stats is\ncorrect. I have fixed the indentation, added some tests and improved\na couple of surrounding descriptions while on it.\n\nI'm tempted to propose a separate improvement for the template of the\nfixed-numbered stats. We could do like pgstatfuncs.c where we use a\nmacro to define the routines of the counters, and have one function\nfor each counter incremented. That's a separate refactoring, so I'll\npropose that on a different thread.\n--\nMichael", "msg_date": "Mon, 19 Aug 2024 09:09:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection Points remaining stats" }, { "msg_contents": "On 8/18/24 20:09, Michael Paquier wrote:\n> f68cd847fa40 but I've just lacked a combination of time and energy\n> while the original commit was already enough.\n>\n> The code indentation was a bit incorrect, and I think that we should\n> also have tests to stress that the insertion of the new stats is\n> correct. I have fixed the indentation, added some tests and improved\n> a couple of surrounding descriptions while on it.\n\nThank you for committing. I was thinking to add such test in next patch \nset. I have a updated .vimrc to have correct indentation.\n\n> I'm tempted to propose a separate improvement for the template of the\n> fixed-numbered stats. We could do like pgstatfuncs.c where we use a\n> macro to define the routines of the counters, and have one function\n> for each counter incremented. That's a separate refactoring, so I'll\n> propose that on a different thread.\nI will take a look on this.\n\n-- \nKind Regards,\nYogesh Sharma\nPostgreSQL, Linux, and Networking\nOpen Source Enthusiast and Advocate\nPostgreSQL Contributors Team @ RDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:16:37 -0400", "msg_from": "Yogesh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection Points remaining stats" }, { "msg_contents": "On Thu, Aug 22, 2024 at 01:16:37PM -0400, Yogesh Sharma wrote:\n> On 8/18/24 20:09, Michael Paquier wrote:\n>> I'm tempted to propose a separate improvement for the template of the\n>> fixed-numbered stats. 
We could do like pgstatfuncs.c where we use a\n>> macro to define the routines of the counters, and have one function\n>> for each counter incremented. That's a separate refactoring, so I'll\n>> propose that on a different thread.\n>\n> I will take a look on this.\n\nThanks. If you are interested, here is the CF entry I have created\nfor it:\nhttps://commitfest.postgresql.org/49/5187/\n--\nMichael", "msg_date": "Fri, 23 Aug 2024 09:18:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection Points remaining stats" } ]
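Michael's follow-up idea, generating the per-counter reporting routines for the fixed-numbered stats from a macro in the style of pgstatfuncs.c, could look something like the sketch below. Every type, variable and function name here is hypothetical, chosen only to show the pattern; the real refactoring lives in the separate thread and CF entry he mentions.

    #include "postgres.h"

    /* hypothetical pending-counters struct for the fixed-numbered stats */
    typedef struct InjFixedPending
    {
        int64       numattach;
        int64       numdetach;
        int64       numrun;
        int64       numcached;
        int64       numloaded;
    } InjFixedPending;

    static InjFixedPending inj_fixed_pending;
    static bool have_inj_fixed_pending = false;

    /* one tiny reporting routine per counter, stamped out by a macro */
    #define INJ_FIXED_COUNTER(name)                         \
        void                                                \
        pgstat_report_inj_fixed_##name(void)                \
        {                                                   \
            inj_fixed_pending.name++;                       \
            have_inj_fixed_pending = true;                  \
        }

    INJ_FIXED_COUNTER(numattach)
    INJ_FIXED_COUNTER(numdetach)
    INJ_FIXED_COUNTER(numrun)
    INJ_FIXED_COUNTER(numcached)
    INJ_FIXED_COUNTER(numloaded)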
[ { "msg_contents": "Hello all,\nWhile working on our internal tools that utilise replication, we\nrealised that a new parameter was added to the internal C function\ncorresponding to pg_replication_origin_session_setup.\nHowever this parameter wasn't included in the user-facing API [1].\n\nIn 'src/backend/replication/logical/origin.c' at line 1359,\npg_replication_origin_session_setup function calls\n\n replorigin_session_setup(origin, 0);\n\nwhere currently 0 is assigned to the acquired_by parameter of the\nreplorigin_session_setup.\n\nI made this patch to the master which adds a way to control this\nparameter by adding a new version of the\npg_replication_origin_session_setup function with user facing\nparameters 'text int4' in place of the current 'text' while keeping\nthe existing variant\n(ensuring backwards compatibility). Could someone take a look at it?\n\n[1]: https://www.postgresql.org/docs/current/functions-admin.html#PG-REPLICATION-ORIGIN-SESSION-SETUP\n---\n\nThanks for the help,\nDoruk Yılmaz", "msg_date": "Mon, 12 Aug 2024 21:43:48 +0300", "msg_from": "Doruk Yilmaz <[email protected]>", "msg_from_op": true, "msg_subject": "[Patch] add new parameter to pg_replication_origin_session_setup" }, { "msg_contents": "On Mon, Aug 12, 2024, at 3:43 PM, Doruk Yilmaz wrote:\n> Hello all,\n\nHi!\n\n> While working on our internal tools that utilise replication, we\n> realised that a new parameter was added to the internal C function\n> corresponding to pg_replication_origin_session_setup.\n> However this parameter wasn't included in the user-facing API [1].\n\nI'm curious about your use case. Is it just because the internal function has a\ndifferent signature or your tool is capable of apply logical replication changes\nin parallel using the SQL API?\n\n> I made this patch to the master which adds a way to control this\n> parameter by adding a new version of the\n> pg_replication_origin_session_setup function with user facing\n> parameters 'text int4' in place of the current 'text' while keeping\n> the existing variant\n> (ensuring backwards compatibility). Could someone take a look at it?\n\nI did a quick look at your patch and have a few suggestions.\n\n* no documentation changes. Since the function you are changing has a new\nsignature, this change should be reflected in the documentation.\n* no need for a new internal function. The second parameter (PID) can be\noptional and defaults to 0 in this case. See how we changed the\npg_create_logical_replication_slot along the years add some IN parameters like\ntwophase and failover in the recent versions.\n* add a CF entry [1] for this patch so we don't forget it. Another advantage is\nthat this patch is covered by CI [2][3].\n\n\n[1] https://commitfest.postgresql.org/49/\n[2] https://wiki.postgresql.org/wiki/Cfbot\n[3] http://cfbot.cputube.org/index.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Aug 12, 2024, at 3:43 PM, Doruk Yilmaz wrote:Hello all,Hi!While working on our internal tools that utilise replication, werealised that a new parameter was added to the internal C functioncorresponding to pg_replication_origin_session_setup.However this parameter wasn't included in the user-facing API [1].I'm curious about your use case. 
Is it just because the internal function has adifferent signature or your tool is capable of apply logical replication changesin parallel using the SQL API?I made this patch to the master which adds a way to control thisparameter by adding a new version of thepg_replication_origin_session_setup function with user facingparameters 'text int4' in place of the current 'text' while keepingthe existing variant(ensuring backwards compatibility). Could someone take a look at it?I did a quick look at your patch and have a few suggestions.* no documentation changes. Since the function you are changing has a newsignature, this change should be reflected in the documentation.* no need for a new internal function. The second parameter (PID) can beoptional and defaults to 0 in this case. See how we changed thepg_create_logical_replication_slot along the years add some IN parameters liketwophase and failover in the recent versions.* add a CF entry [1] for this patch so we don't forget it. Another advantage isthat this patch is covered by CI [2][3].[1] https://commitfest.postgresql.org/49/[2] https://wiki.postgresql.org/wiki/Cfbot[3] http://cfbot.cputube.org/index.html--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Mon, 12 Aug 2024 18:48:17 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Patch] add new parameter to pg_replication_origin_session_setup" }, { "msg_contents": "Hello again,\n\nOn Tue, Aug 13, 2024 at 12:48 AM Euler Taveira <[email protected]> wrote:\n> I'm curious about your use case. Is it just because the internal function has a\n> different signature or your tool is capable of apply logical replication changes\n> in parallel using the SQL API?\n\nThe latter is correct, it applies logical replication changes in parallel.\nSince multiple connections may commit, we need all of them to be able\nto advance the replication origin.\n\n> * no documentation changes. Since the function you are changing has a new\n> signature, this change should be reflected in the documentation.\n> * no need for a new internal function. The second parameter (PID) can be\n> optional and defaults to 0 in this case. See how we changed the\n> pg_create_logical_replication_slot along the years add some IN parameters like\n> twophase and failover in the recent versions.\n\nI updated/rewrote the patch to reflect these suggestions.\nIt now has the same DEFAULT 0 style used in pg_create_logical_replication_slot.\nI also updated the documentation.\n\n> * add a CF entry [1] for this patch so we don't forget it. Another advantage is\n> that this patch is covered by CI [2][3].\nSadly I still can't log in to the Commitfest due to the cool-off\nperiod. I will create an entry as soon as this period ends.\n\nThanks for all the feedback,\nDoruk Yılmaz", "msg_date": "Thu, 15 Aug 2024 23:53:17 +0300", "msg_from": "Doruk Yilmaz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Patch] add new parameter to pg_replication_origin_session_setup" } ]
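Putting the pieces of this thread together, the C-level change is small: the SQL-callable wrapper reads the new optional argument and hands it to replorigin_session_setup() instead of hard-coding 0, while the DEFAULT 0 itself is attached at the SQL level, the same way pg_create_logical_replication_slot grew its optional parameters. A rough approximation of the wrapper (not the committed body) is:

    Datum
    pg_replication_origin_session_setup(PG_FUNCTION_ARGS)
    {
        char       *name;
        int         acquired_by;
        RepOriginId origin;

        /* existing permission / recovery prerequisite checks elided */

        name = text_to_cstring(PG_GETARG_TEXT_PP(0));
        acquired_by = PG_GETARG_INT32(1);   /* new argument, DEFAULT 0 in SQL */

        origin = replorigin_by_name(name, false);
        replorigin_session_setup(origin, acquired_by);

        replorigin_session_origin = origin;

        pfree(name);

        PG_RETURN_VOID();
    }

With acquired_by = 0 the behaviour is unchanged; a non-zero PID is meant to let another backend attach to an origin already acquired by the process with that PID, which is what a parallel applier like Doruk's needs.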
[ { "msg_contents": "Hi hackers,\n\nWe encountered an issue lately, that if the database grants too many roles\n`datacl` is toasted, following which, the drop database command will fail\nwith error \"wrong tuple length\".\n\nTo reproduce the issue, please follow below steps:\n\nCREATE DATABASE test;\n\n-- create helper function\nCREATE OR REPLACE FUNCTION data_tuple() returns text as $body$\ndeclare\n mycounter int;\nbegin\n for mycounter in select i from generate_series(1,2000) i loop\n execute 'CREATE\nROLE aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbb ' || mycounter;\n execute 'GRANT ALL ON DATABASE test to\naaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbb ' || mycounter;\n end loop;\n return 'ok';\nend;\n$body$ language plpgsql volatile strict;\n\n-- create roles and grant on the database.\nSELECT data_tuple();\n\n-- drop database command, this will result in \"wrong tuple length\" error.\nDROP DATABASE test;\n\nThe root cause of this behaviour is that the HeapTuple in dropdb\nfunction fetches a copy of pg_database tuple from system cache.\nBut the system cache flattens any toast attributes, which cause the length\ncheck to fail in heap_inplace_update.\n\nA patch for this issue is attached to the mail, the solution is to\nchange the logic to fetch the tuple by directly scanning pg_database rather\nthan using the catcache.\n\nRegards,\nAyush", "msg_date": "Tue, 13 Aug 2024 11:07:39 +0530", "msg_from": "Ayush Tiwari <[email protected]>", "msg_from_op": true, "msg_subject": "Drop database command will raise \"wrong tuple length\" if pg_database\n tuple contains toast attribute." }, { "msg_contents": "Hi Ayush,\n\nOn 8/13/24 07:37, Ayush Tiwari wrote:\n> Hi hackers,\n> \n> We encountered an issue lately, that if the database grants too many\n> roles `datacl` is toasted, following which, the drop database command\n> will fail with error \"wrong tuple length\".\n> \n> To reproduce the issue, please follow below steps:\n> \n> CREATE DATABASE test;\n> \n> -- create helper function\n> CREATE OR REPLACE FUNCTION data_tuple() returns text as $body$\n> declare\n>           mycounter int;\n> begin\n>           for mycounter in select i from generate_series(1,2000) i loop\n>                     execute 'CREATE\n> ROLE aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbb ' || mycounter;\n>                     execute 'GRANT ALL ON DATABASE test to\n> aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbb ' || mycounter;\n>           end loop;\n>           return 'ok';\n> end;\n> $body$ language plpgsql volatile strict; \n> \n> -- create roles and grant on the database.\n> SELECT data_tuple(); \n> \n> -- drop database command, this will result in \"wrong tuple length\" error.\n> DROP DATABASE test;\n> \n> The root cause of this behaviour is that the HeapTuple in dropdb\n> function fetches a copy of pg_database tuple from system cache.\n> But the system cache flattens any toast attributes, which cause the\n> length check to fail in heap_inplace_update.\n> \n> A patch for this issue is attached to the mail, the solution is to\n> change the logic to fetch the tuple by directly scanning pg_database\n> rather than using the catcache.\n> \n\nThanks for the report. I can reproduce the issue following your\ninstructions, and the fix seems reasonable ...\n\n\nBut there's also one thing I don't quite understand. 
I did look for\nother places that might have a similar issue, that is places that\n\n 1) lookup tuple using SearchSysCacheCopy1\n\n 2) call on the tuple heap_inplace_update\n\n\nAnd I found about four places doing that:\n\n - index_update_stats (src/backend/catalog/index.c)\n\n - create_toast_table (src/backend/catalog/toasting.c)\n\n - vac_update_relstats / vac_update_datfrozenxid (commands/vacuum.c)\n\n\nBut I haven't managed to trigger the same kind of failure for any of\nthose places, despite trying. AFAIK that's because those places update\npg_class, and that doesn't have TOAST, so the tuple length can't change.\n\n\nSo this fix seems reasonable.\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:26:01 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop database command will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute." }, { "msg_contents": "On 8/16/24 13:26, Tomas Vondra wrote:\n> Hi Ayush,\n> \n> ...\n> \n> So this fix seems reasonable.\n> \n\nI've pushed this to all affected branches, except for 11 which is EOL.\n\nI thought about adding a test, but I couldn't think of a TAP test where\nthis would really fit, and it didn't seem very practical to have a test\ncreating hundreds of roles. So I abandoned the idea.\n\n\nThanks for the report and the fix!\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 19 Aug 2024 00:35:39 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop database command will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute." }, { "msg_contents": "On Mon, 19 Aug 2024 00:35:39 +0200\nTomas Vondra <[email protected]> wrote:\n\n> On 8/16/24 13:26, Tomas Vondra wrote:\n> > Hi Ayush,\n> > \n> > ...\n> > \n> > So this fix seems reasonable.\n> > \n> \n> I've pushed this to all affected branches, except for 11 which is EOL.\n> \n> I thought about adding a test, but I couldn't think of a TAP test where\n> this would really fit, and it didn't seem very practical to have a test\n> creating hundreds of roles. So I abandoned the idea.\n\nI tried to add Assert in heap_inplace_update to prevent possible similar \nfailures, but I gave up because I could not find a good way to determine if\na tuple is detoasted of not.\n\nBy the way, I found a comment in vac_update_datfrozenxid() and EventTriggerOnLogin()\nthat explains why we could not use tuples from the syscache for heap_inplace_update. \nI think it is better ad d the same comment in dropdb(). I attached a trivial patch for it.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>", "msg_date": "Mon, 19 Aug 2024 18:01:54 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop database command will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute." }, { "msg_contents": "On 8/19/24 11:01, Yugo Nagata wrote:\n> On Mon, 19 Aug 2024 00:35:39 +0200\n> Tomas Vondra <[email protected]> wrote:\n> \n>> On 8/16/24 13:26, Tomas Vondra wrote:\n>>> Hi Ayush,\n>>>\n>>> ...\n>>>\n>>> So this fix seems reasonable.\n>>>\n>>\n>> I've pushed this to all affected branches, except for 11 which is EOL.\n>>\n>> I thought about adding a test, but I couldn't think of a TAP test where\n>> this would really fit, and it didn't seem very practical to have a test\n>> creating hundreds of roles. 
So I abandoned the idea.\n> \n> I tried to add Assert in heap_inplace_update to prevent possible similar \n> failures, but I gave up because I could not find a good way to determine if\n> a tuple is detoasted of not.\n> \n\nRight, not sure there's a good way to check for that.\n\n> By the way, I found a comment in vac_update_datfrozenxid() and EventTriggerOnLogin()\n> that explains why we could not use tuples from the syscache for heap_inplace_update. \n> I think it is better ad d the same comment in dropdb(). I attached a trivial patch for it.\n> \n\nAgreed. That seems like a nice improvement to the comment.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:16:16 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop database command will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute." }, { "msg_contents": "On 8/19/24 12:16, Tomas Vondra wrote:\n> On 8/19/24 11:01, Yugo Nagata wrote:\n>\n> ...\n> \n>> By the way, I found a comment in vac_update_datfrozenxid() and EventTriggerOnLogin()\n>> that explains why we could not use tuples from the syscache for heap_inplace_update. \n>> I think it is better ad d the same comment in dropdb(). I attached a trivial patch for it.\n>>\n> \n> Agreed. That seems like a nice improvement to the comment.\n> \n\nDone, thanks for the suggestion / patch.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:03:13 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Drop database command will raise \"wrong tuple length\" if\n pg_database tuple contains toast attribute." } ]
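A minimal sketch of the direct pg_database scan discussed in this thread, for illustration only; variable names such as pgdbrel and db_id are assumed from dropdb()'s surroundings and the committed fix may differ in detail:

    ScanKeyData scankey;
    SysScanDesc scan;
    HeapTuple   tup;

    /*
     * Fetch the tuple by scanning pg_database directly instead of via
     * SearchSysCacheCopy1(), so a toasted datacl is not flattened and
     * heap_inplace_update()'s length check still holds.
     */
    ScanKeyInit(&scankey, Anum_pg_database_oid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(db_id));
    scan = systable_beginscan(pgdbrel, DatabaseOidIndexId, true,
                              NULL, 1, &scankey);
    tup = systable_getnext(scan);
    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for database %u", db_id);

    /* ... set the fields to be updated in place, then ... */
    heap_inplace_update(pgdbrel, tup);

    systable_endscan(scan);

The point of the change is only where the tuple comes from; the in-place update itself is unchanged.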
[ { "msg_contents": "Hi,\n\nOver on the discussion thread about remaining setlocale() work[1], I\nwrote out some long boring theories about $SUBJECT. Here are some\ndraft patches to try those theories out, and make a commitfest entry.\nnl_langinfo_l() is a trivial drop-in replacement, and\npg_localeconv_r() has 4 different implementation strategies:\n\n1. Windows, with ugly _configthreadlocale() and thread-local result.\n2. Glibc, with nice nl_langinfo_l() extensions.\n3. macOS/*BSD, with nice localeconv_l().\n4. Baseline POSIX: uselocale() + localeconv() + honking great lock.\n\nIn reality it'd just be Solaris running #4 (and AIX if it comes back).\nWhether they truly implement it as pessimally as the standard allows,\nwho knows... you could drop the lock if you somehow knew that they\nreturned a pointer to thread-local storage or a member of the locale_t\nobject.\n\n[1] https://www.postgresql.org/message-id/flat/4c5da86af36a0d5e430eee3f60ce5e06f1b5cd34.camel%40j-davis.com", "msg_date": "Tue, 13 Aug 2024 17:45:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On 13/08/2024 08:45, Thomas Munro wrote:\n> Hi,\n> \n> Over on the discussion thread about remaining setlocale() work[1], I\n> wrote out some long boring theories about $SUBJECT. Here are some\n> draft patches to try those theories out, and make a commitfest entry.\n> nl_langinfo_l() is a trivial drop-in replacement, and\n> pg_localeconv_r() has 4 different implementation strategies:\n> \n> 1. Windows, with ugly _configthreadlocale() and thread-local result.\n> 2. Glibc, with nice nl_langinfo_l() extensions.\n> 3. macOS/*BSD, with nice localeconv_l().\n> 4. Baseline POSIX: uselocale() + localeconv() + honking great lock.\n> \n> In reality it'd just be Solaris running #4 (and AIX if it comes back).\n> Whether they truly implement it as pessimally as the standard allows,\n> who knows... you could drop the lock if you somehow knew that they\n> returned a pointer to thread-local storage or a member of the locale_t\n> object.\n\nPatches 1 and 2 look good to me.\n\nPatch 3 makes sense too, some comments on the details:\n\nThe #ifdefs and the LCONV_MEMBER stuff makes it a bit hard to follow \nwhat happens in each implementation strategy. I wonder if it would be \nmore clear to duplicate more code.\n\nThere's a comment at the top of pg_locale.c (\"!!! NOW HEAR THIS !!!\") \nthat needs to be removed or adjusted now.\n\n\n> \t * The POSIX standard explicitly says that it is undefined what happens if\n> \t * LC_MONETARY or LC_NUMERIC imply an encoding (codeset) different from\n> \t * that implied by LC_CTYPE. In practice, all Unix-ish platforms seem to\n> \t * believe that localeconv() should return strings that are encoded in the\n> \t * codeset implied by the LC_MONETARY or LC_NUMERIC locale name. Hence,\n> \t * once we have successfully collected the localeconv() results, we will\n> \t * convert them from that codeset to the desired server encoding.\n\nThe patch loses this comment, leaving just a much shorter comment in the \nWIN32 implementation. But it still seems like a relevant comment for the \n!WIN32 implementation too.\n\n\n> This gets rid of some setlocale() calls and makes the returned value\n> unclobberable with a defined lifetime. 
The remaining call to\n> setlocale() is only a query of the name of the current local (in a\n\ntypo: local -> locale\n\n> multi-threaded future this would have to be changed, perhaps to use a\n> per-database or per-backend locale_t instead of LC_GLOBAL_LOCALE).\n> \n> All known non-Windows targets have nl_langinfo_l(), from POSIX 2018.\n\nI think that's supposed to be POSIX 2008\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 09:23:10 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On Tue, Aug 13, 2024 at 6:23 PM Heikki Linnakangas <[email protected]> wrote:\n> On 13/08/2024 08:45, Thomas Munro wrote:\n> Patches 1 and 2 look good to me.\n\nThanks. I went ahead and pushed these ones. A couple of Macs in the\nbuild farm are failing, as if they didn't include <xlocale.h> and\nhaven't seen the type locale_t. CI's macOS build is OK, and my own\nlocal Mac is building master OK, and animal \"indri\" is OK... hmm,\nthose are all using MacPorts, but I don't yet see why that would be\nit...\n\n\n", "msg_date": "Tue, 13 Aug 2024 23:25:04 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On Tue, Aug 13, 2024 at 11:25 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Aug 13, 2024 at 6:23 PM Heikki Linnakangas <[email protected]> wrote:\n> > On 13/08/2024 08:45, Thomas Munro wrote:\n> > Patches 1 and 2 look good to me.\n>\n> Thanks. I went ahead and pushed these ones. A couple of Macs in the\n> build farm are failing, as if they didn't include <xlocale.h> and\n> haven't seen the type locale_t. CI's macOS build is OK, and my own\n> local Mac is building master OK, and animal \"indri\" is OK... hmm,\n> those are all using MacPorts, but I don't yet see why that would be\n> it...\n\nAh, got it. It was OK under meson but not autoconf for my Mac, so I\nguess it must be transitive headers coming from somewhere making it\nwork for some systems. I just have a typo in an #ifdef macro. Will\nfix.\n\n\n", "msg_date": "Tue, 13 Aug 2024 23:39:45 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "Here's another mystery from Windows + MinGW. Although \"fairywren\" is\ngreen, that is because it lacks ICU, which would activate extra tests.\nCI is green too, but the optional CI task \"Windows - Server 2019,\nMinGW64 - Meson\" has ICU and it is now failing if you trigger it[1]\nafter commit 35eeea62, in initdb/001_initdb:\n\n[05:43:49.764] | 146/305 - options --locale-provider=icu --locale=und\n--lc-*=C: no stderr FAIL\n\n... because it logs a warning to stderr:\n\nWARNING: no usable system locales were found\n\nI can only assume there was some extra dependency on setlocale()\nglobal state changes in the removed code. I don't quite get it, but\nwhatever the reason, it's less than helpful to have different\ncompilers taking different code paths on our weirdest OS that most of\nus don't use, so I propose to push this change to take the regular\nMSVC code path for MinGW too, when looking up code pages. Somehow,\nthis fixes that, though it'd probably take someone with a local MinGW\nsetup to dig into what exactly is happening there.\n\n(There are plenty more places where we do something different for\nMinGW. 
I suspect they are all obsolete problems. We should probably\njust harmonise everything and see what breaks now that we have a CI\nsystem, but that can be for another day.)\n\nThat warning is from pg_import_system_locales(), which is new-ish\n(v16) on that OS. It was recently discovered to trigger a\npre-existing problem[2]: the simple setlocale() save/restore pattern\ndoesn't work in general on Windows, because some local names are\nnon-ASCII, and the restore can fail (abort in the system library due\nto bad encoding, because the intermediate setlocale() changed the\nexpected encoding of the locale name itself). So it's good that we\naren't doing that anymore in this location; I'm just thinking out loud\nabout whether that phenomenon could also be somehow connected to this\nfailure, though I don't see it. Another realisation is that my\npg_localeconv_r() patch, which can't avoid a thread-safe setlocale()\nsave-and-restore on that OS (and might finish up being the only one\nleft in the tree by the time we're done?), had better use wsetlocale()\ninstead to defend itself against that particular madness.\n\n[1] https://cirrus-ci.com/task/5928104793735168\n[2] https://www.postgresql.org/message-id/CA%2BhUKG%2BFxeRLURZ%3Dn8NPyLwgjFds_SqU_cQvE40ks6RQKUGbGg%40mail.gmail.com", "msg_date": "Wed, 14 Aug 2024 11:27:50 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On Tue, Aug 13, 2024 at 6:23 PM Heikki Linnakangas <[email protected]> wrote:\n> Patch 3 makes sense too, some comments on the details:\n> The #ifdefs and the LCONV_MEMBER stuff makes it a bit hard to follow\n> what happens in each implementation strategy. I wonder if it would be\n> more clear to duplicate more code.\n\nI tried to make it easier to follow.\n\n> There's a comment at the top of pg_locale.c (\"!!! NOW HEAR THIS !!!\")\n> that needs to be removed or adjusted now.\n\nYeah. We can remove that PSA if we also fix up the equivalent code\nfor LC_TIME. First attempt at that attached.\n\n> > * The POSIX standard explicitly says that it is undefined what happens if\n> > * LC_MONETARY or LC_NUMERIC imply an encoding (codeset) different from\n> > * that implied by LC_CTYPE. In practice, all Unix-ish platforms seem to\n> > * believe that localeconv() should return strings that are encoded in the\n> > * codeset implied by the LC_MONETARY or LC_NUMERIC locale name. Hence,\n> > * once we have successfully collected the localeconv() results, we will\n> > * convert them from that codeset to the desired server encoding.\n>\n> The patch loses this comment, leaving just a much shorter comment in the\n> WIN32 implementation. But it still seems like a relevant comment for the\n> !WIN32 implementation too.\n\nNew version makes it much clearer, and also is much more careful about\nwhat exactly happens if you have mismatched encodings.\n\n(Over in CF #3772 I was exploring the idea of banning the use of\nlocales that are not compatible with the database encoding. As far as\nI can guess, that idea must have come from the time when Windows\ndidn't have native UTF-8 support. Now it does. 
There I was mostly\ninterested in killing all the whcar_t conversion code, but maybe we\ncould also delete a few lines of transcoding around here too?)", "msg_date": "Wed, 14 Aug 2024 23:38:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "Here's a new patch to add to this pile, this time for check_locale().\nI also improved the error handling and comments in the other patches.", "msg_date": "Thu, 15 Aug 2024 20:03:47 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On 15/08/2024 11:03, Thomas Munro wrote:\n> Here's a new patch to add to this pile, this time for check_locale().\n> I also improved the error handling and comments in the other patches.\nThere's a similar function in initdb, check_locale_name. I wonder if \nthat could reuse the same code.\n\n\nI wonder if it would be more clear to split this into three functions:\n\n/*\n * Get the name of the locale in \"native environment\",\n * like setlocale(category, NULL) does\n */\nchar *get_native_locale(int category);\n\n/*\n * Return true if 'locale' is valid locale name for 'category\n */\nbool check_locale(int category, const char *locale);\n\n/*\n * Return a canonical name of given locale\n */\nchar *canonicalize_locale(int category, const char *locale);\n\n> \tresult = malloc(strlen(canonical) + 1);\n> \tif (!result)\n> \t\tgoto exit;\n> \tstrcpy(result, canonical);\n\nCould use \"result = strdup(canonical)\" here. Or even better, could we \nuse palloc()/pstrdup() here, and save the caller the trouble to copy it?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 14:11:07 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On Thu, Aug 15, 2024 at 11:11 PM Heikki Linnakangas <[email protected]> wrote:\n> There's a similar function in initdb, check_locale_name. I wonder if\n> that could reuse the same code.\n\nThanks, between this comment and some observations from Peter E and\nTom, I think I have a better plan now. I think they should *not*\nmatch, and a comment saying so should be deleted. In the backend, we\nshould do neither \"\"-expansion (ie getenv(\"LC_...\") whether direct or\nindirect) nor canonicalisation (of Windows' deprecated pre-BCP 47\nlocale names), making this v4 patch extremely simple.\n\n1. CREATE DATABASE doesn't really need to accept LOCALE = ''. What\nis the point? It's not documented or desirable behavior AFAIK. If\nyou like defaults you can just not provide a locale at all and get the\ntemplate database's (which often comes from initdb, which often uses\nthe server environment). That behavior was already inconsistent with\nCREATE COLLATION. So I think we should just reject \"\" in the backend\ncheck_locale().\n\n2. A similar argument applies to Windows canonicalisation. CREATE\nCOLLATION isn't doing it. CREATE DATABASE is, but again, what is the\npoint? 
See previous.\n\n(I also think that initdb should use a different mechanism for finding\nthe native locale on Windows, but I already have a CF #3772 for that,\nie migration plan for BCP 47 and native UTF-8 on Windows, but I don't\nwant *this* thread to get blocked by our absence of Windows\nreviewers/testers, so let's not tangle that up with this thread-safety\nexpedition.)\n\nTo show a concrete example of commands no longer accepted with this\nversion, because they call check_locale():\n\npostgres=# set lc_monetary = '';\nERROR: invalid value for parameter \"lc_monetary\": \"\"\n\npostgres=# create database db2 locale = '';\nERROR: invalid LC_COLLATE locale name: \"\"\nHINT: If the locale name is specific to ICU, use ICU_LOCALE.\n\nDoes anyone see a problem with that?\n\nI do see a complication for CREATE COLLATION, though. It doesn't call\ncheck_locale(), is not changed in this patch, and still accepts ''.\nReasoning: There may be systems with '' in their pg_collation catalog\nin the wild, since we never canonicalised with setlocale(), so it\nmight create some kind of unforeseen dump/restore/upgrade hazard if we\njust ban '' outright, I just don't know what yet.\n\nThere is no immediate problem, ie there is no setlocale() to excise,\nfor *this* project. Longer term, we can't actually continue to allow\n'' in COLLATION objects, though: that tells newlocale() to call\ngetenv(), which may be technically OK in a multi-threaded program\n(that never calls setenv()), but is hardly desirable. But it will\nalso give the wrong result, if we pursue the plan that Jeff and I\ndiscussed: we'll stop doing setenv(\"LC_COLLATE\", datacollate) and\nsetenv(\"LC_CTYPE\", datctype) in postinit.c (see pg_perm_setlocale()\ncalls). So I think any pg_collation catalog entries holding '' need\nto be translated to datcollate/datctype... somewhere. I just don't\nknow where yet and don't want to tackle that in the same patch.", "msg_date": "Fri, 16 Aug 2024 12:48:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "Here is a slightly better version of patch 0003. I removed some\nunnecessary refactoring, making the patch smaller.\n\nFTR I wrote a small program[1] for CI to test the assumptions about\nWindows in 0001. I printed out the addresses of the objects, to\nconfirm that different threads were looking at different objects once\nthe thread local mode was activated, and also assert that the struct\ncontents were as expected while 8 threads switched locales in a tight\nloop, and the output[2] looked OK to me.\n\n[1] https://github.com/macdice/hello-windows/blob/793eb2fe3e6738c200781f681a22a7e6358f39e5/test.c\n[2] https://cirrus-ci.com/task/4650412253380608", "msg_date": "Mon, 19 Aug 2024 18:29:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On Tue, Aug 13, 2024 at 5:45 PM Thomas Munro <[email protected]> wrote:\n> Over on the discussion thread about remaining setlocale() work[1], I\n> wrote out some long boring theories about $SUBJECT.\n\nJust as an FYI/curiosity, I converted my frustrations with\nlocaleconv() into a request[1] that POSIX consider standardising one\nof the interfaces that doesn't suck, and the reaction seems so far\npositive. 
Of course that doesn't really have any actionable\nconsequences on any relevant time frame, even if eventually\nsuccessful, and we can already use the saner interfaces on the systems\nmost of us really care about, but still... it's nice to think that\nthe pessimistic fallback code (really only used by Solaris and maybe\nAIX) could eventually be redundant if it goes somewhere...\n\n[1] https://www.mail-archive.com/[email protected]/msg12850.html\n\n\n", "msg_date": "Tue, 20 Aug 2024 09:46:10 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On 16.08.24 02:48, Thomas Munro wrote:\n> 2. A similar argument applies to Windows canonicalisation. CREATE\n> COLLATION isn't doing it. CREATE DATABASE is, but again, what is the\n> point? See previous.\n\nI don't really know about Windows locales. But we are doing \ncanonicalization of ICU locale names now. So there seems to be a desire \nto do canonicalization in general. (Obviously, if we're doing it \npoorly, then we don't have to keep it that way indefinitely.)\n\n\n\n", "msg_date": "Wed, 28 Aug 2024 21:07:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" }, { "msg_contents": "On 19.08.24 08:29, Thomas Munro wrote:\n> Here is a slightly better version of patch 0003. I removed some\n> unnecessary refactoring, making the patch smaller.\n> \n> FTR I wrote a small program[1] for CI to test the assumptions about\n> Windows in 0001. I printed out the addresses of the objects, to\n> confirm that different threads were looking at different objects once\n> the thread local mode was activated, and also assert that the struct\n> contents were as expected while 8 threads switched locales in a tight\n> loop, and the output[2] looked OK to me.\n> \n> [1] https://github.com/macdice/hello-windows/blob/793eb2fe3e6738c200781f681a22a7e6358f39e5/test.c\n> [2] https://cirrus-ci.com/task/4650412253380608\n\nReview of the patch \nv5-0002-Use-thread-safe-strftime_l-instead-of-strftime.patch:\n\nThis all looks very sensible. My only comment on the code is that for \nhandling error returns from newlocale() and _create_locale() we already \nhave report_newlocale_failure(), which handles various special cases. \n(But it doesn't do the _dosmaperr() that your patch does.) It would be \nbest if you used that as well (and maybe improve as needed).\n\nA couple of comments on the commit message:\n\n > Use thread-safe strftime_l() instead of strftime().\n\nI don't think this is about thread-safety of either function? It's more \nthat the latter requires a non-thread-safe code structure around it. I \nwould frame this more around the use-locale_t-everywhere theme than the \nthread-safety theme.\n\n > While here, adjust error message for strftime_l() failure: it does not\n > set errno, so no %m.\n\nAlthough POSIX says that strftime() and strftime_l() should change \nerrno, experimentation shows that they do not. So this is fine. But I \nthought also that the previous code was problematic because errno could \nbe overwritten since the failing call, so you wouldn't get a very \naccurate error message anyway.\n\n\n\n", "msg_date": "Thu, 5 Sep 2024 14:50:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thread-safe nl_langinfo() and localeconv()" } ]
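For reference, a rough sketch of what fallback strategy #4 above (plain POSIX: uselocale() + localeconv() + a big lock) could look like; the function name, the lock, and the shallow copy are illustrative only, not the actual patch:

    #include <locale.h>
    #include <pthread.h>

    static pthread_mutex_t lconv_lock = PTHREAD_MUTEX_INITIALIZER;

    static int
    localeconv_r_sketch(locale_t loc, struct lconv *dst)
    {
        locale_t    save;

        pthread_mutex_lock(&lconv_lock);
        save = uselocale(loc);          /* switch only this thread's locale */
        if (save == (locale_t) 0)
        {
            pthread_mutex_unlock(&lconv_lock);
            return -1;
        }
        *dst = *localeconv();           /* struct copy; a real version must
                                         * also deep-copy the string members
                                         * before dropping the lock */
        uselocale(save);                /* restore the previous locale */
        pthread_mutex_unlock(&lconv_lock);
        return 0;
    }

The lock is only needed because localeconv() may return a pointer to shared static storage; the glibc and localeconv_l() strategies avoid it entirely.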
[ { "msg_contents": "Hi hackers.\n\nWhile reviewing another thread I had cause to be looking more\ncarefully at the SEQUENCE documentation.\n\nI found it curious that, unlike other clauses, the syntax of the CYCLE\nclause was not displayed on a line by itself.\n\nI have changed that, and at the same time I have moved the CYCLE\nsyntax (plus its description) to be adjacent to MINVALUE/MAXVALUE,\nwhich IMO is where it naturally belongs.\n\nPlease see the attached patch.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 13 Aug 2024 15:46:36 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "PG docs - Sequence CYCLE clause" }, { "msg_contents": "On 13.08.24 07:46, Peter Smith wrote:\n> While reviewing another thread I had cause to be looking more\n> carefully at the SEQUENCE documentation.\n> \n> I found it curious that, unlike other clauses, the syntax of the CYCLE\n> clause was not displayed on a line by itself.\n> \n> I have changed that, and at the same time I have moved the CYCLE\n> syntax (plus its description) to be adjacent to MINVALUE/MAXVALUE,\n> which IMO is where it naturally belongs.\n> \n> Please see the attached patch.\n\nI agree, it makes a bit more sense with your patches.\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 13:46:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG docs - Sequence CYCLE clause" }, { "msg_contents": "On Wed, Aug 14, 2024 at 01:46:23PM +0200, Peter Eisentraut wrote:\n> On 13.08.24 07:46, Peter Smith wrote:\n> > While reviewing another thread I had cause to be looking more\n> > carefully at the SEQUENCE documentation.\n> > \n> > I found it curious that, unlike other clauses, the syntax of the CYCLE\n> > clause was not displayed on a line by itself.\n> > \n> > I have changed that, and at the same time I have moved the CYCLE\n> > syntax (plus its description) to be adjacent to MINVALUE/MAXVALUE,\n> > which IMO is where it naturally belongs.\n> > \n> > Please see the attached patch.\n> \n> I agree, it makes a bit more sense with your patches.\n\nGreat, patch applied to master.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 19 Aug 2024 20:18:15 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG docs - Sequence CYCLE clause" }, { "msg_contents": "On Tue, Aug 20, 2024 at 10:18 AM Bruce Momjian <[email protected]> wrote:\n>\n> Great, patch applied to master.\n>\n\nThanks for pushing!\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 20 Aug 2024 13:05:33 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG docs - Sequence CYCLE clause" } ]
[ { "msg_contents": "Hi,\r\n\r\n- I re-read your comments in [0] and it looks like you've concern about\r\n- the 2 \"if\" I'm proposing above and the fast forward handling. Is that the case\r\n- or is your fast forward concern unrelated to my proposals?\r\n\r\n\r\n\r\nIn your proposals, we will just return when fast forward. But I think we need\r\nhandle&nbsp;XLOG_HEAP2_NEW_CID or&nbsp;XLOG_HEAP_INPLACE even if we are fast\r\nforwarding as it decides whether the snapshot will track the transaction or not.\r\n\r\n\r\nDuring fast forward, if there is a transaction that generates XLOG_HEAP2_NEW_CID\r\nbut no XLOG_XACT_INVALIDATIONS(I'm not sure), the snapshot won't track this\r\ntransaction in your proposals, I think it's wrong from a build snapshot perspective.\r\n\r\n\r\nAlthough we don't decode anything during fast forward, the snapshot might be\r\nserialized to disk when CONSISTENT, it would be better to keep the snapshot correct.\r\n\r\n\r\n- Not sure what happened but it looks like your reply in [0] is not part of the\r\n- initial thread [1], but created a new thread instead, making the whole\r\n- conversation difficult to follow.\r\n\r\n\r\n\r\nI'm not sure what happened but I attach the new thread to the CF:\r\n\r\n\r\nhttps://commitfest.postgresql.org/49/5029\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nHi,- I re-read your comments in [0] and it looks like you've concern about- the 2 \"if\" I'm proposing above and the fast forward handling. Is that the case- or is your fast forward concern unrelated to my proposals?In your proposals, we will just return when fast forward. But I think we needhandle XLOG_HEAP2_NEW_CID or XLOG_HEAP_INPLACE even if we are fastforwarding as it decides whether the snapshot will track the transaction or not.During fast forward, if there is a transaction that generates XLOG_HEAP2_NEW_CIDbut no XLOG_XACT_INVALIDATIONS(I'm not sure), the snapshot won't track thistransaction in your proposals, I think it's wrong from a build snapshot perspective.Although we don't decode anything during fast forward, the snapshot might beserialized to disk when CONSISTENT, it would be better to keep the snapshot correct.- Not sure what happened but it looks like your reply in [0] is not part of the- initial thread [1], but created a new thread instead, making the whole- conversation difficult to follow.I'm not sure what happened but I attach the new thread to the CF:https://commitfest.postgresql.org/49/5029--Regards,ChangAo Chen", "msg_date": "Tue, 13 Aug 2024 15:32:42 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 13, 2024 at 03:32:42PM +0800, cca5507 wrote:\n> Hi,\n> \n> - I re-read your comments in [0] and it looks like you've concern about\n> - the 2 \"if\" I'm proposing above and the fast forward handling. Is that the case\n> - or is your fast forward concern unrelated to my proposals?\n> \n> \n> \n> In your proposals, we will just return when fast forward. 
But I think we need\n> handle&nbsp;XLOG_HEAP2_NEW_CID or&nbsp;XLOG_HEAP_INPLACE even if we are fast\n> forwarding as it decides whether the snapshot will track the transaction or not.\n> \n> \n> During fast forward, if there is a transaction that generates XLOG_HEAP2_NEW_CID\n> but no XLOG_XACT_INVALIDATIONS(I'm not sure), the snapshot won't track this\n> transaction in your proposals, I think it's wrong from a build snapshot perspective.\n> \n> \n> Although we don't decode anything during fast forward, the snapshot might be\n> serialized to disk when CONSISTENT, it would be better to keep the snapshot correct.\n\nIIUC your \"fast forward\" concern is not related to this particular thread but you\nthink it's already an issue on the master branch (outside of the BUILDING_SNAPSHOT\nhandling we are discussing here), is that correct? (that's also what your coding\nchanges makes me think of). If so, I'd suggest to open a dedicated thread for that\nparticular \"fast forward\" point and do the coding in the current thread as if the\nfast forward is not an issue.\n\nDoes that make sense?\n\n> \n> - Not sure what happened but it looks like your reply in [0] is not part of the\n> - initial thread [1], but created a new thread instead, making the whole\n> - conversation difficult to follow.\n> \n> I'm not sure what happened but I attach the new thread to the CF:\n\nUnfortunately your last reply did start a new email thread again.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 08:11:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" }, { "msg_contents": "Hi,\r\n\r\n\r\n- IIUC your \"fast forward\" concern is not related to this particular thread but you\r\n- think it's already an issue on the master branch (outside of the BUILDING_SNAPSHOT\r\n- handling we are discussing here), is that correct? (that's also what your coding\r\n- changes makes me think of). If so, I'd suggest to open a dedicated thread for that\r\n- particular \"fast forward\" point and do the coding in the current thread as if the\r\n- fast forward is not an issue.\r\n\r\n- Does that make sense?\r\n\r\n\r\nYes.\r\n\r\n\r\nBut I think the v4-0001 in [1] is fine.\r\n\r\n\r\nLet's see what others think.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\r\n\r\n\r\n[1]:&nbsp;https://www.postgresql.org/message-id/tencent_925A991463194F3C97830C3BB7D0A2C2BD07%40qq.com\nHi,- IIUC your \"fast forward\" concern is not related to this particular thread but you- think it's already an issue on the master branch (outside of the BUILDING_SNAPSHOT- handling we are discussing here), is that correct? (that's also what your coding- changes makes me think of). If so, I'd suggest to open a dedicated thread for that- particular \"fast forward\" point and do the coding in the current thread as if the- fast forward is not an issue.- Does that make sense?Yes.But I think the v4-0001 in [1] is fine.Let's see what others think.--Regards,ChangAo Chen[1]: https://www.postgresql.org/message-id/tencent_925A991463194F3C97830C3BB7D0A2C2BD07%40qq.com", "msg_date": "Tue, 13 Aug 2024 18:07:49 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Historic snapshot doesn't track txns committed in\n BUILDING_SNAPSHOT state" } ]
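To make the point under discussion concrete, a hypothetical shape for the XLOG_HEAP2_NEW_CID arm of heap2_decode() (names assumed from decode.c; this is not a proposed patch): the snapbuild bookkeeping would still run when ctx->fast_forward is set, and only the per-change decoding work elsewhere in the switch would be skipped:

    case XLOG_HEAP2_NEW_CID:
        {
            xl_heap_new_cid *xlrec;

            xlrec = (xl_heap_new_cid *) XLogRecGetData(buf->record);

            /*
             * Run unconditionally: this is what marks the xid as a
             * catalog-changing transaction for the snapshot builder,
             * so it must not be short-circuited by a fast-forward check.
             */
            SnapBuildProcessNewCid(builder, xid, buf->origptr, xlrec);
            break;
        }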
[ { "msg_contents": "Hi hackers,\nI found that the entry size is not aligned with MAXALIGN, but\ndshash_table_item did that in dshash_table. IMHO, both entry size and\ndshash_table_item should be aligned with MAXALIGN.\n\nstatic dshash_table_item *\ninsert_into_bucket(dshash_table *hash_table,\nconst void *key,\ndsa_pointer *bucket)\n{\ndsa_pointer item_pointer;\ndshash_table_item *item;\n\nitem_pointer = dsa_allocate(hash_table->area,\nhash_table->params.entry_size +\nMAXALIGN(sizeof(dshash_table_item)));\nitem = dsa_get_address(hash_table->area, item_pointer);\ncopy_key(hash_table, ENTRY_FROM_ITEM(item), key);\ninsert_item_into_bucket(hash_table, item_pointer, item, bucket);\nreturn item;\n}\n\nWith regards,\nHao Zhang\n\nHi hackers,I found that the entry size is not aligned with MAXALIGN, but dshash_table_item did that in dshash_table. IMHO, both entry size and dshash_table_item should be aligned with MAXALIGN.static dshash_table_item *insert_into_bucket(dshash_table *hash_table, const void *key, dsa_pointer *bucket){ dsa_pointer item_pointer; dshash_table_item *item; item_pointer = dsa_allocate(hash_table->area, hash_table->params.entry_size + MAXALIGN(sizeof(dshash_table_item))); item = dsa_get_address(hash_table->area, item_pointer); copy_key(hash_table, ENTRY_FROM_ITEM(item), key); insert_item_into_bucket(hash_table, item_pointer, item, bucket); return item;}With regards,Hao Zhang", "msg_date": "Tue, 13 Aug 2024 17:45:21 +0800", "msg_from": "Hao Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Why not align item size in dshash_table?" } ]
[ { "msg_contents": "Hi,\n\nI am working on using the read stream in pg_visibility. There are two\nplaces to use it:\n\n1- collect_visibility_data()\n\nThis one is straightforward. I created a read stream object if\n'include_pd' is true because read I/O is done when 'include_pd' is\ntrue. There is ~4% timing improvement with this change. I started the\nserver with the default settings and created a 6 GB table. Then run\n100 times pg_visibility() by clearing the OS cache between each run.\n----------\n\n2- collect_corrupt_items()\n\nThis one is more complicated. The read stream callback function loops\nuntil it finds a suitable block to read. So, if the callback returns\nan InvalidBlockNumber; it means that the stream processed all possible\nblocks and the stream should be finished. There is ~3% timing\nimprovement with this change. I started the server with the default\nsettings and created a 6 GB table. Then run 100 times\npg_check_visible() by clearing the OS cache between each run.\n\nThe downside of this approach is there are too many \"vmbuffer is valid\nbut BufferGetBlockNumber(*vmbuffer) is not equal to mapBlock, so\nvmbuffer needs to be read again\" cases in the read stream version (700\nvs 20 for the 6 GB table). This is caused by the callback function of\nthe read stream reading a new vmbuf while getting next block numbers.\nSo, vmbuf is wrong when we are checking visibility map bits that might\nhave changed while we were acquiring the page lock.\n----------\n\nBoth patches are attached.\n\nAny feedback would be appreciated.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 13 Aug 2024 15:22:27 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> I am working on using the read stream in pg_visibility. There are two\n> places to use it:\n> \n> 1- collect_visibility_data()\n> \n> This one is straightforward. I created a read stream object if\n> 'include_pd' is true because read I/O is done when 'include_pd' is\n> true. There is ~4% timing improvement with this change. I started the\n> server with the default settings and created a 6 GB table. Then run\n> 100 times pg_visibility() by clearing the OS cache between each run.\n> ----------\n\nMind sharing a script for reproducibility? Except for the drop_caches\npart, of course.. \n--\nMichael", "msg_date": "Mon, 19 Aug 2024 15:30:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Mon, 19 Aug 2024 at 09:30, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> > Hi,\n> >\n> > I am working on using the read stream in pg_visibility. There are two\n> > places to use it:\n> >\n> > 1- collect_visibility_data()\n> >\n> > This one is straightforward. I created a read stream object if\n> > 'include_pd' is true because read I/O is done when 'include_pd' is\n> > true. There is ~4% timing improvement with this change. I started the\n> > server with the default settings and created a 6 GB table. Then run\n> > 100 times pg_visibility() by clearing the OS cache between each run.\n> > ----------\n>\n> Mind sharing a script for reproducibility? 
Except for the drop_caches\n> part, of course..\n\nSure, the script is attached.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 19 Aug 2024 14:01:00 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> 2- collect_corrupt_items()\n> \n> This one is more complicated. The read stream callback function loops\n> until it finds a suitable block to read. So, if the callback returns\n> an InvalidBlockNumber; it means that the stream processed all possible\n> blocks and the stream should be finished. There is ~3% timing\n> improvement with this change. I started the server with the default\n> settings and created a 6 GB table. Then run 100 times\n> pg_check_visible() by clearing the OS cache between each run.\n> \n> The downside of this approach is there are too many \"vmbuffer is valid\n> but BufferGetBlockNumber(*vmbuffer) is not equal to mapBlock, so\n> vmbuffer needs to be read again\" cases in the read stream version (700\n> vs 20 for the 6 GB table). This is caused by the callback function of\n> the read stream reading a new vmbuf while getting next block numbers.\n> So, vmbuf is wrong when we are checking visibility map bits that might\n> have changed while we were acquiring the page lock.\n\nDid the test that found 700 \"read again\" use a different procedure than the\none shared as setup.sh down-thread? Using that script alone, none of the vm\nbits are set, so I'd not expect any heap page reads.\n\nThe \"vmbuffer needs to be read again\" regression fits what I would expect the\nv1 patch to do with a table having many vm bits set. In general, I think the\nfix is to keep two heap Buffer vars, so the callback can work on one vmbuffer\nwhile collect_corrupt_items() works on another vmbuffer. Much of the time,\nthey'll be the same buffer. It could be as simple as that, but you could\nconsider further optimizations like these:\n\n- Use ReadRecentBuffer() or a similar technique, to avoid a buffer mapping\n lookup when the other Buffer var already has the block you want.\n\n- [probably not worth it] Add APIs for pg_visibility.c to tell read_stream.c\n to stop calling the ReadStreamBlockNumberCB for awhile. This could help if\n nonzero vm bits are infrequent, causing us to visit few heap blocks per vm\n block. For example, if one block out of every 33000 is all-visible, every\n heap block we visit has a different vmbuffer. It's likely not optimal for\n the callback to pin and unpin 20 vmbuffers, then have\n collect_corrupt_items() pin and unpin the same 20 vmbuffers. pg_visibility\n could notice this trend and request a stop of the callbacks until more of\n the heap block work completes. If pg_visibility is going to be the only\n place in the code with this use case, it's probably not worth carrying the\n extra API just for pg_visibility. 
However, if we get a stronger use case\n later, pg_visibility could be another beneficiary.\n\n> +/*\n> + * Callback function to get next block for read stream object used in\n> + * collect_visibility_data() function.\n> + */\n> +static BlockNumber\n> +collect_visibility_data_read_stream_next_block(ReadStream *stream,\n> +\t\t\t\t\t\t\t\t\t\t\t void *callback_private_data,\n> +\t\t\t\t\t\t\t\t\t\t\t void *per_buffer_data)\n> +{\n> +\tstruct collect_visibility_data_read_stream_private *p = callback_private_data;\n> +\n> +\tif (p->blocknum < p->nblocks)\n> +\t\treturn p->blocknum++;\n> +\n> +\treturn InvalidBlockNumber;\n\nThis is the third callback that just iterates over a block range, after\npg_prewarm_read_stream_next_block() and\ncopy_storage_using_buffer_read_stream_next_block(). While not a big problem,\nI think it's time to have a general-use callback for block range scans. The\nquantity of duplicate code is low, but the existing function names are long\nand less informative than a behavior-based name.\n\n> +static BlockNumber\n> +collect_corrupt_items_read_stream_next_block(ReadStream *stream,\n> +\t\t\t\t\t\t\t\t\t\t\t void *callback_private_data,\n> +\t\t\t\t\t\t\t\t\t\t\t void *per_buffer_data)\n> +{\n> +\tstruct collect_corrupt_items_read_stream_private *p = callback_private_data;\n> +\n> +\tfor (; p->blocknum < p->nblocks; p->blocknum++)\n> +\t{\n> +\t\tbool\t\tcheck_frozen = false;\n> +\t\tbool\t\tcheck_visible = false;\n> +\n> +\t\tif (p->all_frozen && VM_ALL_FROZEN(p->rel, p->blocknum, p->vmbuffer))\n> +\t\t\tcheck_frozen = true;\n> +\t\tif (p->all_visible && VM_ALL_VISIBLE(p->rel, p->blocknum, p->vmbuffer))\n> +\t\t\tcheck_visible = true;\n> +\t\tif (!check_visible && !check_frozen)\n> +\t\t\tcontinue;\n\nIf a vm has zero bits set, this loop will scan the entire vm before returning.\nHence, this loop deserves a CHECK_FOR_INTERRUPTS() or a comment about how\nVM_ALL_FROZEN/VM_ALL_VISIBLE reaches a CHECK_FOR_INTERRUPTS().\n\n> @@ -687,6 +734,20 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n> \titems->count = 64;\n> \titems->tids = palloc(items->count * sizeof(ItemPointerData));\n> \n> +\tp.blocknum = 0;\n> +\tp.nblocks = nblocks;\n> +\tp.rel = rel;\n> +\tp.vmbuffer = &vmbuffer;\n> +\tp.all_frozen = all_frozen;\n> +\tp.all_visible = all_visible;\n> +\tstream = read_stream_begin_relation(READ_STREAM_FULL,\n> +\t\t\t\t\t\t\t\t\t\tbstrategy,\n> +\t\t\t\t\t\t\t\t\t\trel,\n> +\t\t\t\t\t\t\t\t\t\tMAIN_FORKNUM,\n> +\t\t\t\t\t\t\t\t\t\tcollect_corrupt_items_read_stream_next_block,\n> +\t\t\t\t\t\t\t\t\t\t&p,\n> +\t\t\t\t\t\t\t\t\t\t0);\n> +\n> \t/* Loop over every block in the relation. */\n> \tfor (blkno = 0; blkno < nblocks; ++blkno)\n\nThe callback doesn't return blocks having zero vm bits, so the blkno variable\nis not accurate. I didn't test, but I think the loop's \"Recheck to avoid\nreturning spurious results.\" looks at the bit for the wrong block. If that's\nwhat v1 does, could you expand the test file to catch that? 
For example, make\na two-block table with only the second block all-visible.\n\nSince the callback skips blocks, this function should use the\nread_stream_reset() style of looping:\n\n\twhile ((buffer = read_stream_next_buffer(stream, NULL)) != InvalidBuffer) ...\n\nThanks,\nnm\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:47:42 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nThanks for the review and feedback!\n\nOn Tue, 20 Aug 2024 at 21:47, Noah Misch <[email protected]> wrote:\n>\n> On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> > 2- collect_corrupt_items()\n> >\n> > This one is more complicated. The read stream callback function loops\n> > until it finds a suitable block to read. So, if the callback returns\n> > an InvalidBlockNumber; it means that the stream processed all possible\n> > blocks and the stream should be finished. There is ~3% timing\n> > improvement with this change. I started the server with the default\n> > settings and created a 6 GB table. Then run 100 times\n> > pg_check_visible() by clearing the OS cache between each run.\n> >\n> > The downside of this approach is there are too many \"vmbuffer is valid\n> > but BufferGetBlockNumber(*vmbuffer) is not equal to mapBlock, so\n> > vmbuffer needs to be read again\" cases in the read stream version (700\n> > vs 20 for the 6 GB table). This is caused by the callback function of\n> > the read stream reading a new vmbuf while getting next block numbers.\n> > So, vmbuf is wrong when we are checking visibility map bits that might\n> > have changed while we were acquiring the page lock.\n>\n> Did the test that found 700 \"read again\" use a different procedure than the\n> one shared as setup.sh down-thread? Using that script alone, none of the vm\n> bits are set, so I'd not expect any heap page reads.\n\nYes, it is basically the same script but the query is 'SELECT\npg_check_visible('${TABLE}'::regclass);'.\n\n> The \"vmbuffer needs to be read again\" regression fits what I would expect the\n> v1 patch to do with a table having many vm bits set. In general, I think the\n> fix is to keep two heap Buffer vars, so the callback can work on one vmbuffer\n> while collect_corrupt_items() works on another vmbuffer. Much of the time,\n> they'll be the same buffer. It could be as simple as that, but you could\n> consider further optimizations like these:\n\nThanks for the suggestion. Keeping two vmbuffers lowered the count of\n\"read-again\" cases to ~40. I ran the test again and the timing\nimprovement is ~5% now.\n\n> - Use ReadRecentBuffer() or a similar technique, to avoid a buffer mapping\n> lookup when the other Buffer var already has the block you want.\n>\n> - [probably not worth it] Add APIs for pg_visibility.c to tell read_stream.c\n> to stop calling the ReadStreamBlockNumberCB for awhile. This could help if\n> nonzero vm bits are infrequent, causing us to visit few heap blocks per vm\n> block. For example, if one block out of every 33000 is all-visible, every\n> heap block we visit has a different vmbuffer. It's likely not optimal for\n> the callback to pin and unpin 20 vmbuffers, then have\n> collect_corrupt_items() pin and unpin the same 20 vmbuffers. pg_visibility\n> could notice this trend and request a stop of the callbacks until more of\n> the heap block work completes. 
If pg_visibility is going to be the only\n> place in the code with this use case, it's probably not worth carrying the\n> extra API just for pg_visibility. However, if we get a stronger use case\n> later, pg_visibility could be another beneficiary.\n\nI think the number of \"read-again\" cases are low enough now. I am\nworking on 'using ReadRecentBuffer() or a similar technique' but that\nmay take more time, so I attached patches without these optimizations.\n\n> > +/*\n> > + * Callback function to get next block for read stream object used in\n> > + * collect_visibility_data() function.\n> > + */\n> > +static BlockNumber\n> > +collect_visibility_data_read_stream_next_block(ReadStream *stream,\n> > + void *callback_private_data,\n> > + void *per_buffer_data)\n> > +{\n> > + struct collect_visibility_data_read_stream_private *p = callback_private_data;\n> > +\n> > + if (p->blocknum < p->nblocks)\n> > + return p->blocknum++;\n> > +\n> > + return InvalidBlockNumber;\n>\n> This is the third callback that just iterates over a block range, after\n> pg_prewarm_read_stream_next_block() and\n> copy_storage_using_buffer_read_stream_next_block(). While not a big problem,\n> I think it's time to have a general-use callback for block range scans. The\n> quantity of duplicate code is low, but the existing function names are long\n> and less informative than a behavior-based name.\n\nI agree. Does creating something like\npg_general_read_stream_next_block() in read_stream code and exporting\nit makes sense?\n\n> > +static BlockNumber\n> > +collect_corrupt_items_read_stream_next_block(ReadStream *stream,\n> > + void *callback_private_data,\n> > + void *per_buffer_data)\n> > +{\n> > + struct collect_corrupt_items_read_stream_private *p = callback_private_data;\n> > +\n> > + for (; p->blocknum < p->nblocks; p->blocknum++)\n> > + {\n> > + bool check_frozen = false;\n> > + bool check_visible = false;\n> > +\n> > + if (p->all_frozen && VM_ALL_FROZEN(p->rel, p->blocknum, p->vmbuffer))\n> > + check_frozen = true;\n> > + if (p->all_visible && VM_ALL_VISIBLE(p->rel, p->blocknum, p->vmbuffer))\n> > + check_visible = true;\n> > + if (!check_visible && !check_frozen)\n> > + continue;\n>\n> If a vm has zero bits set, this loop will scan the entire vm before returning.\n> Hence, this loop deserves a CHECK_FOR_INTERRUPTS() or a comment about how\n> VM_ALL_FROZEN/VM_ALL_VISIBLE reaches a CHECK_FOR_INTERRUPTS().\n\nDone. I added CHECK_FOR_INTERRUPTS() to the loop in callback.\n\n> > @@ -687,6 +734,20 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n> > items->count = 64;\n> > items->tids = palloc(items->count * sizeof(ItemPointerData));\n> >\n> > + p.blocknum = 0;\n> > + p.nblocks = nblocks;\n> > + p.rel = rel;\n> > + p.vmbuffer = &vmbuffer;\n> > + p.all_frozen = all_frozen;\n> > + p.all_visible = all_visible;\n> > + stream = read_stream_begin_relation(READ_STREAM_FULL,\n> > + bstrategy,\n> > + rel,\n> > + MAIN_FORKNUM,\n> > + collect_corrupt_items_read_stream_next_block,\n> > + &p,\n> > + 0);\n> > +\n> > /* Loop over every block in the relation. */\n> > for (blkno = 0; blkno < nblocks; ++blkno)\n>\n> The callback doesn't return blocks having zero vm bits, so the blkno variable\n> is not accurate. I didn't test, but I think the loop's \"Recheck to avoid\n> returning spurious results.\" looks at the bit for the wrong block. If that's\n> what v1 does, could you expand the test file to catch that? For example, make\n> a two-block table with only the second block all-visible.\n\nYes, it was not accurate. 
I am getting blockno from the buffer now. I\nchecked and confirmed it is working as expected by manually logging\nblocknos returned from the read stream. I am not sure how to add a\ntest case for this.\n\n> Since the callback skips blocks, this function should use the\n> read_stream_reset() style of looping:\n\nDone.\n\nThere is one behavioral change introduced with the patches. It could\nhappen when collect_corrupt_items() is called with both all_visible\nand all_frozen being true.\n-> Let's say VM_ALL_FROZEN() returns true but VM_ALL_VISIBLE() returns\nfalse in callback. Callback returns this block number because\nVM_ALL_FROZEN() is true.\n-> At the /* Recheck to avoid returning spurious results. */ part, we\nshould only check VM_ALL_FROZEN() again as this was returning true in\nthe callback. But we check both VM_ALL_FROZEN() and VM_ALL_VISIBLE().\nVM_ALL_FROZEN() returns false and VM_ALL_VISIBLE() returns true now\n(vice versa of callback).\n-> We were skipping this block in the master but the patched version\ndoes not skip that.\n\nIs this a problem? If this is a problem, a solution might be to\ncallback return values of VM_ALL_FROZEN() and VM_ALL_VISIBLE() in the\nper_buffer_data.\n\nv2 of the patches are attached, only 0002 is updated.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Fri, 23 Aug 2024 14:20:06 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Fri, Aug 23, 2024 at 02:20:06PM +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 20 Aug 2024 at 21:47, Noah Misch <[email protected]> wrote:\n> > On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> > > The downside of this approach is there are too many \"vmbuffer is valid\n> > > but BufferGetBlockNumber(*vmbuffer) is not equal to mapBlock, so\n> > > vmbuffer needs to be read again\" cases in the read stream version (700\n> > > vs 20 for the 6 GB table). This is caused by the callback function of\n> > > the read stream reading a new vmbuf while getting next block numbers.\n> > > So, vmbuf is wrong when we are checking visibility map bits that might\n> > > have changed while we were acquiring the page lock.\n> >\n> > Did the test that found 700 \"read again\" use a different procedure than the\n> > one shared as setup.sh down-thread? Using that script alone, none of the vm\n> > bits are set, so I'd not expect any heap page reads.\n> \n> Yes, it is basically the same script but the query is 'SELECT\n> pg_check_visible('${TABLE}'::regclass);'.\n\nI gather the scripts deal exclusively in tables with no vm bits set, so\npg_visibility did no heap page reads under those scripts. But the '700 \"read\nagain\"' result suggests either I'm guessing wrong, or that result came from a\ndifferent test procedure. Thoughts?\n\n> > The \"vmbuffer needs to be read again\" regression fits what I would expect the\n> > v1 patch to do with a table having many vm bits set. In general, I think the\n> > fix is to keep two heap Buffer vars, so the callback can work on one vmbuffer\n> > while collect_corrupt_items() works on another vmbuffer. Much of the time,\n> > they'll be the same buffer. It could be as simple as that, but you could\n> > consider further optimizations like these:\n> \n> Thanks for the suggestion. Keeping two vmbuffers lowered the count of\n> \"read-again\" cases to ~40. 
I ran the test again and the timing\n> improvement is ~5% now.\n\n> I think the number of \"read-again\" cases are low enough now.\n\nAgreed. The increase from 20 to 40 is probably an increase in buffer mapping\nlookups, not an increase in I/O.\n\n> Does creating something like\n> pg_general_read_stream_next_block() in read_stream code and exporting\n> it makes sense?\n\nTo me, that name isn't conveying the function's behavior, and the words \"pg_\"\nand \"general_\" aren't adding information there. How about one of these names\nor similar:\n\nseq_read_stream_cb\nsequential_read_stream_cb\nblock_range_read_stream_cb\n\n> > The callback doesn't return blocks having zero vm bits, so the blkno variable\n> > is not accurate. I didn't test, but I think the loop's \"Recheck to avoid\n> > returning spurious results.\" looks at the bit for the wrong block. If that's\n> > what v1 does, could you expand the test file to catch that? For example, make\n> > a two-block table with only the second block all-visible.\n> \n> Yes, it was not accurate. I am getting blockno from the buffer now. I\n> checked and confirmed it is working as expected by manually logging\n> blocknos returned from the read stream. I am not sure how to add a\n> test case for this.\n\nVACUUM FREEZE makes an all-visible, all-frozen table. DELETE of a particular\nTID, even if rolled back, clears both vm bits for the TID's page. Past tests\nlike that had instability problems. One cause is a concurrent session's XID\nor snapshot, which can prevent VACUUM setting vm bits. Using a TEMP table may\nhave been one of the countermeasures, but I don't remember clearly. Hence,\nplease search the archives or the existing pg_visibility tests for how we\ndealt with that. It may not be problem for this particular test.\n\n> There is one behavioral change introduced with the patches. It could\n> happen when collect_corrupt_items() is called with both all_visible\n> and all_frozen being true.\n> -> Let's say VM_ALL_FROZEN() returns true but VM_ALL_VISIBLE() returns\n> false in callback. Callback returns this block number because\n> VM_ALL_FROZEN() is true.\n> -> At the /* Recheck to avoid returning spurious results. */ part, we\n> should only check VM_ALL_FROZEN() again as this was returning true in\n> the callback. But we check both VM_ALL_FROZEN() and VM_ALL_VISIBLE().\n> VM_ALL_FROZEN() returns false and VM_ALL_VISIBLE() returns true now\n> (vice versa of callback).\n> -> We were skipping this block in the master but the patched version\n> does not skip that.\n> \n> Is this a problem? If this is a problem, a solution might be to\n> callback return values of VM_ALL_FROZEN() and VM_ALL_VISIBLE() in the\n> per_buffer_data.\n\nNo, I don't consider that a problem. That's not making us do additional I/O,\njust testing more bits within the pages we're already loading. 
The difference\nin behavior is user-visible, but it's the same behavior change the user would\nget if the timing had been a bit different.\n\n\n", "msg_date": "Fri, 23 Aug 2024 12:01:24 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Fri, 23 Aug 2024 at 22:01, Noah Misch <[email protected]> wrote:\n>\n> On Fri, Aug 23, 2024 at 02:20:06PM +0300, Nazir Bilal Yavuz wrote:\n> > On Tue, 20 Aug 2024 at 21:47, Noah Misch <[email protected]> wrote:\n> > > On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n> > > > The downside of this approach is there are too many \"vmbuffer is valid\n> > > > but BufferGetBlockNumber(*vmbuffer) is not equal to mapBlock, so\n> > > > vmbuffer needs to be read again\" cases in the read stream version (700\n> > > > vs 20 for the 6 GB table). This is caused by the callback function of\n> > > > the read stream reading a new vmbuf while getting next block numbers.\n> > > > So, vmbuf is wrong when we are checking visibility map bits that might\n> > > > have changed while we were acquiring the page lock.\n> > >\n> > > Did the test that found 700 \"read again\" use a different procedure than the\n> > > one shared as setup.sh down-thread? Using that script alone, none of the vm\n> > > bits are set, so I'd not expect any heap page reads.\n> >\n> > Yes, it is basically the same script but the query is 'SELECT\n> > pg_check_visible('${TABLE}'::regclass);'.\n>\n> I gather the scripts deal exclusively in tables with no vm bits set, so\n> pg_visibility did no heap page reads under those scripts. But the '700 \"read\n> again\"' result suggests either I'm guessing wrong, or that result came from a\n> different test procedure. Thoughts?\n\nSorry, yes. You need to run VACUUM on the test table before running\nthe query. The script is attached. You can run the script with\n[corrupt | visibility] arguments to test the related patches.\n\n> > > The \"vmbuffer needs to be read again\" regression fits what I would expect the\n> > > v1 patch to do with a table having many vm bits set. In general, I think the\n> > > fix is to keep two heap Buffer vars, so the callback can work on one vmbuffer\n> > > while collect_corrupt_items() works on another vmbuffer. Much of the time,\n> > > they'll be the same buffer. It could be as simple as that, but you could\n> > > consider further optimizations like these:\n> >\n> > Thanks for the suggestion. Keeping two vmbuffers lowered the count of\n> > \"read-again\" cases to ~40. I ran the test again and the timing\n> > improvement is ~5% now.\n>\n> > I think the number of \"read-again\" cases are low enough now.\n>\n> Agreed. The increase from 20 to 40 is probably an increase in buffer mapping\n> lookups, not an increase in I/O.\n>\n> > Does creating something like\n> > pg_general_read_stream_next_block() in read_stream code and exporting\n> > it makes sense?\n>\n> To me, that name isn't conveying the function's behavior, and the words \"pg_\"\n> and \"general_\" aren't adding information there. How about one of these names\n> or similar:\n>\n> seq_read_stream_cb\n> sequential_read_stream_cb\n> block_range_read_stream_cb\n\nI liked the block_range_read_stream_cb. Attached patches for that\n(first 3 patches). I chose an nblocks way instead of last_blocks in\nthe struct.\n\n> > > The callback doesn't return blocks having zero vm bits, so the blkno variable\n> > > is not accurate. 
I didn't test, but I think the loop's \"Recheck to avoid\n> > > returning spurious results.\" looks at the bit for the wrong block. If that's\n> > > what v1 does, could you expand the test file to catch that? For example, make\n> > > a two-block table with only the second block all-visible.\n> >\n> > Yes, it was not accurate. I am getting blockno from the buffer now. I\n> > checked and confirmed it is working as expected by manually logging\n> > blocknos returned from the read stream. I am not sure how to add a\n> > test case for this.\n>\n> VACUUM FREEZE makes an all-visible, all-frozen table. DELETE of a particular\n> TID, even if rolled back, clears both vm bits for the TID's page. Past tests\n> like that had instability problems. One cause is a concurrent session's XID\n> or snapshot, which can prevent VACUUM setting vm bits. Using a TEMP table may\n> have been one of the countermeasures, but I don't remember clearly. Hence,\n> please search the archives or the existing pg_visibility tests for how we\n> dealt with that. It may not be problem for this particular test.\n\nThanks for the information, I will check these. What I still do not\nunderstand is how to make sure that only the second block is processed\nand the first one is skipped. pg_check_visible() and pg_check_frozen()\nreturns TIDs that cause corruption in the visibility map, there is no\ninformation about block numbers.\n\n> > There is one behavioral change introduced with the patches. It could\n> > happen when collect_corrupt_items() is called with both all_visible\n> > and all_frozen being true.\n> > -> Let's say VM_ALL_FROZEN() returns true but VM_ALL_VISIBLE() returns\n> > false in callback. Callback returns this block number because\n> > VM_ALL_FROZEN() is true.\n> > -> At the /* Recheck to avoid returning spurious results. */ part, we\n> > should only check VM_ALL_FROZEN() again as this was returning true in\n> > the callback. But we check both VM_ALL_FROZEN() and VM_ALL_VISIBLE().\n> > VM_ALL_FROZEN() returns false and VM_ALL_VISIBLE() returns true now\n> > (vice versa of callback).\n> > -> We were skipping this block in the master but the patched version\n> > does not skip that.\n> >\n> > Is this a problem? If this is a problem, a solution might be to\n> > callback return values of VM_ALL_FROZEN() and VM_ALL_VISIBLE() in the\n> > per_buffer_data.\n>\n> No, I don't consider that a problem. That's not making us do additional I/O,\n> just testing more bits within the pages we're already loading. The difference\n> in behavior is user-visible, but it's the same behavior change the user would\n> get if the timing had been a bit different.\n\nThanks for the clarification.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 27 Aug 2024 10:49:19 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Aug 27, 2024 at 10:49:19AM +0300, Nazir Bilal Yavuz wrote:\n> On Fri, 23 Aug 2024 at 22:01, Noah Misch <[email protected]> wrote:\n> > On Fri, Aug 23, 2024 at 02:20:06PM +0300, Nazir Bilal Yavuz wrote:\n> > > On Tue, 20 Aug 2024 at 21:47, Noah Misch <[email protected]> wrote:\n> > > > On Tue, Aug 13, 2024 at 03:22:27PM +0300, Nazir Bilal Yavuz wrote:\n\n> I liked the block_range_read_stream_cb. Attached patches for that\n> (first 3 patches). 
I chose an nblocks way instead of last_blocks in\n> the struct.\n\nTo read blocks 10 and 11, I would expect to initialize the struct with one of:\n\n{ .first=10, .nblocks=2 }\n{ .first=10, .last_inclusive=11 }\n{ .first=10, .last_exclusive=12 }\n\nWith the patch's API, I would need {.first=10,.nblocks=12}. The struct field\nnamed \"nblocks\" behaves like a last_block_exclusive. Please either make the\nbehavior an \"nblocks\" behavior or change the field name to replace the term\n\"nblocks\" with something matching the behavior. (I used longer field names in\nmy examples here, to disambiguate those examples. It's okay if the final\nfield names aren't those, as long as the field names and the behavior align.)\n\n> > > > The callback doesn't return blocks having zero vm bits, so the blkno variable\n> > > > is not accurate. I didn't test, but I think the loop's \"Recheck to avoid\n> > > > returning spurious results.\" looks at the bit for the wrong block. If that's\n> > > > what v1 does, could you expand the test file to catch that? For example, make\n> > > > a two-block table with only the second block all-visible.\n> > >\n> > > Yes, it was not accurate. I am getting blockno from the buffer now. I\n> > > checked and confirmed it is working as expected by manually logging\n> > > blocknos returned from the read stream. I am not sure how to add a\n> > > test case for this.\n> >\n> > VACUUM FREEZE makes an all-visible, all-frozen table. DELETE of a particular\n> > TID, even if rolled back, clears both vm bits for the TID's page. Past tests\n> > like that had instability problems. One cause is a concurrent session's XID\n> > or snapshot, which can prevent VACUUM setting vm bits. Using a TEMP table may\n> > have been one of the countermeasures, but I don't remember clearly. Hence,\n> > please search the archives or the existing pg_visibility tests for how we\n> > dealt with that. It may not be problem for this particular test.\n> \n> Thanks for the information, I will check these. What I still do not\n> understand is how to make sure that only the second block is processed\n> and the first one is skipped. pg_check_visible() and pg_check_frozen()\n> returns TIDs that cause corruption in the visibility map, there is no\n> information about block numbers.\n\nI see what you're saying. collect_corrupt_items() needs a corrupt table to\nreport anything; all corruption-free tables get the same output. Testing this\nwould need extra C code or techniques like corrupt_page_checksum() to create\nthe corrupt state. That wouldn't be a bad thing to have, but it's big enough\nthat I'll consider it out of scope for $SUBJECT. With the callback change\nabove, I'll be ready to push all this.\n\n\n", "msg_date": "Fri, 30 Aug 2024 16:51:06 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Sat, 31 Aug 2024 at 02:51, Noah Misch <[email protected]> wrote:\n>\n> To read blocks 10 and 11, I would expect to initialize the struct with one of:\n>\n> { .first=10, .nblocks=2 }\n> { .first=10, .last_inclusive=11 }\n> { .first=10, .last_exclusive=12 }\n>\n> With the patch's API, I would need {.first=10,.nblocks=12}. The struct field\n> named \"nblocks\" behaves like a last_block_exclusive. Please either make the\n> behavior an \"nblocks\" behavior or change the field name to replace the term\n> \"nblocks\" with something matching the behavior. 
(I used longer field names in\n> my examples here, to disambiguate those examples. It's okay if the final\n> field names aren't those, as long as the field names and the behavior align.)\n\nI decided to use 'current_blocknum' and 'last_exclusive'. I think\nthese are easier to understand and use.\n\n> > Thanks for the information, I will check these. What I still do not\n> > understand is how to make sure that only the second block is processed\n> > and the first one is skipped. pg_check_visible() and pg_check_frozen()\n> > returns TIDs that cause corruption in the visibility map, there is no\n> > information about block numbers.\n>\n> I see what you're saying. collect_corrupt_items() needs a corrupt table to\n> report anything; all corruption-free tables get the same output. Testing this\n> would need extra C code or techniques like corrupt_page_checksum() to create\n> the corrupt state. That wouldn't be a bad thing to have, but it's big enough\n> that I'll consider it out of scope for $SUBJECT. With the callback change\n> above, I'll be ready to push all this.\n\nThanks, updated patches are attached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 2 Sep 2024 15:20:12 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Mon, Sep 02, 2024 at 03:20:12PM +0300, Nazir Bilal Yavuz wrote:\n> Thanks, updated patches are attached.\n\n> +/*\n> + * Ask the callback which block it would like us to read next, with a small\n> + * buffer in front to allow read_stream_unget_block() to work and to allow the\n> + * fast path to skip this function and work directly from the array.\n> */\n> static inline BlockNumber\n> read_stream_get_block(ReadStream *stream, void *per_buffer_data)\n\nv4-0001-Add-general-use-struct-and-callback-to-read-strea.patch introduced\nthis update to the read_stream_get_block() header comment, but we're not\nchanging read_stream_get_block() here. I reverted this.\n\nPushed with some other cosmetic changes.\n\n\n", "msg_date": "Tue, 3 Sep 2024 10:50:11 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Sep 03, 2024 at 10:50:11AM -0700, Noah Misch wrote:\n> On Mon, Sep 02, 2024 at 03:20:12PM +0300, Nazir Bilal Yavuz wrote:\n> > Thanks, updated patches are attached.\n> \n> > +/*\n> > + * Ask the callback which block it would like us to read next, with a small\n> > + * buffer in front to allow read_stream_unget_block() to work and to allow the\n> > + * fast path to skip this function and work directly from the array.\n> > */\n> > static inline BlockNumber\n> > read_stream_get_block(ReadStream *stream, void *per_buffer_data)\n> \n> v4-0001-Add-general-use-struct-and-callback-to-read-strea.patch introduced\n> this update to the read_stream_get_block() header comment, but we're not\n> changing read_stream_get_block() here. I reverted this.\n> \n> Pushed with some other cosmetic changes.\n\nI see I pushed something unacceptable under ASAN. 
I will look into that:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-03%2017%3A47%3A20\n\n\n", "msg_date": "Tue, 3 Sep 2024 12:20:30 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Tue, 3 Sept 2024 at 22:20, Noah Misch <[email protected]> wrote:\n>\n> On Tue, Sep 03, 2024 at 10:50:11AM -0700, Noah Misch wrote:\n> > On Mon, Sep 02, 2024 at 03:20:12PM +0300, Nazir Bilal Yavuz wrote:\n> > > Thanks, updated patches are attached.\n> >\n> > > +/*\n> > > + * Ask the callback which block it would like us to read next, with a small\n> > > + * buffer in front to allow read_stream_unget_block() to work and to allow the\n> > > + * fast path to skip this function and work directly from the array.\n> > > */\n> > > static inline BlockNumber\n> > > read_stream_get_block(ReadStream *stream, void *per_buffer_data)\n> >\n> > v4-0001-Add-general-use-struct-and-callback-to-read-strea.patch introduced\n> > this update to the read_stream_get_block() header comment, but we're not\n> > changing read_stream_get_block() here. I reverted this.\n\nSorry, it should be left from rebase. Thanks for reverting it.\n\n> > Pushed with some other cosmetic changes.\n\nThanks!\n\n> I see I pushed something unacceptable under ASAN. I will look into that:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-03%2017%3A47%3A20\n\nI think it is related to the scope of BlockRangeReadStreamPrivate in\nthe collect_visibility_data() function. Attached a small fix for that.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 3 Sep 2024 22:46:34 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Sep 03, 2024 at 10:46:34PM +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 3 Sept 2024 at 22:20, Noah Misch <[email protected]> wrote:\n> > On Tue, Sep 03, 2024 at 10:50:11AM -0700, Noah Misch wrote:\n> > > Pushed with some other cosmetic changes.\n> \n> Thanks!\n> \n> > I see I pushed something unacceptable under ASAN. I will look into that:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-09-03%2017%3A47%3A20\n> \n> I think it is related to the scope of BlockRangeReadStreamPrivate in\n> the collect_visibility_data() function. Attached a small fix for that.\n\nRight.\n\nhttps://postgr.es/m/CAEudQAozv3wTY5TV2t29JcwPydbmKbiWQkZD42S2OgzdixPMDQ@mail.gmail.com\nthen observed that collect_corrupt_items() was now guaranteed to never detect\ncorruption. I have pushed revert ddfc556 of the pg_visibility.c changes. For\nthe next try, could you add that testing we discussed?\n\n\n", "msg_date": "Wed, 4 Sep 2024 11:43:22 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Wed, 4 Sept 2024 at 21:43, Noah Misch <[email protected]> wrote:\n>\n> https://postgr.es/m/CAEudQAozv3wTY5TV2t29JcwPydbmKbiWQkZD42S2OgzdixPMDQ@mail.gmail.com\n> then observed that collect_corrupt_items() was now guaranteed to never detect\n> corruption. I have pushed revert ddfc556 of the pg_visibility.c changes. 
For\n> the next try, could you add that testing we discussed?\n\nDo you think that corrupting the visibility map and then seeing if\npg_check_visible() and pg_check_frozen() report something is enough?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 5 Sep 2024 15:59:53 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Thu, Sep 05, 2024 at 03:59:53PM +0300, Nazir Bilal Yavuz wrote:\n> On Wed, 4 Sept 2024 at 21:43, Noah Misch <[email protected]> wrote:\n> > https://postgr.es/m/CAEudQAozv3wTY5TV2t29JcwPydbmKbiWQkZD42S2OgzdixPMDQ@mail.gmail.com\n> > then observed that collect_corrupt_items() was now guaranteed to never detect\n> > corruption. I have pushed revert ddfc556 of the pg_visibility.c changes. For\n> > the next try, could you add that testing we discussed?\n> \n> Do you think that corrupting the visibility map and then seeing if\n> pg_check_visible() and pg_check_frozen() report something is enough?\n\nI think so. Please check that it would have caught both the blkno bug and the\nabove bug.\n\n\n", "msg_date": "Thu, 5 Sep 2024 08:54:35 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Thu, 5 Sept 2024 at 18:54, Noah Misch <[email protected]> wrote:\n>\n> On Thu, Sep 05, 2024 at 03:59:53PM +0300, Nazir Bilal Yavuz wrote:\n> > On Wed, 4 Sept 2024 at 21:43, Noah Misch <[email protected]> wrote:\n> > > https://postgr.es/m/CAEudQAozv3wTY5TV2t29JcwPydbmKbiWQkZD42S2OgzdixPMDQ@mail.gmail.com\n> > > then observed that collect_corrupt_items() was now guaranteed to never detect\n> > > corruption. I have pushed revert ddfc556 of the pg_visibility.c changes. For\n> > > the next try, could you add that testing we discussed?\n> >\n> > Do you think that corrupting the visibility map and then seeing if\n> > pg_check_visible() and pg_check_frozen() report something is enough?\n>\n> I think so. Please check that it would have caught both the blkno bug and the\n> above bug.\n\nThe test and updated patch files are attached. In that test I\noverwrite the visibility map file with its older state. Something like\nthis:\n\n1- Create the table, then run VACUUM FREEZE on the table.\n2- Shutdown the server, create a copy of the vm file, restart the server.\n3- Run the DELETE command on some $random_tuples.\n4- Shutdown the server, overwrite the vm file with the #2 vm file,\nrestart the server.\n5- pg_check_visible and pg_check_frozen must report $random_tuples as corrupted.\n\nDo you think this test makes sense and enough?\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 9 Sep 2024 18:25:07 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Mon, Sep 09, 2024 at 06:25:07PM +0300, Nazir Bilal Yavuz wrote:\n> On Thu, 5 Sept 2024 at 18:54, Noah Misch <[email protected]> wrote:\n> > On Thu, Sep 05, 2024 at 03:59:53PM +0300, Nazir Bilal Yavuz wrote:\n> > > On Wed, 4 Sept 2024 at 21:43, Noah Misch <[email protected]> wrote:\n> > > > https://postgr.es/m/CAEudQAozv3wTY5TV2t29JcwPydbmKbiWQkZD42S2OgzdixPMDQ@mail.gmail.com\n> > > > then observed that collect_corrupt_items() was now guaranteed to never detect\n> > > > corruption. I have pushed revert ddfc556 of the pg_visibility.c changes. 
For\n> > > > the next try, could you add that testing we discussed?\n> > >\n> > > Do you think that corrupting the visibility map and then seeing if\n> > > pg_check_visible() and pg_check_frozen() report something is enough?\n> >\n> > I think so. Please check that it would have caught both the blkno bug and the\n> > above bug.\n> \n> The test and updated patch files are attached. In that test I\n> overwrite the visibility map file with its older state. Something like\n> this:\n> \n> 1- Create the table, then run VACUUM FREEZE on the table.\n> 2- Shutdown the server, create a copy of the vm file, restart the server.\n> 3- Run the DELETE command on some $random_tuples.\n> 4- Shutdown the server, overwrite the vm file with the #2 vm file,\n> restart the server.\n> 5- pg_check_visible and pg_check_frozen must report $random_tuples as corrupted.\n\nCopying the vm file is a good technique. In my test runs, this does catch the\n\"never detect\" bug, but it doesn't catch the blkno bug. Can you make it catch\nboth? It's possible my hand-patching to recreate the blkno bug is what went\nwrong, so I'm attaching what I used. It consists of\nv1-0002-Use-read-stream-in-pg_visibility-in-collect_corru.patch plus these\nfixes for the \"never detect\" bug from your v6-0002:\n\n--- a/contrib/pg_visibility/pg_visibility.c\n+++ b/contrib/pg_visibility/pg_visibility.c\n@@ -724,4 +724,4 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n \t{\n-\t\tbool\t\tcheck_frozen = false;\n-\t\tbool\t\tcheck_visible = false;\n+\t\tbool\t\tcheck_frozen = all_frozen;\n+\t\tbool\t\tcheck_visible = all_visible;\n \t\tBuffer\t\tbuffer;\n@@ -757,5 +757,5 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)\n \t\t */\n-\t\tif (all_frozen && !VM_ALL_FROZEN(rel, blkno, &vmbuffer))\n+\t\tif (check_frozen && !VM_ALL_FROZEN(rel, blkno, &vmbuffer))\n \t\t\tcheck_frozen = false;\n-\t\tif (all_visible && !VM_ALL_VISIBLE(rel, blkno, &vmbuffer))\n+\t\tif (check_visible && !VM_ALL_VISIBLE(rel, blkno, &vmbuffer))\n \t\t\tcheck_visible = false;", "msg_date": "Mon, 9 Sep 2024 14:32:50 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Tue, 10 Sept 2024 at 00:32, Noah Misch <[email protected]> wrote:\n>\n> Copying the vm file is a good technique. In my test runs, this does catch the\n> \"never detect\" bug, but it doesn't catch the blkno bug. Can you make it catch\n> both? It's possible my hand-patching to recreate the blkno bug is what went\n> wrong, so I'm attaching what I used. It consists of\n> v1-0002-Use-read-stream-in-pg_visibility-in-collect_corru.patch plus these\n> fixes for the \"never detect\" bug from your v6-0002:\n\nYour patch is correct. I wrongly assumed it would catch blockno bug,\nthe attached version catches it. I made blockno = 0 invisible and not\nfrozen before copying the vm file. So, in the blockno buggy version;\ncallback will skip that block but the main loop in the\ncollect_corrupt_items() will not skip it. I tested it with your patch\nand there is exactly 1 blockno difference between expected and result\noutput.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 10 Sep 2024 14:35:46 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Tue, Sep 10, 2024 at 02:35:46PM +0300, Nazir Bilal Yavuz wrote:\n> Your patch is correct. 
I wrongly assumed it would catch blockno bug,\n> the attached version catches it. I made blockno = 0 invisible and not\n> frozen before copying the vm file. So, in the blockno buggy version;\n> callback will skip that block but the main loop in the\n> collect_corrupt_items() will not skip it. I tested it with your patch\n> and there is exactly 1 blockno difference between expected and result\n> output.\n\nPushed. I added autovacuum=off so auto-analyze of a system catalog can't take\na snapshot that blocks VACUUM updating the vismap. I doubt that could happen\nunder default settings, but this lets us disregard the possibility entirely.\n\nI also fixed the mix of tabs and spaces inside test file string literals.\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:38:38 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "Hi,\n\nOn Wed, 11 Sept 2024 at 01:38, Noah Misch <[email protected]> wrote:\n>\n> On Tue, Sep 10, 2024 at 02:35:46PM +0300, Nazir Bilal Yavuz wrote:\n> > Your patch is correct. I wrongly assumed it would catch blockno bug,\n> > the attached version catches it. I made blockno = 0 invisible and not\n> > frozen before copying the vm file. So, in the blockno buggy version;\n> > callback will skip that block but the main loop in the\n> > collect_corrupt_items() will not skip it. I tested it with your patch\n> > and there is exactly 1 blockno difference between expected and result\n> > output.\n>\n> Pushed. I added autovacuum=off so auto-analyze of a system catalog can't take\n> a snapshot that blocks VACUUM updating the vismap. I doubt that could happen\n> under default settings, but this lets us disregard the possibility entirely.\n\nThanks!\n\n> I also fixed the mix of tabs and spaces inside test file string literals.\n\nI ran both pgindent and pgperltidy but they didn't catch them. Is\nthere an automated way to catch them?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 11 Sep 2024 09:19:09 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in pg_visibility" }, { "msg_contents": "On Wed, Sep 11, 2024 at 09:19:09AM +0300, Nazir Bilal Yavuz wrote:\n> On Wed, 11 Sept 2024 at 01:38, Noah Misch <[email protected]> wrote:\n> > I also fixed the mix of tabs and spaces inside test file string literals.\n> \n> I ran both pgindent and pgperltidy but they didn't catch them. Is\n> there an automated way to catch them?\n\ngit diff --check\n\n\n", "msg_date": "Wed, 11 Sep 2024 08:51:54 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in pg_visibility" } ]
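For readers skimming the thread above, a rough standalone sketch of the callback shape it converges on (the struct with current_blocknum / last_exclusive, returning InvalidBlockNumber at the end of the range) may help. This is illustrative only, not the committed pg_visibility code, and the exact signatures should be checked against storage/read_stream.h in the tree being built against.

```c
#include "postgres.h"

#include "storage/bufmgr.h"
#include "storage/read_stream.h"
#include "utils/rel.h"

/* Walk [current_blocknum, last_exclusive), one block number per call. */
typedef struct BlockRangeReadStreamPrivate
{
	BlockNumber current_blocknum;
	BlockNumber last_exclusive;
} BlockRangeReadStreamPrivate;

static BlockNumber
block_range_read_stream_cb(ReadStream *stream,
						   void *callback_private_data,
						   void *per_buffer_data)
{
	BlockRangeReadStreamPrivate *p = callback_private_data;

	if (p->current_blocknum < p->last_exclusive)
		return p->current_blocknum++;
	return InvalidBlockNumber;
}

/* Caller side: stream every heap block of a relation, in order. */
static void
scan_all_blocks(Relation rel, BlockNumber nblocks)
{
	BlockRangeReadStreamPrivate p = {0, nblocks};
	ReadStream *stream;
	BlockNumber i;

	stream = read_stream_begin_relation(READ_STREAM_FULL, NULL, rel,
										MAIN_FORKNUM,
										block_range_read_stream_cb,
										&p, 0);
	for (i = 0; i < nblocks; i++)
	{
		Buffer		buf = read_stream_next_buffer(stream, NULL);

		/*
		 * Use BufferGetBlockNumber(buf) as the authoritative block number
		 * rather than a separately tracked counter, per the blkno
		 * discussion above.
		 */
		ReleaseBuffer(buf);
	}
	read_stream_end(stream);
}
```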
[ { "msg_contents": "Hi,\n\nOver on the security mailing list, Tom Lane expressed discontent about\na few things related to astreamer_gzip.c. Here's a patch to improve\nthe comments to try to address those concerns.\n\nThis ended up being discussed on -security because\nf80b09bac87d6b49f5dbb6131da5fbd9b9773c5c moved the code, leading\nCoverity to issue a new round of warnings about everything it doesn't\nlike about these files. Scrutinizing those warnings, Tom believed that\nthere might be a bug in astreamer_gzip_writer_new, because it's not\ntoo obvious who is supposed to close which file descriptor. There's an\nexisting comment about the issue in astreamer_gzip_writer_finalize(),\nbut it was too far away from the code he was looking at for him to see\nit right away, so add another comment pointing to it, using wording\nsuggested by Tom.\n\nTom also heavily criticized the header comment for\nastreamer_gzip_writer_new(). I don't really think there's any big\nproblem, but I'm OK with his suggestion that we should instead\nduplicate the comment from astreamer_plain_writer_new(), so this patch\ndoes that. In response to further comments from Tom, I have also added\na further paragraph to try to make it clear that callers need to be\ncareful about how they use any FILE that they pass to this function.\nIt doesn't matter for current usage, because it's only used by\npg_basebackup when it's writing to stdout, and I don't anticipate that\nit will matter for future usage either, but it doesn't hurt to mention\nit, so here we go.\n\nI'm posting this here rather than on the -security thread so that\nothers may comment if they wish.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 13 Aug 2024 09:39:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "improve astreamer_gzip.c comments" } ]
[ { "msg_contents": "Shared libraries of extensions might want to invalidate or update their\nown caches whenever a CREATE/ALTER/DROP EXTENSION command is run for\ntheir extension (in any backend). Right now this is non-trivial to do\ncorrectly and efficiently. But if the extension catalog was part of a\nsyscache this could simply be done by registering an callback using\nCacheRegisterSyscacheCallback for the relevant syscache.\n\nThis change is only made to make the lives of extension authors easier.\nThe performance impact of this change should be negligible, since\nupdates to pg_extension are very rare.", "msg_date": "Tue, 13 Aug 2024 16:12:57 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Create syscaches for pg_extension" }, { "msg_contents": "út 13. 8. 2024 v 16:13 odesílatel Jelte Fennema-Nio <[email protected]>\nnapsal:\n\n> Shared libraries of extensions might want to invalidate or update their\n> own caches whenever a CREATE/ALTER/DROP EXTENSION command is run for\n> their extension (in any backend). Right now this is non-trivial to do\n> correctly and efficiently. But if the extension catalog was part of a\n> syscache this could simply be done by registering an callback using\n> CacheRegisterSyscacheCallback for the relevant syscache.\n>\n> This change is only made to make the lives of extension authors easier.\n> The performance impact of this change should be negligible, since\n> updates to pg_extension are very rare.\n>\n\n+1\n\nPavel\n\nút 13. 8. 2024 v 16:13 odesílatel Jelte Fennema-Nio <[email protected]> napsal:Shared libraries of extensions might want to invalidate or update their\nown caches whenever a CREATE/ALTER/DROP EXTENSION command is run for\ntheir extension (in any backend). Right now this is non-trivial to do\ncorrectly and efficiently. But if the extension catalog was part of a\nsyscache this could simply be done by registering an callback using\nCacheRegisterSyscacheCallback for the relevant syscache.\n\nThis change is only made to make the lives of extension authors easier.\nThe performance impact of this change should be negligible, since\nupdates to pg_extension are very rare.+1Pavel", "msg_date": "Tue, 13 Aug 2024 16:22:20 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Tue, Aug 13, 2024 at 5:23 PM Pavel Stehule <[email protected]> wrote:\n> út 13. 8. 2024 v 16:13 odesílatel Jelte Fennema-Nio <[email protected]> napsal:\n>>\n>> Shared libraries of extensions might want to invalidate or update their\n>> own caches whenever a CREATE/ALTER/DROP EXTENSION command is run for\n>> their extension (in any backend). Right now this is non-trivial to do\n>> correctly and efficiently. 
But if the extension catalog was part of a\n>> syscache this could simply be done by registering an callback using\n>> CacheRegisterSyscacheCallback for the relevant syscache.\n>>\n>> This change is only made to make the lives of extension authors easier.\n>> The performance impact of this change should be negligible, since\n>> updates to pg_extension are very rare.\n>\n>\n> +1\n\n+1 from me too\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Tue, 13 Aug 2024 17:38:55 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Tue, Aug 13, 2024 at 05:38:55PM +0300, Alexander Korotkov wrote:\n> +1 from me too\n\nI won't hide that I've wanted that in the past..\n--\nMichael", "msg_date": "Mon, 19 Aug 2024 15:21:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Mon, Aug 19, 2024 at 03:21:30PM +0900, Michael Paquier wrote:\n> I won't hide that I've wanted that in the past..\n\nAnd I have bumped into a case where this has been helpful today, so\napplied. Thanks!\n--\nMichael", "msg_date": "Thu, 22 Aug 2024 10:49:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On 22/8/2024 03:49, Michael Paquier wrote:\n> On Mon, Aug 19, 2024 at 03:21:30PM +0900, Michael Paquier wrote:\n>> I won't hide that I've wanted that in the past..\n> \n> And I have bumped into a case where this has been helpful today, so\n> applied. Thanks!\nIt had been my dream, too, for years. But the reason was the too-costly \ncall of the get_extension_oid routine (no less than pgbench 2-3% of \noverhead when checked it in the planner hook).\nIt seems that the get_extension_oid routine was not modified when the \nsys cache was introduced. What is the reason? It may be that this \nroutine is redundant now, but if not, and we want to hold the API that \nextensions use, maybe we should rewrite it, too.\nSee the attachment proposing changes.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 5 Sep 2024 15:41:19 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "čt 5. 9. 2024 v 15:41 odesílatel Andrei Lepikhov <[email protected]> napsal:\n\n> On 22/8/2024 03:49, Michael Paquier wrote:\n> > On Mon, Aug 19, 2024 at 03:21:30PM +0900, Michael Paquier wrote:\n> >> I won't hide that I've wanted that in the past..\n> >\n> > And I have bumped into a case where this has been helpful today, so\n> > applied. Thanks!\n> It had been my dream, too, for years. But the reason was the too-costly\n> call of the get_extension_oid routine (no less than pgbench 2-3% of\n> overhead when checked it in the planner hook).\n> It seems that the get_extension_oid routine was not modified when the\n> sys cache was introduced. What is the reason? It may be that this\n> routine is redundant now, but if not, and we want to hold the API that\n> extensions use, maybe we should rewrite it, too.\n> See the attachment proposing changes.\n>\n\n+1\n\nPavel\n\n\n> --\n> regards, Andrei Lepikhov\n>\n\nčt 5. 9. 
2024 v 15:41 odesílatel Andrei Lepikhov <[email protected]> napsal:On 22/8/2024 03:49, Michael Paquier wrote:\n> On Mon, Aug 19, 2024 at 03:21:30PM +0900, Michael Paquier wrote:\n>> I won't hide that I've wanted that in the past..\n> \n> And I have bumped into a case where this has been helpful today, so\n> applied.  Thanks!\nIt had been my dream, too, for years. But the reason was the too-costly \ncall of the get_extension_oid routine (no less than pgbench 2-3% of \noverhead when checked it in the planner hook).\nIt seems that the get_extension_oid routine was not modified when the \nsys cache was introduced. What is the reason? It may be that this \nroutine is redundant now, but if not, and we want to hold the API that \nextensions use, maybe we should rewrite it, too.\nSee the attachment proposing changes.+1Pavel\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 5 Sep 2024 15:43:30 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Thu, 5 Sept 2024 at 15:41, Andrei Lepikhov <[email protected]> wrote:\n> It seems that the get_extension_oid routine was not modified when the\n> sys cache was introduced. What is the reason? It may be that this\n> routine is redundant now, but if not, and we want to hold the API that\n> extensions use, maybe we should rewrite it, too.\n> See the attachment proposing changes.\n\nIt seems reasonable to make this function use the new syscache. I\ndidn't change any existing code in my original patch, because I wanted\nto use the syscache APIs directly anyway and I didn't want to make the\npatch bigger than strictly necessary. But I totally understand that\nfor many usages it's probably enough if the existing APIs are simply\nfaster (on repeated calls). The patch looks fine. But I think\nget_extension_name and get_extension_schema should also be updated.\n\n\n", "msg_date": "Thu, 5 Sep 2024 18:50:59 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On 5/9/2024 18:50, Jelte Fennema-Nio wrote:\n> On Thu, 5 Sept 2024 at 15:41, Andrei Lepikhov <[email protected]> wrote:\n>> It seems that the get_extension_oid routine was not modified when the\n>> sys cache was introduced. What is the reason? It may be that this\n>> routine is redundant now, but if not, and we want to hold the API that\n>> extensions use, maybe we should rewrite it, too.\n>> See the attachment proposing changes.\n> \n> It seems reasonable to make this function use the new syscache. I\n> didn't change any existing code in my original patch, because I wanted\n> to use the syscache APIs directly anyway and I didn't want to make the\n> patch bigger than strictly necessary. But I totally understand that\n> for many usages it's probably enough if the existing APIs are simply\n> faster (on repeated calls). The patch looks fine. But I think\n> get_extension_name and get_extension_schema should also be updated.\nThanks, see new patch in attachment.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 5 Sep 2024 22:03:09 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Thu, 5 Sept 2024 at 22:03, Andrei Lepikhov <[email protected]> wrote:\n> Thanks, see new patch in attachment.\n\nLGTM now. 
Added it to the commitfest here:\nhttps://commitfest.postgresql.org/50/5241/\n\n\n", "msg_date": "Thu, 5 Sep 2024 22:56:34 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Thu, Sep 05, 2024 at 10:56:34PM +0200, Jelte Fennema-Nio wrote:\n> LGTM now. Added it to the commitfest here:\n> https://commitfest.postgresql.org/50/5241/\n\nLooks OK at quick glance. I'll take care of that as I've done the\nother one.\n--\nMichael", "msg_date": "Fri, 6 Sep 2024 08:29:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" }, { "msg_contents": "On Fri, Sep 06, 2024 at 08:29:52AM +0900, Michael Paquier wrote:\n> Looks OK at quick glance. I'll take care of that as I've done the\n> other one.\n\nAnd done.\n--\nMichael", "msg_date": "Sat, 7 Sep 2024 20:44:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create syscaches for pg_extension" } ]
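To make the point of the thread above concrete, here is a hedged sketch of what an extension's shared library could now do: register a syscache invalidation callback so that a CREATE/ALTER/DROP EXTENSION in any backend simply marks a library-local cache stale. The cache ID name (EXTENSIONOID) is an assumption here; use whichever IDs the committed tree actually defines.

```c
#include "postgres.h"

#include "fmgr.h"
#include "utils/inval.h"
#include "utils/syscache.h"

PG_MODULE_MAGIC;

/* Library-local state, rebuilt lazily after any pg_extension change. */
static bool my_ext_cache_valid = false;

static void
my_ext_inval_cb(Datum arg, int cacheid, uint32 hashvalue)
{
	/* Cheap on purpose: just flag the cached state as stale. */
	my_ext_cache_valid = false;
}

void
_PG_init(void)
{
	/* EXTENSIONOID is assumed to be the new pg_extension syscache ID. */
	CacheRegisterSyscacheCallback(EXTENSIONOID, my_ext_inval_cb, (Datum) 0);
}
```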
[ { "msg_contents": "Hi hackers,\n\nI noticed that in commit d3cc5ffe81f6 some GUC variables were moved to\nheader files. According to the commit message in 8ec569479, all\nvariables in header files must be marked with PGDLLIMPORT, am I right?\n\nI've made a patch that adds missing PGDLLIMPORTs, please, take a look.\n\n--\nBest regards,\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com", "msg_date": "Tue, 13 Aug 2024 17:38:42 +0300", "msg_from": "Sofia Kopikova <[email protected]>", "msg_from_op": true, "msg_subject": "Apply PGDLLIMPORT markings to some GUC variables" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:38 AM Sofia Kopikova\n<[email protected]> wrote:\n> I noticed that in commit d3cc5ffe81f6 some GUC variables were moved to\n> header files. According to the commit message in 8ec569479, all\n> variables in header files must be marked with PGDLLIMPORT, am I right?\n>\n> I've made a patch that adds missing PGDLLIMPORTs, please, take a look.\n\nThis seems correct to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 15:00:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apply PGDLLIMPORT markings to some GUC variables" }, { "msg_contents": "On 13.08.24 21:00, Robert Haas wrote:\n> On Tue, Aug 13, 2024 at 10:38 AM Sofia Kopikova\n> <[email protected]> wrote:\n>> I noticed that in commit d3cc5ffe81f6 some GUC variables were moved to\n>> header files. According to the commit message in 8ec569479, all\n>> variables in header files must be marked with PGDLLIMPORT, am I right?\n>>\n>> I've made a patch that adds missing PGDLLIMPORTs, please, take a look.\n> \n> This seems correct to me.\n\ncommitted\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 11:50:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apply PGDLLIMPORT markings to some GUC variables" }, { "msg_contents": "\nOn 8/14/24 12:50, Peter Eisentraut wrote:\n> On 13.08.24 21:00, Robert Haas wrote:\n>> On Tue, Aug 13, 2024 at 10:38 AM Sofia Kopikova\n>> <[email protected]> wrote:\n>>> I noticed that in commit d3cc5ffe81f6 some GUC variables were moved to\n>>> header files. According to the commit message in 8ec569479, all\n>>> variables in header files must be marked with PGDLLIMPORT, am I right?\n>>>\n>>> I've made a patch that adds missing PGDLLIMPORTs, please, take a look.\n>>\n>> This seems correct to me.\n>\n> committed\n>\nMany thanks for quick review and commit.\n\n-- \nBest regards,\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com\n\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 13:14:48 +0300", "msg_from": "Sofia Kopikova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apply PGDLLIMPORT markings to some GUC variables" }, { "msg_contents": "On Tue, Aug 13, 2024 at 03:00:00PM -0400, Robert Haas wrote:\n> This seems correct to me.\n\nIt is not the first time that this happens in recent history. Would\nit be worth automating that? 
I would guess a TAP test that takes a\ncopy of the headers, applies the changes while filtering the few\nexceptions, then compares it to the origin in the tree?\n--\nMichael", "msg_date": "Mon, 19 Aug 2024 15:18:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apply PGDLLIMPORT markings to some GUC variables" }, { "msg_contents": "On 19.08.24 08:18, Michael Paquier wrote:\n> On Tue, Aug 13, 2024 at 03:00:00PM -0400, Robert Haas wrote:\n>> This seems correct to me.\n> \n> It is not the first time that this happens in recent history. Would\n> it be worth automating that? I would guess a TAP test that takes a\n> copy of the headers, applies the changes while filtering the few\n> exceptions, then compares it to the origin in the tree?\n\nProbably worth thinking about, but it would at least require some kind \nof exclude list, because there are exceptions like src/include/common/ \n/logging.h.\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:20:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apply PGDLLIMPORT markings to some GUC variables" } ]
[ { "msg_contents": "Hi, Kirill, Junwang,\n\nI made this patch to address the refactor issue in our previous email\ndiscussion.\nhttps://www.postgresql.org/message-id/flat/CABBtG=cDTCBDCBK7McSy6bJR3s5xUTOg0vSFfuW8oLdUYyCscA@mail.gmail.com\n\nThat is, the for loop in function smgrdestroy() and smgrdounlinkall can be\nreplaced with smgrclose().\n\nfor (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n-->\nsmgrclose(rels[i]);\n\nPlease let me know if you have any questions.\n\nBest Regards,\nSteven from Highgo.com", "msg_date": "Wed, 14 Aug 2024 14:32:17 +0800", "msg_from": "Steven Niu <[email protected]>", "msg_from_op": true, "msg_subject": "Use function smgrclose() to replace the loop" } ]
[ { "msg_contents": "It seems to me that we could implement prefetching support \n(USE_PREFETCH) on macOS using the fcntl() command F_RDADVISE. The man \npage description is a bit terse:\n\n F_RDADVISE Issue an advisory read async with no copy to user.\n\nBut it seems to be the right idea. Was this looked into before? I \ncouldn't find anything in the archives.\n\nAttached is a patch to implement this. It seems to work, but of course \nit's kind of hard to tell whether it actually does anything useful.\n\n(Even if the performance effects were negligible, this would be useful \nto get the prefetch code some exercise on this platform.)\n\nThoughts?", "msg_date": "Wed, 14 Aug 2024 09:03:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "macOS prefetching support" }, { "msg_contents": "On Wed, Aug 14, 2024 at 7:04 PM Peter Eisentraut <[email protected]> wrote:\n> Attached is a patch to implement this. It seems to work, but of course\n> it's kind of hard to tell whether it actually does anything useful.\n\nHeader order problem: pg_config_os.h defines __darwin__, but\npg_config_manual.h is included first, and tests __darwin__. I hacked\nmy way around that, and then made a table of 40,000,000 integers in a\n2GB buffer pool. I used \"select count(pg_buffercache_evict(buffered))\nfrom pg_buffer_cache\", and \"sudo purge\", to clear the two layers of\ncache for each test, and then measured:\n\nmaintenance_io_concurrency=0, ANALYZE: 2311ms\nmaintenance_io_concurrency=10, ANALYZE: 652ms\nmaintenance_io_concurrency=25, ANALYZE: 389ms\n\nIt works!\n\n\n", "msg_date": "Thu, 15 Aug 2024 00:36:23 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 14.08.24 14:36, Thomas Munro wrote:\n> On Wed, Aug 14, 2024 at 7:04 PM Peter Eisentraut <[email protected]> wrote:\n>> Attached is a patch to implement this. It seems to work, but of course\n>> it's kind of hard to tell whether it actually does anything useful.\n> \n> Header order problem: pg_config_os.h defines __darwin__, but\n> pg_config_manual.h is included first, and tests __darwin__. I hacked\n> my way around that, and then made a table of 40,000,000 integers in a\n> 2GB buffer pool. I used \"select count(pg_buffercache_evict(buffered))\n> from pg_buffer_cache\", and \"sudo purge\", to clear the two layers of\n> cache for each test, and then measured:\n> \n> maintenance_io_concurrency=0, ANALYZE: 2311ms\n> maintenance_io_concurrency=10, ANALYZE: 652ms\n> maintenance_io_concurrency=25, ANALYZE: 389ms\n> \n> It works!\n\nCool! I'll work on a more polished patch.\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 16:39:31 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 14.08.24 16:39, Peter Eisentraut wrote:\n> On 14.08.24 14:36, Thomas Munro wrote:\n>> On Wed, Aug 14, 2024 at 7:04 PM Peter Eisentraut \n>> <[email protected]> wrote:\n>>> Attached is a patch to implement this.  It seems to work, but of course\n>>> it's kind of hard to tell whether it actually does anything useful.\n>>\n>> Header order problem: pg_config_os.h defines __darwin__, but\n>> pg_config_manual.h is included first, and tests __darwin__.  I hacked\n>> my way around that, and then made a table of 40,000,000 integers in a\n>> 2GB buffer pool.  
I used \"select count(pg_buffercache_evict(buffered))\n>> from pg_buffer_cache\", and \"sudo purge\", to clear the two layers of\n>> cache for each test, and then measured:\n>>\n>> maintenance_io_concurrency=0,  ANALYZE: 2311ms\n>> maintenance_io_concurrency=10, ANALYZE:  652ms\n>> maintenance_io_concurrency=25, ANALYZE:  389ms\n>>\n>> It works!\n> \n> Cool!  I'll work on a more polished patch.\n\nHere it is.\n\nSome interesting questions:\n\nWhat to do about the order of the symbols and include files. I threw \nsomething into src/include/port/darwin.h, but I'm not sure if that's \ngood. Alternatively, we could not use __darwin__ but instead the more \nstandard and predefined defined(__APPLE__) && defined(__MACH__).\n\nHow to document it. The current documentation makes references mainly \nto the availability of posix_fadvise(). That seems quite low-level. \nHow could a user of a prepared package even find out about that? Should \nwe just say \"requires OS support\" (kind of like I did here) and you can \nquery the effective state by looking at the *_io_concurrency settings? \nOr do we need a read-only parameter that shows whether prefetch support \nexists (kind of along the lines of huge pages)?\n\nBtw., for context, here is what I gather the prefetch support (with this \npatch) is:\n\ncygwin posix_fadvise\ndarwin fcntl\nfreebsd posix_fadvise\nlinux posix_fadvise\nnetbsd posix_fadvise\nopenbsd no\nsolaris fake\nwin32 no\n\n(There is also the possibility that we provide an implementation of \nposix_fadvise() for macOS that wraps the platform-specific code in this \npatch. And then Apple could just take that. ;-) )", "msg_date": "Fri, 16 Aug 2024 20:58:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On Sat, Aug 17, 2024 at 6:58 AM Peter Eisentraut <[email protected]> wrote:\n> What to do about the order of the symbols and include files. I threw\n> something into src/include/port/darwin.h, but I'm not sure if that's\n> good. Alternatively, we could not use __darwin__ but instead the more\n> standard and predefined defined(__APPLE__) && defined(__MACH__).\n\nHmm. fd.h and fd.c test for F_NOCACHE, which is pretty closely\nrelated. Now I'm wondering why we actually need this in\npg_config_manual.h at all. Who would turn it off at compile time, and\nwhy would they not be satisfied with setting relevant GUCs to 0? Can\nwe just teach fd.h to define USE_PREFETCH if\ndefined(POSIX_FADV_WILLNEED) || defined(F_RDADVISE)?\n\n(I have also thought multiple times about removing the configure\nprobes for F_FULLFSYNC, and just doing #ifdef. Oh, that's in my patch\nfor CF #4453.)\n\n> How to document it. The current documentation makes references mainly\n> to the availability of posix_fadvise(). That seems quite low-level.\n> How could a user of a prepared package even find out about that? Should\n> we just say \"requires OS support\" (kind of like I did here) and you can\n> query the effective state by looking at the *_io_concurrency settings?\n> Or do we need a read-only parameter that shows whether prefetch support\n> exists (kind of along the lines of huge pages)?\n\nI think that's fine. I don't really like the word \"prefetch\", could\nmean many different things. What about \"requires OS support for\nissuing read-ahead advice\", which uses a word that appears in both of\nthose interfaces? 
And yeah the GUCs being nailed to zero means you\ndon't have it.\n\n> (There is also the possibility that we provide an implementation of\n> posix_fadvise() for macOS that wraps the platform-specific code in this\n> patch. And then Apple could just take that. ;-) )\n\nYeah might be simpler... as long as we make sure that we test for\nhaving the function AND the relevant POSIX_FADV_xxx in places, which I\nsee that we do.\n\n\n", "msg_date": "Sat, 17 Aug 2024 10:01:42 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Hmm. fd.h and fd.c test for F_NOCACHE, which is pretty closely\n> related. Now I'm wondering why we actually need this in\n> pg_config_manual.h at all. Who would turn it off at compile time, and\n> why would they not be satisfied with setting relevant GUCs to 0?\n\n+1 for not bothering with a pg_config_manual.h knob.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Aug 2024 18:20:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On Sat, Aug 17, 2024 at 6:58 AM Peter Eisentraut <[email protected]> wrote:\n> solaris fake\n\nI'm half tempted to suggest that we take this exception out. If it's\nthere, we call it. It doesn't do anything right now, but it's a cheap\nempty user space function, and I heard they are thinking about adding\na real syscall. (In a burst of enthusiasm for standards and stuff, I\ntalked to people involved in all the OSes using ZFS, to suggest\nhooking that up; they added some other stuff I asked about so I think\nit's a real threat.) But I haven't noticed any users on this list in\nmany years, to opine either way.\n\nIt doesn't do anything on Cygwin either. Actually, it's worse than\nnothing, it looks like it makes two useless system calls adjusting the\n\"sequential\" flag on the file handle. But really, of the 3 ways to\nbuild on Windows, only MSVC has real users, so it makes no useless\nsystem calls at all, so I don't propose to change anything due to this\nobservation.\n\n> win32 no\n\nI guess you could implement this in two ways:\n\n* CreateFileMapping(), MapViewOfFile(), PrefetchVirtualMemory(),\nUnmapViewOfFile(), Closehandle(). That's a lot system calls and maybe\n expensive VM stuff, who knows.\n\n* ReadFileEx() in OVERLAPPED mode (async), into a dummy buffer, with a\ncompletion callback that frees the buffer and OVERLAPPED object.\nThat's a lot of extra work allocating memory, copying to user space,\nscheduling a callback, and freeing memory for nothing.\n\nBoth are terrible, but likely still better than an I/O stall, I dunno.\nI think the VMS, NT, Unix-hater view would be: why would you want such\na stupid programming interface anyway, when you could use async I/O\nproperly? I love Unix, but they'd have a point.\n\n\n", "msg_date": "Sat, 17 Aug 2024 13:22:30 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 17.08.24 00:01, Thomas Munro wrote:\n> On Sat, Aug 17, 2024 at 6:58 AM Peter Eisentraut <[email protected]> wrote:\n>> What to do about the order of the symbols and include files. I threw\n>> something into src/include/port/darwin.h, but I'm not sure if that's\n>> good. 
Alternatively, we could not use __darwin__ but instead the more\n>> standard and predefined defined(__APPLE__) && defined(__MACH__).\n> \n> Hmm. fd.h and fd.c test for F_NOCACHE, which is pretty closely\n> related. Now I'm wondering why we actually need this in\n> pg_config_manual.h at all. Who would turn it off at compile time, and\n> why would they not be satisfied with setting relevant GUCs to 0? Can\n> we just teach fd.h to define USE_PREFETCH if\n> defined(POSIX_FADV_WILLNEED) || defined(F_RDADVISE)?\n\nI thought USE_PREFETCH existed so that we don't have the run-time \noverhead for all the bookkeeping code if we don't have any OS-level \nprefetch support at the end. But it looks like most of that bookkeeping \ncode is skipped anyway if the *_io_concurrency settings are at 0. So \nyes, getting rid of USE_PREFETCH globally would be useful.\n\n> (I have also thought multiple times about removing the configure\n> probes for F_FULLFSYNC, and just doing #ifdef. Oh, that's in my patch\n> for CF #4453.)\n\nUnderstandable, but we should be careful here that we don't create \nsetups that can cause bugs like \n<https://www.postgresql.org/message-id/[email protected]>.\n\n> I think that's fine. I don't really like the word \"prefetch\", could\n> mean many different things. What about \"requires OS support for\n> issuing read-ahead advice\", which uses a word that appears in both of\n> those interfaces?\n\nI like that term.\n\n\n\n", "msg_date": "Sun, 18 Aug 2024 15:35:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 18.08.24 15:35, Peter Eisentraut wrote:\n> On 17.08.24 00:01, Thomas Munro wrote:\n>> Hmm.  fd.h and fd.c test for F_NOCACHE, which is pretty closely\n>> related.  Now I'm wondering why we actually need this in\n>> pg_config_manual.h at all.  Who would turn it off at compile time, and\n>> why would they not be satisfied with setting relevant GUCs to 0?  Can\n>> we just teach fd.h to define USE_PREFETCH if\n>> defined(POSIX_FADV_WILLNEED) || defined(F_RDADVISE)?\n> \n> I thought USE_PREFETCH existed so that we don't have the run-time \n> overhead for all the bookkeeping code if we don't have any OS-level \n> prefetch support at the end.  But it looks like most of that bookkeeping \n> code is skipped anyway if the *_io_concurrency settings are at 0.  So \n> yes, getting rid of USE_PREFETCH globally would be useful.\n> \n>> (I have also thought multiple times about removing the configure\n>> probes for F_FULLFSYNC, and just doing #ifdef.  Oh, that's in my patch\n>> for CF #4453.)\n> \n> Understandable, but we should be careful here that we don't create \n> setups that can cause bugs like \n> <https://www.postgresql.org/message-id/[email protected]>.\n> \n>> I think that's fine.  I don't really like the word \"prefetch\", could\n>> mean many different things.  What about \"requires OS support for\n>> issuing read-ahead advice\", which uses a word that appears in both of\n>> those interfaces?\n> \n> I like that term.\n\nHere is another patch, with the documentation wording adjusted like \nthis, and a bit of other tidying.\n\nI opted against pursuing some of the other refactoring ideas as part of \nthis. Removing USE_PREFETCH seems worthwhile, but has some open \nquestions. For example, pg_prewarm has a prefetch mode, which currently \nerrors if there is no prefetch support. So we'd still require some \nknowledge outside of fd.c, unless we want to change that behavior. 
The \nidea of creating a src/port/posix_fadvise.c also got a bit too \ncomplicated in terms of how to weave that into configure and meson. \nThere are apparently various old problems, like old Linux systems with \nincompatible declarations, or something like that, and then the special \ncasing of Solaris (which isn't even in meson.build). If we could get \nrid of some of that, then it would be easier to add new behavior there, \notherwise it's just likely to break things. So I'm leaving these as \nseparate projects.\n\nIn terms of $subject, this patch seems sufficient for now.", "msg_date": "Fri, 23 Aug 2024 14:28:10 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On Sat, Aug 24, 2024 at 12:28 AM Peter Eisentraut <[email protected]> wrote:\n> In terms of $subject, this patch seems sufficient for now.\n\nWFM. I noticed you don't have an EINTR retry loop, but the man page\ndoesn't say you need one, so overall this patch LGTM.\n\n+ * posix_fadvise() is the simplest standardized interface that accomplishes\n+ * this. We could add an implementation using libaio in the future; but note\n+ * that this API is inappropriate for libaio, which wants to have a buffer\n+ * provided to read into.\n\nI would consider just dropping that comment about libaio, if touching\nthis paragraph. Proposals exist for AIO of course, but not with\nlibaio, and better predictions with useful discussion of the topic\nseem unlikely to fit in this margin.\n\n\n", "msg_date": "Mon, 26 Aug 2024 17:54:26 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 26.08.24 07:54, Thomas Munro wrote:\n> On Sat, Aug 24, 2024 at 12:28 AM Peter Eisentraut <[email protected]> wrote:\n>> In terms of $subject, this patch seems sufficient for now.\n> \n> WFM. I noticed you don't have an EINTR retry loop, but the man page\n> doesn't say you need one, so overall this patch LGTM.\n> \n> + * posix_fadvise() is the simplest standardized interface that accomplishes\n> + * this. We could add an implementation using libaio in the future; but note\n> + * that this API is inappropriate for libaio, which wants to have a buffer\n> + * provided to read into.\n> \n> I would consider just dropping that comment about libaio, if touching\n> this paragraph. 
Proposals exist for AIO of course, but not with\n> libaio, and better predictions with useful discussion of the topic\n> seem unlikely to fit in this margin.\n\ncommitted with that change\n\n\n\n", "msg_date": "Wed, 28 Aug 2024 08:21:04 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "Oh, I missed something: I think we're missing FileAcces(), in case the\nvfd has to be re-opened, no?\n\n\n", "msg_date": "Thu, 29 Aug 2024 13:22:16 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 29.08.24 03:22, Thomas Munro wrote:\n> Oh, I missed something: I think we're missing FileAcces(), in case the\n> vfd has to be re-opened, no?\n\nFixed, thanks.\n\n\n\n", "msg_date": "Thu, 29 Aug 2024 08:28:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On Mon, Aug 19, 2024 at 1:35 AM Peter Eisentraut <[email protected]> wrote:\n> On 17.08.24 00:01, Thomas Munro wrote:\n> > I think that's fine. I don't really like the word \"prefetch\", could\n> > mean many different things. What about \"requires OS support for\n> > issuing read-ahead advice\", which uses a word that appears in both of\n> > those interfaces?\n>\n> I like that term.\n\nA couple of other places still use the old specific terminology. PSA.", "msg_date": "Tue, 3 Sep 2024 13:47:34 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS prefetching support" }, { "msg_contents": "On 03.09.24 03:47, Thomas Munro wrote:\n> On Mon, Aug 19, 2024 at 1:35 AM Peter Eisentraut <[email protected]> wrote:\n>> On 17.08.24 00:01, Thomas Munro wrote:\n>>> I think that's fine. I don't really like the word \"prefetch\", could\n>>> mean many different things. What about \"requires OS support for\n>>> issuing read-ahead advice\", which uses a word that appears in both of\n>>> those interfaces?\n>>\n>> I like that term.\n> \n> A couple of other places still use the old specific terminology. PSA.\n\nlooks good to me\n\n\n", "msg_date": "Tue, 3 Sep 2024 22:28:19 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS prefetching support" } ]
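For anyone who wants to see the macOS interface from the thread above in isolation, here is a hedged, macOS-only sketch of issuing read-ahead advice with fcntl(F_RDADVISE). The committed code naturally goes through PostgreSQL's fd.c layer (virtual file descriptors, FileAccess) rather than a bare descriptor, so treat this as an illustration of the system call, not the patch itself.

```c
#include <sys/types.h>
#include <fcntl.h>

/*
 * Ask the kernel to start reading "len" bytes at "offset" ahead of time.
 * F_RDADVISE behaves like other fcntl() commands: -1 with errno on error,
 * and there is no result to wait for or consume afterwards.
 */
static int
advise_readahead(int fd, off_t offset, int len)
{
	struct radvisory ra;

	ra.ra_offset = offset;
	ra.ra_count = len;
	return fcntl(fd, F_RDADVISE, &ra);
}
```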
[ { "msg_contents": "Hi,\n\nWhile discussing another patch [1] it was discovered that we don't\nhave a convenient way of casting a bytea to an integer / bigint and\nvice versa, extracting integers larger than one byte from byteas, etc.\n\nFor instance, casting '\\x11223344' to 0x11223344 may look like this:\n\n```\nWITH vals AS (\n SELECT '\\x11223344'::bytea AS x\n)\nSELECT\n (get_byte(x, 0) :: bigint << 24) |\n (get_byte(x, 1) << 16) |\n (get_byte(x, 2) << 8) |\n get_byte(x, 3)\nFROM vals;\n```\n\nThere seems to be a demand for this functionality [2][3] and it costs\nus nothing to maintain it, so I propose adding it.\n\nThe proposed patch adds get_bytes() and set_bytes() functions. The\nsemantics is similar to get_byte() and set_byte() we already have but\nthe functions operate with bigints rather than bytes and the user can\nspecify the size of the integer. This allows working with int2s,\nint4s, int8s or even int5s if needed.\n\nExamples:\n\n```\nSELECT get_bytes('\\x1122334455667788'::bytea, 1, 2) = 0x2233;\n ?column?\n----------\n t\n\nSELECT set_bytes('\\x1122334455667788'::bytea, 1, 2, 0xAABB);\n set_bytes\n--------------------\n \\x11aabb4455667788\n```\n\nThoughts?\n\n[1]: https://postgr.es/m/CAJ7c6TNMTGnqnG%3DyXXUQh9E88JDckmR45H2Q%2B%3DucaCLMOW1QQw%40mail.gmail.com\n[2]: https://stackoverflow.com/questions/32944267/postgresql-converting-bytea-to-bigint\n[3]: https://postgr.es/m/AANLkTikip9xs8iXc8e%2BMgz1T1701i8Xk6QtbVB3KJQzX%40mail.gmail.com\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 14 Aug 2024 14:01:22 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024, at 13:01, Aleksander Alekseev wrote:\n> The proposed patch adds get_bytes() and set_bytes() functions. The\n> semantics is similar to get_byte() and set_byte() we already have but\n> the functions operate with bigints rather than bytes and the user can\n> specify the size of the integer. This allows working with int2s,\n> int4s, int8s or even int5s if needed.\n\n+1\n\nI wanted this myself many times.\n\nI wonder if get_bytes() and set_bytes() will behave differently\non little-endian vs big-endian systems?\n\nIf so, then I think it would be nice to enforce a consistent byte order\n(e.g., big-endian), to ensure consistent behavior across platforms.\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 14 Aug 2024 13:21:49 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "Hi,\n\n> +1\n>\n> I wanted this myself many times.\n>\n> I wonder if get_bytes() and set_bytes() will behave differently\n> on little-endian vs big-endian systems?\n>\n> If so, then I think it would be nice to enforce a consistent byte order\n> (e.g., big-endian), to ensure consistent behavior across platforms.\n\nNo, the returned value will not depend on the CPU endiness. Current\nimplementation uses big-endian / network order which in my humble\nopinion is what most users would expect.\n\nI believe we also need reverse(bytea) and repeat(bytea, integer)\nfunctions e.g. for those who want little-endian. 
However I want to\npropose them separately when we are done with this patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 14 Aug 2024 14:31:31 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024, at 13:31, Aleksander Alekseev wrote:\n>> I wonder if get_bytes() and set_bytes() will behave differently\n>> on little-endian vs big-endian systems?\n> No, the returned value will not depend on the CPU endiness. Current\n> implementation uses big-endian / network order which in my humble\n> opinion is what most users would expect.\n\nNice.\n\nI've reviewed and tested the patch.\nIt looks straight-forward to me.\nI don't see any potential problems.\nI've marked it Ready for Committer.\n\n> I believe we also need reverse(bytea) and repeat(bytea, integer)\n> functions e.g. for those who want little-endian. However I want to\n> propose them separately when we are done with this patch.\n\nI agree those functions would be nice too.\n\nI also think it would be nice to provide these convenience functions:\nto_bytes(bigint) -> bytea\nfrom_bytes(bytea) -> bigint\n\nSince if not having a current bytea value,\nand just wanting to convert a bigint to bytea,\nthen one would need to construct an zeroed bytea\nof the proper size first, to then use set_bytes().\n\nAnd if just wanting to convert the entire bytea to a bigint,\nthen one would need to pass 0 as offset and the length\nof the bytea as size.\n\nRegards,\n\nJoel\n\n\n", "msg_date": "Wed, 14 Aug 2024 14:34:06 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024 at 02:34:06PM +0200, Joel Jacobson wrote:\n> On Wed, Aug 14, 2024, at 13:31, Aleksander Alekseev wrote:\n> >> I wonder if get_bytes() and set_bytes() will behave differently\n> >> on little-endian vs big-endian systems?\n> > No, the returned value will not depend on the CPU endiness. Current\n> > implementation uses big-endian / network order which in my humble\n> > opinion is what most users would expect.\n> \n> Nice.\n\nIndeed!\n\n> I've reviewed and tested the patch.\n> It looks straight-forward to me.\n> I don't see any potential problems.\n> I've marked it Ready for Committer.\n> \n> > I believe we also need reverse(bytea) and repeat(bytea, integer)\n> > functions e.g. for those who want little-endian. However I want to\n> > propose them separately when we are done with this patch.\n> \n> I agree those functions would be nice too.\n> \n> I also think it would be nice to provide these convenience functions:\n> to_bytes(bigint) -> bytea\n> from_bytes(bytea) -> bigint\n\nAlong with these, would it make sense to have other forms of these\nthat won't choke at 63 bits, e.g. 
NUMERIC or TEXT?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\n\n", "msg_date": "Wed, 14 Aug 2024 16:43:51 +0200", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024, at 16:43, David Fetter wrote:\n>> I also think it would be nice to provide these convenience functions:\n>> to_bytes(bigint) -> bytea\n>> from_bytes(bytea) -> bigint\n>\n> Along with these, would it make sense to have other forms of these\n> that won't choke at 63 bits, e.g. NUMERIC or TEXT?\n\nI wonder what would be good names for such functions though?\n\nSince NUMERIC can have decimal digits, and since BYTEA can already be casted to\nTEXT, which will just be \\x followed by the hex digits, maybe such names should\ninclude \"integer\" or some other word, to indicate what is being returned?\n\nIt's already quite easy to convert to NUMERIC though,\nfor users who are aware of tricks like this:\nSELECT ('0x'||encode('\\xCAFEBABEDEADBEEF'::bytea,'hex'))::numeric;\n numeric\n----------------------\n 14627333968688430831\n(1 row)\n\nBut, I think it would be better to provide functions,\nsince many users probably have to google+stackoverflow or gpt\nto learn such tricks, which are not in the official documentation.\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 14 Aug 2024 17:39:32 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024 at 05:39:32PM +0200, Joel Jacobson wrote:\n> On Wed, Aug 14, 2024, at 16:43, David Fetter wrote:\n> >> I also think it would be nice to provide these convenience functions:\n> >> to_bytes(bigint) -> bytea\n> >> from_bytes(bytea) -> bigint\n> >\n> > Along with these, would it make sense to have other forms of these\n> > that won't choke at 63 bits, e.g. NUMERIC or TEXT?\n> \n> I wonder what would be good names for such functions though?\n\ndecimal_to_bytes(numeric), decimal_to_bytes(text) on one side\ndecimal_from_bytes(bytea, typeoid)\n\nNaming Things™ is one of the hard problems in computer science. Bad\njoke that includes cache coherency and off-by-one included by\nreference.\n\n> Since NUMERIC can have decimal digits, and since BYTEA can already be casted to\n> TEXT, which will just be \\x followed by the hex digits, maybe such names should\n> include \"integer\" or some other word, to indicate what is being returned?\n> \n> It's already quite easy to convert to NUMERIC though,\n> for users who are aware of tricks like this:\n> SELECT ('0x'||encode('\\xCAFEBABEDEADBEEF'::bytea,'hex'))::numeric;\n> numeric\n> ----------------------\n> 14627333968688430831\n> (1 row)\n> \n> But, I think it would be better to provide functions,\n> since many users probably have to google+stackoverflow or gpt\n> to learn such tricks, which are not in the official documentation.\n\nAs usual, I see \"official documentation lacks helpful and/or\nnon-obvious examples\" as a problem best approached by making good the\nlack. I am aware that my ideas about pedagogy, documentation, etc. 
are\nnot shared universally, but they're widely shared by people whose main\ninteraction with documents is trying to get help from them.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\n\n", "msg_date": "Wed, 14 Aug 2024 18:31:44 +0200", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024, at 18:31, David Fetter wrote:\n> On Wed, Aug 14, 2024 at 05:39:32PM +0200, Joel Jacobson wrote:\n>> On Wed, Aug 14, 2024, at 16:43, David Fetter wrote:\n>> >> I also think it would be nice to provide these convenience functions:\n>> >> to_bytes(bigint) -> bytea\n>> >> from_bytes(bytea) -> bigint\n>> >\n>> > Along with these, would it make sense to have other forms of these\n>> > that won't choke at 63 bits, e.g. NUMERIC or TEXT?\n>> \n>> I wonder what would be good names for such functions though?\n>\n> decimal_to_bytes(numeric), decimal_to_bytes(text) on one side\n> decimal_from_bytes(bytea, typeoid)\n\nI assume decimal_to_bytes() will only accept integer numerics,\nthat is, that don't have a decimal digits part?\nHmm, it's perhaps then a bit counter intuitive that the name\ncontains \"decimal\", since some people might associate the word\n\"decimal\" stronger with \"decimal digits\" rather than the radix/base 10.\n\nWhat do we want to happen if passing a numeric with decimal digits,\nto decimal_to_bytes()? It must be an error, right?\n\nExample: SELECT decimal_to_bytes(1.23);\n\n> Naming Things™ is one of the hard problems in computer science. Bad\n> joke that includes cache coherency and off-by-one included by\n> reference.\n\nSo true :)\n\n> As usual, I see \"official documentation lacks helpful and/or\n> non-obvious examples\" as a problem best approached by making good the\n> lack. I am aware that my ideas about pedagogy, documentation, etc. are\n> not shared universally, but they're widely shared by people whose main\n> interaction with documents is trying to get help from them.\n\nWell spoken.\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 14 Aug 2024 19:25:51 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Wed, Aug 14, 2024, at 19:25, Joel Jacobson wrote:\n> What do we want to happen if passing a numeric with decimal digits,\n> to decimal_to_bytes()? 
It must be an error, right?\n>\n> Example: SELECT decimal_to_bytes(1.23);\n\nHmm, an error feels quite ugly on second thought.\nWould be nicer if all numerics could be represented,\nin a bytea representation that is meaningful to other systems.\n\nI think we need a tuple somehow.\n\nExample:\n\nSELECT numeric_to_bytes(223195403574957);\n(\\xcafebabedeadbeef,0,false)\n\nSELECT numeric_to_bytes(-223195403574957);\n(\\xcafebabedeadbeef,0,true)\n\nSELECT numeric_to_bytes(2231954035749.57);\n(\\xcafebabedeadbeef,2,false)\n\nSELECT numeric_from_bytes('\\xcafebabedeadbeef'::bytea,0,false);\n223195403574957\n\nSELECT numeric_from_bytes('\\xcafebabedeadbeef'::bytea,0,true);\n-223195403574957\n\nSELECT numeric_from_bytes('\\xcafebabedeadbeef'::bytea,2,false);\n2231954035749.57\n\nBut then what about Inf,-Inf,NaN?\n\nRegards,\nJoel\n\n\n", "msg_date": "Thu, 15 Aug 2024 06:19:36 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Thu, 15 Aug 2024 at 05:20, Joel Jacobson <[email protected]> wrote:\n>\n> On Wed, Aug 14, 2024, at 19:25, Joel Jacobson wrote:\n> > What do we want to happen if passing a numeric with decimal digits,\n> > to decimal_to_bytes()? It must be an error, right?\n> >\n> > Example: SELECT decimal_to_bytes(1.23);\n>\n> Hmm, an error feels quite ugly on second thought.\n> Would be nicer if all numerics could be represented,\n>\n> But then what about Inf,-Inf,NaN?\n>\n\nPerhaps we should also add casts between bytea and the integer/numeric\ntypes. That might be easier to use than functions in some\ncircumstances.\n\nWhen casting a numeric to an integer, the result is rounded to the\nnearest integer, and NaN/Inf generate errors, so we should probably do\nthe same here.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 15 Aug 2024 11:14:55 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "Hi,\n\n> Perhaps we should also add casts between bytea and the integer/numeric\n> types. That might be easier to use than functions in some\n> circumstances.\n>\n> When casting a numeric to an integer, the result is rounded to the\n> nearest integer, and NaN/Inf generate errors, so we should probably do\n> the same here.\n\nYes, I was also thinking about adding NUMERIC versions of get_bytes()\n/ set_bytes(). This would allow converting more than 8 bytes to/from\nan integer. I dropped this idea because I thought there would be not\nmuch practical use for it. On the flip side you never know who uses\nPostgres and for what purpose.\n\nI will add corresponding casts unless the idea will get a push-back\nfrom the community. IMO the existence of these casts will at least not\nmake things worse.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 15 Aug 2024 13:58:03 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Thu, 15 Aug 2024 13:58:03 +0300\nAleksander Alekseev <[email protected]> wrote:\n\n> Hi,\n> \n> > Perhaps we should also add casts between bytea and the integer/numeric\n> > types. 
That might be easier to use than functions in some\n> > circumstances.\n> >\n> > When casting a numeric to an integer, the result is rounded to the\n> > nearest integer, and NaN/Inf generate errors, so we should probably do\n> > the same here.\n> \n> Yes, I was also thinking about adding NUMERIC versions of get_bytes()\n> / set_bytes(). This would allow converting more than 8 bytes to/from\n> an integer. I dropped this idea because I thought there would be not\n> much practical use for it. On the flip side you never know who uses\n> Postgres and for what purpose.\n> \n> I will add corresponding casts unless the idea will get a push-back\n> from the community. IMO the existence of these casts will at least not\n> make things worse.\n\nWhen we add such casts between bytea and the integer/numeric types,\none of the problems mentioned the first of the thread, that is, \n\"we don't have a convenient way of casting a bytea to an integer / bigint\nand vice versa\", would seem be resolved. \n\nOn the other hand, I suppose get_bytes() and set_bytes() are still useful\nfor extracting bytes from byteas, etc. If casting is no longer the main\npurpose of these functions, are variations that get_bytes returns bytea\ninstead of bigint, and set_bytes receives bytea as the newvalue argument\nuseful? I wonder it would eliminate the restrict that size cannot be larger\nthan 8.\n\n\nHere are my very trivial comments on the patch.\n\n+ * this routine treats \"bytea\" as an array of bytes.\n\nMaybe, the sentence should start with \"This ... \".\n\n+\twhile(size)\n+\t{\n\nI wonder inserting a space after \"while\" is the standard style.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Fri, 16 Aug 2024 16:49:36 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "Hi,\n\n> When we add such casts between bytea and the integer/numeric types,\n> one of the problems mentioned the first of the thread, that is,\n> \"we don't have a convenient way of casting a bytea to an integer / bigint\n> and vice versa\", would seem be resolved.\n>\n> On the other hand, I suppose get_bytes() and set_bytes() are still useful\n> for extracting bytes from byteas, etc. If casting is no longer the main\n> purpose of these functions, are variations that get_bytes returns bytea\n> instead of bigint, and set_bytes receives bytea as the newvalue argument\n> useful? I wonder it would eliminate the restrict that size cannot be larger\n> than 8.\n\nNo, casting between bytea and numeric will not replace get_bytes() /\nset_bytes() for performance reasons.\n\nConsider the case when you want to extract an int4 from a bytea.\nget_bytes() is going to be very fast while substr() -> ::numeric ->\n::integer chain will require unnecessary copying and conversions.\nCasting between bytea and numeric is only useful when one has to deal\nwith integers larger than 8 bytes. Whether this happens often is a\ndebatable question.\n\n> Here are my very trivial comments on the patch.\n>\n> + * this routine treats \"bytea\" as an array of bytes.\n>\n> Maybe, the sentence should start with \"This ... 
\".\n>\n> + while(size)\n> + {\n>\n> I wonder inserting a space after \"while\" is the standard style.\n\nThanks, fixed.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 16 Aug 2024 11:41:37 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On 14.08.24 13:01, Aleksander Alekseev wrote:\n> The proposed patch adds get_bytes() and set_bytes() functions. The\n> semantics is similar to get_byte() and set_byte() we already have but\n> the functions operate with bigints rather than bytes and the user can\n> specify the size of the integer. This allows working with int2s,\n> int4s, int8s or even int5s if needed.\n> \n> Examples:\n> \n> ```\n> SELECT get_bytes('\\x1122334455667788'::bytea, 1, 2) = 0x2233;\n> ?column?\n> ----------\n> t\n> \n> SELECT set_bytes('\\x1122334455667788'::bytea, 1, 2, 0xAABB);\n> set_bytes\n> --------------------\n> \\x11aabb4455667788\n> ```\n\nI think these functions do about three things at once, and I don't think \nthey address the originally requested purpose very well.\n\nConverting between integers and byte arrays of matching size seems like \nreasonable functionality. (You can already do one half of that by \ncalling int2send(), int4send(), and int8send(), but the other direction \n(intXrecv()) is not user-callable).\n\nThe other things are extracting that byte array from a larger byte array \nand sticking it back into a larger byte array; those seem like separate \noperations. There is already substr() for bytea for the first part, and \nthere might be another string-like operationg for the second part, or \nmaybe we could add one.\n\n\n", "msg_date": "Fri, 16 Aug 2024 12:11:18 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On Fri, 16 Aug 2024 11:41:37 +0300\nAleksander Alekseev <[email protected]> wrote:\n\n> Hi,\n> \n> > When we add such casts between bytea and the integer/numeric types,\n> > one of the problems mentioned the first of the thread, that is,\n> > \"we don't have a convenient way of casting a bytea to an integer / bigint\n> > and vice versa\", would seem be resolved.\n> >\n> > On the other hand, I suppose get_bytes() and set_bytes() are still useful\n> > for extracting bytes from byteas, etc. If casting is no longer the main\n> > purpose of these functions, are variations that get_bytes returns bytea\n> > instead of bigint, and set_bytes receives bytea as the newvalue argument\n> > useful? I wonder it would eliminate the restrict that size cannot be larger\n> > than 8.\n> \n> No, casting between bytea and numeric will not replace get_bytes() /\n> set_bytes() for performance reasons.\n> \n> Consider the case when you want to extract an int4 from a bytea.\n> get_bytes() is going to be very fast while substr() -> ::numeric ->\n> ::integer chain will require unnecessary copying and conversions.\n> Casting between bytea and numeric is only useful when one has to deal\n> with integers larger than 8 bytes. Whether this happens often is a\n> debatable question.\n\nThank you for explanation. I understood the performance drawback.\n\nI supposed interfaces similar to lo_get, lo_put, loread, lowrite of\nlarge objects since they might be useful to access or modify a part of\nbytea like a binary file read by pg_read_binary_file. 
\n\n> \n> > Here are my very trivial comments on the patch.\n> >\n> > + * this routine treats \"bytea\" as an array of bytes.\n> >\n> > Maybe, the sentence should start with \"This ... \".\n> >\n> > + while(size)\n> > + {\n> >\n> > I wonder inserting a space after \"while\" is the standard style.\n> \n> Thanks, fixed.\n\nShould we fix the comment on byteaGetByte in passing, too?\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Fri, 16 Aug 2024 19:32:24 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "Hi Peter,\n\nThanks for the feedback.\n\n> I think these functions do about three things at once, and I don't think\n> they address the originally requested purpose very well.\n\nThe amount of things that the function does is a matter of\ninterpretation. I can claim that it does one thing (\"extracts an\ninteger from a bytea\"), or as many things as there are lines of code.\nIMO the actual question is whether this is a good user interface or\nnot. Since we already have get_byte() / set_byte() the interface is\narguably OK.\n\n> Converting between integers and byte arrays of matching size seems like\n> reasonable functionality. (You can already do one half of that by\n> calling int2send(), int4send(), and int8send(), but the other direction\n> (intXrecv()) is not user-callable).\n>\n> The other things are extracting that byte array from a larger byte array\n> and sticking it back into a larger byte array; those seem like separate\n> operations. There is already substr() for bytea for the first part, and\n> there might be another string-like operationg for the second part, or\n> maybe we could add one.\n\nIf I understand correctly, you propose doing (1):\n\n```\nSELECT substr('\\x1122334455667788'::bytea, 2, 2) :: int2;\n```\n\n... instead of:\n\n```\nSELECT get_bytes('\\x1122334455667788'::bytea, 1, 2)\n```\n\n... and (2):\n\n```\nWITH vals AS (\n SELECT '\\x1122334455667788'::bytea AS x\n) SELECT substr(x, 1, 1) || int2send(1234::int2) || substr(x, 4, 5) FROM vals;\n```\n\n... instead of:\n\n```\nSELECT set_bytes('\\x1122334455667788'::bytea, 1, 2, 0xAABB);\n```\n\nThere is nothing to do for query (2), it already works. It's not much\nbetter than the query from my first email though.\n\nTo clarify, supporting bytea<->integer (and/or bytea<->numeric) casts\ndoesn't strike me as a terrible idea but it doesn't address the issue\nI'm proposing to solve.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 19 Aug 2024 17:10:15 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "On 19.08.24 16:10, Aleksander Alekseev wrote:\n> To clarify, supporting bytea<->integer (and/or bytea<->numeric) casts\n> doesn't strike me as a terrible idea but it doesn't address the issue\n> I'm proposing to solve.\n\nWhat is the issue you are proposing to solve?\n\nYou linked to a couple of threads and stackoverflow pages, and you \nconcluded from that that we should add get_bytes() and set_bytes() \nfunctions. 
It's not obvious how you get from the former to the latter, \nand I don't think the functions you propose are well-designed in isolation.\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 18:39:21 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" }, { "msg_contents": "Hi,\n\n> On 19.08.24 16:10, Aleksander Alekseev wrote:\n> > To clarify, supporting bytea<->integer (and/or bytea<->numeric) casts\n> > doesn't strike me as a terrible idea but it doesn't address the issue\n> > I'm proposing to solve.\n>\n> What is the issue you are proposing to solve?\n>\n> You linked to a couple of threads and stackoverflow pages, and you\n> concluded from that that we should add get_bytes() and set_bytes()\n> functions. It's not obvious how you get from the former to the latter,\n> and I don't think the functions you propose are well-designed in isolation.\n\nI guess there are in fact two problems, not one.\n\n1. Converting between bytea and integer types\n2. Multibyte versions of get_byte() / set_byte()\n\nAs you rightly pointed out, for (1) we just need to add missing casts.\nHere is the corresponding patch, v3-0001. Note that I couldn't re-use\nint{2,4,8}recv because its first argument is StringInfo, so I ended up\nimplementing my own bytea->int{2,4,8} functions.\n\nI think there may be value in (2) as well. It's implemented in v3-0002\nand I did my best to clarify the commit message. On the flip side the\nsituation when one wants something like extracting int4 from a\nbytea(or vice versa) and is not happy with convenience and/or\nperformance of substr()+casts is arguably rare. I'll be fine with\nwhatever consensus the community reaches about this patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 26 Aug 2024 14:31:37 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add get_bytes() and set_bytes() functions" } ]
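Editor's note on the thread above: the semantics described for get_bytes()/set_bytes() — big-endian (network) byte order, byte offsets, integer widths of 1..8 bytes — can be illustrated with a small standalone C sketch. The helper names get_bytes_be()/set_bytes_be() and the demo program are invented for this illustration; this is not the C code from the proposed patch, only a restatement of the behaviour shown in the thread's SQL examples.

```
#include <stdint.h>
#include <stdio.h>

/* Read `size` bytes (1..8) starting at `offset` as a big-endian integer. */
static uint64_t
get_bytes_be(const unsigned char *buf, size_t offset, int size)
{
	uint64_t	result = 0;

	for (int i = 0; i < size; i++)
		result = (result << 8) | buf[offset + i];
	return result;
}

/* Write the low `size` bytes of `value` at `offset`, most significant byte first. */
static void
set_bytes_be(unsigned char *buf, size_t offset, int size, uint64_t value)
{
	for (int i = size - 1; i >= 0; i--)
	{
		buf[offset + i] = (unsigned char) (value & 0xFF);
		value >>= 8;
	}
}

int
main(void)
{
	unsigned char data[] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};

	/* Mirrors the thread's example: bytes 1..2 of \x1122334455667788 are 0x2233. */
	printf("get: 0x%llx\n", (unsigned long long) get_bytes_be(data, 1, 2));

	/* Mirrors the thread's example: overwrite those two bytes with 0xAABB. */
	set_bytes_be(data, 1, 2, 0xAABB);
	for (int i = 0; i < 8; i++)
		printf("%02x", data[i]);
	printf("\n");
	return 0;
}
```

Running this prints 0x2233 and 11aabb4455667788, matching the expected results quoted in the first message of the thread; the big-endian loop is also why the result does not depend on the host CPU's endianness.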
[ { "msg_contents": "Hi hackers,\n\nwhile working on a replication slot tool (idea is to put it in contrib, not\nshared yet), I realized that \"pg_replslot\" is being used > 25 times in\n.c files.\n\nI think it would make sense to replace those occurrences with a $SUBJECT, attached\na patch doing so.\n\nThere is 2 places where it is not done:\n\nsrc/bin/initdb/initdb.c\nsrc/bin/pg_rewind/filemap.c\n\nfor consistency with the existing PG_STAT_TMP_DIR define.\n\nOut of curiosity, checking the sizes of affected files (O2, no debug):\n\nwith patch:\n\n text data bss dec hex filename\n 20315 224 17 20556 504c src/backend/backup/basebackup.o\n 30304 0 8 30312 7668 src/backend/replication/logical/reorderbuffer.o\n 23812 36 40 23888 5d50 src/backend/replication/slot.o\n 6367 0 0 6367 18df src/backend/utils/adt/genfile.o\n 40997 2574 2528 46099 b413 src/bin/initdb/initdb.o\n 6963 224 8 7195 1c1b src/bin/pg_rewind/filemap.o\n\nwithout patch:\n\n text data bss dec hex filename\n 20315 224 17 20556 504c src/backend/backup/basebackup.o\n 30286 0 8 30294 7656 src/backend/replication/logical/reorderbuffer.o\n 23766 36 40 23842 5d22 src/backend/replication/slot.o\n 6363 0 0 6363 18db src/backend/utils/adt/genfile.o\n 40997 2574 2528 46099 b413 src/bin/initdb/initdb.o\n 6963 224 8 7195 1c1b src/bin/pg_rewind/filemap.o\n\nAlso, I think we could do the same for:\n\npg_notify\npg_serial\npg_subtrans\npg_wal\npg_multixact\npg_tblspc\npg_logical\n\nAnd I volunteer to do so if you think that makes sense.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 14 Aug 2024 11:32:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "define PG_REPLSLOT_DIR" }, { "msg_contents": "On 2024-Aug-14, Bertrand Drouvot wrote:\n\n> Out of curiosity, checking the sizes of affected files (O2, no debug):\n> \n> with patch:\n> \n> text data bss dec hex filename\n> 30304 0 8 30312 7668 src/backend/replication/logical/reorderbuffer.o\n\n> without patch:\n> 30286 0 8 30294 7656 src/backend/replication/logical/reorderbuffer.o\n\nHmm, I suppose this delta can be explained because because the literal\nstring is used inside larger snprintf() format strings or similar, so\nthings like\n\n snprintf(path, sizeof(path),\n- \"pg_replslot/%s/%s\", slotname,\n+ \"%s/%s/%s\", PG_REPLSLOT_DIR, slotname,\n spill_de->d_name);\n\nand\n\n ereport(ERROR,\n (errcode_for_file_access(),\n- errmsg(\"could not remove file \\\"%s\\\" during removal of pg_replslot/%s/xid*: %m\",\n- path, slotname)));\n+ errmsg(\"could not remove file \\\"%s\\\" during removal of %s/%s/xid*: %m\",\n+ PG_REPLSLOT_DIR, path, slotname)));\n\nI don't disagree with making this change, but changing some of the other\ndirectory names you suggest, such as\n\n> pg_notify\n> pg_serial\n> pg_subtrans\n> pg_multixact\n> pg_wal\n\nwould probably make no difference, since there are no literal strings\nthat contain that as a substring(*). It might some sense to handle\npg_tblspc similarly. 
As for pg_logical, I think you'll want separate\ndefines for pg_logical/mappings, pg_logical/snapshots and so on.\n\n\n(*) I hope you're not going to suggest this kind of change (word-diff):\n\n if (GetOldestSnapshot())\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"[-pg_wal-]{+%s+}_replay_wait() must be only called without an active or registered snapshot\"{+, PG_WAL_DIR+}),\n errdetail(\"Make sure [-pg_wal-]{+%s+}_replay_wait() isn't called within a transaction with an isolation level higher than READ COMMITTED, another procedure, or a function.\"{+, PG_WAL_DIR+})));\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)\n\n\n", "msg_date": "Thu, 15 Aug 2024 20:40:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Wed, 14 Aug 2024 11:32:14 +0000\nBertrand Drouvot <[email protected]> wrote:\n\n> Hi hackers,\n> \n> while working on a replication slot tool (idea is to put it in contrib, not\n> shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> .c files.\n> \n> I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> a patch doing so.\n> \n> There is 2 places where it is not done:\n> \n> src/bin/initdb/initdb.c\n> src/bin/pg_rewind/filemap.c\n> \n> for consistency with the existing PG_STAT_TMP_DIR define.\n> \n> Out of curiosity, checking the sizes of affected files (O2, no debug):\n> \n> with patch:\n> \n> text data bss dec hex filename\n> 20315 224 17 20556 504c src/backend/backup/basebackup.o\n> 30304 0 8 30312 7668 src/backend/replication/logical/reorderbuffer.o\n> 23812 36 40 23888 5d50 src/backend/replication/slot.o\n> 6367 0 0 6367 18df src/backend/utils/adt/genfile.o\n> 40997 2574 2528 46099 b413 src/bin/initdb/initdb.o\n> 6963 224 8 7195 1c1b src/bin/pg_rewind/filemap.o\n> \n> without patch:\n> \n> text data bss dec hex filename\n> 20315 224 17 20556 504c src/backend/backup/basebackup.o\n> 30286 0 8 30294 7656 src/backend/replication/logical/reorderbuffer.o\n> 23766 36 40 23842 5d22 src/backend/replication/slot.o\n> 6363 0 0 6363 18db src/backend/utils/adt/genfile.o\n> 40997 2574 2528 46099 b413 src/bin/initdb/initdb.o\n> 6963 224 8 7195 1c1b src/bin/pg_rewind/filemap.o\n> \n> Also, I think we could do the same for:\n> \n> pg_notify\n> pg_serial\n> pg_subtrans\n> pg_wal\n> pg_multixact\n> pg_tblspc\n> pg_logical\n> \n> And I volunteer to do so if you think that makes sense.\n> \n> Looking forward to your feedback,\n\n /* restore all slots by iterating over all on-disk entries */\n- replication_dir = AllocateDir(\"pg_replslot\");\n- while ((replication_de = ReadDir(replication_dir, \"pg_replslot\")) != NULL)\n+ replication_dir = AllocateDir(PG_REPLSLOT_DIR);\n+ while ((replication_de = ReadDir(replication_dir, PG_REPLSLOT_DIR)) != NULL)\n {\n char path[MAXPGPATH + 12];\n PGFileType de_type;\n\nI think the size of path can be rewritten to \"MAXPGPATH + sizeof(PG_REPLSLOT_DIR)\" \nand it seems easier to understand the reason why this size is used. 
\n(That said, I wonder the path would never longer than MAXPGPATH...)\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> \n> -- \n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:31:11 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Wed, Aug 14, 2024 at 5:02 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> while working on a replication slot tool (idea is to put it in contrib, not\n> shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> .c files.\n>\n> I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> a patch doing so.\n\nMany of these places are slot specific directory/file names within\npg_replslot. I think we can further improve the code by creating macro\non the lines of LSN_FORMAT_ARGS\n#define SLOT_DIRNAME_ARGS(slotname) (PG_REPLSLOT_DIR, slotname)\nthis way we \"codify\" method to construct the slot directory name\neverywhere. We may add another macro\n#define SLOT_DIRNAME_FORMAT \"%s/%s\" to further enforce the same. But\nI didn't find similar LSN_FORMAT macro defined as \"%X/%X\". I don't\nremember exactly why we didn't add it. May be because of trouble with\ntranslations.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Aug 2024 16:11:31 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 15, 2024 at 08:40:42PM -0400, Alvaro Herrera wrote:\n> On 2024-Aug-14, Bertrand Drouvot wrote:\n> \n> > Out of curiosity, checking the sizes of affected files (O2, no debug):\n> > \n> > with patch:\n> > \n> > text data bss dec hex filename\n> > 30304 0 8 30312 7668 src/backend/replication/logical/reorderbuffer.o\n> \n> > without patch:\n> > 30286 0 8 30294 7656 src/backend/replication/logical/reorderbuffer.o\n> \n> Hmm, I suppose this delta can be explained because because the literal\n> string is used inside larger snprintf() format strings or similar, so\n> things like\n> \n> snprintf(path, sizeof(path),\n> - \"pg_replslot/%s/%s\", slotname,\n> + \"%s/%s/%s\", PG_REPLSLOT_DIR, slotname,\n> spill_de->d_name);\n> \n> and\n> \n> ereport(ERROR,\n> (errcode_for_file_access(),\n> - errmsg(\"could not remove file \\\"%s\\\" during removal of pg_replslot/%s/xid*: %m\",\n> - path, slotname)));\n> + errmsg(\"could not remove file \\\"%s\\\" during removal of %s/%s/xid*: %m\",\n> + PG_REPLSLOT_DIR, path, slotname)));\n> \n\nI did not look in details, but yeah that could probably be explained that way.\n\n> I don't disagree with making this change, but changing some of the other\n> directory names you suggest, such as\n> \n> > pg_notify\n> > pg_serial\n> > pg_subtrans\n> > pg_multixact\n> > pg_wal\n> \n> would probably make no difference, since there are no literal strings\n> that contain that as a substring(*). It might some sense to handle\n> pg_tblspc similarly. 
As for pg_logical, I think you'll want separate\n> defines for pg_logical/mappings, pg_logical/snapshots and so on.\n> \n> \n> (*) I hope you're not going to suggest this kind of change (word-diff):\n> \n> if (GetOldestSnapshot())\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"[-pg_wal-]{+%s+}_replay_wait() must be only called without an active or registered snapshot\"{+, PG_WAL_DIR+}),\n> errdetail(\"Make sure [-pg_wal-]{+%s+}_replay_wait() isn't called within a transaction with an isolation level higher than READ COMMITTED, another procedure, or a function.\"{+, PG_WAL_DIR+})));\n> \n\nYeah, would not cross my mind ;-). I might have been tempted to do the change\nin pg_combinebackup.c though.\n\nHaving said that, I agree to focus only on:\n\npg_replslot\npg_tblspc\npg_logical/*\n\nI made the changes for pg_tblspc in pg_combinebackup.c as the number of occurences\nare greater that the \"pg_wal\" ones and we were to define PG_TBLSPC_DIR in any\ncase.\n\nPlease find attached the related patches.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Aug 2024 14:11:55 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 16, 2024 at 01:31:11PM +0900, Yugo Nagata wrote:\n> On Wed, 14 Aug 2024 11:32:14 +0000\n> Bertrand Drouvot <[email protected]> wrote:\n> > Looking forward to your feedback,\n> \n> /* restore all slots by iterating over all on-disk entries */\n> - replication_dir = AllocateDir(\"pg_replslot\");\n> - while ((replication_de = ReadDir(replication_dir, \"pg_replslot\")) != NULL)\n> + replication_dir = AllocateDir(PG_REPLSLOT_DIR);\n> + while ((replication_de = ReadDir(replication_dir, PG_REPLSLOT_DIR)) != NULL)\n> {\n> char path[MAXPGPATH + 12];\n> PGFileType de_type;\n> \n> I think the size of path can be rewritten to \"MAXPGPATH + sizeof(PG_REPLSLOT_DIR)\" \n> and it seems easier to understand the reason why this size is used. \n\nThanks for the feedback!\n\nYeah, fully agree, it has been done that way in v2 that I just shared up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:13:37 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 19, 2024 at 04:11:31PM +0530, Ashutosh Bapat wrote:\n> On Wed, Aug 14, 2024 at 5:02 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > while working on a replication slot tool (idea is to put it in contrib, not\n> > shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> > .c files.\n> >\n> > I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> > a patch doing so.\n> \n> Many of these places are slot specific directory/file names within\n> pg_replslot. I think we can further improve the code by creating macro\n> on the lines of LSN_FORMAT_ARGS\n> #define SLOT_DIRNAME_ARGS(slotname) (PG_REPLSLOT_DIR, slotname)\n> this way we \"codify\" method to construct the slot directory name\n> everywhere.\n\nThanks for the feedback!\n\nI think that could make sense. 
As the already proposed mechanical changes are\nerror prone (from my point of view), I would suggest to have a look at your\nproposal once the proposed changes go in.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:17:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Mon, Aug 19, 2024 at 7:47 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Aug 19, 2024 at 04:11:31PM +0530, Ashutosh Bapat wrote:\n> > On Wed, Aug 14, 2024 at 5:02 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > Hi hackers,\n> > >\n> > > while working on a replication slot tool (idea is to put it in contrib, not\n> > > shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> > > .c files.\n> > >\n> > > I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> > > a patch doing so.\n> >\n> > Many of these places are slot specific directory/file names within\n> > pg_replslot. I think we can further improve the code by creating macro\n> > on the lines of LSN_FORMAT_ARGS\n> > #define SLOT_DIRNAME_ARGS(slotname) (PG_REPLSLOT_DIR, slotname)\n> > this way we \"codify\" method to construct the slot directory name\n> > everywhere.\n>\n> Thanks for the feedback!\n>\n> I think that could make sense. As the already proposed mechanical changes are\n> error prone (from my point of view), I would suggest to have a look at your\n> proposal once the proposed changes go in.\n\nWhat do you mean by error prone?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 20 Aug 2024 09:26:59 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 09:26:59AM +0530, Ashutosh Bapat wrote:\n> On Mon, Aug 19, 2024 at 7:47 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Mon, Aug 19, 2024 at 04:11:31PM +0530, Ashutosh Bapat wrote:\n> > > On Wed, Aug 14, 2024 at 5:02 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > Hi hackers,\n> > > >\n> > > > while working on a replication slot tool (idea is to put it in contrib, not\n> > > > shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> > > > .c files.\n> > > >\n> > > > I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> > > > a patch doing so.\n> > >\n> > > Many of these places are slot specific directory/file names within\n> > > pg_replslot. I think we can further improve the code by creating macro\n> > > on the lines of LSN_FORMAT_ARGS\n> > > #define SLOT_DIRNAME_ARGS(slotname) (PG_REPLSLOT_DIR, slotname)\n> > > this way we \"codify\" method to construct the slot directory name\n> > > everywhere.\n> >\n> > Thanks for the feedback!\n> >\n> > I think that could make sense. As the already proposed mechanical changes are\n> > error prone (from my point of view), I would suggest to have a look at your\n> > proposal once the proposed changes go in.\n> \n> What do you mean by error prone?\n\nI meant to say that it is \"easy\" to make mistakes when doing those manual\nmechanical changes. Also I think it's not that fun/easy to review, that's why\nI think it's better to do one change at a time. 
Does that make sense to you?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 05:19:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Tue, Aug 20, 2024 at 10:49 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Aug 20, 2024 at 09:26:59AM +0530, Ashutosh Bapat wrote:\n> > On Mon, Aug 19, 2024 at 7:47 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On Mon, Aug 19, 2024 at 04:11:31PM +0530, Ashutosh Bapat wrote:\n> > > > On Wed, Aug 14, 2024 at 5:02 PM Bertrand Drouvot\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > Hi hackers,\n> > > > >\n> > > > > while working on a replication slot tool (idea is to put it in contrib, not\n> > > > > shared yet), I realized that \"pg_replslot\" is being used > 25 times in\n> > > > > .c files.\n> > > > >\n> > > > > I think it would make sense to replace those occurrences with a $SUBJECT, attached\n> > > > > a patch doing so.\n> > > >\n> > > > Many of these places are slot specific directory/file names within\n> > > > pg_replslot. I think we can further improve the code by creating macro\n> > > > on the lines of LSN_FORMAT_ARGS\n> > > > #define SLOT_DIRNAME_ARGS(slotname) (PG_REPLSLOT_DIR, slotname)\n> > > > this way we \"codify\" method to construct the slot directory name\n> > > > everywhere.\n> > >\n> > > Thanks for the feedback!\n> > >\n> > > I think that could make sense. As the already proposed mechanical changes are\n> > > error prone (from my point of view), I would suggest to have a look at your\n> > > proposal once the proposed changes go in.\n> >\n> > What do you mean by error prone?\n>\n> I meant to say that it is \"easy\" to make mistakes when doing those manual\n> mechanical changes. Also I think it's not that fun/easy to review, that's why\n> I think it's better to do one change at a time. Does that make sense to you?\n\nSince these are all related changes, doing them at once might make it\nfaster. You may use multiple commits (one for each change) to\n\"contain\" errors.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:10:46 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Tue, Aug 20, 2024 at 11:10:46AM +0530, Ashutosh Bapat wrote:\n> Since these are all related changes, doing them at once might make it\n> faster. You may use multiple commits (one for each change)\n\nDoing multiple commits with individual definitions for each path would\nbe the way to go for me. 
All that is mechanical, still that feels\nslightly cleaner.\n\n> to \"contain\" errors.\n\nI am not sure what you mean by that.\n--\nMichael", "msg_date": "Tue, 20 Aug 2024 17:41:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Mon, Aug 19, 2024 at 02:11:55PM +0000, Bertrand Drouvot wrote:\n> I made the changes for pg_tblspc in pg_combinebackup.c as the number of occurences\n> are greater that the \"pg_wal\" ones and we were to define PG_TBLSPC_DIR in any\n> case.\n> \n> Please find attached the related patches.\n\nNo real objection about the replslot and pg_logical bits.\n\n- * $PGDATA/pg_tblspc/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n+ * $PGDATA/PG_TBLSPC_DIR/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n\nFor the tablespace parts, I am not sure that I would update the\ncomments to reflect the variables, TBH. Somebody reading the comments\nwould need to refer back to pg_tblspc/ in the header.\n--\nMichael", "msg_date": "Tue, 20 Aug 2024 17:47:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 05:41:48PM +0900, Michael Paquier wrote:\n> On Tue, Aug 20, 2024 at 11:10:46AM +0530, Ashutosh Bapat wrote:\n> > Since these are all related changes, doing them at once might make it\n> > faster. You may use multiple commits (one for each change)\n> \n> Doing multiple commits with individual definitions for each path would\n> be the way to go for me. All that is mechanical, still that feels\n> slightly cleaner.\n\nRight, that's what v2 has done. If there is a need for v3 then I'll add one\ndedicated patch for Ashutosh's proposal in the v3 series.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:06:52 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 05:47:57PM +0900, Michael Paquier wrote:\n> On Mon, Aug 19, 2024 at 02:11:55PM +0000, Bertrand Drouvot wrote:\n> > I made the changes for pg_tblspc in pg_combinebackup.c as the number of occurences\n> > are greater that the \"pg_wal\" ones and we were to define PG_TBLSPC_DIR in any\n> > case.\n> > \n> > Please find attached the related patches.\n> \n> No real objection about the replslot and pg_logical bits.\n\nThanks for looking at it!\n\n> \n> - * $PGDATA/pg_tblspc/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n> + * $PGDATA/PG_TBLSPC_DIR/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n> \n> For the tablespace parts, I am not sure that I would update the\n> comments to reflect the variables, TBH. Somebody reading the comments\n> would need to refer back to pg_tblspc/ in the header.\n\nI'm not sure as, for example, for PG_STAT_TMP_DIR we have those ones:\n\nsrc/backend/backup/basebackup.c: * Skip temporary statistics files. PG_STAT_TMP_DIR must be skipped\nsrc/bin/pg_rewind/filemap.c: * Skip temporary statistics files. 
PG_STAT_TMP_DIR must be skipped\n\nso I thought it would be better to be consistent.\n\nThat said, I don't have a strong opinion about it, but then I guess you'd want to\ndo the same for the ones related to replslot:\n\nsrc/backend/replication/slot.c: * Each replication slot gets its own directory inside the $PGDATA/PG_REPLSLOT_DIR\nsrc/backend/utils/adt/genfile.c: * Function to return the list of files in the PG_REPLSLOT_DIR/<replication_slot>\n\nand pg_logical:\n\nsrc/backend/utils/adt/genfile.c: * Function to return the list of files in the PG_LOGICAL_SNAPSHOTS_DIR directory.\nsrc/backend/utils/adt/genfile.c: * Function to return the list of files in the PG_LOGICAL_MAPPINGS_DIR directory.\n\n, right?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:16:10 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Tue, 20 Aug 2024 17:47:57 +0900\nMichael Paquier <[email protected]> wrote:\n\n> On Mon, Aug 19, 2024 at 02:11:55PM +0000, Bertrand Drouvot wrote:\n> > I made the changes for pg_tblspc in pg_combinebackup.c as the number of occurences\n> > are greater that the \"pg_wal\" ones and we were to define PG_TBLSPC_DIR in any\n> > case.\n> > \n> > Please find attached the related patches.\n> \n> No real objection about the replslot and pg_logical bits.\n> \n> - * $PGDATA/pg_tblspc/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n> + * $PGDATA/PG_TBLSPC_DIR/spcoid/PG_MAJORVER_CATVER/dboid/relfilenumber\n> \n> For the tablespace parts, I am not sure that I would update the\n> comments to reflect the variables, TBH. Somebody reading the comments\n> would need to refer back to pg_tblspc/ in the header.\n\nI also think that it is not necessary to change the comments even for pg_replslot.\n\n- * Each replication slot gets its own directory inside the $PGDATA/pg_replslot\n+ * Each replication slot gets its own directory inside the $PGDATA/PG_REPLSLOT_DIR\n\nFor example, I found that comments in xlog.c use \"pg_wal\" even though XLOGDIR is used\nin the codes as below, and I don't feel any problem for this.\n\n> static void \n> ValidateXLOGDirectoryStructure(void)\n> {\n> char path[MAXPGPATH];\n> struct stat stat_buf;\n>\n> /* Check for pg_wal; if it doesn't exist, error out */\n> if (stat(XLOGDIR, &stat_buf) != 0 || \n> !S_ISDIR(stat_buf.st_mode))\n\n\n\nShould be the follwing also rewritten using sizeof(PG_REPLSLOT_DIR)?\n\n struct stat statbuf;\n char path[MAXPGPATH * 2 + 12];\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Tue, 20 Aug 2024 21:30:48 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On 2024-Aug-19, Bertrand Drouvot wrote:\n\n> diff --git a/src/include/common/relpath.h b/src/include/common/relpath.h\n> index 6f006d5a93..a6cb091635 100644\n> --- a/src/include/common/relpath.h\n> +++ b/src/include/common/relpath.h\n> @@ -33,6 +33,10 @@ typedef Oid RelFileNumber;\n> #define TABLESPACE_VERSION_DIRECTORY\t\"PG_\" PG_MAJORVERSION \"_\" \\\n> \t\t\t\t\t\t\t\t\tCppAsString2(CATALOG_VERSION_NO)\n> \n> +#define PG_TBLSPC_DIR \"pg_tblspc\"\n\nThis one is missing some commentary along the lines of \"This must not be\nchanged, unless you want to break every tool in the universe\". 
As is,\nit's quite tempting.\n\n> +#define PG_TBLSPC_DIR_SLASH PG_TBLSPC_DIR \"/\"\n\nI would make this simply \"pg_tblspc/\", since it's not really possible to\nchange pg_tblspc anyway. Also, have a comment explaining why we have\nit.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n", "msg_date": "Tue, 20 Aug 2024 10:15:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 10:15:44AM -0400, Alvaro Herrera wrote:\n> On 2024-Aug-19, Bertrand Drouvot wrote:\n> \n> > diff --git a/src/include/common/relpath.h b/src/include/common/relpath.h\n> > index 6f006d5a93..a6cb091635 100644\n> > --- a/src/include/common/relpath.h\n> > +++ b/src/include/common/relpath.h\n> > @@ -33,6 +33,10 @@ typedef Oid RelFileNumber;\n> > #define TABLESPACE_VERSION_DIRECTORY\t\"PG_\" PG_MAJORVERSION \"_\" \\\n> > \t\t\t\t\t\t\t\t\tCppAsString2(CATALOG_VERSION_NO)\n> > \n> > +#define PG_TBLSPC_DIR \"pg_tblspc\"\n> \n> This one is missing some commentary along the lines of \"This must not be\n> changed, unless you want to break every tool in the universe\". As is,\n> it's quite tempting.\n\nYeah, makes sense, thanks.\n\n> > +#define PG_TBLSPC_DIR_SLASH PG_TBLSPC_DIR \"/\"\n> \n> I would make this simply \"pg_tblspc/\", since it's not really possible to\n> change pg_tblspc anyway. Also, have a comment explaining why we have\n> it.\n\nPlease find attached v3 that:\n\n- takes care of your comments (and also removed the use of PG_TBLSPC_DIR in\nRELATIVE_PG_TBLSPC_DIR).\n- removes the new macros from the comments (see Michael's and Yugo-San's\ncomments in [0] resp. [1]).\n- adds a missing sizeof() (see [1]).\n- implements Ashutosh's idea of adding a new SLOT_DIRNAME_ARGS (see [2]). 
It's\ndone in 0002 (I used REPLSLOT_DIR_ARGS though).\n- fixed a macro usage in ReorderBufferCleanupSerializedTXNs() (was not at the\nright location, discovered while implementing 0002).\n\n[0]: https://www.postgresql.org/message-id/ZsRYPcOtoqbWzjGG%40paquier.xyz\n[1]: https://www.postgresql.org/message-id/20240820213048.207aade6a75e0dc1fe4d1067%40sraoss.co.jp\n[2]: https://www.postgresql.org/message-id/CAExHW5vkjxuvyQ1fPPnuDW4nAT5jqox09ie36kciOV2%2BrhjbHA%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Aug 2024 16:23:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 09:30:48PM +0900, Yugo Nagata wrote:\n> Should be the follwing also rewritten using sizeof(PG_REPLSLOT_DIR)?\n> \n> struct stat statbuf;\n> char path[MAXPGPATH * 2 + 12];\n> \n> \n\nYeah, done in v3 that I just shared up-thread (also removed the macros from\nthe comments).\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 16:26:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 12:06:52PM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Aug 20, 2024 at 05:41:48PM +0900, Michael Paquier wrote:\n> > On Tue, Aug 20, 2024 at 11:10:46AM +0530, Ashutosh Bapat wrote:\n> > > Since these are all related changes, doing them at once might make it\n> > > faster. You may use multiple commits (one for each change)\n> > \n> > Doing multiple commits with individual definitions for each path would\n> > be the way to go for me. All that is mechanical, still that feels\n> > slightly cleaner.\n> \n> Right, that's what v2 has done. If there is a need for v3 then I'll add one\n> dedicated patch for Ashutosh's proposal in the v3 series.\n\nAshutosh's idea is implemented in v3 that I just shared up-thread (I used \nREPLSLOT_DIR_ARGS though).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 16:28:34 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "If this isn't in commitfest already, please add it there.\n\nOn Tue, Aug 20, 2024 at 9:58 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Aug 20, 2024 at 12:06:52PM +0000, Bertrand Drouvot wrote:\n> > Hi,\n> >\n> > On Tue, Aug 20, 2024 at 05:41:48PM +0900, Michael Paquier wrote:\n> > > On Tue, Aug 20, 2024 at 11:10:46AM +0530, Ashutosh Bapat wrote:\n> > > > Since these are all related changes, doing them at once might make it\n> > > > faster. You may use multiple commits (one for each change)\n> > >\n> > > Doing multiple commits with individual definitions for each path would\n> > > be the way to go for me. All that is mechanical, still that feels\n> > > slightly cleaner.\n> >\n> > Right, that's what v2 has done. 
If there is a need for v3 then I'll add one\n> > dedicated patch for Ashutosh's proposal in the v3 series.\n>\n> Ashutosh's idea is implemented in v3 that I just shared up-thread (I used\n> REPLSLOT_DIR_ARGS though).\n>\n> Regards,\n>\n> --\n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:14:20 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 21, 2024 at 10:14:20AM +0530, Ashutosh Bapat wrote:\n> If this isn't in commitfest already, please add it there.\n> \n\nDone in [0].\n\n[0]: https://commitfest.postgresql.org/49/5193/\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 05:03:57 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Tue, Aug 20, 2024 at 04:23:06PM +0000, Bertrand Drouvot wrote:\n> Please find attached v3 that:\n> \n> - takes care of your comments (and also removed the use of PG_TBLSPC_DIR in\n> RELATIVE_PG_TBLSPC_DIR).\n> - removes the new macros from the comments (see Michael's and Yugo-San's\n> comments in [0] resp. [1]).\n> - adds a missing sizeof() (see [1]).\n> - implements Ashutosh's idea of adding a new SLOT_DIRNAME_ARGS (see [2]). It's\n> done in 0002 (I used REPLSLOT_DIR_ARGS though).\n> - fixed a macro usage in ReorderBufferCleanupSerializedTXNs() (was not at the\n> right location, discovered while implementing 0002).\n\nI was looking at that, and applied 0001 for pg_replslot and merged\n0003~0005 together for pg_logical. For the first one, at the end I\nhave updated the comments in genfile.c. For the second one, it looked\na bit strange to ignore \"pg_logical/\", which is the base for all the\nothers, so I have added a variable for it. Locating them in\nreorderbuffer.h with the GUCs was OK, but perhaps there was an\nargument for logical.h. The paths in origin.c refer to files, not\ndirectories.\n\nNot sure that 0002 is an improvement overall, so I have left this part\nout.\n\nIn 0006, I am not sure that RELATIVE_PG_TBLSPC_DIR is really something\nwe should have. Sure, that's a special case for base backups when\nsending a directory, but we have also pg_wal/ and its XLOGDIR so\nthat's inconsistent. Perhaps this part should be left as-is.\n--\nMichael", "msg_date": "Fri, 30 Aug 2024 15:34:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 30, 2024 at 03:34:56PM +0900, Michael Paquier wrote:\n> On Tue, Aug 20, 2024 at 04:23:06PM +0000, Bertrand Drouvot wrote:\n> > Please find attached v3 that:\n> > \n> > - takes care of your comments (and also removed the use of PG_TBLSPC_DIR in\n> > RELATIVE_PG_TBLSPC_DIR).\n> > - removes the new macros from the comments (see Michael's and Yugo-San's\n> > comments in [0] resp. [1]).\n> > - adds a missing sizeof() (see [1]).\n> > - implements Ashutosh's idea of adding a new SLOT_DIRNAME_ARGS (see [2]). 
It's\n> > done in 0002 (I used REPLSLOT_DIR_ARGS though).\n> > - fixed a macro usage in ReorderBufferCleanupSerializedTXNs() (was not at the\n> > right location, discovered while implementing 0002).\n> \n> I was looking at that, and applied 0001 for pg_replslot and merged\n> 0003~0005 together for pg_logical.\n\nThanks!\n\n> In 0006, I am not sure that RELATIVE_PG_TBLSPC_DIR is really something\n> we should have. Sure, that's a special case for base backups when\n> sending a directory, but we have also pg_wal/ and its XLOGDIR so\n> that's inconsistent.\n\nThat's right but OTOH there is no (for good reason) PG_WAL_DIR or such defined.\n\nThat said, I don't have a strong opinion on this one, I think that also makes\nsense to leave it as it is. Please find attached v4 doing so.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 30 Aug 2024 12:21:29 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Fri, Aug 30, 2024 at 12:21:29PM +0000, Bertrand Drouvot wrote:\n> That said, I don't have a strong opinion on this one, I think that also makes\n> sense to leave it as it is. Please find attached v4 doing so.\n\nThe changes in astreamer_file.c are actually wrong regarding the fact\nthat should_allow_existing_directory() needs to be able to work with\nthe branch where this code is located as well as back-branches,\nbecause pg_basebackup from version N supports ~(N-1) versions down to\na certain version, so changing it is not right. This is why pg_xlog\nand pg_wal are both listed there.\n\nPerhaps we should to more for the two entries in basebackup.c with the\nrelative paths, but I'm not sure that's worth bothering, either. At\nthe end, I got no objections about the remaining pieces, so applied.\n\nHow do people feel about the suggestions to update the comments at the\nend? With the comment in relpath.h suggesting to not change that, the\ncurrent state of HEAD is fine by me.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 09:15:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 09:15:50AM +0900, Michael Paquier wrote:\n> On Fri, Aug 30, 2024 at 12:21:29PM +0000, Bertrand Drouvot wrote:\n> > That said, I don't have a strong opinion on this one, I think that also makes\n> > sense to leave it as it is. Please find attached v4 doing so.\n> \n> The changes in astreamer_file.c are actually wrong regarding the fact\n> that should_allow_existing_directory() needs to be able to work with\n> the branch where this code is located as well as back-branches,\n> because pg_basebackup from version N supports ~(N-1) versions down to\n> a certain version, so changing it is not right. This is why pg_xlog\n> and pg_wal are both listed there.\n\nI understand why pg_xlog and pg_wal both need to be listed here, but I don't\nget why the proposed changes were \"wrong\". Or, are you saying that if for any\nreason PG_TBLSPC_DIR needs to be changed that would not work anymore? If\nthat's the case, then I guess we'd have to add a new define and test like:\n\n strcmp(filename, PG_TBLSPC_DIR) == 0) || strcmp(filename, NEW_PG_TBLSPC_DIR) == 0)\n\n, no? 
\n\nThe question is more out of curiosity, not saying the changes should be applied\nin astreamer_file.c though.\n\n> Perhaps we should to more for the two entries in basebackup.c with the\n> relative paths, but I'm not sure that's worth bothering, either.\n\nI don't have a strong opinon on those ones.\n\n> At\n> the end, I got no objections about the remaining pieces, so applied.\n\nThanks!\n\n> How do people feel about the suggestions to update the comments at the\n> end? With the comment in relpath.h suggesting to not change that, the\n> current state of HEAD is fine by me.\n\nYeah, I think that's fine and specially because there is still some places (\noutside of the comments) that don't rely on the define(s).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 3 Sep 2024 04:35:28 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: define PG_REPLSLOT_DIR" }, { "msg_contents": "On Tue, Sep 03, 2024 at 04:35:28AM +0000, Bertrand Drouvot wrote:\n> I understand why pg_xlog and pg_wal both need to be listed here, but I don't\n> get why the proposed changes were \"wrong\". Or, are you saying that if for any\n> reason PG_TBLSPC_DIR needs to be changed that would not work\n> anymore?\n\nYes. Folks are not going to do that, but changing PG_TBLSPC_DIR would\nchange the decision-making when streaming from versions older than\nv17 with clients tools from v18~.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 15:27:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: define PG_REPLSLOT_DIR" } ]
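A minimal sketch of the directory-macro pattern the thread above settles on. The macro names (PG_TBLSPC_DIR, PG_TBLSPC_DIR_SLASH) and the sizeof() detail come from the posted patches; the helper function, its name, and the `path` argument are illustrative assumptions rather than code taken from the PostgreSQL tree.

    #include <stdbool.h>
    #include <string.h>

    #define PG_TBLSPC_DIR        "pg_tblspc"    /* on-disk directory name; must never change */
    #define PG_TBLSPC_DIR_SLASH  "pg_tblspc/"   /* convenience constant for prefix checks */

    /* Hypothetical helper: does "path" point somewhere under pg_tblspc/ ? */
    static bool
    path_is_under_tablespace_dir(const char *path)
    {
        /* sizeof() on a string literal counts the trailing NUL, hence the -1 */
        return strncmp(path, PG_TBLSPC_DIR_SLASH, sizeof(PG_TBLSPC_DIR_SLASH) - 1) == 0;
    }

Using the macro together with sizeof() keeps the prefix length in sync with the literal, which is the point of replacing the hard-coded "pg_replslot"/"pg_tblspc" strings throughout the tree.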
[ { "msg_contents": "Hi.\n\n\nI looked at meson.build file at found an incorrectly used function to\n\ndetermine postgres version.\n\n\n > if pg_version.endswith('devel')\n >   pg_version_arr = [pg_version.split('devel')[0], '0']\n\n\nThere should be `pg_version.contains('devel')`, not `endswith`. Like this:\n\n-if pg_version.endswith('devel')\n+if pg_version.contains('devel')\n\n\nNext statement seems to be valid:\n\n >elif pg_version.contains('beta')\n >  pg_version_arr = [pg_version.split('beta')[0], '0']\n >elif pg_version.contains('rc')\n >  pg_version_arr = [pg_version.split('rc')[0], '0']\n >else\n >  pg_version_arr = pg_version.split('.')\n >endif\n\n\nI created a single line patch for it.", "msg_date": "Wed, 14 Aug 2024 17:02:05 +0300", "msg_from": "Sergey Solovev <[email protected]>", "msg_from_op": true, "msg_subject": "[BUG FIX]: invalid meson version detection" }, { "msg_contents": "On 14/08/2024 17:02, Sergey Solovev wrote:\n> I looked at meson.build file at found an incorrectly used function to\n> \n> determine postgres version.\n> \n> \n>  > if pg_version.endswith('devel')\n>  >   pg_version_arr = [pg_version.split('devel')[0], '0']\n> \n> \n> There should be `pg_version.contains('devel')`, not `endswith`. Like this:\n> \n> -if pg_version.endswith('devel')\n> +if pg_version.contains('devel')\n\nI believe it's correct as it is. With \"beta\" and \"rc\" version, the \nversion strings look like \"17beta1\" or \"16rc2\", i.e. there's a number at \nthe end of the string. But the \"devel\" version strings never have that, \ne.g. \"17devel\" or \"18devel\".\n\nSee also src/tools/version_stamp.pl\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:09:38 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUG FIX]: invalid meson version detection" } ]
[ { "msg_contents": "I'd like to send an automatic mail to a thread whenever it gets added\nto a commitfest. Since this would impact everyone that's subscribed to\nthe mailinglist I'd like some feedback on this. This mail would\ninclude:\n\n1. A very short blurb like: \"This thread was added to the commitfest\nwith ID 1234\"\n2. A link to the commitfest entry\n3. A link to the cfbot CI runs\n4. A link to the diff on GitHub\n5. Any other suggestions?\n\nThe main reason I'm proposing this is that currently it's not trivial\nto go from an email in my inbox to the commitfest entry. This can be\nespecially hard to do when the subject of the email is not the same as\nthe title of the commitfest entry. This then in turn increases the\nbarrier for people to change the commitfest status, to look at the CI\nresults, or look at the diff on GitHub. I also had once that I\naccidentally added a thread twice to the commitfest, because I forgot\nI had already added it a while back.\n\nTo be clear, this would **not** be a repeat email. It would be sent\nonly once per thread, i.e. it is sent when the thread is added to a\ncommitfest entry. So there won't be a flood of new emails you'll\nreceive.\n\nAlso the github link does not allow comments to be posted to github,\nit's read-only. An example being:\nhttps://github.com/postgresql-cfbot/postgresql/compare/cf/5025~1...cf/5025\n\n\n", "msg_date": "Thu, 15 Aug 2024 00:40:29 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest. Since this would impact everyone that's subscribed to\n> the mailinglist I'd like some feedback on this. This mail would\n> include:\n\n> 1. A very short blurb like: \"This thread was added to the commitfest\n> with ID 1234\"\n> 2. A link to the commitfest entry\n> 3. A link to the cfbot CI runs\n> 4. A link to the diff on GitHub\n> 5. Any other suggestions?\n\n+1 for the basic idea. An awful lot of people send such mails\nmanually already, so this'd replace that effort in a more predictable\nway. Not sure about your 3 & 4 points though, especially if that'd\nmean some delay in sending the mail. For one thing, wouldn't those\nlinks be stale as soon as the initial patch got replaced? Those\nlinks seem like they ought to live in the commitfest entry.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Aug 2024 19:19:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Wed, Aug 14, 2024, at 8:19 PM, Tom Lane wrote:\n> Jelte Fennema-Nio <[email protected]> writes:\n> > I'd like to send an automatic mail to a thread whenever it gets added\n> > to a commitfest. Since this would impact everyone that's subscribed to\n> > the mailinglist I'd like some feedback on this. This mail would\n> > include:\n> \n> > 1. A very short blurb like: \"This thread was added to the commitfest\n> > with ID 1234\"\n> > 2. A link to the commitfest entry\n> > 3. A link to the cfbot CI runs\n> > 4. A link to the diff on GitHub\n> > 5. Any other suggestions?\n> \n> +1 for the basic idea. An awful lot of people send such mails\n> manually already, so this'd replace that effort in a more predictable\n> way. 
Not sure about your 3 & 4 points though, especially if that'd\n> mean some delay in sending the mail. For one thing, wouldn't those\n> links be stale as soon as the initial patch got replaced? Those\n> links seem like they ought to live in the commitfest entry.\n\n+1. Regarding the CI link, I would be good if the CF entry automatically adds a\nlink to the CI run. It can be a separate field or even add it to \"Links\".\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf/$CFENTRYID\n\nI'm not sure about 4, you can always check the latest patch in the CF entry (it\nis usually the \"latest attachment\") and that's what the cfbot uses to run.\n\nIf I understand your proposal correctly, there will be another email to the\nthread if the previous CF was closed and someone opened a new CF entry.\nSometimes some CF entries are about the same thread.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Aug 14, 2024, at 8:19 PM, Tom Lane wrote:Jelte Fennema-Nio <[email protected]> writes:> I'd like to send an automatic mail to a thread whenever it gets added> to a commitfest. Since this would impact everyone that's subscribed to> the mailinglist I'd like some feedback on this. This mail would> include:> 1. A very short blurb like: \"This thread was added to the commitfest> with ID 1234\"> 2. A link to the commitfest entry> 3. A link to the cfbot CI runs> 4. A link to the diff on GitHub> 5. Any other suggestions?+1 for the basic idea.  An awful lot of people send such mailsmanually already, so this'd replace that effort in a more predictableway.  Not sure about your 3 & 4 points though, especially if that'dmean some delay in sending the mail.  For one thing, wouldn't thoselinks be stale as soon as the initial patch got replaced?  Thoselinks seem like they ought to live in the commitfest entry.+1. Regarding the CI link, I would be good if the CF entry automatically adds alink to the CI run. It can be a separate field or even add it to \"Links\".https://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf/$CFENTRYIDI'm not sure about 4, you can always check the latest patch in the CF entry (itis usually the \"latest attachment\") and that's what the cfbot uses to run.If I understand your proposal correctly, there will be another email to thethread if the previous CF was closed and someone opened a new CF entry.Sometimes some CF entries are about the same thread.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Thu, 15 Aug 2024 00:16:40 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to\n the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 01:19, Tom Lane <[email protected]> wrote:\n> +1 for the basic idea.\n\nThat's great to hear.\n\n> Not sure about your 3 & 4 points though, especially if that'd\n> mean some delay in sending the mail. For one thing, wouldn't those\n> links be stale as soon as the initial patch got replaced?\n\nI recently made those links predictable based on only the ID of the CF\nentry. So they won't get stale, and I would want to send them straight\naway (even if that means that they might return a 404 until cfbot\nactually pushes the patches to github). 
The links would be:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf/$CFENTRYID\n\nhttps://github.com/postgresql-cfbot/postgresql/compare/cf/$CFENTRYID~1...cf/$CFENTRYID\n\n> Those links seem like they ought to live in the commitfest entry.\n\nAgreed. I'll start with that, since that should be very easy.\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:29:43 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 05:17, Euler Taveira <[email protected]> wrote:\n> +1. Regarding the CI link, I would be good if the CF entry automatically adds a\n> link to the CI run. It can be a separate field or even add it to \"Links\".\n\nI'm on it. I think this email should be a subset of the info on the CF\nentry webpage, so I'll first change the cf entry page to include all\nthis info.\n\n> I'm not sure about 4, you can always check the latest patch in the CF entry (it\n> is usually the \"latest attachment\") and that's what the cfbot uses to run.\n\nThis is definitely a personal preference thing, but I like reading\npatches on GitHub much better than looking at raw patch files. It has\nsyntax highlighting and has those little arrow buttons at the top of a\ndiff, to show more context about the file.\n\nI realized a 5th thing that I would want in the email and cf entry page\n\n5. A copy-pastable set of git command that checks out the patch by\ndownloading it from the cfbot repo like this:\n\ngit config branch.cf/5107.remote\nhttps://github.com/postgresql-cfbot/postgresql.git\ngit config branch.cf/5107.merge refs/heads/cf/5107\ngit checkout -b cf/5107\ngit pull\n\n> If I understand your proposal correctly, there will be another email to the\n> thread if the previous CF was closed and someone opened a new CF entry.\n> Sometimes some CF entries are about the same thread.\n\nYeah, that's correct. If a new CF entry is created for an existing\nthread a new email would be sent. But to be clear, if CF entry is\npushed to the next commitfest, **no** new email would be sent.\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:59:47 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On 15.08.24 00:40, Jelte Fennema-Nio wrote:\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest. Since this would impact everyone that's subscribed to\n> the mailinglist I'd like some feedback on this. This mail would\n> include:\n> \n> 1. A very short blurb like: \"This thread was added to the commitfest\n> with ID 1234\"\n\nHow would you attach such an email to a thread? Where in the thread \nwould you attach it? 
I'm not quite sure how well that would work.\n\n> The main reason I'm proposing this is that currently it's not trivial\n> to go from an email in my inbox to the commitfest entry.\n\nMaybe there could be a feature in the commitfest app to enter a message \nID and get the commitfest entries that contain that message.\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 15:28:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On 15.08.24 09:59, Jelte Fennema-Nio wrote:\n> I realized a 5th thing that I would want in the email and cf entry page\n> \n> 5. A copy-pastable set of git command that checks out the patch by\n> downloading it from the cfbot repo like this:\n> \n> git config branch.cf/5107.remote\n> https://github.com/postgresql-cfbot/postgresql.git\n> git config branch.cf/5107.merge refs/heads/cf/5107\n> git checkout -b cf/5107\n> git pull\n\nMaybe this kind of thing should rather be on the linked-to web page, not \nin every email.\n\nBut a more serious concern here is that the patches created by the cfbot \nare not canonical. There are various heuristics when they get applied. \nI would prefer that people work with the actual patches sent by email, \nat least unless they know exactly what they are doing. We don't want to \ncreate parallel worlds of patches that are like 90% similar but not \nreally identical.\n\n\n\n", "msg_date": "Thu, 15 Aug 2024 15:33:17 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 15:28, Peter Eisentraut <[email protected]> wrote:\n> How would you attach such an email to a thread? Where in the thread\n> would you attach it? I'm not quite sure how well that would work.\n\nMy idea would be to have the commitfest app send it in reply to the\nmessage id that was entered in the \"New patch\" page:\nhttps://commitfest.postgresql.org/open/new/\n\n> Maybe there could be a feature in the commitfest app to enter a message\n> ID and get the commitfest entries that contain that message.\n\nThat's a good idea.\n\n\n", "msg_date": "Thu, 15 Aug 2024 15:47:29 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 15:33, Peter Eisentraut <[email protected]> wrote:\n> Maybe this kind of thing should rather be on the linked-to web page, not\n> in every email.\n\nYeah, I'll first put a code snippet on the page for the commitfest entry.\n\n> But a more serious concern here is that the patches created by the cfbot\n> are not canonical. There are various heuristics when they get applied.\n> I would prefer that people work with the actual patches sent by email,\n> at least unless they know exactly what they are doing. We don't want to\n> create parallel worlds of patches that are like 90% similar but not\n> really identical.\n\nI'm not really sure what kind of heuristics and resulting differences\nyou're worried about here. The heuristics it uses are very simple and\nare good enough for our CI. Basically they are:\n1. Unzip/untar based on file extension\n2. 
Apply patches using \"patch\" in alphabetic order\n\nAlso, when I apply patches myself, I use heuristics too. And my\nheuristics are probably different from yours. So I'd expect that many\npeople using the exact same heuristic would only make the situation\nbetter. Especially because if people don't know exactly what they are\ndoing, then their heuristics are probably not as good as the one of\nour cfbot. I know I've struggled a lot the first few times when I was\nmanually applying patches.\n\n\n", "msg_date": "Thu, 15 Aug 2024 16:01:47 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, Aug 15, 2024 at 12:40:29AM +0200, Jelte Fennema-Nio wrote:\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest. Since this would impact everyone that's subscribed to\n> the mailinglist I'd like some feedback on this. This mail would\n> include:\n\nI haven't thought too much about the details, but in general, +1 for\nsending links to cfest/cfbot entries to the thread.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:26:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it\n gets added to the commitfest" }, { "msg_contents": "(sorry for the formatting, my mobile phone doesn't have the capabilities I\nusually get when using my laptop)\n\nOn Thu, 15 Aug 2024, 16:02 Jelte Fennema-Nio, <[email protected]> wrote:\n\n> On Thu, 15 Aug 2024 at 15:33, Peter Eisentraut <[email protected]>\n> wrote:\n> > Maybe this kind of thing should rather be on the linked-to web page, not\n> > in every email.\n>\n> Yeah, I'll first put a code snippet on the page for the commitfest entry.\n>\n> > But a more serious concern here is that the patches created by the cfbot\n> > are not canonical. There are various heuristics when they get applied.\n> > I would prefer that people work with the actual patches sent by email,\n> > at least unless they know exactly what they are doing. We don't want to\n> > create parallel worlds of patches that are like 90% similar but not\n> > really identical.\n>\n> I'm not really sure what kind of heuristics and resulting differences\n> you're worried about here. The heuristics it uses are very simple and\n> are good enough for our CI. Basically they are:\n> 1. Unzip/untar based on file extension\n> 2. Apply patches using \"patch\" in alphabetic order\n>\n> Also, when I apply patches myself, I use heuristics too. And my\n> heuristics are probably different from yours. So I'd expect that many\n> people using the exact same heuristic would only make the situation\n> better. Especially because if people don't know exactly what they are\n> doing, then their heuristics are probably not as good as the one of\n> our cfbot. I know I've struggled a lot the first few times when I was\n> manually applying patches.\n\n\nOne serious issue with this is that in cases of apply failures, CFBot\ndelays, or other issues, the CFBot repo won't contain the latest version of\nthe series' patchsets. E.g. a hacker can accidentally send an incremental\npatch, or an unrelated patch to fix an issue mentioned in the thread\nwithout splitting into a different thread, etc. This can easily cause users\n(and CFBot) to test and review the wrong patch, esp. 
when the mail thread\nproper is not looked by the reviewer, which would be somewhat promoted by a\nCFA+github -centric workflow.\n\nApart from the above issue, I'm -0.5 on what to me equates with automated\nspam to -hackers: the volume of mails would put this around the 16th most\ncommon sender on -hackers, with about 400 mails/year (based on 80 new\npatches for next CF, and 5 CFs/year, combined with Robert's 2023 statistics\nat [0]).\n\nI also don't quite like the suggested contents of such mail: (1) and (2)\nare essentially duplicative information, and because CF's entries' IDs are\nnot shown in the app the \"with ID 0000\" part of (1) is practically useless\n(better use the CFE's title), (3) would best be stored and/or integrated in\nthe CFA, as would (4). Additionally, (4) isn't canonical/guaranteed to be\nup-to-date, see above. As for the \"copy-pastable git commands\" suggestion,\nI'm not sure that's applicable, for the same reasons that (4) won't work\nreliably. CFBot's repo to me seems more like an internal implementation\ndetail of CFBot than an authorative source of patchset diffs.\n\n\nMaybe we could instead automate CF mail thread registration by allowing\nregistration of threadless CF entries (as 'preliminary'), and detecting\n(and subsequently linking) new threads containing references to those CF\nentries, with e.g. an \"CF: https://commitfest.postgresql.org/49/4980/\"\ndirective in the new thread's initial mail's text. This would give the\nbenefits of requiring no second mail for CF referencing purposes, be it\nautomated or manual.\nAlternatively, we could allow threads for new entries to be started through\nthe CF app (which would automatically insert the right form data into the\nmail), providing an alternative avenue to registering patches that doesn't\nhave the chicken-and-egg problem you're trying to address here.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://rhaas.blogspot.com/2024/01/who-contributed-to-postgresql.html\n\n(sorry for the formatting, my mobile phone doesn't have the capabilities I usually get when using my laptop)On Thu, 15 Aug 2024, 16:02 Jelte Fennema-Nio, <[email protected]> wrote:On Thu, 15 Aug 2024 at 15:33, Peter Eisentraut <[email protected]> wrote:\n> Maybe this kind of thing should rather be on the linked-to web page, not\n> in every email.\n\nYeah, I'll first put a code snippet on the page for the commitfest entry.\n\n> But a more serious concern here is that the patches created by the cfbot\n> are not canonical.  There are various heuristics when they get applied.\n> I would prefer that people work with the actual patches sent by email,\n> at least unless they know exactly what they are doing.  We don't want to\n> create parallel worlds of patches that are like 90% similar but not\n> really identical.\n\nI'm not really sure what kind of heuristics and resulting differences\nyou're worried about here. The heuristics it uses are very simple and\nare good enough for our CI. Basically they are:\n1. Unzip/untar based on file extension\n2. Apply patches using \"patch\" in alphabetic order\n\nAlso, when I apply patches myself, I use heuristics too. And my\nheuristics are probably different from yours. So I'd expect that many\npeople using the exact same heuristic would only make the situation\nbetter. Especially because if people don't know exactly what they are\ndoing, then their heuristics are probably not as good as the one of\nour cfbot. 
I know I've struggled a lot the first few times when I was\nmanually applying patches.One serious issue with this is that in cases of apply failures, CFBot delays, or other issues, the CFBot repo won't contain the latest version of the series' patchsets. E.g. a hacker can  accidentally send an incremental patch, or an unrelated patch to fix an issue mentioned in the thread without splitting into a different thread, etc. This can easily cause users (and CFBot) to test and review the wrong patch, esp. when the mail thread proper is not looked by the reviewer, which would be somewhat promoted by a CFA+github -centric workflow.Apart from the above issue, I'm -0.5 on what to me equates with automated spam to -hackers: the volume of mails would put this around the 16th most common sender on -hackers, with about 400 mails/year (based on 80 new patches for next CF, and 5 CFs/year, combined with Robert's 2023 statistics at [0]).I also don't quite like the suggested contents of such mail: (1) and (2) are essentially duplicative information, and because CF's entries' IDs are not shown in the app the \"with ID 0000\" part of (1) is practically useless (better use the CFE's title), (3) would best be stored and/or integrated in the CFA, as would (4). Additionally, (4) isn't canonical/guaranteed to be up-to-date, see above. As for the \"copy-pastable git commands\" suggestion, I'm not sure that's applicable, for the same reasons that (4) won't work reliably. CFBot's repo to me seems more like an internal implementation detail of CFBot than an authorative source of patchset diffs.Maybe we could instead automate CF mail thread registration by allowing registration of threadless CF entries (as 'preliminary'), and detecting (and subsequently linking) new threads containing references to those CF entries, with e.g. an  \"CF: https://commitfest.postgresql.org/49/4980/\" directive in the new thread's initial mail's text. This would give the benefits of requiring no second mail for CF referencing purposes, be it automated or manual. Alternatively, we could allow threads for new entries to be started through the CF app (which would automatically insert the right form data into the mail), providing an alternative avenue to registering patches that doesn't have the chicken-and-egg problem you're trying to address here.Kind regards,Matthias van de MeentNeon (https://neon.tech)[0] https://rhaas.blogspot.com/2024/01/who-contributed-to-postgresql.html", "msg_date": "Thu, 15 Aug 2024 19:25:15 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On 15.08.24 19:25, Matthias van de Meent wrote:\n> Apart from the above issue, I'm -0.5 on what to me equates with \n> automated spam to -hackers: the volume of mails would put this around \n> the 16th most common sender on -hackers, with about 400 mails/year \n> (based on 80 new patches for next CF, and 5 CFs/year, combined with \n> Robert's 2023 statistics at [0]).\n\nYeah, I'd rather not open the can of worms that we send automated emails \nto this list at all. If we do this, then there will be other requests, \nand why this one and not that one. 
If people want to get emails from \nthe commitfest app, it should be that you subscribe there and it sends \nthose emails to those who want them.\n\n> I also don't quite like the suggested contents of such mail: (1) and (2) \n> are essentially duplicative information, and because CF's entries' IDs \n> are not shown in the app the \"with ID 0000\" part of (1) is practically \n> useless (better use the CFE's title), (3) would best be stored and/or \n> integrated in the CFA, as would (4). Additionally, (4) isn't \n> canonical/guaranteed to be up-to-date, see above. As for the \n> \"copy-pastable git commands\" suggestion, I'm not sure that's applicable, \n> for the same reasons that (4) won't work reliably. CFBot's repo to me \n> seems more like an internal implementation detail of CFBot than an \n> authorative source of patchset diffs.\n\nI agree. And this also smells a bit like \"my favorite workflow\". Maybe \nstart with a blog post or a wiki page if you want to suggest this.\n\n\n\n", "msg_date": "Fri, 16 Aug 2024 09:12:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 15.08.24 19:25, Matthias van de Meent wrote:\n>> Apart from the above issue, I'm -0.5 on what to me equates with \n>> automated spam to -hackers: the volume of mails would put this around \n>> the 16th most common sender on -hackers, with about 400 mails/year \n>> (based on 80 new patches for next CF, and 5 CFs/year, combined with \n>> Robert's 2023 statistics at [0]).\n\n> Yeah, I'd rather not open the can of worms that we send automated emails \n> to this list at all. If we do this, then there will be other requests, \n> and why this one and not that one. If people want to get emails from \n> the commitfest app, it should be that you subscribe there and it sends \n> those emails to those who want them.\n\nThat would destroy the one good argument for this, which was to\nprovide an easy way to get from a mail list thread (in the archives)\nto the corresponding CF entry or entries. You pull up the \"flat\"\nthread, search for the bot mail, and click the link. Without that\nI see little point at all.\n\nHowever, there are other ways to accomplish that. I liked the\nsuggestion of extending the CF webapp with a way to search for entries\nmentioning a particular mail message ID. I dunno how hard it'd be to\nget it to recognize *any* message-ID from a thread, but surely at\nleast the head message ought not be too hard to match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Aug 2024 10:43:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Wed, Aug 14, 2024 at 3:40 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest. Since this would impact everyone that's subscribed to\n> the mailinglist I'd like some feedback on this. This mail would\n> include:\n>\n> 1. A very short blurb like: \"This thread was added to the commitfest\n> with ID 1234\"\n> 2. A link to the commitfest entry\n> 3. A link to the cfbot CI runs\n> 4. A link to the diff on GitHub\n> 5. 
Any other suggestions?\n\nI would find (2) very useful.\n\n\n", "msg_date": "Fri, 16 Aug 2024 10:22:42 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Fri, Aug 16, 2024 at 10:23 AM Maciek Sakrejda <[email protected]> wrote:\n> > 1. A very short blurb like: \"This thread was added to the commitfest\n> > with ID 1234\"\n> > 2. A link to the commitfest entry\n> > 3. A link to the cfbot CI runs\n> > 4. A link to the diff on GitHub\n> > 5. Any other suggestions?\n>\n> I would find (2) very useful.\n\nMe too.\n\n--Jacob\n\n\n", "msg_date": "Fri, 16 Aug 2024 11:12:16 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, Aug 15, 2024 at 9:33 AM Peter Eisentraut <[email protected]> wrote:\n> But a more serious concern here is that the patches created by the cfbot\n> are not canonical. There are various heuristics when they get applied.\n\nIt's true that the code required for CFBot to simply apply a patch is\nnontrivial. We're accounting for various edge-cases, and soldiering\non, where we deem that it makes sense. I'm referring to those cases\nwhere \"git am\" won't work perfectly, but we have a pretty good chance\nof successfully generating what the patch author intended (I assume\nthat that's what you meant by \"heuristics\").\n\nOne reason why this works better than might be expected is\nbecause...we then test the result (that's the whole point, of course).\nObviously, if we apply a patch in a way that isn't quite perfectly\nclean (according to whatever the \"git am\" criteria is), but that CFBot\nis nevertheless capable of applying, and we then find that the end\nresult passes all tests, that gives us a fairly strong signal. We can\nhave high confidence that CFBot has done the right thing at that\npoint. We can reasonably present the resulting feature branch as a\n\"known good\" usable version of the patch.\n\nOf course you can quibble with this. Fundamentally, a patch file can\nnever be updated, but we want to apply it on top of a moving target\n(as best we can). We're always making some kind of trade-off. I just\ndon't think that the heuristics that humans might choose to apply are\nnecessarily much better on average. Humans are bad at routine boring\nthings, but good at noticing and coping with special cases.\n\n> I would prefer that people work with the actual patches sent by email,\n> at least unless they know exactly what they are doing. We don't want to\n> create parallel worlds of patches that are like 90% similar but not\n> really identical.\n\nThere's a reason why tech companies place so much emphasis on offering\na \"low friction experience\". The aggregate effect on user/consumer\nbehavior is huge. I'm not saying that this is good or bad (let's not\nget into that now); just that it is an empirical fact that people tend\nto behave like that. We want more reviewers. Why not try to meet\npeople where they are a bit more?\n\nI have to admit that I think that I'd be far more likely to quickly\ntest out a patch if I'm presented with a workflow that makes the setup\nas painless as possible. 
Particularly if I'm all but guaranteed to get\na working Postgres server with the patch applied (which is possible\nwhen I'm building exactly the same feature branch as the one that\npassed CI by CFBot). I'm entirely willing to believe that it wouldn't\nwork that way for you, but I'm confident that a lot of people are in\nthe same camp as me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Aug 2024 14:12:49 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 10:40, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest. Since this would impact everyone that's subscribed to\n> the mailinglist I'd like some feedback on this. This mail would\n> include:\n>\n> 1. A very short blurb like: \"This thread was added to the commitfest\n> with ID 1234\"\n> 2. A link to the commitfest entry\n\nOne thing I have seen which #2 would have helped with was that during\na CF, a patch author posted a trivial patch to -hackers and I\ncommitted it. I didn't know the patch author had added a CF entry to\nthe *next* CF. I seldom think to look at the next CF page during a CF.\nThe entry was closed by the patch author in this case.\n\nFor that reason, I think #2 would be useful.\n\n> 3. A link to the cfbot CI runs\n> 4. A link to the diff on GitHub\n> 5. Any other suggestions?\n\nI don't really object, but I think a better aim would be to merge\nCFbot with CF so that we could get to those places from the CF entry.\n\nDavid\n\n\n", "msg_date": "Sat, 17 Aug 2024 11:36:36 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Sat, Aug 17, 2024 at 11:36 AM David Rowley <[email protected]> wrote:\n> I don't really object, but I think a better aim would be to merge\n> CFbot with CF so that we could get to those places from the CF entry.\n\nAck. 
Some progress is happening on that front, working with Jelte and\nMagnus off-list...\n\n\n", "msg_date": "Sat, 17 Aug 2024 21:44:26 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "\tJelte Fennema-Nio wrote:\n\n> I'd like to send an automatic mail to a thread whenever it gets added\n> to a commitfest\n\nInstead of sending a specific mail, what about automatically adding a\nmail header like:\n\n X-CommitFest-Entry: <https://commitfest.postgresql.org/X/Y>\n\nto every outgoing mail whose thread is associated to a CF entry?\n\nIt feels more principled and less \"in-your-face\" than one dedicated\nmessage, plus it would work for all the patches that are already\nregistered.\n\nOn the web archive, the displayer could show it as a clickable link,\nalong with the main mail headers (from, to, date, Message-ID...), if\n it's deemed important enough to make it prominent.\nOtherwise one can click on \"Raw Message\" to see all headers.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Sat, 17 Aug 2024 13:59:34 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Fri, Aug 16, 2024 at 3:13 AM Peter Eisentraut <[email protected]> wrote:\n> Yeah, I'd rather not open the can of worms that we send automated emails\n> to this list at all.\n\n+1.\n\n> If people want to get emails from\n> the commitfest app, it should be that you subscribe there and it sends\n> those emails to those who want them.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 09:10:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Fri, 16 Aug 2024 at 16:43, Tom Lane <[email protected]> wrote:\n> However, there are other ways to accomplish that. I liked the\n> suggestion of extending the CF webapp with a way to search for entries\n> mentioning a particular mail message ID. I dunno how hard it'd be to\n> get it to recognize *any* message-ID from a thread, but surely at\n> least the head message ought not be too hard to match.\n\nI sent a patch to support this on the commitfest app to Magnus\noff-list. It was pretty easy to implement, even for *any* message-ID.\n\n\n", "msg_date": "Mon, 19 Aug 2024 15:31:17 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Fri, 16 Aug 2024 at 16:43, Tom Lane <[email protected]> wrote:\n>> However, there are other ways to accomplish that. I liked the\n>> suggestion of extending the CF webapp with a way to search for entries\n>> mentioning a particular mail message ID. I dunno how hard it'd be to\n>> get it to recognize *any* message-ID from a thread, but surely at\n>> least the head message ought not be too hard to match.\n\n> I sent a patch to support this on the commitfest app to Magnus\n> off-list. 
It was pretty easy to implement, even for *any* message-ID.\n\nCool, thank you!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:16:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Sat, Aug 17, 2024 at 9:44 PM Thomas Munro <[email protected]> wrote:\n> On Sat, Aug 17, 2024 at 11:36 AM David Rowley <[email protected]> wrote:\n> > I don't really object, but I think a better aim would be to merge\n> > CFbot with CF so that we could get to those places from the CF entry.\n\nI've built a system for pushing all the data required to show the\ncfbot traffic lights over to the CF app, and shared it with Magnus to\nsee if he likes it. If so, hopefully some more progress soon...\n\n\n", "msg_date": "Tue, 27 Aug 2024 13:38:53 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "So, there was limited enthusiasm for a new message here, but I noticed\nthat current CF messages don't include the CF entry link [1]. What\nabout adding a link to its existing e-mails?\n\nThanks,\nMaciek\n\n[1]: e.g., https://www.postgresql.org/message-id/172579582925.1126.2395496302629349708.pgcf%40coridan.postgresql.org\n\n\n", "msg_date": "Mon, 9 Sep 2024 12:40:13 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Mon, Aug 26, 2024 at 9:39 PM Thomas Munro <[email protected]> wrote:\n> I've built a system for pushing all the data required to show the\n> cfbot traffic lights over to the CF app, and shared it with Magnus to\n> see if he likes it. If so, hopefully some more progress soon...\n\nI don't see any traffic lights on commitfest.postgresql.org yet.\nShould I #blamemagnus?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 16:02:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Mon, Sep 9, 2024, 22:02 Robert Haas <[email protected]> wrote:\n\n> Should I #blamemagnus?\n>\n\nYes. He seems unavailable for whatever reason based on my private holidays\n(maybe summer holidays)\n\n>\n\nOn Mon, Sep 9, 2024, 22:02 Robert Haas <[email protected]> wrote:\nShould I #blamemagnus? Yes. He seems unavailable for whatever reason based on my private holidays (maybe summer holidays)", "msg_date": "Tue, 10 Sep 2024 00:52:36 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Tue, Sep 10, 2024, 00:52 Jelte Fennema-Nio <[email protected]> wrote:\n\n> On Mon, Sep 9, 2024, 22:02 Robert Haas <[email protected]> wrote:\n>\n>> Should I #blamemagnus?\n>>\n>\n> Yes. 
He seems unavailable for whatever reason based on my private holidays\n> (maybe summer holidays)\n>\n\n*based on my private/off-list communication with him\n\n>\n\nOn Tue, Sep 10, 2024, 00:52 Jelte Fennema-Nio <[email protected]> wrote:On Mon, Sep 9, 2024, 22:02 Robert Haas <[email protected]> wrote:\nShould I #blamemagnus? Yes. He seems unavailable for whatever reason based on my private holidays (maybe summer holidays) *based on my private/off-list communication with him", "msg_date": "Tue, 10 Sep 2024 00:54:20 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 15 Aug 2024 at 20:00, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Thu, 15 Aug 2024 at 05:17, Euler Taveira <[email protected]> wrote:\n> > If I understand your proposal correctly, there will be another email to the\n> > thread if the previous CF was closed and someone opened a new CF entry.\n> > Sometimes some CF entries are about the same thread.\n>\n> Yeah, that's correct. If a new CF entry is created for an existing\n> thread a new email would be sent. But to be clear, if CF entry is\n> pushed to the next commitfest, **no** new email would be sent.\n\nI was thinking about this today when looking at [1], you can see at\nthe end of that email someone posted a link to the CF entry.\nUnfortunately, that was for the Jan-2024 CF, which is not really that\nuseful to look at today.\n\nIt looks like only about 35% of patches in this CF have \"num CFs\" = 1,\nso it might be annoying to be taken to an old entry 65% of the time\nyou click the proposed URL. Maybe instead of the URLs showing the CF\nnumber that the patch was first added to, if it could have a reference\nin the URL that looks up the maximum CF which contains the patch and\nshow that one. e.g. https://commitfest.postgresql.org/latest/4751/ .\nWithout that, if you try to modify the status of a patch in an older\nCF, you'll just be told that you can't do that. (it is true you can\nclick the latest CF in the status column, but that's at least one more\nclick than I'd have hoped)\n\nDavid\n\n[1] https://postgr.es/m/CACJufxEdavJhkUDhJ1jraXnZ9ayNQU+TvjuQjzQbuGS06oNZEQ@mail.gmail.com\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:07:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 12 Sept 2024 at 04:07, David Rowley <[email protected]> wrote:\n> Maybe instead of the URLs showing the CF\n> number that the patch was first added to, if it could have a reference\n> in the URL that looks up the maximum CF which contains the patch and\n> show that one. e.g. https://commitfest.postgresql.org/latest/4751/ .\n\nYeah agreed, that's why I added support for\nhttps://commitfest.postgresql.org/patch/4751/ a while ago. That URL\nwill be easily discoverable on the commitfest entry page soon (patch\nfor this is in flight).\n\n\n", "msg_date": "Thu, 12 Sep 2024 10:44:04 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Fri, 16 Aug 2024 at 07:43, Tom Lane <[email protected]> wrote:\n> However, there are other ways to accomplish that. 
I liked the\n> suggestion of extending the CF webapp with a way to search for entries\n> mentioning a particular mail message ID. I dunno how hard it'd be to\n> get it to recognize *any* message-ID from a thread, but surely at\n> least the head message ought not be too hard to match.\n\nThis is now deployed, so you can now find a CF entry based on the email ID.\n\nA bunch of other improvements also got deployed:\n- Improved homepage[1] (now with useful and bookmarkable links at the top)\n- More links on the cf entry page[2] (cfbot results, github diff, and\nstable link to entry itself)\n- Instructions on how to checkout an cfbot entry\n\nCFBot traffic lights directly on the cfentry and probably the\ncommifest list page are the next thing I'm planning to work on\n\nAfter that I'll take a look at sending opt-in emails\n\nAnother thing that I'm interested in adding is some metric of patch\nsize, so it's easier to find small patches that are thus hopefully\n\"easy\" to review. To accommodate multi-patch emails, I'm thinking of\nshowing lines changed in the first patch and lines changed in all\npatches together. Possibly showing it clearly, if significantly more\nlines were deleted than added, so it's easy to spot effective\nrefactorings.\n\n\n", "msg_date": "Tue, 24 Sep 2024 23:54:42 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Tue, 24 Sept 2024 at 23:54, Jelte Fennema-Nio <[email protected]> wrote:\n> A bunch of other improvements also got deployed:\n> - Improved homepage[1] (now with useful and bookmarkable links at the top)\n> - More links on the cf entry page[2] (cfbot results, github diff, and\n\n[1]: https://commitfest.postgresql.org/\n[2]: https://commitfest.postgresql.org/patch/5070\n\n\n", "msg_date": "Wed, 25 Sep 2024 00:10:53 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Wed, Sep 25, 2024 at 2:55 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Another thing that I'm interested in adding is some metric of patch\n> size, so it's easier to find small patches that are thus hopefully\n> \"easy\" to review. To accommodate multi-patch emails, I'm thinking of\n> showing lines changed in the first patch and lines changed in all\n> patches together. Possibly showing it clearly, if significantly more\n> lines were deleted than added, so it's easy to spot effective\n> refactorings.\n\nI like this general idea. Anything that helps us figure out what to\npay attention to in the CommitFest is great stuff. Focusing on the\nfirst patch seems odd to me, though: often the earlier patches will be\npreparatory patches, so small, and the big patch is someplace near the\nend of the series.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Sep 2024 11:05:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, 26 Sept 2024 at 08:06, Robert Haas <[email protected]> wrote:\n> Focusing on the first patch seems odd to me, though\n\nIndeed the first few patches will often be small, and the big patch\nwill appear later. 
When I split patches up, those small patches should\nusually be reviewable without looking at the big patch in detail, and\nhopefully they shouldn't be too contentious: e.g. a comment\nimprovement or some small refactor. But often those patches don't seem\nto be reviewed significantly quicker or merged significantly earlier\nthan the big patch. That makes it seem to me that even though they\nshould be relatively low-risk to commit and low-effort to review,\nreviewers are scared away by the sheer number of patches in the\npatchset, or by the size of the final patch. That's why I thought it\ncould be useful to specifically show the size of the first patch in\naddition to the total patchset size, so that reviewers can easily spot\nsome small hopefully easy to review patch at the start of a patchset.\n\n\n", "msg_date": "Thu, 26 Sep 2024 11:57:21 -0700", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" }, { "msg_contents": "On Thu, Sep 26, 2024 at 2:57 PM Jelte Fennema-Nio <[email protected]> wrote:\n> On Thu, 26 Sept 2024 at 08:06, Robert Haas <[email protected]> wrote:\n> > Focusing on the first patch seems odd to me, though\n>\n> Indeed the first few patches will often be small, and the big patch\n> will appear later. When I split patches up, those small patches should\n> usually be reviewable without looking at the big patch in detail, and\n> hopefully they shouldn't be too contentious: e.g. a comment\n> improvement or some small refactor. But often those patches don't seem\n> to be reviewed significantly quicker or merged significantly earlier\n> than the big patch. That makes it seem to me that even though they\n> should be relatively low-risk to commit and low-effort to review,\n> reviewers are scared away by the sheer number of patches in the\n> patchset, or by the size of the final patch. That's why I thought it\n> could be useful to specifically show the size of the first patch in\n> addition to the total patchset size, so that reviewers can easily spot\n> some small hopefully easy to review patch at the start of a patchset.\n\nFair enough! Personally what I'd want to know is how large the biggest\npatch is, but I see your point, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 08:08:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Opinion poll: Sending an automated email to a thread when it gets\n added to the commitfest" } ]
[ { "msg_contents": "Hi,\n\nThe PG17 documentation for unicode_assigned() in doc/src/sgml/func.sgml \nincorrectly states that the function returns a 'text' value. It actually \nreturns a boolean value.\n\nunicode_assigned(): \nhttps://github.com/postgres/postgres/blob/ef6e028f05b3e4ab23c5edfdfff457e0d2a649f6/src/backend/utils/adt/varlena.c#L6302\n\nBest regards,\n\n\n", "msg_date": "Thu, 15 Aug 2024 09:34:34 +0900", "msg_from": "Hironobu SUZUKI <[email protected]>", "msg_from_op": true, "msg_subject": "Typo in unicode_assigned() document PG17" }, { "msg_contents": "On Thu, 2024-08-15 at 09:34 +0900, Hironobu SUZUKI wrote:\n> Hi,\n> \n> The PG17 documentation for unicode_assigned() in\n> doc/src/sgml/func.sgml \n> incorrectly states that the function returns a 'text' value. It\n> actually \n> returns a boolean value.\n\nThank you for the report. Fixed and backported to version 17, where the\nfunction was introduced.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 19:09:29 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typo in unicode_assigned() document PG17" } ]
[ { "msg_contents": "Hi Hackers,\n\nWhile reviewing another logical replication thread [1], I found an\nERROR scenario that seems to be untested.\n\nTEST CASE: Attempt CREATE SUBSCRIPTION where the subscriber table is\nmissing some expected column(s).\n\nAttached is a patch to add the missing test for this error message.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPt5FqV7J9GnnWFRNW_Z1KOMMAZXNTRcRNdtFrfMBz_GLA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 15 Aug 2024 17:25:16 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Thu, 15 Aug 2024 at 12:55, Peter Smith <[email protected]> wrote:\n>\n> Hi Hackers,\n>\n> While reviewing another logical replication thread [1], I found an\n> ERROR scenario that seems to be untested.\n>\n> TEST CASE: Attempt CREATE SUBSCRIPTION where the subscriber table is\n> missing some expected column(s).\n>\n> Attached is a patch to add the missing test for this error message.\n\nI agree currently there is no test to hit this code. I'm not sure if\nthis is the correct location for the test, should it be included in\nthe 008_diff_schema.pl file? Additionally, the commenting style for\nthis test appears quite different from the others. Could we use a\ncommenting style consistent with the earlier tests?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 16 Aug 2024 09:45:33 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Fri, Aug 16, 2024 at 2:15 PM vignesh C <[email protected]> wrote:\n>\n\nThanks for the review.\n\n>\n> I agree currently there is no test to hit this code. I'm not sure if\n> this is the correct location for the test, should it be included in\n> the 008_diff_schema.pl file?\n\nYes, that is a better home for this test. Done as suggested in\nattached patch v2.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 20 Aug 2024 12:51:08 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Tue, 20 Aug 2024 at 08:21, Peter Smith <[email protected]> wrote:\n>\n> On Fri, Aug 16, 2024 at 2:15 PM vignesh C <[email protected]> wrote:\n> >\n>\n> Thanks for the review.\n>\n> >\n> > I agree currently there is no test to hit this code. I'm not sure if\n> > this is the correct location for the test, should it be included in\n> > the 008_diff_schema.pl file?\n>\n> Yes, that is a better home for this test. 
Done as suggested in\n> attached patch v2.\n\nThanks, this looks good to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 20 Aug 2024 17:49:54 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Fri, Aug 16, 2024 at 9:45 AM vignesh C <[email protected]> wrote:\n>\n> On Thu, 15 Aug 2024 at 12:55, Peter Smith <[email protected]> wrote:\n> >\n> > Hi Hackers,\n> >\n> > While reviewing another logical replication thread [1], I found an\n> > ERROR scenario that seems to be untested.\n> >\n> > TEST CASE: Attempt CREATE SUBSCRIPTION where the subscriber table is\n> > missing some expected column(s).\n> >\n> > Attached is a patch to add the missing test for this error message.\n>\n> I agree currently there is no test to hit this code.\n>\n\nI also don't see a test for this error condition. However, it is not\nclear to me how important is it to cover this error code path. This\ncode has existed for a long time and I didn't notice any bugs related\nto this. There is a possibility that in the future we might break\nsomething because of a lack of this test but not sure if we want to\ncover every code path via tests as each additional test also has some\ncost. OTOH, If others think it is important or a good idea to have\nthis test then I don't have any objection to the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:18:03 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Wed, Aug 21, 2024 at 8:48 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Aug 16, 2024 at 9:45 AM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 15 Aug 2024 at 12:55, Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi Hackers,\n> > >\n> > > While reviewing another logical replication thread [1], I found an\n> > > ERROR scenario that seems to be untested.\n> > >\n> > > TEST CASE: Attempt CREATE SUBSCRIPTION where the subscriber table is\n> > > missing some expected column(s).\n> > >\n> > > Attached is a patch to add the missing test for this error message.\n> >\n> > I agree currently there is no test to hit this code.\n> >\n>\n> I also don't see a test for this error condition. However, it is not\n> clear to me how important is it to cover this error code path. This\n> code has existed for a long time and I didn't notice any bugs related\n> to this. There is a possibility that in the future we might break\n> something because of a lack of this test but not sure if we want to\n> cover every code path via tests as each additional test also has some\n> cost. OTOH, If others think it is important or a good idea to have\n> this test then I don't have any objection to the same.\n\nYes, AFAIK there were no bugs related to this; The test was proposed\nto prevent accidental future bugs.\n\nBACKGROUND\n\nAnother pending feature thread (replication of generated columns) [1]\nrequired many test combinations to confirm all the different expected\nresults which are otherwise easily accidentally broken without\nnoticing. 
This *current* thread test shares one of the same error\nmessages, which is how it was discovered missing in the first place.\n\n~~~\n\nPROPOSAL\n\nI think this is not the first time a logical replication test has been\nquestioned due mostly to concern about creeping \"costs\".\n\nHow about we create a new test file and put test cases like this one\ninto it, guarded by code like the below using PG_TEST_EXTRA [2]?\n\nDoing it this way we can have better code coverage and higher\nconfidence when we want it, but zero test cost overheads when we don't\nwant it.\n\ne.g.\n\nsrc/test/subscription/t/101_extra.pl:\n\nif (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\\bsubscription\\b/)\n{\nplan skip_all =>\n 'Due to execution costs these tests are skipped unless subscription\nis enabled in PG_TEST_EXTRA';\n}\n\n# Add tests here...\n\n======\n[1] https://www.postgresql.org/message-id/flat/B80D17B2-2C8E-4C7D-87F2-E5B4BE3C069E%40gmail.com\n[2] https://www.postgresql.org/docs/devel/regress-run.html#REGRESS-ADDITIONAL\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Aug 2024 08:54:40 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:54 AM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 8:48 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Aug 16, 2024 at 9:45 AM vignesh C <[email protected]> wrote:\n> > >\n> > > On Thu, 15 Aug 2024 at 12:55, Peter Smith <[email protected]> wrote:\n> > > >\n> > > > Hi Hackers,\n> > > >\n> > > > While reviewing another logical replication thread [1], I found an\n> > > > ERROR scenario that seems to be untested.\n> > > >\n> > > > TEST CASE: Attempt CREATE SUBSCRIPTION where the subscriber table is\n> > > > missing some expected column(s).\n> > > >\n> > > > Attached is a patch to add the missing test for this error message.\n> > >\n> > > I agree currently there is no test to hit this code.\n> > >\n> >\n> > I also don't see a test for this error condition. However, it is not\n> > clear to me how important is it to cover this error code path. This\n> > code has existed for a long time and I didn't notice any bugs related\n> > to this. There is a possibility that in the future we might break\n> > something because of a lack of this test but not sure if we want to\n> > cover every code path via tests as each additional test also has some\n> > cost. OTOH, If others think it is important or a good idea to have\n> > this test then I don't have any objection to the same.\n>\n> Yes, AFAIK there were no bugs related to this; The test was proposed\n> to prevent accidental future bugs.\n>\n> BACKGROUND\n>\n> Another pending feature thread (replication of generated columns) [1]\n> required many test combinations to confirm all the different expected\n> results which are otherwise easily accidentally broken without\n> noticing. 
This *current* thread test shares one of the same error\n> messages, which is how it was discovered missing in the first place.\n>\n> ~~~\n>\n> PROPOSAL\n>\n> I think this is not the first time a logical replication test has been\n> questioned due mostly to concern about creeping \"costs\".\n>\n> How about we create a new test file and put test cases like this one\n> into it, guarded by code like the below using PG_TEST_EXTRA [2]?\n>\n> Doing it this way we can have better code coverage and higher\n> confidence when we want it, but zero test cost overheads when we don't\n> want it.\n>\n> e.g.\n>\n> src/test/subscription/t/101_extra.pl:\n>\n> if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\\bsubscription\\b/)\n> {\n> plan skip_all =>\n> 'Due to execution costs these tests are skipped unless subscription\n> is enabled in PG_TEST_EXTRA';\n> }\n>\n> # Add tests here...\n>\n\nTo help strengthen the above proposal, here are a couple of examples\nwhere TAP tests already use this strategy to avoid tests for various\nreasons.\n\n[1] Avoids some test because of cost\n# WAL consistency checking is resource intensive so require opt-in with the\n# PG_TEST_EXTRA environment variable.\nif ( $ENV{PG_TEST_EXTRA}\n && $ENV{PG_TEST_EXTRA} =~ m/\\bwal_consistency_checking\\b/)\n{\n $node_primary->append_conf('postgresql.conf',\n 'wal_consistency_checking = all');\n}\n\n[2] Avoids some tests because of safety\nif (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\\bload_balance\\b/)\n{\n plan skip_all =>\n 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';\n}\n\n======\n[1] https://github.com/postgres/postgres/blob/master/src/test/recovery/t/027_stream_regress.pl\n[2] https://github.com/postgres/postgres/blob/master/src/interfaces/libpq/t/004_load_balance_dns.pl\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:21:48 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE SUBSCRIPTION - add missing test case" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed that there is a magic number which can be replaced by CATCACHE_MAXKEYS\nin struct cachedesc, I checked some other struct like CatCache, CatCTup, they\nall use CATCACHE_MAXKEYS.\n\nI did some search on pg-hackers, and found an old thread[0] that\nRobert proposed to change\nthe maximum number of keys for a syscache from 4 to 5.\n\nIt seems to me that the *five-key syscaches* feature is not necessary\nsince the idea was\n14 years old and we still use 4 keys without any problems(I might be wrong).\n\nHowever, in that patch, there is a change that seems reasonable.\n\n--- a/src/backend/utils/cache/syscache.c\n+++ b/src/backend/utils/cache/syscache.c\n@@ -95,7 +95,7 @@ struct cachedesc\n Oid reloid; /* OID of the\nrelation being cached */\n Oid indoid; /* OID of\nindex relation for this cache */\n int nkeys; /* # of keys\nneeded for cache lookup */\n- int key[4]; /* attribute\nnumbers of key attrs */\n+ int key[CATCACHE_MAXKEYS]; /* attribute\nnumbers of key attrs */\n int nbuckets; /* number of\nhash buckets for this cache */\n };\n\n\n[0]: https://www.postgresql.org/message-id/flat/603c8f071003281532t5e6c68eex458825485d4fcd98%40mail.gmail.com\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Thu, 15 Aug 2024 18:25:13 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "replace magic num in struct cachedesc with CATCACHE_MAXKEYS" }, { "msg_contents": "On 15.08.24 12:25, Junwang Zhao wrote:\n> I noticed that there is a magic number which can be replaced by CATCACHE_MAXKEYS\n> in struct cachedesc, I checked some other struct like CatCache, CatCTup, they\n> all use CATCACHE_MAXKEYS.\n\nThe \"syscache\" is the only user of the \"catcache\" right now. But I \nthink they are formally separate. So I don't think the \"4\" in the \nsyscache is necessarily the same as CATCACHE_MAXKEYS. For example, \nincreasing CATCACHE_MAXKEYS, hypothetically, wouldn't by itself make the \nsyscache support more than 4 keys.\n\n\n", "msg_date": "Fri, 23 Aug 2024 15:02:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: replace magic num in struct cachedesc with CATCACHE_MAXKEYS" }, { "msg_contents": "On Fri, Aug 23, 2024 at 9:02 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 15.08.24 12:25, Junwang Zhao wrote:\n> > I noticed that there is a magic number which can be replaced by CATCACHE_MAXKEYS\n> > in struct cachedesc, I checked some other struct like CatCache, CatCTup, they\n> > all use CATCACHE_MAXKEYS.\n>\n> The \"syscache\" is the only user of the \"catcache\" right now. But I\n> think they are formally separate. So I don't think the \"4\" in the\n> syscache is necessarily the same as CATCACHE_MAXKEYS. For example,\n> increasing CATCACHE_MAXKEYS, hypothetically, wouldn't by itself make the\n> syscache support more than 4 keys.\n\nThanks for your explanation.\n\nCF status changed to withdrawn.\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sat, 24 Aug 2024 10:01:37 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: replace magic num in struct cachedesc with CATCACHE_MAXKEYS" } ]
[ { "msg_contents": "Attached patch has EXPLAIN ANALYZE display the total number of\r\nprimitive index scans for all 3 kinds of index scan node. This is\r\nuseful for index scans that happen to use SAOP arrays. It also seems\r\nalmost essential to offer this kind of instrumentation for the skip\r\nscan patch [1]. Skip scan works by reusing all of the Postgres 17 work\r\n(see commit 5bf748b8) to skip over irrelevant sections of a composite\r\nindex with a low cardinality leading column, so it has all the same\r\nissues.\r\n\r\nOne reason to have this patch is to differentiate between similar\r\ncases involving simple SAOP arrays. The user will have some reasonable\r\nway of determining how a query such as this:\r\n\r\npg@regression:5432 [2070325]=# explain (analyze, buffers, costs off,\r\nsummary off)\r\nselect\r\n abalance\r\nfrom\r\n pgbench_accounts\r\nwhere\r\n aid in (1, 2, 3, 4, 5);\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├──────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Index Scan using pgbench_accounts_pkey on pgbench_accounts (actual\r\ntime=0.007..0.008 rows=5 loops=1) │\r\n│ Index Cond: (aid = ANY ('{1,2,3,4,5}'::integer[]))\r\n │\r\n│ Primitive Index Scans: 1\r\n │\r\n│ Buffers: shared hit=4\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(4 rows)\r\n\r\n...differs from a similar query, such as this:\r\n\r\npg@regression:5432 [2070325]=# explain (analyze, buffers, costs off,\r\nsummary off)\r\nselect\r\n abalance\r\nfrom\r\n pgbench_accounts\r\nwhere\r\n aid in (1000, 2000, 3000, 4000, 5000);\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├──────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Index Scan using pgbench_accounts_pkey on pgbench_accounts (actual\r\ntime=0.006..0.012 rows=5 loops=1) │\r\n│ Index Cond: (aid = ANY ('{1000,2000,3000,4000,5000}'::integer[]))\r\n │\r\n│ Primitive Index Scans: 5\r\n │\r\n│ Buffers: shared hit=20\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(4 rows)\r\n\r\nAnother reason to have this patch is consistency. We're only showing\r\nthe user the number of times we've incremented\r\npg_stat_user_tables.idx_scan in each case. The fact that\r\npg_stat_user_tables.idx_scan counts primitive index scans like this is\r\nnothing new. That issue was only documented quite recently, as part of\r\nthe Postgres 17 work, and it seems quite misleading. It's consistent,\r\nbut not necessarily nonsurprising. 
Making it readily apparent that\r\nthere is more than one primitive index scan involved here makes the\r\nissue less surprising.\r\n\r\nSkip scan\r\n---------\r\n\r\nHere's an example with this EXPLAIN ANALYZE patch applied on top of my\r\nskip scan patch [1], using the tenk1 table left behind when the\r\nstandard regression tests are run:\r\n\r\npg@regression:5432 [2070865]=# create index on tenk1 (four, stringu1);\r\nCREATE INDEX\r\npg@regression:5432 [2070865]=# explain (analyze, buffers, costs off,\r\nsummary off)\r\nselect\r\n stringu1\r\nfrom\r\n tenk1\r\nwhere\r\n -- Omitted: the leading column on \"four\"\r\n stringu1 = 'YGAAAA';\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├───────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Index Only Scan using tenk1_four_stringu1_idx on tenk1 (actual\r\ntime=0.011..0.017 rows=15 loops=1) │\r\n│ Index Cond: (stringu1 = 'YGAAAA'::name)\r\n │\r\n│ Heap Fetches: 0\r\n │\r\n│ Primitive Index Scans: 5\r\n │\r\n│ Buffers: shared hit=11\r\n │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(5 rows)\r\n\r\nNotice that there are 5 primitive index scans here. That's what I'd\r\nexpect, given that there are exactly 4 distinct \"logical subindexes\"\r\nimplied by our use of a leading column on \"four\" as the scan's skip\r\ncolumn. Under the hood, an initial primitive index scan locates the\r\nlowest \"four\" value. There are then 4 additional primitive index scans\r\nto locate the next \"four\" value (needed when the current \"four\" value\r\ngets past the value's \"stringu1 = 'YGAAAA'\" tuples).\r\n\r\nObviously, the cardinality of the leading column predicts the number\r\nof primitive index scans at runtime. But it can be much more\r\ncomplicated of a relationship than what I've shown here may suggest.\r\nSkewness matters, too. Small clusters of index tuples with unique\r\nleading column values will greatly increase column\r\ncardinality/ndistinct, without a commensurate increase in the cost of\r\na skip scan (that skips using that column). Those small clusters of\r\nunique values will appear on the same few leaf pages. It follows that\r\nthey cannot substantially increase the number of primitive scans\r\nrequired at runtime -- they'll just be read all together at once.\r\n\r\nAn important goal of my design for skip scan is that we avoid the need\r\nfor special index paths within the optimizer. Whether or not we skip\r\nis always a runtime decision (when a skippable index attribute exists\r\nat all). The optimizer needs to know about skipping for costing\r\npurposes only -- all of the required optimizer changes are in\r\nselfuncs.c. That's why you didn't see some kind of special new index\r\nscan node here -- you just saw the number of primitive index scans.\r\n\r\nMy motivation for working on this EXPLAIN ANALYZE patch is primarily\r\nskip scan. I don't think that it necessarily matters, though. I think\r\nthat this patch can be treated as independent work. It would have been\r\nweird to not bring it up skip scan even once here, though.\r\n\r\nPresentation design choices\r\n---------------------------\r\n\r\nI've used the term \"primitive index scan\" for this. 
That is the\r\nexisting user-visible terminology [2], though I suppose that that\r\ncould be revisited now.\r\n\r\nAnother quasi-arbitrary design choice: I don't break out primitive\r\nindex scans for scan nodes with multiple loops (e.g., the inner side\r\nof a nested loop join). The count of primitive scans accumulates\r\nacross index_rescan calls. I did things this way because it felt\r\nslightly more logical to follow what we show for \"Buffers\" --\r\nprimitive index scans are another physical cost. I'm certainly not\r\nopposed to doing that part differently. It doesn't have to be one or\r\nthe other (could break it out both ways), if people think that the\r\nadded verbosity is worth it.\r\n\r\nI think that we shouldn't be counting calls to _bt_first as a\r\nprimitive index scan unless they either call _bt_search or\r\n_bt_endpoint to descend the index (in the case of nbtree scans). This\r\nmeans that cases where we detect a contradictory qual in\r\n_bt_preprocess_keys should count as having zero primitive index scans.\r\nThat is technically an independent thing, though it seems far more\r\nlogical to just do it that way.\r\n\r\nActually, I think that there might be existing bugs on HEAD, with\r\nparallel index scan -- I think we might be overcounting. We're not\r\nproperly accounting for the fact that parallel workers usually don't\r\nperform a primitive index scan when their backend calls into\r\n_bt_first. I wonder if I should address that separately, as a bug\r\nfix...\r\n\r\n[1] https://www.postgresql.org/message-id/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP%2BG4bw%40mail.gmail.com\r\n[2] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ALL-INDEXES-VIEW\r\n-- see \"Note\" box\r\n--\r\nPeter Geoghegan", "msg_date": "Thu, 15 Aug 2024 15:22:32 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Showing primitive index scan count in EXPLAIN ANALYZE (for skip scan\n and SAOP scans)" }, { "msg_contents": "On Thu, 15 Aug 2024 at 21:23, Peter Geoghegan <[email protected]> wrote:\n>\n> Attached patch has EXPLAIN ANALYZE display the total number of\n> primitive index scans for all 3 kinds of index scan node. This is\n> useful for index scans that happen to use SAOP arrays. It also seems\n> almost essential to offer this kind of instrumentation for the skip\n> scan patch [1]. Skip scan works by reusing all of the Postgres 17 work\n> (see commit 5bf748b8) to skip over irrelevant sections of a composite\n> index with a low cardinality leading column, so it has all the same\n> issues.\n\nDid you notice the patch over at [0], where additional diagnostic\nEXPLAIN output for btrees is being discussed, too? I'm asking, because\nI'm not very convinced that 'primitive scans' are a useful metric\nacross all (or even: most) index AMs (e.g. BRIN probably never will\nhave a 'primitive scans' metric that differs from the loop count), so\nmaybe this would better be implemented in that framework?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/TYWPR01MB10982D24AFA7CDC273445BFF0B1DC2%40TYWPR01MB10982.jpnprd01.prod.outlook.com#9c64cf75179da8d657a5eab7c75be480\n\n\n", "msg_date": "Thu, 15 Aug 2024 22:34:28 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "Hi! 
Thank you for your work on this subject!\n\nOn 15.08.2024 22:22, Peter Geoghegan wrote:\n> Attached patch has EXPLAIN ANALYZE display the total number of\n> primitive index scans for all 3 kinds of index scan node. This is\n> useful for index scans that happen to use SAOP arrays. It also seems\n> almost essential to offer this kind of instrumentation for the skip\n> scan patch [1]. Skip scan works by reusing all of the Postgres 17 work\n> (see commit 5bf748b8) to skip over irrelevant sections of a composite\n> index with a low cardinality leading column, so it has all the same\n> issues.\nI think that it is enough to pass the IndexScanDesc parameter to the \nfunction - this saves us from having to define the planstate type twice.\n\nFor this reason, I suggest some changes that I think may improve your patch.\n\n> One reason to have this patch is to differentiate between similar\n> cases involving simple SAOP arrays. The user will have some reasonable\n> way of determining how a query such as this:\n>\n> pg@regression:5432 [2070325]=# explain (analyze, buffers, costs off,\n> summary off)\n> select\n> abalance\n> from\n> pgbench_accounts\n> where\n> aid in (1, 2, 3, 4, 5);\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n> ├──────────────────────────────────────────────────────────────────────────────────────────────────────┤\n> │ Index Scan using pgbench_accounts_pkey on pgbench_accounts (actual\n> time=0.007..0.008 rows=5 loops=1) │\n> │ Index Cond: (aid = ANY ('{1,2,3,4,5}'::integer[]))\n> │\n> │ Primitive Index Scans: 1\n> │\n> │ Buffers: shared hit=4\n> │\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (4 rows)\n>\n> ...differs from a similar query, such as this:\n>\n> pg@regression:5432 [2070325]=# explain (analyze, buffers, costs off,\n> summary off)\n> select\n> abalance\n> from\n> pgbench_accounts\n> where\n> aid in (1000, 2000, 3000, 4000, 5000);\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n> ├──────────────────────────────────────────────────────────────────────────────────────────────────────┤\n> │ Index Scan using pgbench_accounts_pkey on pgbench_accounts (actual\n> time=0.006..0.012 rows=5 loops=1) │\n> │ Index Cond: (aid = ANY ('{1000,2000,3000,4000,5000}'::integer[]))\n> │\n> │ Primitive Index Scans: 5\n> │\n> │ Buffers: shared hit=20\n> │\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (4 rows)\n>\n> Another reason to have this patch is consistency. We're only showing\n> the user the number of times we've incremented\n> pg_stat_user_tables.idx_scan in each case. The fact that\n> pg_stat_user_tables.idx_scan counts primitive index scans like this is\n> nothing new. That issue was only documented quite recently, as part of\n> the Postgres 17 work, and it seems quite misleading. It's consistent,\n> but not necessarily nonsurprising. 
Making it readily apparent that\n> there is more than one primitive index scan involved here makes the\n> issue less surprising.\n>\n> Skip scan\n> ---------\n>\n> Here's an example with this EXPLAIN ANALYZE patch applied on top of my\n> skip scan patch [1], using the tenk1 table left behind when the\n> standard regression tests are run:\n>\n> pg@regression:5432 [2070865]=# create index on tenk1 (four, stringu1);\n> CREATE INDEX\n> pg@regression:5432 [2070865]=# explain (analyze, buffers, costs off,\n> summary off)\n> select\n> stringu1\n> from\n> tenk1\n> where\n> -- Omitted: the leading column on \"four\"\n> stringu1 = 'YGAAAA';\n> ┌───────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ QUERY PLAN\n> │\n> ├───────────────────────────────────────────────────────────────────────────────────────────────────┤\n> │ Index Only Scan using tenk1_four_stringu1_idx on tenk1 (actual\n> time=0.011..0.017 rows=15 loops=1) │\n> │ Index Cond: (stringu1 = 'YGAAAA'::name)\n> │\n> │ Heap Fetches: 0\n> │\n> │ Primitive Index Scans: 5\n> │\n> │ Buffers: shared hit=11\n> │\n> └───────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (5 rows)\n>\n> Notice that there are 5 primitive index scans here. That's what I'd\n> expect, given that there are exactly 4 distinct \"logical subindexes\"\n> implied by our use of a leading column on \"four\" as the scan's skip\n> column. Under the hood, an initial primitive index scan locates the\n> lowest \"four\" value. There are then 4 additional primitive index scans\n> to locate the next \"four\" value (needed when the current \"four\" value\n> gets past the value's \"stringu1 = 'YGAAAA'\" tuples).\n>\n> Obviously, the cardinality of the leading column predicts the number\n> of primitive index scans at runtime. But it can be much more\n> complicated of a relationship than what I've shown here may suggest.\n> Skewness matters, too. Small clusters of index tuples with unique\n> leading column values will greatly increase column\n> cardinality/ndistinct, without a commensurate increase in the cost of\n> a skip scan (that skips using that column). Those small clusters of\n> unique values will appear on the same few leaf pages. It follows that\n> they cannot substantially increase the number of primitive scans\n> required at runtime -- they'll just be read all together at once.\n>\n> An important goal of my design for skip scan is that we avoid the need\n> for special index paths within the optimizer. Whether or not we skip\n> is always a runtime decision (when a skippable index attribute exists\n> at all). The optimizer needs to know about skipping for costing\n> purposes only -- all of the required optimizer changes are in\n> selfuncs.c. That's why you didn't see some kind of special new index\n> scan node here -- you just saw the number of primitive index scans.\n>\n> My motivation for working on this EXPLAIN ANALYZE patch is primarily\n> skip scan. I don't think that it necessarily matters, though. I think\n> that this patch can be treated as independent work. It would have been\n> weird to not bring it up skip scan even once here, though.\n>\n> Presentation design choices\n> ---------------------------\n>\n> I've used the term \"primitive index scan\" for this. 
That is the\n> existing user-visible terminology [2], though I suppose that that\n> could be revisited now.\n>\n> Another quasi-arbitrary design choice: I don't break out primitive\n> index scans for scan nodes with multiple loops (e.g., the inner side\n> of a nested loop join). The count of primitive scans accumulates\n> across index_rescan calls. I did things this way because it felt\n> slightly more logical to follow what we show for \"Buffers\" --\n> primitive index scans are another physical cost. I'm certainly not\n> opposed to doing that part differently. It doesn't have to be one or\n> the other (could break it out both ways), if people think that the\n> added verbosity is worth it.\n>\n> I think that we shouldn't be counting calls to _bt_first as a\n> primitive index scan unless they either call _bt_search or\n> _bt_endpoint to descend the index (in the case of nbtree scans). This\n> means that cases where we detect a contradictory qual in\n> _bt_preprocess_keys should count as having zero primitive index scans.\n> That is technically an independent thing, though it seems far more\n> logical to just do it that way.\n>\n> Actually, I think that there might be existing bugs on HEAD, with\n> parallel index scan -- I think we might be overcounting. We're not\n> properly accounting for the fact that parallel workers usually don't\n> perform a primitive index scan when their backend calls into\n> _bt_first. I wonder if I should address that separately, as a bug\n> fix...\n>\n> [1]https://www.postgresql.org/message-id/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP%2BG4bw%40mail.gmail.com\n> [2]https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ALL-INDEXES-VIEW\n> -- see \"Note\" box\n> --\n> Peter Geoghegan\n\nTo be honest, I don't quite understand how information in explain \nanalyze about the number of used primitive indexes\nwill help me improve my database system as a user. Perhaps I'm missing \nsomething.\n\nMaybe it can tell me which columns are best to create an index on or \nsomething like that?\n\nCould you explain it me, please?\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 15 Aug 2024 23:58:25 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Thu, Aug 15, 2024 at 4:34 PM Matthias van de Meent\n<[email protected]> wrote:\n> > Attached patch has EXPLAIN ANALYZE display the total number of\n> > primitive index scans for all 3 kinds of index scan node. This is\n> > useful for index scans that happen to use SAOP arrays. It also seems\n> > almost essential to offer this kind of instrumentation for the skip\n> > scan patch [1]. Skip scan works by reusing all of the Postgres 17 work\n> > (see commit 5bf748b8) to skip over irrelevant sections of a composite\n> > index with a low cardinality leading column, so it has all the same\n> > issues.\n>\n> Did you notice the patch over at [0], where additional diagnostic\n> EXPLAIN output for btrees is being discussed, too?\n\nTo be clear, for those that haven't been paying attention to that\nother thread: that other EXPLAIN patch (the one authored by Masahiro\nIkeda) surfaces information about a distinction that the skip scan\npatch renders obsolete. 
That is, the skip scan patch makes all \"Non\nKey Filter\" quals into quals that can relocate the scan to a later\nleaf page by starting a new primitive index scan. Technically, skip\nscan removes the concept that that patch calls \"Non Key Filter\"\naltogether.\n\nNote that this isn't the same thing as making that other patch\nobsolete. Skip scan renders the whole concept of \"Non Key Filter\"\nobsolete *in name only*. You might prefer to think of it as making\nthat whole concept squishy. Just because we can theoretically use the\nleading column to skip doesn't mean we actually will. It isn't an\neither/or thing. We might skip during some parts of a scan, but not\nduring other parts.\n\nIt's just not clear how to handle those sorts of fuzzy distinctions\nright now. It does seem worth pursuing, but I see no conflict.\n\n> I'm asking, because\n> I'm not very convinced that 'primitive scans' are a useful metric\n> across all (or even: most) index AMs (e.g. BRIN probably never will\n> have a 'primitive scans' metric that differs from the loop count), so\n> maybe this would better be implemented in that framework?\n\nWhat do you mean by \"within that framework\"? They seem orthogonal?\n\nIt's true that BRIN index scans will probably never show more than a\nsingle primitive index scan. I don't think that the same is true of\nany other index AM, though. Don't they all support SAOPs, albeit\nnon-natively?\n\nThe important question is: what do you want to do about cases like the\nBRIN case? Our choices are all fairly obvious choices. We can be\nselective, and *not* show this information when a set of heuristics\nindicate that it's not relevant. This is fairly straightforward to\nimplement. Which do you prefer: overall consistency, or less\nverbosity?\n\nPersonally I think that the consistency argument works in favor of\ndisplaying this information for every kind of index scan. That's a\nhopelessly subjective position, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 15 Aug 2024 17:10:01 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Thu, Aug 15, 2024 at 4:58 PM Alena Rybakina\n<[email protected]> wrote:\n> I think that it is enough to pass the IndexScanDesc parameter to the function - this saves us from having to define the planstate type twice.\n>\n> For this reason, I suggest some changes that I think may improve your patch.\n\nPerhaps it's a little better that way. I'll consider it.\n\n> To be honest, I don't quite understand how information in explain analyze about the number of used primitive indexes\n> will help me improve my database system as a user. Perhaps I'm missing something.\n\nThere is probably no typical case. The patch shows implementation\ndetails, which need to be interpreted in the context of a particular\nproblem.\n\nMaybe the problem is that some of the heuristics added by one of my\nnbtree patches interact relatively badly with some real world query.\nIt would be presumptuous of me to say that that will never happen.\n\n> Maybe it can tell me which columns are best to create an index on or something like that?\n\nThat's definitely going to be important in the case of skip scan.\nSimply showing the user that the index scan skips at all will make\nthem aware that there are missing index columns. 
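\nTo recap the tenk1 example from my first email (just a sketch -- same\nindex on (four, stringu1), same regression test data):\n\nexplain (analyze, costs off, summary off)\nselect stringu1 from tenk1 where stringu1 = 'YGAAAA';\n--   ...\n--   Primitive Index Scans: 5\n\nThe query never constrains the leading column \"four\", and yet the scan\nreports 5 separate descents of the index.\n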
That could be a sign\nthat they'd be better off not using skip scan at all, by creating a\nnew index that suits the particular query (by not having the extra\nskipped column).\n\nIt's almost always possible to beat skip scan by creating a new index\n-- whether or not it's worth the trouble/expense of maintaining a\nwhole new index is the important question. Is this particular query\nthe most important query *to the business*, for whatever reason? Or is\nhaving merely adequate performance acceptable?\n\nYour OR-to-SAOP-rewrite patch effectively makes two or more bitmap\nindex scans into one single continuous index scan. Or...does it? The\ntrue number of (primitive) index scans might be \"the same\" as it was\nbefore (without your patch), or there might really only be one\n(primitive) index scan with your patch. Or it might be anywhere in\nbetween those two extremes. Users will benefit from knowing where on\nthis continuum a particular index scan falls. It's just useful to know\nwhere time is spent.\n\nKnowing this information might even allow the user to create a new\nmulticolumn index, with columns in an order better suited to an\naffected query. It's not so much the cost of descending the index\nmultiple times that we need to worry about here, even though that's\nwhat we're talking about counting here. Varying index column order\ncould make an index scan faster by increasing locality. Locality is\nusually very important. Few index scans is a good proxy for greater\nlocality.\n\nIt's easiest to understand what I mean about locality with an example.\nAn index on (a, b) is good for queries with quals such as \"where a =\n42 and b in (1,2,3,4,5,6,7,8,9)\" if it allows such a query to only\naccess one or two leaf pages, where all of the \"b\" values of interest\nlive side by side. Obviously that won't be true if it's the other way\naround -- if the typical qual looks more like \"where b = 7 and a in\n(1,2,3,4,5,6,7,8,9)\". This is the difference between 1 primitive\nindex scan and 9 primitive index scans -- quite a big difference. Note\nthat the main cost we need to worry about here *isn't* the cost of\ndescending the index. It's mostly the cost of reading the leaf pages.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 15 Aug 2024 17:45:02 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Thu, 15 Aug 2024 at 23:10, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 4:34 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > > Attached patch has EXPLAIN ANALYZE display the total number of\n> > > primitive index scans for all 3 kinds of index scan node. This is\n> > > useful for index scans that happen to use SAOP arrays. It also seems\n> > > almost essential to offer this kind of instrumentation for the skip\n> > > scan patch [1]. 
Skip scan works by reusing all of the Postgres 17 work\n> > > (see commit 5bf748b8) to skip over irrelevant sections of a composite\n> > > index with a low cardinality leading column, so it has all the same\n> > > issues.\n> >\n> > Did you notice the patch over at [0], where additional diagnostic\n> > EXPLAIN output for btrees is being discussed, too?\n>\n> To be clear, for those that haven't been paying attention to that\n> other thread: that other EXPLAIN patch (the one authored by Masahiro\n> Ikeda) surfaces information about a distinction that the skip scan\n> patch renders obsolete. That is, the skip scan patch makes all \"Non\n> Key Filter\" quals into quals that can relocate the scan to a later\n> leaf page by starting a new primitive index scan. Technically, skip\n> scan removes the concept that that patch calls \"Non Key Filter\"\n> altogether.\n>\n> Note that this isn't the same thing as making that other patch\n> obsolete. Skip scan renders the whole concept of \"Non Key Filter\"\n> obsolete *in name only*. You might prefer to think of it as making\n> that whole concept squishy. Just because we can theoretically use the\n> leading column to skip doesn't mean we actually will. It isn't an\n> either/or thing. We might skip during some parts of a scan, but not\n> during other parts.\n\nYes.\n\n> It's just not clear how to handle those sorts of fuzzy distinctions\n> right now. It does seem worth pursuing, but I see no conflict.\n>\n> > I'm asking, because\n> > I'm not very convinced that 'primitive scans' are a useful metric\n> > across all (or even: most) index AMs (e.g. BRIN probably never will\n> > have a 'primitive scans' metric that differs from the loop count), so\n> > maybe this would better be implemented in that framework?\n>\n> What do you mean by \"within that framework\"? They seem orthogonal?\n\nWhat I meant was putting this 'primitive scans' info into the\nAM-specific explain callback as seen in the latest patch version.\n\n> It's true that BRIN index scans will probably never show more than a\n> single primitive index scan. I don't think that the same is true of\n> any other index AM, though. Don't they all support SAOPs, albeit\n> non-natively?\n\nNot always. For Bitmap Index Scan the node's functions can allow\nnon-native SAOP support (it ORs the bitmaps), but normal indexes\nwithout SAOP support won't get SAOP-functionality from the IS/IOS\nnode's infrastructure, it'll need to be added as Filter.\n\n> The important question is: what do you want to do about cases like the\n> BRIN case? Our choices are all fairly obvious choices. We can be\n> selective, and *not* show this information when a set of heuristics\n> indicate that it's not relevant. This is fairly straightforward to\n> implement. Which do you prefer: overall consistency, or less\n> verbosity?\n\nConsistency, I suppose. But adding explain attributes left and right\nin Index Scan's explain output when and where every index type needs\nthem doesn't scale, so I'd put index-specific output into it's own\nsystem (see the linked thread for more rationale). And, in this case,\nthe use case seems quite index-specific, at least for IS/IOS nodes.\n\n> Personally I think that the consistency argument works in favor of\n> displaying this information for every kind of index scan.\n\nAgreed, assuming \"this information\" is indeed shared (and useful)\nacross all AMs.\n\nThis made me notice that you add a new metric that should generally be\nexactly the same as pg_stat_all_indexes.idx_scan (you mention the\nsame). 
Can't you pull that data, instead of inventing a new place\nevery AMs needs to touch for it's metrics?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 15 Aug 2024 23:46:51 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Thu, Aug 15, 2024 at 5:47 PM Matthias van de Meent\n<[email protected]> wrote:\n> > > I'm asking, because\n> > > I'm not very convinced that 'primitive scans' are a useful metric\n> > > across all (or even: most) index AMs (e.g. BRIN probably never will\n> > > have a 'primitive scans' metric that differs from the loop count), so\n> > > maybe this would better be implemented in that framework?\n> >\n> > What do you mean by \"within that framework\"? They seem orthogonal?\n>\n> What I meant was putting this 'primitive scans' info into the\n> AM-specific explain callback as seen in the latest patch version.\n\nI don't see how that could work. This is fundamentally information\nthat is only known when the query has fully finished execution.\n\nAgain, this is already something that we track at the whole-table\nlevel, within pg_stat_user_tables.idx_scan. It's already considered\nindex AM agnostic information, in that sense.\n\n> > It's true that BRIN index scans will probably never show more than a\n> > single primitive index scan. I don't think that the same is true of\n> > any other index AM, though. Don't they all support SAOPs, albeit\n> > non-natively?\n>\n> Not always. For Bitmap Index Scan the node's functions can allow\n> non-native SAOP support (it ORs the bitmaps), but normal indexes\n> without SAOP support won't get SAOP-functionality from the IS/IOS\n> node's infrastructure, it'll need to be added as Filter.\n\nAgain, what do you want me to do about it? Almost anything is possible\nin principle, and can be implemented without great difficulty. But you\nhave to clearly say what you want, and why you want it.\n\nYeah, non-native SAOP index scans are always bitmap scans. In the case\nof GIN, there are only lossy/bitmap index scans, anyway -- can't see\nthat ever changing. In the case of GiST, we could in the future add\nnative SAOP support, so do we really want to be inconsistent in what\nwe show now? (Tom said something about that recently, in fact.)\n\nI don't hate the idea of selectively not showing this information (for\nBRIN, say). Just as I don't hate the idea of totally omitting\n\"loops=1\" in the common case where we couldn't possibly be more than\none loop in practice. It's just that I don't think that it's worth it,\non balance. Not all redundancy is bad.\n\n> > The important question is: what do you want to do about cases like the\n> > BRIN case? Our choices are all fairly obvious choices. We can be\n> > selective, and *not* show this information when a set of heuristics\n> > indicate that it's not relevant. This is fairly straightforward to\n> > implement. Which do you prefer: overall consistency, or less\n> > verbosity?\n>\n> Consistency, I suppose. But adding explain attributes left and right\n> in Index Scan's explain output when and where every index type needs\n> them doesn't scale, so I'd put index-specific output into it's own\n> system (see the linked thread for more rationale).\n\nI can't argue with that. 
I just don't think it's directly relevant.\n\n> And, in this case,\n> the use case seems quite index-specific, at least for IS/IOS nodes.\n\nI disagree. It's an existing concept, exposed in system views, and now\nin EXPLAIN ANALYZE. It's precisely that -- nothing more, nothing less.\n\nThe fact that it tends to be much more useful in the case of nbtree\n(at least for now) makes this no less true.\n\n> This made me notice that you add a new metric that should generally be\n> exactly the same as pg_stat_all_indexes.idx_scan (you mention the\n> same).\n\nI didn't imagine that that part was subtle.\n\n> Can't you pull that data, instead of inventing a new place\n> every AMs needs to touch for it's metrics?\n\nNo. At least not in a way that's scoped to a particular index scan.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 15 Aug 2024 18:34:16 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Thu, Aug 15, 2024 at 3:22 PM Peter Geoghegan <[email protected]> wrote:\n> Attached patch has EXPLAIN ANALYZE display the total number of\n> primitive index scans for all 3 kinds of index scan node.\n\nAttached is v2, which fixes bitrot.\n\nv2 also uses new terminology. EXPLAIN ANALYZE will now show \"Index\nSearches: N\", not \"Primitive Index Scans: N\". Although there is\nlimited precedent for using the primitive scan terminology, I think\nthat it's a bit unwieldy.\n\nNo other notable changes.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 27 Aug 2024 11:15:46 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 11:16 AM Peter Geoghegan <[email protected]> wrote:\n> On Thu, Aug 15, 2024 at 3:22 PM Peter Geoghegan <[email protected]> wrote:\n> > Attached patch has EXPLAIN ANALYZE display the total number of\n> > primitive index scans for all 3 kinds of index scan node.\n>\n> Attached is v2, which fixes bitrot.\n>\n> v2 also uses new terminology. EXPLAIN ANALYZE will now show \"Index\n> Searches: N\", not \"Primitive Index Scans: N\". Although there is\n> limited precedent for using the primitive scan terminology, I think\n> that it's a bit unwieldy.\n\nI do like \"Index Searches\" better than \"Primitive Index Scans.\"\n\nBut I think Matthias had some good points about this being\nbtree-specific. I'm not sure whether he was completely correct, but\nyou seemed to just dismiss his argument and say \"well, that can't be\ndone,\" which doesn't seem convincing to me at all. If, for non-btree\nindexes, the number of index searches will always be the same as the\nloop count, then surely there is some way to avoid cluttering the\noutput for non-btree indexes with output that can never be of any use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Aug 2024 13:45:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 1:45 PM Robert Haas <[email protected]> wrote:\n> I do like \"Index Searches\" better than \"Primitive Index Scans.\"\n>\n> But I think Matthias had some good points about this being\n> btree-specific.\n\nIt's not B-Tree specific -- not really. 
Any index scan that can at\nleast non-natively support ScalarArrayOps (i.e. SAOP scans that the\nexecutor manages using ExecIndexEvalArrayKeys() + bitmap scans) will\nshow information that is exactly equivalent to what B-Tree will show,\ngiven a similar ScalarArrayOps query.\n\nThere is at best one limited sense in which the information shown is\nB-Tree specific: it tends to be more interesting in the case of B-Tree\nindex scans. You cannot trivially derive the number based on the\nnumber of array keys for B-Tree scans, since nbtree is now clever\nabout not needlessly searching the index anew. It's quite possible\nthat other index AMs will in the future be enhanced in about the same\nway as nbtree was in commit 5bf748b86b, at which point even this will\nno longer apply. (Tom speculated about adding something like that to\nGiST recently).\n\n> I'm not sure whether he was completely correct, but\n> you seemed to just dismiss his argument and say \"well, that can't be\n> done,\" which doesn't seem convincing to me at all.\n\nTo be clear, any variation that you can think of *can* be done without\nmuch difficulty. I thought that Matthias was unclear about what he\neven wanted, is all.\n\nThe problem isn't that there aren't any alternatives. The problem, if\nany, is that there are a huge number of slightly different\nalternatives. There are hopelessly subjective questions about what the\nbest trade-off between redundancy and consistency is. I'm absolutely\nnot set on doing things in exactly the way I've laid out.\n\nWhat do you think should be done? Note that the number of loops\nmatters here, in addition to the number of SAOP primitive\nscans/searches. If you want to suppress the information shown in the\ntypical \"nsearches == 1\" case, what does that mean for the less common\n\"nsearches == 0\" case?\n\n> If, for non-btree\n> indexes, the number of index searches will always be the same as the\n> loop count, then surely there is some way to avoid cluttering the\n> output for non-btree indexes with output that can never be of any use.\n\nEven if we assume that a given index/index AM will never use SAOPs,\nit's still possible to show more than one \"Index Search\" per executor\nnode execution. For example, when an index scan node is the inner side\nof a nestloop join.\n\nI see value in making it obvious to users when and how\npg_stat_all_indexes.idx_scan advances. Being able to easily relate it\nto EXPLAIN ANALYZE output is useful, independent of whether or not\nSAOPs happen to be used. That's probably the single best argument in\nfavor of showing \"Index Searches: N\" unconditionally. But I'm\ncertainly not going to refuse to budge over that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Aug 2024 14:13:54 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> I see value in making it obvious to users when and how\n> pg_stat_all_indexes.idx_scan advances. Being able to easily relate it\n> to EXPLAIN ANALYZE output is useful, independent of whether or not\n> SAOPs happen to be used. That's probably the single best argument in\n> favor of showing \"Index Searches: N\" unconditionally. 
But I'm\n> certainly not going to refuse to budge over that.\n\nTBH, I'm afraid that this patch basically is exposing numbers that\nnobody but Peter Geoghegan and maybe two or three other hackers\nwill understand, and even fewer people will find useful (since the\nhow-many-primitive-scans behavior is not something users have any\ncontrol over, IIUC). I doubt that \"it lines up with\npg_stat_all_indexes.idx_scan\" is enough to justify the additional\nclutter in EXPLAIN. Maybe we should be going the other direction\nand trying to make pg_stat_all_indexes count in a less detailed but\nless surprising way, ie once per indexscan plan node invocation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Aug 2024 15:04:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 3:04 PM Tom Lane <[email protected]> wrote:\n> TBH, I'm afraid that this patch basically is exposing numbers that\n> nobody but Peter Geoghegan and maybe two or three other hackers\n> will understand, and even fewer people will find useful (since the\n> how-many-primitive-scans behavior is not something users have any\n> control over, IIUC).\n\nYou can make about the same argument against showing \"Buffers\". It's\nnot really something that you can address directly, either. It's\nhelpful only in the context of a specific problem.\n\n> I doubt that \"it lines up with\n> pg_stat_all_indexes.idx_scan\" is enough to justify the additional\n> clutter in EXPLAIN.\n\nThe scheme laid out in the patch is just a starting point for\ndiscussion. I just think that it's particularly important that we have\nthis for skip scan -- that's the part that I feel strongly about.\n\nWith skip scan in place, every scan of the kind we'd currently call a\n\"full index scan\" will be eligible to skip. Whether and to what extent\nwe actually skip is determined at runtime. We really need some way of\ndetermining how much skipping has taken place. (There are many\ndisadvantages to having a dedicated skip scan index path, which I can\ngo into if you want.)\n\n> Maybe we should be going the other direction\n> and trying to make pg_stat_all_indexes count in a less detailed but\n> less surprising way, ie once per indexscan plan node invocation.\n\nIs that less surprising, though? I think that it's more surprising.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Aug 2024 15:13:11 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Fri, 16 Aug 2024 at 00:34, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Aug 15, 2024 at 5:47 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > > > I'm asking, because\n> > > > I'm not very convinced that 'primitive scans' are a useful metric\n> > > > across all (or even: most) index AMs (e.g. BRIN probably never will\n> > > > have a 'primitive scans' metric that differs from the loop count), so\n> > > > maybe this would better be implemented in that framework?\n> > >\n> > > What do you mean by \"within that framework\"? They seem orthogonal?\n> >\n> > What I meant was putting this 'primitive scans' info into the\n> > AM-specific explain callback as seen in the latest patch version.\n>\n> I don't see how that could work. 
This is fundamentally information\n> that is only known when the query has fully finished execution.\n\nIf the counter was put into the BTScanOpaque, rather than the\nIndexScanDesc, then this could trivially be used in an explain AM\ncallback, as IndexScanDesc and ->opaque are both still available while\nbuilding the explain output. As a result, it wouldn't bloat the\nIndexScanDesc for other index AMs who might not be interested in this\nmetric.\n\n> Again, this is already something that we track at the whole-table\n> level, within pg_stat_user_tables.idx_scan. It's already considered\n> index AM agnostic information, in that sense.\n\nThat's true, but for most indexes there is a 1:1 relationship between\nloops and idx_scan counts, with ony btree behaving differently in that\nregard. Not to say it isn't an important insight for btree, but just\nthat it seems to be only relevant for btree and no other index I can\nthink of right now.\n\n> > > It's true that BRIN index scans will probably never show more than a\n> > > single primitive index scan. I don't think that the same is true of\n> > > any other index AM, though. Don't they all support SAOPs, albeit\n> > > non-natively?\n> >\n> > Not always. For Bitmap Index Scan the node's functions can allow\n> > non-native SAOP support (it ORs the bitmaps), but normal indexes\n> > without SAOP support won't get SAOP-functionality from the IS/IOS\n> > node's infrastructure, it'll need to be added as Filter.\n>\n> Again, what do you want me to do about it? Almost anything is possible\n> in principle, and can be implemented without great difficulty. But you\n> have to clearly say what you want, and why you want it.\n\nI don't want anything, or anything done about it, but your statement\nthat all index AMs support SAOP (potentially non-natively) is not\ntrue, as the non-native SAOP support is only for bitmap index scans,\nand index AMs aren't guaranteed to support bitmap index scans (e.g.\npgvector's IVFFLAT and HNSW are good examples, as they only support\namgettuple).\n\n> Yeah, non-native SAOP index scans are always bitmap scans. In the case\n> of GIN, there are only lossy/bitmap index scans, anyway -- can't see\n> that ever changing.\n\nGIN had amgettuple-based index scans until the fastinsert path was\nadded, and with some work (I don't think it needs to be a lot) the\nfeature can probably be returned to the AM. The GIN internals would\nprobably only need relatively few changes, as they already seem to\nmostly use precise TID-based scans - the only addition would be a\nfilter that prohibits returning tuples that were previously returned\nwhile scanning the fastinsert path during the normal index scan.\n\n> > And, in this case,\n> > the use case seems quite index-specific, at least for IS/IOS nodes.\n>\n> I disagree. It's an existing concept, exposed in system views, and now\n> in EXPLAIN ANALYZE. It's precisely that -- nothing more, nothing less.\n\nTo be precise, it is not precisely that, because it's a different\ncounter that an AM must update when the pgstat data is updated if it\nwants the explain output to reflect the stats counter accurately. When\nan AM forgets to update one of these metrics (or fails to realize they\nhave to both be updated) then they'd be out-of-sync. 
I'd prefer if an\nAM didn't have to account for it's statistics in more than one place.\n\n> > This made me notice that you add a new metric that should generally be\n> > exactly the same as pg_stat_all_indexes.idx_scan (you mention the\n> > same).\n>\n> I didn't imagine that that part was subtle.\n\nIt wasn't, but it was not present in the first two paragraphs of the\nmail, which I had only skimmed when I sent my first reply (as you\nmaybe could see indicated by the quote). That's why it took me until\nmy second reply to realise these were considered to be equivalent,\nespecially after I noticed the headerfile changes where you added a\nnew metric rather than pulling data from existing stats.\n\n> > Can't you pull that data, instead of inventing a new place\n> > every AMs needs to touch for it's metrics?\n>\n> No. At least not in a way that's scoped to a particular index scan.\n\nSimilar per-node counter data is pulled for the global (!) counters of\npgBufferUsage, so why would it be less possible to gather such metrics\nfor just one index's stats here? While I do think it won't be easy to\nfind a good way to integrate this into EXPLAIN's Instrumentation, I\nimagine other systems (e.g. table scans) may benefit from a better\nintegration and explanation of pgstat statistics in EXPLAIN, too. E.g.\nI'd love to be able to explain how many times which function was\ncalled in a plans' projections, and what the relevant time expendature\nfor those functions is in my plans. This data is available with\ntrack_functions enabled, and diffing in the execution nodes should\nallow this to be shown in EXPLAIN output. It'd certainly be more\nexpensive than not doing the analysis, but I believe that's what\nEXPLAIN options are for - you can show a more detailed analysis at the\ncost of increased overhead in the plan execution.\n\nAlternatively, you could update the patch so that only the field in\nIndexScan would need to be updated by the index AM by making the\nexecutor responsible to update the relation's stats at once at the end\nof the query with the data from the IndexScanDesc.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 27 Aug 2024 23:02:52 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 5:03 PM Matthias van de Meent\n<[email protected]> wrote:\n> If the counter was put into the BTScanOpaque, rather than the\n> IndexScanDesc, then this could trivially be used in an explain AM\n> callback, as IndexScanDesc and ->opaque are both still available while\n> building the explain output.\n\nRight, \"trivial\". Except in that it requires inventing a whole new\ngeneral purpose infrastructure. Meanwhile, Tom is arguing against even\nshowing this very basic information in EXPLAIN ANALYZE.You see the\nproblem?\n\n> As a result, it wouldn't bloat the\n> IndexScanDesc for other index AMs who might not be interested in this\n> metric.\n\nWhy do you persist with the idea that it isn't useful for other index\nAMs? I mean it literally works in exactly the same way! 
It's literally\nindistinguishable to users, and works in a way that's consistent with\nhistorical behavior/definitions.\n\n> I don't want anything, or anything done about it, but your statement\n> that all index AMs support SAOP (potentially non-natively) is not\n> true, as the non-native SAOP support is only for bitmap index scans,\n> and index AMs aren't guaranteed to support bitmap index scans (e.g.\n> pgvector's IVFFLAT and HNSW are good examples, as they only support\n> amgettuple).\n\nYes, there are some very minor exceptions -- index AMs where even\nnon-native SAOPs won't be used. What difference does it make?\n\n> > > And, in this case,\n> > > the use case seems quite index-specific, at least for IS/IOS nodes.\n> >\n> > I disagree. It's an existing concept, exposed in system views, and now\n> > in EXPLAIN ANALYZE. It's precisely that -- nothing more, nothing less.\n>\n> To be precise, it is not precisely that, because it's a different\n> counter that an AM must update when the pgstat data is updated if it\n> wants the explain output to reflect the stats counter accurately.\n\nWhy does that matter? I could easily move the counter to the opaque\nstruct, but that would make the patch longer and more complicated, for\nabsolutely no benefit.\n\n> When an AM forgets to update one of these metrics (or fails to realize they\n> have to both be updated) then they'd be out-of-sync. I'd prefer if an\n> AM didn't have to account for it's statistics in more than one place.\n\nI could easily change the pgstat_count_index_scan macro so that index\nAMs were forced to do both, or neither. (Not that this is a real\nproblem.)\n\n> > > Can't you pull that data, instead of inventing a new place\n> > > every AMs needs to touch for it's metrics?\n> >\n> > No. At least not in a way that's scoped to a particular index scan.\n>\n> Similar per-node counter data is pulled for the global (!) counters of\n> pgBufferUsage, so why would it be less possible to gather such metrics\n> for just one index's stats here?\n\nI told you why already, when we talked about this privately: there is\nno guarantee that it's the index indicated by the scan\ninstrumentation. For example, due to syscache lookups. There's also\nthe question of how we maintain the count for things like nestloop\njoins, where invocations of different index scan nodes may be freely\nwoven together. So it just won't work.\n\nBesides, I thought that you wanted me to use some new field in\nBTScanOpaque? But now you want me to use a global counter. Which is\nit?\n\n> While I do think it won't be easy to\n> find a good way to integrate this into EXPLAIN's Instrumentation, I\n> imagine other systems (e.g. table scans) may benefit from a better\n> integration and explanation of pgstat statistics in EXPLAIN, too. E.g.\n> I'd love to be able to explain how many times which function was\n> called in a plans' projections, and what the relevant time expendature\n> for those functions is in my plans.\n\nSeems completely unrelated.\n\n> Alternatively, you could update the patch so that only the field in\n> IndexScan would need to be updated by the index AM by making the\n> executor responsible to update the relation's stats at once at the end\n> of the query with the data from the IndexScanDesc.\n\nI don't understand why this is an alternative to the other thing that\nyou said. 
Or even why it's desirable.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Aug 2024 17:40:20 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, 27 Aug 2024 at 23:40, Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, Aug 27, 2024 at 5:03 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > If the counter was put into the BTScanOpaque, rather than the\n> > IndexScanDesc, then this could trivially be used in an explain AM\n> > callback, as IndexScanDesc and ->opaque are both still available while\n> > building the explain output.\n>\n> Right, \"trivial\". Except in that it requires inventing a whole new\n> general purpose infrastructure.\n\nWhich seems to be in the process of being invented already elsewhere.\n\n> Meanwhile, Tom is arguing against even\n> showing this very basic information in EXPLAIN ANALYZE.You see the\n> problem?\n\nI think Tom's main issue is additional clutter when running just plain\n`explain analyze`, and he'd probably be fine with it if this was gated\nbehind e.g. VERBOSE or a new \"get me the AM's view of this node\"\n-flag.\n\n> > As a result, it wouldn't bloat the\n> > IndexScanDesc for other index AMs who might not be interested in this\n> > metric.\n>\n> Why do you persist with the idea that it isn't useful for other index\n> AMs?\n\nBecause\n- there are no other index AMs that would show a count that's\ndifferent from loops (Yes, I'm explicitly ignoring bitmapscan's synthetic SAOP)\n- because there is already a place where this info is stored.\n\n> I mean it literally works in exactly the same way! It's literally\n> indistinguishable to users, and works in a way that's consistent with\n> historical behavior/definitions.\n\nHistorically, no statistics/explain-only info is stored in the\nIndexScanDesc, all data inside that struct is relevant even when\nEXPLAIN was removed from the codebase. The same is true for\nTableScanDesc\nNow, you want to add this metadata to the struct. I'm quite hesitant\nto start walking on such a surface, as it might just be a slippery\nslope.\n\n> > I don't want anything, or anything done about it, but your statement\n> > that all index AMs support SAOP (potentially non-natively) is not\n> > true, as the non-native SAOP support is only for bitmap index scans,\n> > and index AMs aren't guaranteed to support bitmap index scans (e.g.\n> > pgvector's IVFFLAT and HNSW are good examples, as they only support\n> > amgettuple).\n>\n> Yes, there are some very minor exceptions -- index AMs where even\n> non-native SAOPs won't be used. What difference does it make?\n\nThat not all index types (even: most index types) have no interesting\nperformance numbers indicated by the count.\n\n> > > > And, in this case,\n> > > > the use case seems quite index-specific, at least for IS/IOS nodes.\n> > >\n> > > I disagree. It's an existing concept, exposed in system views, and now\n> > > in EXPLAIN ANALYZE. 
It's precisely that -- nothing more, nothing less.\n> >\n> > To be precise, it is not precisely that, because it's a different\n> > counter that an AM must update when the pgstat data is updated if it\n> > wants the explain output to reflect the stats counter accurately.\n>\n> Why does that matter?\n\nBecause to me it seels like one more thing an existing index AM's\nauthor needs to needlessly add to its index.\n\n> > When an AM forgets to update one of these metrics (or fails to realize they\n> > have to both be updated) then they'd be out-of-sync. I'd prefer if an\n> > AM didn't have to account for it's statistics in more than one place.\n>\n> I could easily change the pgstat_count_index_scan macro so that index\n> AMs were forced to do both, or neither. (Not that this is a real\n> problem.)\n\nThat'd be one way to reduce the chances of accidental bugs, which\nseems like a good start.\n\n> > > > Can't you pull that data, instead of inventing a new place\n> > > > every AMs needs to touch for it's metrics?\n> > >\n> > > No. At least not in a way that's scoped to a particular index scan.\n> >\n> > Similar per-node counter data is pulled for the global (!) counters of\n> > pgBufferUsage, so why would it be less possible to gather such metrics\n> > for just one index's stats here?\n>\n> I told you why already, when we talked about this privately: there is\n> no guarantee that it's the index indicated by the scan\n> instrumentation.\n\nFor the pgstat entry in rel->pgstat_info, it is _exactly_ guaranteed\nto be the index of the IndexScan node. pgBufferUsage happens to be\nglobal, but pgstat_info is gathered at the relation level.\n\n> For example, due to syscache lookups.\n\nSure, if we're executing a query on catalogs looking at index's\nnumscans might count multiple index scans if the index scan needs to\naccess that same catalog table's data through that same catalog index,\nbut in those cases I think it's an acceptable count difference.\n\n> There's also\n> the question of how we maintain the count for things like nestloop\n> joins, where invocations of different index scan nodes may be freely\n> woven together. So it just won't work.\n\nGathering usage counters on interleaving execution nodes has been done\nfor pgBufferUsage, so I don't see how it just won't work. To me, it\nseems very realistically possible.\n\n> Besides, I thought that you wanted me to use some new field in\n> BTScanOpaque? But now you want me to use a global counter. Which is\n> it?\n\nIf you think it's important to have this info on all indexes then I'd\nprefer the pgstat approach over adding a field in IndexScanDescData.\nIf instead you think that this is primarily important to expose for\nnbtree index scans, then I'd prefer putting it in the BTSO using e.g.\nthe index AM analyze hook approach, as I think that's much more\nelegant than this.\n\n> > While I do think it won't be easy to\n> > find a good way to integrate this into EXPLAIN's Instrumentation, I\n> > imagine other systems (e.g. table scans) may benefit from a better\n> > integration and explanation of pgstat statistics in EXPLAIN, too. 
E.g.\n> > I'd love to be able to explain how many times which function was\n> > called in a plans' projections, and what the relevant time expendature\n> > for those functions is in my plans.\n>\n> Seems completely unrelated.\n\nI'd call \"exposing function's pgstat data in explain\" at least\nsomewhat related to \"exposing indexes' pgstat data in explain\".\n\n> > Alternatively, you could update the patch so that only the field in\n> > IndexScan would need to be updated by the index AM by making the\n> > executor responsible to update the relation's stats at once at the end\n> > of the query with the data from the IndexScanDesc.\n>\n> I don't understand why this is an alternative to the other thing that\n> you said. Or even why it's desirable.\n\nI think it would be preferred over requiring Index AMs to maintain 2\nfields in 2 very different locations but in the same way with the same\nupdate pattern. With the mentioned change, they'd only have to keep\nthe ISD's numscans updated with rescans (or, _bt_first/_bt_search's).\nYour alternative approach of making pgstat_count_index_scan update\nboth would probably have the same desired effect of requiring the AM\nauthor to only mind this one entry point for counting index scan\nstats.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 28 Aug 2024 01:22:20 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 7:22 PM Matthias van de Meent\n<[email protected]> wrote:\n> On Tue, 27 Aug 2024 at 23:40, Peter Geoghegan <[email protected]> wrote:\n> > Right, \"trivial\". Except in that it requires inventing a whole new\n> > general purpose infrastructure.\n>\n> Which seems to be in the process of being invented already elsewhere.\n\nNone of this stuff about implementation details really matters if\nthere isn't agreement on what actual user-visible behavior we want.\nWe're very far from that right now.\n\n> > Meanwhile, Tom is arguing against even\n> > showing this very basic information in EXPLAIN ANALYZE.You see the\n> > problem?\n>\n> I think Tom's main issue is additional clutter when running just plain\n> `explain analyze`, and he'd probably be fine with it if this was gated\n> behind e.g. VERBOSE or a new \"get me the AM's view of this node\"\n> -flag.\n\nI'm not at all confident that you're right about that.\n\n> > I mean it literally works in exactly the same way! It's literally\n> > indistinguishable to users, and works in a way that's consistent with\n> > historical behavior/definitions.\n>\n> Historically, no statistics/explain-only info is stored in the\n> IndexScanDesc, all data inside that struct is relevant even when\n> EXPLAIN was removed from the codebase. The same is true for\n> TableScanDesc\n\nPlease try to separate questions about user-visible behavior from\nquestions about the implementation. Here you're answering a point I'm\nmaking about user visible behavior with a point about where the\ncounter goes. It's just not relevant. At all.\n\n> Now, you want to add this metadata to the struct. I'm quite hesitant\n> to start walking on such a surface, as it might just be a slippery\n> slope.\n\nI don't know why you seem to assume that it's inevitable that we'll\nget a huge amount of similar EXPLAIN ANALYZE instrumentation, of which\nthis is just the start. It isn't. 
It's far from clear that even\nsomething like my patch will get in.\n\n> > Seems completely unrelated.\n>\n> I'd call \"exposing function's pgstat data in explain\" at least\n> somewhat related to \"exposing indexes' pgstat data in explain\".\n\nNot in any practical sense.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Aug 2024 19:41:43 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, 28 Aug 2024 at 01:42, Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, Aug 27, 2024 at 7:22 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > On Tue, 27 Aug 2024 at 23:40, Peter Geoghegan <[email protected]> wrote:\n> > > Right, \"trivial\". Except in that it requires inventing a whole new\n> > > general purpose infrastructure.\n> >\n> > Which seems to be in the process of being invented already elsewhere.\n>\n> None of this stuff about implementation details really matters if\n> there isn't agreement on what actual user-visible behavior we want.\n> We're very far from that right now.\n\nI'd expect the value to only be displayed for more verbose outputs\n(such as under VERBOSE, or another option, or an as of yet\nunimplemented unnamed \"get me AM-specific info\" option), and only if\nit differed from nloops or if the index scan is otherwise interesting\nand would benefit from showing this data, which would require AM\ninvolvement to check if the scan is \"interesting\".\nE.g. I think it's \"interesting\" to see only 1 index search /loop for\nan index SAOP (with array >>1 attribute, or parameterized), but not at\nall interesting to see 1 index search /loop for a scan with a single\nequality scankey on the only key attribute: if it were anything else\nthat'd be an indication of serious issues (and we'd show it, because\nit wouldn't be 1 search per loop).\n\n> > > and works in a way that's consistent with\n> > > historical behavior/definitions.\n> >\n> > Historically, no statistics/explain-only info is stored in the\n> > IndexScanDesc, all data inside that struct is relevant even when\n> > EXPLAIN was removed from the codebase. The same is true for\n> > TableScanDesc\n>\n> Please try to separate questions about user-visible behavior from\n> questions about the implementation. Here you're answering a point I'm\n> making about user visible behavior with a point about where the\n> counter goes. It's just not relevant. At all.\n\nI thought you were talking about type definitions with your\n'definitions', but apparently not. What were you referring to with\n\"consistent with historical behavior/definitions\"?\n\n> > Now, you want to add this metadata to the struct. I'm quite hesitant\n> > to start walking on such a surface, as it might just be a slippery\n> > slope.\n>\n> I don't know why you seem to assume that it's inevitable that we'll\n> get a huge amount of similar EXPLAIN ANALYZE instrumentation, of which\n> this is just the start. It isn't. 
It's far from clear that even\n> something like my patch will get in.\n\nIt doesn't have to be a huge amount, but I'd be extremely careful\nsetting a precedent where scandescs will have space reserved for data\nthat can be derived from other fields, and is also used by\napproximately 0% of queries in any production workload (except when\nautoanalyze is enabled, in which case there are other systems that\ncould probably gather this data).\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 28 Aug 2024 02:40:47 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 3:04 PM Tom Lane <[email protected]> wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > I see value in making it obvious to users when and how\n> > pg_stat_all_indexes.idx_scan advances. Being able to easily relate it\n> > to EXPLAIN ANALYZE output is useful, independent of whether or not\n> > SAOPs happen to be used. That's probably the single best argument in\n> > favor of showing \"Index Searches: N\" unconditionally. But I'm\n> > certainly not going to refuse to budge over that.\n>\n> TBH, I'm afraid that this patch basically is exposing numbers that\n> nobody but Peter Geoghegan and maybe two or three other hackers\n> will understand, and even fewer people will find useful (since the\n> how-many-primitive-scans behavior is not something users have any\n> control over, IIUC). I doubt that \"it lines up with\n> pg_stat_all_indexes.idx_scan\" is enough to justify the additional\n> clutter in EXPLAIN. Maybe we should be going the other direction\n> and trying to make pg_stat_all_indexes count in a less detailed but\n> less surprising way, ie once per indexscan plan node invocation.\n\nI kind of had that reaction too initially, but I think that was mostly\nbecause \"Primitive Index Scans\" seemed extremely unclear. I think\n\"Index Searches\" is pretty comprehensible, honestly. Why shouldn't\nsomeone be able to figure out what that means?\n\nMight make sense to restrict this to VERBOSE mode, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:25:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Tue, Aug 27, 2024 at 7:22 PM Matthias van de Meent\n<[email protected]> wrote:\n> > Besides, I thought that you wanted me to use some new field in\n> > BTScanOpaque? But now you want me to use a global counter. Which is\n> > it?\n>\n> If you think it's important to have this info on all indexes then I'd\n> prefer the pgstat approach over adding a field in IndexScanDescData.\n> If instead you think that this is primarily important to expose for\n> nbtree index scans, then I'd prefer putting it in the BTSO using e.g.\n> the index AM analyze hook approach, as I think that's much more\n> elegant than this.\n\nI agree with this analysis. 
I don't see why IndexScanDesc would ever\nbe the right place for this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:35:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:35 AM Robert Haas <[email protected]> wrote:\n> > If you think it's important to have this info on all indexes then I'd\n> > prefer the pgstat approach over adding a field in IndexScanDescData.\n> > If instead you think that this is primarily important to expose for\n> > nbtree index scans, then I'd prefer putting it in the BTSO using e.g.\n> > the index AM analyze hook approach, as I think that's much more\n> > elegant than this.\n>\n> I agree with this analysis. I don't see why IndexScanDesc would ever\n> be the right place for this.\n\nThen what do you think is the right place?\n\nThere's no simple way to get to the planstate instrumentation from\nwithin an index scan. You could do it by passing it down as an\nargument to either ambeginscan or amrescan. But, realistically, it'd\nprobably be better to just add a pointer to the instrumentation to the\nIndexScanDesc passed to amrescan. That's very close to what I've done\nalready.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:41:16 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:41 AM Peter Geoghegan <[email protected]> wrote:\n> On Wed, Aug 28, 2024 at 9:35 AM Robert Haas <[email protected]> wrote:\n> > > If you think it's important to have this info on all indexes then I'd\n> > > prefer the pgstat approach over adding a field in IndexScanDescData.\n> > > If instead you think that this is primarily important to expose for\n> > > nbtree index scans, then I'd prefer putting it in the BTSO using e.g.\n> > > the index AM analyze hook approach, as I think that's much more\n> > > elegant than this.\n> >\n> > I agree with this analysis. I don't see why IndexScanDesc would ever\n> > be the right place for this.\n>\n> Then what do you think is the right place?\n\nThe paragraph that I agreed with and quoted in my reply, and that you\nthen quoted in your reply to me, appears to me to address that exact\nquestion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:49:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:49 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 9:41 AM Peter Geoghegan <[email protected]> wrote:\n> > On Wed, Aug 28, 2024 at 9:35 AM Robert Haas <[email protected]> wrote:\n> > > > If you think it's important to have this info on all indexes then I'd\n> > > > prefer the pgstat approach over adding a field in IndexScanDescData.\n> > > > If instead you think that this is primarily important to expose for\n> > > > nbtree index scans, then I'd prefer putting it in the BTSO using e.g.\n> > > > the index AM analyze hook approach, as I think that's much more\n> > > > elegant than this.\n> > >\n> > > I agree with this analysis. 
I don't see why IndexScanDesc would ever\n> > > be the right place for this.\n> >\n> > Then what do you think is the right place?\n>\n> The paragraph that I agreed with and quoted in my reply, and that you\n> then quoted in your reply to me, appears to me to address that exact\n> question.\n\nAre you talking about adding global counters, in the style of pgBufferUsage?\n\nOr are you talking about adding it to BTSO? If it's the latter, then\nwhy isn't that at least as bad? It's just the IndexScanDesc thing, but\nwith an additional indirection.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:52:38 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:25 AM Robert Haas <[email protected]> wrote:\n> Might make sense to restrict this to VERBOSE mode, too.\n\nIf we have to make the new output appear selectively, I'd prefer to do\nit this way.\n\nThere are lots of small problems with selectively displaying less/no\ninformation based on rules applied against the number of index\nsearches/loops/whatever. While that general approach works quite well\nin the case of the \"Buffers\" instrumentation, it won't really work\nhere. After all, the base case is that there is one index search per\nindex scan node -- not zero searches.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Aug 2024 09:54:12 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:25 AM Robert Haas <[email protected]> wrote:\n> On Tue, Aug 27, 2024 at 3:04 PM Tom Lane <[email protected]> wrote:\n> I kind of had that reaction too initially, but I think that was mostly\n> because \"Primitive Index Scans\" seemed extremely unclear. I think\n> \"Index Searches\" is pretty comprehensible, honestly. Why shouldn't\n> someone be able to figure out what that means?\n\nWorth noting that Lukas Fittl made a point of prominently highlighting\nthe issue with how this works when he explained the Postgres 17 nbtree\nwork:\n\nhttps://pganalyze.com/blog/5mins-postgres-17-faster-btree-index-scans\n\nAnd no, I wasn't asked to give any input to the blog post. Lukas has a\ngeneral interest in making the system easier to understand for\nordinary users. Presumably that's why he zeroed in on this one aspect\nof the work. 
It's far from an esoteric implementation detail.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Aug 2024 13:33:32 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" }, { "msg_contents": "On Wed, 28 Aug 2024 at 15:53, Peter Geoghegan <[email protected]> wrote:\n> On Wed, Aug 28, 2024 at 9:49 AM Robert Haas <[email protected]> wrote:\n> > On Wed, Aug 28, 2024 at 9:41 AM Peter Geoghegan <[email protected]> wrote:\n> > > On Wed, Aug 28, 2024 at 9:35 AM Robert Haas <[email protected]> wrote:\n> > > > > If you think it's important to have this info on all indexes then I'd\n> > > > > prefer the pgstat approach over adding a field in IndexScanDescData.\n> > > > > If instead you think that this is primarily important to expose for\n> > > > > nbtree index scans, then I'd prefer putting it in the BTSO using e.g.\n> > > > > the index AM analyze hook approach, as I think that's much more\n> > > > > elegant than this.\n> > > >\n> > > > I agree with this analysis. I don't see why IndexScanDesc would ever\n> > > > be the right place for this.\n> > >\n> > > Then what do you think is the right place?\n> >\n> > The paragraph that I agreed with and quoted in my reply, and that you\n> > then quoted in your reply to me, appears to me to address that exact\n> > question.\n>\n> Are you talking about adding global counters, in the style of pgBufferUsage?\n\nMy pgstat approach would be that ExecIndexScan (plus ExecIOS and\nExecBitmapIS) could record the current state of relevant fields from\nnode->ss.ss_currentRelation->pgstat_info, and diff them with the\nrecorded values at the end of that node's execution, pushing the\nresult into e.g. Instrumentation; diffing which is similar to what\nhappens in InstrStartNode() and InstrStopNode() but for the relation's\npgstat_info instead of pgBufferUsage and pgWalUsage. Alternatively\nthis could happen in ExecProcNodeInstr, but it'd need some more\nspecial-casing to make sure it only addresses (index) relation scan\nnodes.\n\nBy pulling the stats directly from Relation->pgstat_info, no catalog\nindex scans are counted if they aren't also the index which is subject\nto that [Bitmap]Index[Only]Scan.\n\nIn effect, this would need no changes in AM code, as this would \"just\"\npull the data from those statistics which are already being updated in\nAM code.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 30 Aug 2024 18:00:13 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip\n scan and SAOP scans)" } ]
[ { "msg_contents": "Hi,\n\n\n--------------------------------------------------------------------------------------------------------------\nActual column names used while creation of foreign table are not allowed to\nbe an\nempty string, but when we use column_name as an empty string in OPTIONS\nduring\nCREATE or ALTER of foreign tables, it is allowed.\n\n*EXAMPLES:-*\n1) CREATE FOREIGN TABLE test_fdw(*\"\" *VARCHAR(15), id VARCHAR(5)) SERVER\nlocalhost_fdw OPTIONS (schema_name 'public', table_name 'test');\nERROR: zero-length delimited identifier at or near \"\"\"\"\nLINE 1: CREATE FOREIGN TABLE test_fdw(\"\" VARCHAR(15), id VARCHAR(5))...\n\n2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) *OPTIONS* *(column_name\n'')*, id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public',\ntable_name 'test');\nCREATE FOREIGN TABLE\n\npostgres@43832=#\\d test_fdw\n Foreign table \"public.test_fdw\"\n Column | Type | Collation | Nullable | Default | FDW\noptions\n--------+-----------------------+-----------+----------+---------+------------------\n name | character varying(15) | | | |\n*(column_name\n'')*\n id | character varying(5) | | | |\nServer: localhost_fdw\nFDW options: (schema_name 'public', table_name 'test')\n\n--------------------------------------------------------------------------------------------------------------\n\nDue to the above, when we try to simply select a remote table, the remote\nquery uses\nthe empty column name from the FDW column option and the select fails.\n\n*EXAMPLES:-*\n1) select * from test_fdw;\nERROR: zero-length delimited identifier at or near \"\"\"\"\nCONTEXT: remote SQL command: SELECT \"\", id FROM public.test\n\n2) explain verbose select * from test_fdw;\n QUERY PLAN\n--------------------------------------------------------------------------\n Foreign Scan on public.test_fdw (cost=100.00..297.66 rows=853 width=72)\n Output: name, id\n Remote SQL: SELECT \"\", id FROM public.test\n(3 rows)\n\n--------------------------------------------------------------------------------------------------------------\n\nWe can fix this issue either during fetching of FDW column option names\nwhile\nbuilding remote query or we do not allow at CREATE or ALTER of foreign\ntables itself.\nWe think it would be better to disallow adding the column_name option as\nempty in\nCREATE or ALTER itself as we do not allow empty actual column names for a\nforeign\ntable. Unless I missed to understand the purpose of allowing column_name as\nempty.\n\n*THE PROPOSED SOLUTION OUTPUT:-*\n1) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) OPTIONS *(column_name '')*,\nid VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public',\ntable_name 'test');\nERROR: column generic option name cannot be empty\n\n2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15), id VARCHAR(5)) SERVER\nlocalhost_fdw OPTIONS (schema_name 'public', table_name 'test');\nCREATE FOREIGN TABLE\n\nALTER FOREIGN TABLE test_fdw ALTER COLUMN id OPTIONS *(column_name '')*;\nERROR: column generic option name cannot be empty\n\n--------------------------------------------------------------------------------------------------------------\n\nPFA, the fix and test cases patches attached. I ran \"make check world\" and\ndo\nnot see any failure related to patches. 
But, I do see an existing failure\nt/001_pgbench_with_server.pl\n\n\nRegards,\nNishant.\n\nP.S\nThanks to Jeevan Chalke and Suraj Kharage for their inputs for the proposal.\n\nHi,--------------------------------------------------------------------------------------------------------------Actual column names used while creation of foreign table are not allowed to be anempty string, but when we use column_name as an empty string in OPTIONS duringCREATE or ALTER of foreign tables, it is allowed.EXAMPLES:-1) CREATE FOREIGN TABLE test_fdw(\"\" VARCHAR(15), id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public', table_name 'test');ERROR:  zero-length delimited identifier at or near \"\"\"\"LINE 1: CREATE FOREIGN TABLE test_fdw(\"\" VARCHAR(15), id VARCHAR(5))...2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) OPTIONS (column_name ''), id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public', table_name 'test');CREATE FOREIGN TABLEpostgres@43832=#\\d test_fdw                          Foreign table \"public.test_fdw\" Column |         Type          | Collation | Nullable | Default |   FDW options    --------+-----------------------+-----------+----------+---------+------------------ name   | character varying(15) |           |          |         | (column_name '') id     | character varying(5)  |           |          |         | Server: localhost_fdwFDW options: (schema_name 'public', table_name 'test')--------------------------------------------------------------------------------------------------------------Due to the above, when we try to simply select a remote table, the remote query usesthe empty column name from the FDW column option and the select fails.EXAMPLES:-1) select * from test_fdw;ERROR:  zero-length delimited identifier at or near \"\"\"\"CONTEXT:  remote SQL command: SELECT \"\", id FROM public.test2) explain verbose select * from test_fdw;                                QUERY PLAN                                -------------------------------------------------------------------------- Foreign Scan on public.test_fdw  (cost=100.00..297.66 rows=853 width=72)   Output: name, id   Remote SQL: SELECT \"\", id FROM public.test(3 rows)--------------------------------------------------------------------------------------------------------------We can fix this issue either during fetching of FDW column option names whilebuilding remote query or we do not allow at CREATE or ALTER of foreign tables itself.We think it would be better to disallow adding the column_name option as empty inCREATE or ALTER itself as we do not allow empty actual column names for a foreigntable. Unless I missed to understand the purpose of allowing column_name as empty.THE PROPOSED SOLUTION OUTPUT:-1) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) OPTIONS (column_name ''), id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public', table_name 'test');ERROR:  column generic option name cannot be empty2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15), id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public', table_name 'test');CREATE FOREIGN TABLEALTER FOREIGN TABLE test_fdw ALTER COLUMN id OPTIONS (column_name '');ERROR:  column generic option name cannot be empty--------------------------------------------------------------------------------------------------------------PFA, the fix and test cases patches attached. I ran \"make check world\" and donot see any failure related to patches. 
But, I do see an existing failuret/001_pgbench_with_server.plRegards,Nishant.P.SThanks to Jeevan Chalke and Suraj Kharage for their inputs for the proposal.", "msg_date": "Fri, 16 Aug 2024 14:37:40 +0530", "msg_from": "Nishant Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "[PROPOSAL] : Disallow use of empty column name in (column_name '') in\n ALTER or CREATE of foreign table." }, { "msg_contents": "Oops...\nI forgot to attach the patch. Thanks to Amul Sul for pointing that out. :)\n\n\n\nOn Fri, Aug 16, 2024 at 2:37 PM Nishant Sharma <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n> Actual column names used while creation of foreign table are not allowed\n> to be an\n> empty string, but when we use column_name as an empty string in OPTIONS\n> during\n> CREATE or ALTER of foreign tables, it is allowed.\n>\n> *EXAMPLES:-*\n> 1) CREATE FOREIGN TABLE test_fdw(*\"\" *VARCHAR(15), id VARCHAR(5)) SERVER\n> localhost_fdw OPTIONS (schema_name 'public', table_name 'test');\n> ERROR: zero-length delimited identifier at or near \"\"\"\"\n> LINE 1: CREATE FOREIGN TABLE test_fdw(\"\" VARCHAR(15), id VARCHAR(5))...\n>\n> 2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) *OPTIONS* *(column_name\n> '')*, id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public',\n> table_name 'test');\n> CREATE FOREIGN TABLE\n>\n> postgres@43832=#\\d test_fdw\n> Foreign table \"public.test_fdw\"\n> Column | Type | Collation | Nullable | Default | FDW\n> options\n>\n> --------+-----------------------+-----------+----------+---------+------------------\n> name | character varying(15) | | | | *(column_name\n> '')*\n> id | character varying(5) | | | |\n> Server: localhost_fdw\n> FDW options: (schema_name 'public', table_name 'test')\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n>\n> Due to the above, when we try to simply select a remote table, the remote\n> query uses\n> the empty column name from the FDW column option and the select fails.\n>\n> *EXAMPLES:-*\n> 1) select * from test_fdw;\n> ERROR: zero-length delimited identifier at or near \"\"\"\"\n> CONTEXT: remote SQL command: SELECT \"\", id FROM public.test\n>\n> 2) explain verbose select * from test_fdw;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Foreign Scan on public.test_fdw (cost=100.00..297.66 rows=853 width=72)\n> Output: name, id\n> Remote SQL: SELECT \"\", id FROM public.test\n> (3 rows)\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n>\n> We can fix this issue either during fetching of FDW column option names\n> while\n> building remote query or we do not allow at CREATE or ALTER of foreign\n> tables itself.\n> We think it would be better to disallow adding the column_name option as\n> empty in\n> CREATE or ALTER itself as we do not allow empty actual column names for a\n> foreign\n> table. 
Unless I missed to understand the purpose of allowing column_name\n> as empty.\n>\n> *THE PROPOSED SOLUTION OUTPUT:-*\n> 1) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15) OPTIONS *(column_name\n> '')*, id VARCHAR(5)) SERVER localhost_fdw OPTIONS (schema_name 'public',\n> table_name 'test');\n> ERROR: column generic option name cannot be empty\n>\n> 2) CREATE FOREIGN TABLE test_fdw(name VARCHAR(15), id VARCHAR(5)) SERVER\n> localhost_fdw OPTIONS (schema_name 'public', table_name 'test');\n> CREATE FOREIGN TABLE\n>\n> ALTER FOREIGN TABLE test_fdw ALTER COLUMN id OPTIONS *(column_name '')*;\n> ERROR: column generic option name cannot be empty\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n>\n> PFA, the fix and test cases patches attached. I ran \"make check world\" and\n> do\n> not see any failure related to patches. But, I do see an existing failure\n> t/001_pgbench_with_server.pl\n>\n>\n> Regards,\n> Nishant.\n>\n> P.S\n> Thanks to Jeevan Chalke and Suraj Kharage for their inputs for the\n> proposal.\n>", "msg_date": "Fri, 16 Aug 2024 16:57:42 +0530", "msg_from": "Nishant Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] : Disallow use of empty column name in (column_name\n '') in ALTER or CREATE of foreign table." }, { "msg_contents": "Nishant Sharma <[email protected]> writes:\n> Actual column names used while creation of foreign table are not allowed to\n> be an\n> empty string, but when we use column_name as an empty string in OPTIONS\n> during\n> CREATE or ALTER of foreign tables, it is allowed.\n\nIs this really a bug? The valid remote names are determined by\nwhatever underlies the FDW, and I doubt we should assume that\nSQL syntax restrictions apply to every FDW. Perhaps it would\nbe reasonable to apply such checks locally in SQL-based FDWs,\nbut I object to assuming such things at the level of\nATExecAlterColumnGenericOptions.\n\nMore generally, I don't see any meaningful difference between\nthis mistake and the more common one of misspelling the remote\ncolumn name, which is something we're not going to be able\nto check for (at least not in anything like this way). If\nyou wanted to move the ease-of-use goalposts materially,\nyou should be looking for a way to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Aug 2024 10:55:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] : Disallow use of empty column name in (column_name\n '') in ALTER or CREATE of foreign table." }, { "msg_contents": "On Fri, Aug 16, 2024 at 8:26 PM Tom Lane <[email protected]> wrote:\n>\n> Nishant Sharma <[email protected]> writes:\n> > Actual column names used while creation of foreign table are not allowed to\n> > be an\n> > empty string, but when we use column_name as an empty string in OPTIONS\n> > during\n> > CREATE or ALTER of foreign tables, it is allowed.\n>\n> Is this really a bug? The valid remote names are determined by\n> whatever underlies the FDW, and I doubt we should assume that\n> SQL syntax restrictions apply to every FDW. 
Perhaps it would\n> be reasonable to apply such checks locally in SQL-based FDWs,\n> but I object to assuming such things at the level of\n> ATExecAlterColumnGenericOptions.\n\nI agree.\n\n>\n> More generally, I don't see any meaningful difference between\n> this mistake and the more common one of misspelling the remote\n> column name, which is something we're not going to be able\n> to check for (at least not in anything like this way). If\n> you wanted to move the ease-of-use goalposts materially,\n> you should be looking for a way to do that.\n\nI think this check should be delegated to an FDW validator.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Aug 2024 16:27:17 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PROPOSAL] : Disallow use of empty column name in (column_name\n '') in ALTER or CREATE of foreign table." }, { "msg_contents": "Thanks Tom and Ashutosh for your responses!\n\nI also agree that, v1 patch set was applying SQL syntax restrictions to all\nFDWs,\nwhich is not reasonable.\n\nPFA v2 patch set. This is based on the suggestion given by Ashutosh to have\nthe\ncheck in postgres_fdw validator. As it fits to apply the SQL syntax\nrestriction only\nto FDWs which follow SQL syntax restrictions.\nWith this change, it gives hint/idea to any user using PostgresSQL with\ntheir own\nFDWs to add this check in their equivalent fdw validator.\n\nRegarding, 'empty' vs 'misspelled' remote column name, from my point of\nview,\nI see some differences between them:-\n1. SYNTAX wise - SQL syntax has restrictions for not allowing creating\ncolumn\nnames as empty. And it does not bother about, whether user used a misspelled\nword or not for the column name.\n2. IMPLEMENTATION wise - we don't need any extra info to decide that the\ncolumn\nname received from command is empty, but we do need column name infos from\nremote server to decide whether user misspelled the remote column name,\nwhich\nnot only applies to the column_name options, but maybe to the column names\nused\nwhile creating the table syntax for foreign tables if the fdw column_name\noption is\nnot added. Ex:- \"CREATE FOREIGN TABLE test_fdw(name VARCHAR(15),\nid VARCHAR(5)) ....\" - to 'name' and 'id' here.\n\nI may be wrong, but just wanted to share my thoughts on the differences.\nSo, it\ncan be considered a different issue/mistake and can be handled separately in\nanother email thread.\n\nI ran \"make check world\" and do not see any failure related to patches.\nBut, I do see\nan existing failure \"t/001_pgbench_with_server.pl\".\n\n\nRegards,\nNishant Sharma\nEnterpriseDB, Pune.\n\nOn Mon, Aug 19, 2024 at 4:27 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> On Fri, Aug 16, 2024 at 8:26 PM Tom Lane <[email protected]> wrote:\n> >\n> > Nishant Sharma <[email protected]> writes:\n> > > Actual column names used while creation of foreign table are not\n> allowed to\n> > > be an\n> > > empty string, but when we use column_name as an empty string in OPTIONS\n> > > during\n> > > CREATE or ALTER of foreign tables, it is allowed.\n> >\n> > Is this really a bug? The valid remote names are determined by\n> > whatever underlies the FDW, and I doubt we should assume that\n> > SQL syntax restrictions apply to every FDW. 
Perhaps it would\n> > be reasonable to apply such checks locally in SQL-based FDWs,\n> > but I object to assuming such things at the level of\n> > ATExecAlterColumnGenericOptions.\n>\n> I agree.\n>\n> >\n> > More generally, I don't see any meaningful difference between\n> > this mistake and the more common one of misspelling the remote\n> > column name, which is something we're not going to be able\n> > to check for (at least not in anything like this way). If\n> > you wanted to move the ease-of-use goalposts materially,\n> > you should be looking for a way to do that.\n>\n> I think this check should be delegated to an FDW validator.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>", "msg_date": "Thu, 22 Aug 2024 16:00:13 +0530", "msg_from": "Nishant Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PROPOSAL] : Disallow use of empty column name in (column_name\n '') in ALTER or CREATE of foreign table." } ]
[ { "msg_contents": "Hi,\n\nThe Assert(buffer != NULL) is placed after the buffer is accessed,\nwhich could lead to a segmentation fault before the check is executed.\n\nAttached a small patch to correct that.\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 16 Aug 2024 17:17:47 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Ineffective Assert-check in CopyMultiInsertInfoNextFreeSlot()" }, { "msg_contents": "On Fri, 16 Aug 2024 at 23:48, Amul Sul <[email protected]> wrote:\n> The Assert(buffer != NULL) is placed after the buffer is accessed,\n> which could lead to a segmentation fault before the check is executed.\n\nYeah, that's not great. Technically the Assert does not add any value\nin terms of catching bugs in the code, but it's probably useful to\nkeep it for code documentation purposes. A crash would occur even if\nthe Assert wasn't there.\n\n> Attached a small patch to correct that.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Sat, 17 Aug 2024 10:38:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ineffective Assert-check in CopyMultiInsertInfoNextFreeSlot()" } ]
[ { "msg_contents": "Hello David,\n\nyou wrote:\n\n> v4 patch attached. If nobody else wants to look at this then I'm \n> planning on pushing it soon.\n\nHad a very brief look at this bit caught my attentioon:\n\n+\t\tEEO_CASE(EEOP_HASHDATUM_NEXT32_STRICT)\n+\t\t{\n+\t\t\tFunctionCallInfo fcinfo = op->d.hashdatum.fcinfo_data;\n+\t\t\tuint32\t\texisting_hash = DatumGetUInt32(*op->resvalue);\n+\t\t\tuint32\t\thashvalue;\n+\n+\t\t\t/* combine successive hash values by rotating */\n+\t\t\texisting_hash = pg_rotate_left32(existing_hash, 1);\n+\n+\t\t\tif (fcinfo->args[0].isnull)\n+\t\t\t{\n\nIs it nec. to rotate existing_hash here before checking for isnull? \nBecause in case of isnull, isn't the result of the rotate thrown away?\n\nOr in other words, mnaybe this bit here can be moved to after the isnull \ncheck:\n\n+\t\t\t/* combine successive hash values by rotating */\n+\t\t\texisting_hash = pg_rotate_left32(existing_hash, 1);\n\n-- \nBest regards,\n\nTels\n\n\n", "msg_date": "Sat, 17 Aug 2024 13:21:03 +0200", "msg_from": "Tels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" }, { "msg_contents": "Thanks for having a look.\n\nOn Sat, 17 Aug 2024 at 23:21, Tels <[email protected]> wrote:\n> Is it nec. to rotate existing_hash here before checking for isnull?\n> Because in case of isnull, isn't the result of the rotate thrown away?\n\nYeah, I think that it's worthwhile moving that to after the isnull\ncheck so as not to waste the effort. Unfortunately doing the same in\nthe JIT code meant copying and pasting the code that emits the\nrotation code.\n\nThe attached v5 patch includes this change.\n\nDavid", "msg_date": "Mon, 19 Aug 2024 18:41:58 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" }, { "msg_contents": "On Mon, 19 Aug 2024 at 18:41, David Rowley <[email protected]> wrote:\n> The attached v5 patch includes this change.\n\nI made a few more tweaks to the comments and pushed the result.\n\nThank you both of you for having a look at this.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Aug 2024 13:40:41 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" } ]
[ { "msg_contents": "About 20 minutes ago I starting getting gitmaster pull errors:\n\n\t$ git pull ssh://[email protected]/postgresql.git\n\tssh: connect to host gitmaster.postgresql.org port 22: Connection timed\n\tout\n\tfatal: Could not read from remote repository.\n\t\n\tPlease make sure you have the correct access rights\n\tand the repository exists.\n\nPublic git seems fine:\n\n\t$ git pull https://git.postgresql.org/git/postgresql.git\n\tFrom https://git.postgresql.org/git/postgresql\n\t * branch HEAD -> FETCH_HEAD\n\tAlready up to date.\n\nCan anyone confirm or report on the cause?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 17 Aug 2024 12:41:19 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "gitmaster server problem?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> About 20 minutes ago I starting getting gitmaster pull errors:\n> ...\n> Can anyone confirm or report on the cause?\n\nYeah, for me it's hanging up at the connection establishment\nstage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\nas far as some rackspace.net machines, but gemulon itself\nisn't answering.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Aug 2024 12:49:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "I wrote:\n> Bruce Momjian <[email protected]> writes:\n>> About 20 minutes ago I starting getting gitmaster pull errors:\n>> Can anyone confirm or report on the cause?\n\n> Yeah, for me it's hanging up at the connection establishment\n> stage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\n> as far as some rackspace.net machines, but gemulon itself\n> isn't answering.\n\nssh and git pull are working again now for me.\n\nOddly, \"ping gemulon.postgresql.org\" is still showing 98%+\npacket drop rate. Not sure if that's normal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Aug 2024 13:22:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Sat, 17 Aug 2024 at 18:41, Bruce Momjian <[email protected]> wrote:\n>\n> About 20 minutes ago I starting getting gitmaster pull errors:\n>\n> $ git pull ssh://[email protected]/postgresql.git\n> ssh: connect to host gitmaster.postgresql.org port 22: Connection timed\n> out\n> fatal: Could not read from remote repository.\n>\n> Please make sure you have the correct access rights\n> and the repository exists.\n[...]\n> Can anyone confirm or report on the cause?\n\nAdditional infra that doesn't seem to work right now: mailing list\narchives [0] seem to fail at a 503 produced by Varnish Cache server:\nError 503 Backend fetch failed. Maybe more of infra is down, or\notherwise under maintenance?\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/list/pgsql-hackers/2024-08/\n\n\n", "msg_date": "Sat, 17 Aug 2024 19:59:03 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Additional infra that doesn't seem to work right now: mailing list\n> archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> Error 503 Backend fetch failed. 
Maybe more of infra is down, or\n> otherwise under maintenance?\n\nYeah, I'm seeing that too; not at the top level of the site but\nwhen you try to drill down to individual messages.\n\ncommitfest.postgresql.org isn't responding either,\nnor wiki.postgresql.org.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Aug 2024 14:15:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Sat, Aug 17, 2024 at 02:15:49PM -0400, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n> > Additional infra that doesn't seem to work right now: mailing list\n> > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> > Error 503 Backend fetch failed. Maybe more of infra is down, or\n> > otherwise under maintenance?\n> \n> Yeah, I'm seeing that too; not at the top level of the site but\n> when you try to drill down to individual messages.\n> \n> commitfest.postgresql.org isn't responding either,\n> nor wiki.postgresql.org.\n\nOkay, thanks for the confirmation. I emailed pginfra at the same time I\nposted the first email, so they are aware. I have not received a reply\nfrom them yet.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 17 Aug 2024 14:18:11 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On 8/17/24 10:22, Tom Lane wrote:\n> I wrote:\n>> Bruce Momjian <[email protected]> writes:\n>>> About 20 minutes ago I starting getting gitmaster pull errors:\n>>> Can anyone confirm or report on the cause?\n> \n>> Yeah, for me it's hanging up at the connection establishment\n>> stage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\n>> as far as some rackspace.net machines, but gemulon itself\n>> isn't answering.\n> \n> ssh and git pull are working again now for me.\n> \n> Oddly, \"ping gemulon.postgresql.org\" is still showing 98%+\n> packet drop rate. Not sure if that's normal.\n\nI don't even get that, it is still dying at:\n\n9 cored-aggr181a-1.dfw3.rackspace.net (148.62.65.5)\n\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \nAdrian Klaver\[email protected]\n\n\n\n", "msg_date": "Sat, 17 Aug 2024 11:22:29 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Sat, Aug 17, 2024 at 01:22:18PM -0400, Tom Lane wrote:\n> I wrote:\n> > Bruce Momjian <[email protected]> writes:\n> >> About 20 minutes ago I starting getting gitmaster pull errors:\n> >> Can anyone confirm or report on the cause?\n> \n> > Yeah, for me it's hanging up at the connection establishment\n> > stage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\n> > as far as some rackspace.net machines, but gemulon itself\n> > isn't answering.\n> \n> ssh and git pull are working again now for me.\n\nssh and git pull are still failing for me to gitmaster.postgresql.org.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 17 Aug 2024 14:24:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gitmaster server problem?" 
}, { "msg_contents": "On Sat, Aug 17, 2024 at 02:24:54PM -0400, Bruce Momjian wrote:\n> On Sat, Aug 17, 2024 at 01:22:18PM -0400, Tom Lane wrote:\n> > I wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > >> About 20 minutes ago I starting getting gitmaster pull errors:\n> > >> Can anyone confirm or report on the cause?\n> > \n> > > Yeah, for me it's hanging up at the connection establishment\n> > > stage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\n> > > as far as some rackspace.net machines, but gemulon itself\n> > > isn't answering.\n> > \n> > ssh and git pull are working again now for me.\n> \n> ssh and git pull are still failing for me to gitmaster.postgresql.org.\n\nJoe Conway, CC'ed, is aware of the issue and is now working on it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 17 Aug 2024 14:53:44 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Sat, Aug 17, 2024 at 02:53:44PM -0400, Bruce Momjian wrote:\n> On Sat, Aug 17, 2024 at 02:24:54PM -0400, Bruce Momjian wrote:\n> > On Sat, Aug 17, 2024 at 01:22:18PM -0400, Tom Lane wrote:\n> > > I wrote:\n> > > > Bruce Momjian <[email protected]> writes:\n> > > >> About 20 minutes ago I starting getting gitmaster pull errors:\n> > > >> Can anyone confirm or report on the cause?\n> > > \n> > > > Yeah, for me it's hanging up at the connection establishment\n> > > > stage (\"ssh -fMN gitmaster.postgresql.org\"). Traceroute gets\n> > > > as far as some rackspace.net machines, but gemulon itself\n> > > > isn't answering.\n> > > \n> > > ssh and git pull are working again now for me.\n> > \n> > ssh and git pull are still failing for me to gitmaster.postgresql.org.\n> \n> Joe Conway, CC'ed, is aware of the issue and is now working on it.\n\nJoe reports that everything should be working normally now.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 17 Aug 2024 15:04:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Joe reports that everything should be working normally now.\n\nACK - things look good from here. (I did have to restart my\nssh tunnel to gitmaster again.)\n\nThanks Joe!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Aug 2024 15:27:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On 8/17/24 13:59, Matthias van de Meent wrote:\n> On Sat, 17 Aug 2024 at 18:41, Bruce Momjian <[email protected]> wrote:\n>>\n>> About 20 minutes ago I starting getting gitmaster pull errors:\n>>\n>> $ git pull ssh://[email protected]/postgresql.git\n>> ssh: connect to host gitmaster.postgresql.org port 22: Connection timed\n>> out\n>> fatal: Could not read from remote repository.\n>>\n>> Please make sure you have the correct access rights\n>> and the repository exists.\n> [...]\n>> Can anyone confirm or report on the cause?\n> \n> Additional infra that doesn't seem to work right now: mailing list\n> archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> Error 503 Backend fetch failed. 
Maybe more of infra is down, or\n> otherwise under maintenance?\n> \n> Kind regards,\n> \n> Matthias van de Meent\n> \n> [0] https://www.postgresql.org/list/pgsql-hackers/2024-08/\n\n\nJust to close this out -- problem found and fixed about an hour ago. \nSorry for the interruption.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 17 Aug 2024 16:05:30 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Sat, 17 Aug 2024 at 22:05, Joe Conway <[email protected]> wrote:\n> Just to close this out -- problem found and fixed about an hour ago.\n> Sorry for the interruption.\n\nWhatever the problem was it seems to have returned\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:04:19 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On 8/19/24 05:04, Jelte Fennema-Nio wrote:\n> On Sat, 17 Aug 2024 at 22:05, Joe Conway <[email protected]> wrote:\n>> Just to close this out -- problem found and fixed about an hour ago.\n>> Sorry for the interruption.\n> \n> Whatever the problem was it seems to have returned\n\n\nWhat specifically? I am not seeing anything at the moment.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 07:39:27 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, 19 Aug 2024 at 13:39, Joe Conway <[email protected]> wrote:\n> What specifically? I am not seeing anything at the moment.\n\nIt seems to be working again fine. But before\nhttps://git.postgresql.org was throwing 502 and 503 errors. Depending\non the request either by Varnish or nginx.\n\n\n", "msg_date": "Mon, 19 Aug 2024 13:57:07 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, Aug 19, 2024 at 7:57 AM Jelte Fennema-Nio <[email protected]> wrote:\n> On Mon, 19 Aug 2024 at 13:39, Joe Conway <[email protected]> wrote:\n> > What specifically? I am not seeing anything at the moment.\n>\n> It seems to be working again fine. But before\n> https://git.postgresql.org was throwing 502 and 503 errors. Depending\n> on the request either by Varnish or nginx.\n\nI'm still unable to access any https://git.postgresql.org/ URL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 08:04:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, 19 Aug 2024 at 14:04, Robert Haas <[email protected]> wrote:\n> I'm still unable to access any https://git.postgresql.org/ URL.\n\nYeah it's broken for me again too.\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:05:36 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" 
}, { "msg_contents": "On Mon, 19 Aug 2024 at 14:05, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Mon, 19 Aug 2024 at 14:04, Robert Haas <[email protected]> wrote:\n> > I'm still unable to access any https://git.postgresql.org/ URL.\n>\n> Yeah it's broken for me again too.\n\nIn case the actual error is helpful (although I doubt it):\n\nError 503 Backend fetch failed\n\nBackend fetch failed\n\nGuru Meditation:\n\nXID: 1030914118\n\n________________________________\n\nVarnish cache server\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:06:22 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On 8/19/24 08:06, Jelte Fennema-Nio wrote:\n> On Mon, 19 Aug 2024 at 14:05, Jelte Fennema-Nio <[email protected]> wrote:\n>>\n>> On Mon, 19 Aug 2024 at 14:04, Robert Haas <[email protected]> wrote:\n>> > I'm still unable to access any https://git.postgresql.org/ URL.\n>>\n>> Yeah it's broken for me again too.\n> \n> In case the actual error is helpful (although I doubt it):\n> \n> Error 503 Backend fetch failed\n> \n> Backend fetch failed\n> \n> Guru Meditation:\n> \n> XID: 1030914118\n\n\nI am not seeing any errors here. nagios monitoring is showing an ipv6 \nerror though. Not the same issue as yesterday.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 08:11:36 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, Aug 19, 2024 at 9:56 PM Bruce Momjian <[email protected]> wrote:\n>\n\nas of now.\ncgit working\nhttps://git.postgresql.org/cgit/postgresql.git/\n\n\ngit not working\n( 403 Forbidden nginx)\nhttps://git.postgresql.org/git/postgresql.git/\n\n\n", "msg_date": "Mon, 19 Aug 2024 22:03:28 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, 19 Aug 2024 at 14:56, Bruce Momjian <[email protected]> wrote:\n\n> On Sat, Aug 17, 2024 at 02:15:49PM -0400, Tom Lane wrote:\n> > Matthias van de Meent <[email protected]> writes:\n> > > Additional infra that doesn't seem to work right now: mailing list\n> > > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> > > Error 503 Backend fetch failed. Maybe more of infra is down, or\n> > > otherwise under maintenance?\n> >\n> > Yeah, I'm seeing that too; not at the top level of the site but\n> > when you try to drill down to individual messages.\n> >\n> > commitfest.postgresql.org isn't responding either,\n> > nor wiki.postgresql.org.\n>\n> Okay, thanks for the confirmation. I emailed pginfra at the same time I\n> posted the first email, so they are aware. I have not received a reply\n> from them yet.\n>\n\nJoe replied.\n\nI've just checked again myself and I don't see any problems with the sites\nmentioned from a user perspective, but there are some non-obvious issues\nbeing reported by Nagios on git.postgresql.org (which is not the same as\ngitmaster). 
We'll dig in some more.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nPGDay UK 2024, 11th September, London: https://2024.pgday.uk/\n\nOn Mon, 19 Aug 2024 at 14:56, Bruce Momjian <[email protected]> wrote:On Sat, Aug 17, 2024 at 02:15:49PM -0400, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n> > Additional infra that doesn't seem to work right now: mailing list\n> > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> > Error 503 Backend fetch failed. Maybe more of infra is down, or\n> > otherwise under maintenance?\n> \n> Yeah, I'm seeing that too; not at the top level of the site but\n> when you try to drill down to individual messages.\n> \n> commitfest.postgresql.org isn't responding either,\n> nor wiki.postgresql.org.\n\nOkay, thanks for the confirmation.  I emailed pginfra at the same time I\nposted the first email, so they are aware.  I have not received a reply\nfrom them yet.Joe replied.I've just checked again myself and I don't see any problems with the sites mentioned from a user perspective, but there are some non-obvious issues being reported by Nagios on git.postgresql.org (which is not the same as gitmaster). We'll dig in some more.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.comPGDay UK 2024, 11th September, London: https://2024.pgday.uk/", "msg_date": "Mon, 19 Aug 2024 15:07:19 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, Aug 19, 2024 at 3:56 PM Tom Lane <[email protected]> wrote:\n\n> Matthias van de Meent <[email protected]> writes:\n> > Additional infra that doesn't seem to work right now: mailing list\n> > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> > Error 503 Backend fetch failed. Maybe more of infra is down, or\n> > otherwise under maintenance?\n>\n> Yeah, I'm seeing that too; not at the top level of the site but\n> when you try to drill down to individual messages.\n>\n> commitfest.postgresql.org isn't responding either,\n> nor wiki.postgresql.org.\n>\n\nAre you still seeing these issues? It works fine both from here and from\nour monitoring, but there could be something source-network-specific\nmaybe....\n\nThe git.postgresql.org one specifically looks a lot like a DOS. It may not\nbe an intentional DOS, but the git server side is very inefficient and can\nrapidly turn valid attempts into DOSes if they parallellize too much...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Aug 19, 2024 at 3:56 PM Tom Lane <[email protected]> wrote:Matthias van de Meent <[email protected]> writes:\n> Additional infra that doesn't seem to work right now: mailing list\n> archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> Error 503 Backend fetch failed. Maybe more of infra is down, or\n> otherwise under maintenance?\n\nYeah, I'm seeing that too; not at the top level of the site but\nwhen you try to drill down to individual messages.\n\ncommitfest.postgresql.org isn't responding either,\nnor wiki.postgresql.org.Are you still seeing these issues? It works fine both from here and from our monitoring, but there could be something source-network-specific maybe....The git.postgresql.org one specifically looks a lot like a DOS. 
It may not be an intentional DOS, but the git server side is very inefficient and can rapidly turn valid attempts into DOSes if they parallellize too much...--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Mon, 19 Aug 2024 16:19:05 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, Aug 19, 2024 at 08:11:36AM -0400, Joe Conway wrote:\n> On 8/19/24 08:06, Jelte Fennema-Nio wrote:\n> > On Mon, 19 Aug 2024 at 14:05, Jelte Fennema-Nio <[email protected]> wrote:\n> > > \n> > > On Mon, 19 Aug 2024 at 14:04, Robert Haas <[email protected]> wrote:\n> > > > I'm still unable to access any https://git.postgresql.org/ URL.\n> > > \n> > > Yeah it's broken for me again too.\n> > \n> > In case the actual error is helpful (although I doubt it):\n> > \n> > Error 503 Backend fetch failed\n> > \n> > Backend fetch failed\n> > \n> > Guru Meditation:\n> > \n> > XID: 1030914118\n> \n> I am not seeing any errors here. nagios monitoring is showing an ipv6 error\n> though. Not the same issue as yesterday.\n\ngitmaster.postgresql.org is working fine for me right now.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:24:32 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 3:56 PM Tom Lane <[email protected]> wrote:\n>> Yeah, I'm seeing that too; not at the top level of the site but\n>> when you try to drill down to individual messages.\n>> commitfest.postgresql.org isn't responding either,\n>> nor wiki.postgresql.org.\n\n> Are you still seeing these issues? It works fine both from here and from\n> our monitoring, but there could be something source-network-specific\n> maybe....\n\nRight at the moment,\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git\n\nis failing for me with \"Backend fetch failed\". However, the mail\narchives, commitfest.p.o, and wiki.p.o seem to be responding normally,\nas is gitmaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:27:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, 19 Aug 2024 at 15:27, jian he <[email protected]> wrote:\n\n> On Mon, Aug 19, 2024 at 9:56 PM Bruce Momjian <[email protected]> wrote:\n> >\n>\n> as of now.\n> cgit working\n> https://git.postgresql.org/cgit/postgresql.git/\n\n\nYes.\n\n\n>\n>\n>\n> git not working\n> ( 403 Forbidden nginx)\n> https://git.postgresql.org/git/postgresql.git/\n\n\n From a browser, that's what I'd expect. 
You should be able to clone from it\nthough:\n\ngit clone https://git.postgresql.org/git/postgresql.git\nCloning into 'postgresql'...\nremote: Enumerating objects: 42814, done.\nremote: Counting objects: 100% (42814/42814), done.\nremote: Compressing objects: 100% (15585/15585), done.\nremote: Total 1014138 (delta 34190), reused 33980 (delta 26978),\npack-reused 971324\nReceiving objects: 100% (1014138/1014138), 343.01 MiB | 15.49 MiB/s, done.\nResolving deltas: 100% (873517/873517), done.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nPGDay UK 2024, 11th September, London: https://2024.pgday.uk/\n\nOn Mon, 19 Aug 2024 at 15:27, jian he <[email protected]> wrote:On Mon, Aug 19, 2024 at 9:56 PM Bruce Momjian <[email protected]> wrote:\n>\n\nas of now.\ncgit working\nhttps://git.postgresql.org/cgit/postgresql.git/Yes. \n\n\ngit not working\n( 403 Forbidden nginx)\nhttps://git.postgresql.org/git/postgresql.git/From a browser, that's what I'd expect. You should be able to clone from it though:git clone https://git.postgresql.org/git/postgresql.gitCloning into 'postgresql'...remote: Enumerating objects: 42814, done.remote: Counting objects: 100% (42814/42814), done.remote: Compressing objects: 100% (15585/15585), done.remote: Total 1014138 (delta 34190), reused 33980 (delta 26978), pack-reused 971324Receiving objects: 100% (1014138/1014138), 343.01 MiB | 15.49 MiB/s, done.Resolving deltas: 100% (873517/873517), done. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.comPGDay UK 2024, 11th September, London: https://2024.pgday.uk/", "msg_date": "Mon, 19 Aug 2024 15:30:33 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, 19 Aug 2024 at 15:29, Tom Lane <[email protected]> wrote:\n\n> Magnus Hagander <[email protected]> writes:\n> > On Mon, Aug 19, 2024 at 3:56 PM Tom Lane <[email protected]> wrote:\n> >> Yeah, I'm seeing that too; not at the top level of the site but\n> >> when you try to drill down to individual messages.\n> >> commitfest.postgresql.org isn't responding either,\n> >> nor wiki.postgresql.org.\n>\n> > Are you still seeing these issues? It works fine both from here and from\n> > our monitoring, but there could be something source-network-specific\n> > maybe....\n>\n> Right at the moment,\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git\n>\n> is failing for me with \"Backend fetch failed\".\n\n\nThat's working fine for me now, however Magnus did just give the box some\nmore CPUs.\n\nWe're also considering rate limiting by IP on that server, as it seems to\nbe getting hammered with requests from China.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nPGDay UK 2024, 11th September, London: https://2024.pgday.uk/\n\nOn Mon, 19 Aug 2024 at 15:29, Tom Lane <[email protected]> wrote:Magnus Hagander <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 3:56 PM Tom Lane <[email protected]> wrote:\n>> Yeah, I'm seeing that too; not at the top level of the site but\n>> when you try to drill down to individual messages.\n>> commitfest.postgresql.org isn't responding either,\n>> nor wiki.postgresql.org.\n\n> Are you still seeing these issues? 
It works fine both from here and from\n> our monitoring, but there could be something source-network-specific\n> maybe....\n\nRight at the moment,\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git\n\nis failing for me with \"Backend fetch failed\".  That's working fine for me now, however Magnus did just give the box some more CPUs.We're also considering rate limiting by IP on that server, as it seems to be getting hammered with requests from China. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.comPGDay UK 2024, 11th September, London: https://2024.pgday.uk/", "msg_date": "Mon, 19 Aug 2024 15:32:15 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On Mon, Aug 19, 2024 at 4:28 PM Dave Page <[email protected]> wrote:\n\n>\n>\n> On Mon, 19 Aug 2024 at 14:56, Bruce Momjian <[email protected]> wrote:\n>\n>> On Sat, Aug 17, 2024 at 02:15:49PM -0400, Tom Lane wrote:\n>> > Matthias van de Meent <[email protected]> writes:\n>> > > Additional infra that doesn't seem to work right now: mailing list\n>> > > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n>> > > Error 503 Backend fetch failed. Maybe more of infra is down, or\n>> > > otherwise under maintenance?\n>> >\n>> > Yeah, I'm seeing that too; not at the top level of the site but\n>> > when you try to drill down to individual messages.\n>> >\n>> > commitfest.postgresql.org isn't responding either,\n>> > nor wiki.postgresql.org.\n>>\n>> Okay, thanks for the confirmation. I emailed pginfra at the same time I\n>> posted the first email, so they are aware. I have not received a reply\n>> from them yet.\n>>\n>\n> Joe replied.\n>\n> In the context of replying it's also worth noting that CCing the same\nemail to both -hackers and -www ensures it gets caught in moderation, and\nwill take longer to reach someone.\n\n/Magnus\n\nOn Mon, Aug 19, 2024 at 4:28 PM Dave Page <[email protected]> wrote:On Mon, 19 Aug 2024 at 14:56, Bruce Momjian <[email protected]> wrote:On Sat, Aug 17, 2024 at 02:15:49PM -0400, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n> > Additional infra that doesn't seem to work right now: mailing list\n> > archives [0] seem to fail at a 503 produced by Varnish Cache server:\n> > Error 503 Backend fetch failed. Maybe more of infra is down, or\n> > otherwise under maintenance?\n> \n> Yeah, I'm seeing that too; not at the top level of the site but\n> when you try to drill down to individual messages.\n> \n> commitfest.postgresql.org isn't responding either,\n> nor wiki.postgresql.org.\n\nOkay, thanks for the confirmation.  I emailed pginfra at the same time I\nposted the first email, so they are aware.  I have not received a reply\nfrom them yet.Joe replied.In the context of replying it's also worth noting that CCing the same email to both -hackers and -www ensures it gets caught in moderation, and will take longer to reach someone./Magnus", "msg_date": "Mon, 19 Aug 2024 16:39:10 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "On 8/19/24 07:27, Tom Lane wrote:\n> Magnus Hagander <[email protected]> writes:\n\n> Right at the moment,\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git\n> \n> is failing for me with \"Backend fetch failed\". 
However, the mail\n\nI just tried it and it worked.\n\n> archives, commitfest.p.o, and wiki.p.o seem to be responding normally,\n> as is gitmaster.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n-- \nAdrian Klaver\[email protected]\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 07:39:27 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> In the context of replying it's also worth noting that CCing the same\n> email to both -hackers and -www ensures it gets caught in moderation, and\n> will take longer to reach someone.\n\nYeah, I think the double CC on this thread is my fault --- I\nmomentarily forgot about that rule.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:51:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gitmaster server problem?" } ]
[ { "msg_contents": "Hi,\n\nThe CompilerWarnings cspluspluscheck started failing recently.\n\n1. LLVM library version issue. See commit message for details.\n2. pg_verify_backup.h now uses simplehash.h, which references\npg_fatal(), which nobody declared.\n\nI'm not sure why the first one started happening at the commit\n(aa2d6b15) that caused the second one, I feel like I'm missing\nsomething...\n\nhttps://github.com/postgres/postgres/commits/master/\n\nAnyway, these patches turn it green for me. Please see attached.\n\nhttps://cirrus-ci.com/task/5014011601747968", "msg_date": "Mon, 19 Aug 2024 00:10:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "CI cpluspluscheck failures" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:10 AM Thomas Munro <[email protected]> wrote:\n> I'm not sure why the first one started happening at the commit\n> (aa2d6b15) that caused the second one, I feel like I'm missing\n> something...\n\nWhat I was missing is that the LLVM warnings were already present, but\nnot causing failures because they are \"system\" headers. So I'll go\nand push the fix for aa2d6b15, and wait longer for comment on the LLVM\none which has been wrong, but green, for a while.\n\n\n", "msg_date": "Mon, 19 Aug 2024 08:02:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CI cpluspluscheck failures" }, { "msg_contents": "On Mon, Aug 19, 2024 at 1:30 AM Thomas Munro <[email protected]> wrote:\n>\n> On Mon, Aug 19, 2024 at 12:10 AM Thomas Munro <[email protected]> wrote:\n> > I'm not sure why the first one started happening at the commit\n> > (aa2d6b15) that caused the second one, I feel like I'm missing\n> > something...\n\nThanks, Thomas, for addressing the missing include.\n\nAdding logging.h to the pg_verifybackup.h file makes it redundant in\npg_verifybackup.c. The attached patch proposes removing this redundant\ninclude from pg_verifybackup.c, along with a few other unnecessary\nincludes.\n\nRegards,\nAmul", "msg_date": "Tue, 20 Aug 2024 10:23:58 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CI cpluspluscheck failures" } ]
[ { "msg_contents": "The Cirrus CI for REL_16_STABLE and REL_15_STABLE for macOS is \napparently broken right now. Here is a log example:\n\n[13:57:11.305] sh src/tools/ci/ci_macports_packages.sh \\\n[13:57:11.305] ccache \\\n[13:57:11.305] icu \\\n[13:57:11.305] kerberos5 \\\n[13:57:11.305] lz4 \\\n[13:57:11.305] meson \\\n[13:57:11.305] openldap \\\n[13:57:11.305] openssl \\\n[13:57:11.305] p5.34-io-tty \\\n[13:57:11.305] p5.34-ipc-run \\\n[13:57:11.305] python312 \\\n[13:57:11.305] tcl \\\n[13:57:11.305] zstd\n[13:57:11.325] macOS major version: 14\n[13:57:12.554] MacPorts package URL: \nhttps://github.com/macports/macports-base/releases/download/v2.9.3/MacPorts-2.9.3-14-Sonoma.pkg\n[14:01:37.252] installer: Package name is MacPorts\n[14:01:37.252] installer: Installing at base path /\n[14:01:37.252] installer: The install was successful.\n[14:01:37.257]\n[14:01:37.257] real\t4m23.837s\n[14:01:37.257] user\t0m0.385s\n[14:01:37.257] sys\t0m0.339s\n[14:01:37.282] macportsuser root\n[14:01:37.431] Error: /opt/local/bin/port: Failed to initialize \nMacPorts, sqlite error: attempt to write a readonly database (8) while \nexecuting query: CREATE INDEX registry.snapshot_file_id ON \nsnapshot_files(id)\n[14:01:37.599] Error: No ports matched the given expression\n\n\nREL_17_STABLE and up are working.\n\nI know there have been some changes recently to manage the OS version \nchange. Are these older branches expected to work?\n\n\n", "msg_date": "Sun, 18 Aug 2024 16:07:48 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On 8/18/24 16:07, Peter Eisentraut wrote:\n> The Cirrus CI for REL_16_STABLE and REL_15_STABLE for macOS is\n> apparently broken right now.  Here is a log example:\n> \n> [13:57:11.305] sh src/tools/ci/ci_macports_packages.sh \\\n> [13:57:11.305]   ccache \\\n> [13:57:11.305]   icu \\\n> [13:57:11.305]   kerberos5 \\\n> [13:57:11.305]   lz4 \\\n> [13:57:11.305]   meson \\\n> [13:57:11.305]   openldap \\\n> [13:57:11.305]   openssl \\\n> [13:57:11.305]   p5.34-io-tty \\\n> [13:57:11.305]   p5.34-ipc-run \\\n> [13:57:11.305]   python312 \\\n> [13:57:11.305]   tcl \\\n> [13:57:11.305]   zstd\n> [13:57:11.325] macOS major version: 14\n> [13:57:12.554] MacPorts package URL:\n> https://github.com/macports/macports-base/releases/download/v2.9.3/MacPorts-2.9.3-14-Sonoma.pkg\n> [14:01:37.252] installer: Package name is MacPorts\n> [14:01:37.252] installer: Installing at base path /\n> [14:01:37.252] installer: The install was successful.\n> [14:01:37.257]\n> [14:01:37.257] real    4m23.837s\n> [14:01:37.257] user    0m0.385s\n> [14:01:37.257] sys    0m0.339s\n> [14:01:37.282] macportsuser root\n> [14:01:37.431] Error: /opt/local/bin/port: Failed to initialize\n> MacPorts, sqlite error: attempt to write a readonly database (8) while\n> executing query: CREATE INDEX registry.snapshot_file_id ON\n> snapshot_files(id)\n> [14:01:37.599] Error: No ports matched the given expression\n> \n> \n> REL_17_STABLE and up are working.\n> \n\nAre they? I see exactly the same failure on all branches, including 17\nand master. 
For here this is on master:\n\nhttps://cirrus-ci.com/task/5918517050998784\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sun, 18 Aug 2024 20:04:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Mon, Aug 19, 2024 at 2:07 AM Peter Eisentraut <[email protected]> wrote:\n> [14:01:37.431] Error: /opt/local/bin/port: Failed to initialize\n> MacPorts, sqlite error: attempt to write a readonly database (8) while\n> executing query: CREATE INDEX registry.snapshot_file_id ON\n> snapshot_files(id)\n\nHmmm. Basically there is a loop-back disk device that get cached\nbetween runs (same technique as ccache), on which macports is\ninstalled. This makes it ready to test stuff fast, with all the\ndependencies ready and being updated only when they need to be\nupgraded. It is both clever and scary due to the path dependency...\n(Cf other OSes, where we have a base image with all the right packages\ninstalled already, no \"memory\" between runs like that.)\n\nThe macOS major version and hash of the MacPorts package install\nscript are in the cache key for that (see 64c39bd5), so a change to\nthat script would make a totally fresh installation, and hopefully\nwork. I will look into that, but it would also be nice to understand\nhow it go itself into that state so we can avoid it...\n\n> I know there have been some changes recently to manage the OS version\n> change. Are these older branches expected to work?\n\nYes.\n\n\n", "msg_date": "Mon, 19 Aug 2024 07:52:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Mon, Aug 19, 2024 at 7:52 AM Thomas Munro <[email protected]> wrote:\n> The macOS major version and hash of the MacPorts package install\n> script are in the cache key for that (see 64c39bd5), so a change to\n> that script would make a totally fresh installation, and hopefully\n> work. I will look into that, but it would also be nice to understand\n> how it go itself into that state so we can avoid it...\n\nOh, it already is a cache miss and thus a fresh installation, in\nTomas's example. I can reproduce that in my own Github account by\nmaking a trivial change to ci_macports_packages.sh to I get a cache\nmiss too. It appears to install macports just fine, and then a later\ncommand fails in MacPort's sqlite package registry database, \"attempt\nto write a readonly database\". At a wild guess, what has changed here\nto trigger this new condition is that MacPorts has noticed a new\nstable release of itself available and taken some new code path\nrelated to upgrading. No idea why it thinks its package database is\nread-only, though... looking...\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:01:55 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Oh, it already is a cache miss and thus a fresh installation, in\n> Tomas's example. I can reproduce that in my own Github account by\n> making a trivial change to ci_macports_packages.sh to I get a cache\n> miss too. It appears to install macports just fine, and then a later\n> command fails in MacPort's sqlite package registry database, \"attempt\n> to write a readonly database\". 
At a wild guess, what has changed here\n> to trigger this new condition is that MacPorts has noticed a new\n> stable release of itself available and taken some new code path\n> related to upgrading. No idea why it thinks its package database is\n> read-only, though... looking...\n\nIndeed, MacPorts seems to have recently put out a 2.10.1 release.\nThis is not specific to the CI installation though. What I saw on\nmy laptop, following my usual process for a MacPorts update, was:\n\n$ sudo port -v selfupdate\n... reported installing 2.10.1 ...\n$ port outdated # to see what will be upgraded\n... failed with \"write a readonly database\" error!\n$ sudo port upgrade outdated\n... it's busily rebuilding a pile o' stuff ...\n\nI didn't think to try it, but I bet \"sudo port outdated\" would\nhave worked. I'm also betting that something in the CI update\nrecipe is taking the same shortcut of omitting \"sudo\". That\nworks in the normal case, but seemingly not after a MacPorts base\nupdate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 18:43:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Mon, Aug 19, 2024 at 10:01 AM Thomas Munro <[email protected]> wrote:\n> Oh, it already is a cache miss and thus a fresh installation, in\n> Tomas's example. I can reproduce that in my own Github account by\n> making a trivial change to ci_macports_packages.sh to I get a cache\n> miss too. It appears to install macports just fine, and then a later\n> command fails in MacPort's sqlite package registry database, \"attempt\n> to write a readonly database\". At a wild guess, what has changed here\n> to trigger this new condition is that MacPorts has noticed a new\n> stable release of itself available and taken some new code path\n> related to upgrading. No idea why it thinks its package database is\n> read-only, though... looking...\n\nI still don't know what's happening. In case it helps someone else\nsee it, the error comes from \"sudo port unsetrequested installed\".\nBut in any case, switching to 2.10.1 seems to do the trick. See\nattached.", "msg_date": "Mon, 19 Aug 2024 10:45:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> I still don't know what's happening. In case it helps someone else\n> see it, the error comes from \"sudo port unsetrequested installed\".\n> But in any case, switching to 2.10.1 seems to do the trick. See\n> attached.\n\nInteresting. Now that I've finished \"sudo port upgrade outdated\",\nmy laptop is back to a state where unprivileged \"port outdated\"\nis successful.\n\nWhat this smells like is that MacPorts has to do some kind of database\nupdate as a result of its major version change, and there are code\npaths that are not expecting that to get invoked. It makes sense\nthat unprivileged \"port outdated\" would fail to perform the database\nupdate, but not quite as much for \"sudo port unsetrequested installed\"\nto fail. 
That case seems like a MacPorts bug; maybe worth filing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 18:55:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Mon, Aug 19, 2024 at 10:55 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > I still don't know what's happening. In case it helps someone else\n> > see it, the error comes from \"sudo port unsetrequested installed\".\n> > But in any case, switching to 2.10.1 seems to do the trick. See\n> > attached.\n>\n> Interesting. Now that I've finished \"sudo port upgrade outdated\",\n> my laptop is back to a state where unprivileged \"port outdated\"\n> is successful.\n>\n> What this smells like is that MacPorts has to do some kind of database\n> update as a result of its major version change, and there are code\n> paths that are not expecting that to get invoked. It makes sense\n> that unprivileged \"port outdated\" would fail to perform the database\n> update, but not quite as much for \"sudo port unsetrequested installed\"\n> to fail. That case seems like a MacPorts bug; maybe worth filing?\n\nHuh. Right, interesting theory. OK, I'll push that patch to use\n2.10.1 anyway, and report what we observed to see what they say.\n\nIt's funny that when I had an automatic \"pick latest\" thing, it broke\non their beta release, but when I pinned it to 2.9.3, it broke when\nthey made a new stable release anyway. A middle way would be to use a\npattern that skips alpha/beta/etc...\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:44:39 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 10:55 AM Tom Lane <[email protected]> wrote:\n>> What this smells like is that MacPorts has to do some kind of database\n>> update as a result of its major version change, and there are code\n>> paths that are not expecting that to get invoked. It makes sense\n>> that unprivileged \"port outdated\" would fail to perform the database\n>> update, but not quite as much for \"sudo port unsetrequested installed\"\n>> to fail. That case seems like a MacPorts bug; maybe worth filing?\n\n> Huh. Right, interesting theory. OK, I'll push that patch to use\n> 2.10.1 anyway, and report what we observed to see what they say.\n\nActually, it's a bug that it's trying to force an upgrade on us, isn't\nit? Or does the CI recipe include something that asks for that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 19:50:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "I wrote:\n> Interesting. Now that I've finished \"sudo port upgrade outdated\",\n> my laptop is back to a state where unprivileged \"port outdated\"\n> is successful.\n\nI confirmed on another machine that, immediately after \"sudo port\nselfupdate\" from 2.9.3 to 2.10.1, I get\n\n$ port outdated\nsqlite error: attempt to write a readonly database (8) while executing query: CREATE INDEX registry.snapshot_file_id ON snapshot_files(id)\n\nbut if I do \"sudo port outdated\", I get the right thing:\n\n$ sudo port outdated\nThe following installed ports are outdated:\nbash 5.2.26_0 < 5.2.32_0 \nbind9 9.18.27_0 < 9.20.0_3 \n... 
etc etc ...\n\nand then once I've done that, unprivileged \"port outdated\" works\nagain:\n\n$ port outdated\nThe following installed ports are outdated:\nbash 5.2.26_0 < 5.2.32_0 \nbind9 9.18.27_0 < 9.20.0_3 \n... yadda yadda ...\n\nSo there's definitely some behind-the-back updating going on there.\nI'm not sure why the CI script should trigger that though. It\ndoes do a couple of \"port\" calls without \"sudo\", but not in places\nwhere the state should be only partially upgraded.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 20:51:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n> I'm not sure why the CI script should trigger that though. It\n> does do a couple of \"port\" calls without \"sudo\", but not in places\n> where the state should be only partially upgraded.\n\nOooh, I think I see where we missed a sudo:\n\nif [ -n \"$(port -q installed installed)\" ] ; then\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:53:09 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 12:51 PM Tom Lane <[email protected]> wrote:\n>> I'm not sure why the CI script should trigger that though. It\n>> does do a couple of \"port\" calls without \"sudo\", but not in places\n>> where the state should be only partially upgraded.\n\n> Oooh, I think I see where we missed a sudo:\n\n> if [ -n \"$(port -q installed installed)\" ] ; then\n\nI wondered about that too, but you should still have a plain 2.9.3\ninstallation at that point. AFAICT you'd only be at risk between\n\n sudo port selfupdate\n sudo port upgrade outdated\n\nand there's nothing but a comment there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 21:38:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On 19.08.24 01:44, Thomas Munro wrote:\n> On Mon, Aug 19, 2024 at 10:55 AM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> I still don't know what's happening. In case it helps someone else\n>>> see it, the error comes from \"sudo port unsetrequested installed\".\n>>> But in any case, switching to 2.10.1 seems to do the trick. See\n>>> attached.\n>>\n>> Interesting. Now that I've finished \"sudo port upgrade outdated\",\n>> my laptop is back to a state where unprivileged \"port outdated\"\n>> is successful.\n>>\n>> What this smells like is that MacPorts has to do some kind of database\n>> update as a result of its major version change, and there are code\n>> paths that are not expecting that to get invoked. It makes sense\n>> that unprivileged \"port outdated\" would fail to perform the database\n>> update, but not quite as much for \"sudo port unsetrequested installed\"\n>> to fail. That case seems like a MacPorts bug; maybe worth filing?\n> \n> Huh. Right, interesting theory. OK, I'll push that patch to use\n> 2.10.1 anyway, and report what we observed to see what they say.\n\nREL_15_STABLE is fixed now. 
REL_16_STABLE now fails with another thing:\n\n[08:57:02.082] export PKG_CONFIG_PATH=\"/opt/local/lib/pkgconfig/\"\n[08:57:02.082] meson setup \\\n[08:57:02.082] --buildtype=debug \\\n[08:57:02.082] -Dextra_include_dirs=/opt/local/include \\\n[08:57:02.082] -Dextra_lib_dirs=/opt/local/lib \\\n[08:57:02.083] -Dcassert=true \\\n[08:57:02.083] -Duuid=e2fs -Ddtrace=auto \\\n[08:57:02.083] -DPG_TEST_EXTRA=\"$PG_TEST_EXTRA\" \\\n[08:57:02.083] build\n[08:57:02.097] \n/var/folders/0n/_7v_bpwd1w71f8l0b0kjw5br0000gn/T/scriptscbc157a91c26ed806bb3701f4d85e91d.sh: \nline 6: meson: command not found\n[08:57:03.078]\n[08:57:03.078] Exit status: 127\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 11:04:57 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On Wed, Aug 21, 2024 at 9:04 PM Peter Eisentraut <[email protected]> wrote:\n> REL_15_STABLE is fixed now. REL_16_STABLE now fails with another thing:\n...\n> line 6: meson: command not found\n\nHuh. I don't see that in my own account. And postgres/postgres is\ncurrently out of juice until the end of the month (I know why,\nsomething to fix, a topic for a different forum). Can you share the\nlink?\n\n\n", "msg_date": "Wed, 21 Aug 2024 21:28:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" }, { "msg_contents": "On 21.08.24 11:28, Thomas Munro wrote:\n> On Wed, Aug 21, 2024 at 9:04 PM Peter Eisentraut <[email protected]> wrote:\n>> REL_15_STABLE is fixed now. REL_16_STABLE now fails with another thing:\n> ...\n>> line 6: meson: command not found\n> \n> Huh. I don't see that in my own account. And postgres/postgres is\n> currently out of juice until the end of the month (I know why,\n> something to fix, a topic for a different forum). Can you share the\n> link?\n\nIt fixed itself after I cleaned the task caches.\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 15:34:19 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cirrus CI for macOS branches 16 and 15 broken" } ]
[ { "msg_contents": "I've gotten annoyed by the notation used for ecpg.addons rules,\nin which all the tokens of the gram.y rule to be modified\nare just concatenated. This is unreadable and potentially\nambiguous:\n\nECPG: fetch_argsABSOLUTE_PSignedIconstopt_from_incursor_name addon\n\nThe attached draft patch changes things so that we can write\n\nECPG: addon fetch_args ABSOLUTE_P SignedIconst opt_from_in cursor_name\n\nwhich is a whole lot closer to what actually appears in gram.y:\n\nfetch_args: cursor_name\n ...\n | ABSOLUTE_P SignedIconst opt_from_in cursor_name\n\n(Note that I've also moved the rule type \"addon\" to the front.\nThis isn't strictly necessary, but it seems less mistake-prone.)\n\nWhile I've not done it in the attached, perhaps it would be\nan improvement to allow a colon after the target nonterminal:\n\nECPG: addon fetch_args: ABSOLUTE_P SignedIconst opt_from_in cursor_name\n\nto bring it even closer to what is written in gram.y. You could\nimagine going further and writing this case as something like\n\nECPG: addon fetch_args | ABSOLUTE_P SignedIconst opt_from_in cursor_name\n\nbut I think that might be a step too far. IMO it's not adding much\nreadability, and it seems like introducing an unnecessary dependency\non exactly how the gram.y alternatives are laid out.\n\nBTW, the attached patch won't apply to HEAD, it's meant to apply\nafter the patch series being discussed at [1]. So I won't stick\nthis in the CF yet.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/[email protected]", "msg_date": "Sun, 18 Aug 2024 13:13:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Improving the notation for ecpg.addons rules" }, { "msg_contents": "On Sun, Aug 18, 2024 at 01:13:36PM -0400, Tom Lane wrote:\n> While I've not done it in the attached, perhaps it would be\n> but I think that might be a step too far. IMO it's not adding much\n> readability, and it seems like introducing an unnecessary dependency\n> on exactly how the gram.y alternatives are laid out.\n\nNot being too aggressive with the changes sounds like a good thing\nhere.\n\n> BTW, the attached patch won't apply to HEAD, it's meant to apply\n> after the patch series being discussed at [1]. So I won't stick\n> this in the CF yet.\n> \n> Thoughts?\n\nSeeing changes like \"stmtClosePortalStmt\" changing to \"stmt\nClosePortalStmt\" is clearly an improvement in readability.\nSignedIconstIconst was also fun. Your change is a good idea.\n\nIt looks like %replace_line expects all its elements to have one space\nbetween each token, still this is not enforced with a check across its\nhardcoded elements?\n--\nMichael", "msg_date": "Mon, 19 Aug 2024 14:17:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the notation for ecpg.addons rules" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> It looks like %replace_line expects all its elements to have one space\n> between each token, still this is not enforced with a check across its\n> hardcoded elements?\n\nYeah, I was wondering about that. I wouldn't do it exactly like\nthat, but with a check that the entry gets matched somewhere.\nThe other patchset adds cross-checks that each ecpg.addons entry is\nused exactly once, but there's no such check for these arrays that\nare internal to parse.pl. 
Can't quite decide if it's worth adding.\n\nI debugged the patch in this thread by checking that the emitted\npreproc.y didn't change, but that might not be helpful for changes\nthat are actually intended to change the grammar.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 01:23:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the notation for ecpg.addons rules" }, { "msg_contents": "I wrote:\n> Michael Paquier <[email protected]> writes:\n>> It looks like %replace_line expects all its elements to have one space\n>> between each token, still this is not enforced with a check across its\n>> hardcoded elements?\n\n> Yeah, I was wondering about that. I wouldn't do it exactly like\n> that, but with a check that the entry gets matched somewhere.\n\nHere's a patch for that (again based on the other patch series).\nThis did not turn up anything interesting, but it's probably\nworth keeping.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 20 Aug 2024 14:33:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the notation for ecpg.addons rules" }, { "msg_contents": "On Tue, Aug 20, 2024 at 02:33:23PM -0400, Tom Lane wrote:\n> I wrote:\n>> Yeah, I was wondering about that. I wouldn't do it exactly like\n>> that, but with a check that the entry gets matched somewhere.\n> \n> Here's a patch for that (again based on the other patch series).\n> This did not turn up anything interesting, but it's probably\n> worth keeping.\n\nOkay, I see where you're going with this one. It does not seem like\nthis is going to cost much in long-term maintenance while catching\nunfortunate issues, so +1 from me.\n\nThe patch does not apply on HEAD due to the dependency with the other\nthings you are proposing, and I would have hardcoded failures to check\nthat the reports are correct, but that looks neat on read.\n--\nMichael", "msg_date": "Fri, 23 Aug 2024 09:22:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the notation for ecpg.addons rules" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> The patch does not apply on HEAD due to the dependency with the other\n> things you are proposing, and I would have hardcoded failures to check\n> that the reports are correct, but that looks neat on read.\n\nI did test it by injecting errors, but I don't see why we'd leave\nthose in place?\n\nAnyway, my thought was to merge this into the other patch series,\nbut I didn't do that yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 22:03:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the notation for ecpg.addons rules" } ]
[ { "msg_contents": "Hi,\n\nThere's a pgsql-bugs report [1] about a behavior change with batching\nenabled in postgres_fdw. While I ultimately came to the conclusion the\nbehavior change is not a bug, I find it annoying. But I'm not 100% sure\nabout the \"not a bug\" conclusion, and and the same time I can't think of\na way to handle it better ...\n\nThe short story is that given a foreign table \"t\" and a function \"f\"\nthat queries the data, the batching can change the behavior. The bug\nreport uses DEFAULT expressions and constraints, but that's not quite\nnecessary.\n\nConsider this simplified example:\n\n CREATE TABLE t (a INT);\n\n CREATE FOREIGN TABLE f (a INT) SERVER loopback\n OPTIONS (table_name 't');\n\n\n CREATE FUNCTION counter() RETURNS int LANGUAGE sql AS\n $$ SELECT COUNT(*) FROM f $$;\n\nAnd now do\n\n INSERT INTO f SELECT counter() FROM generate_series(1,100);\n\nWith batch_size=1 this produces a nice sequence, with batching we start\nto get duplicate values - which is not surprising, as the function is\noblivious to the data still in the buffer.\n\nThe first question is if this is a bug. At first I thought yes - this is\na bug, but I'm no longer convinced of that. The problem is this function\nis inherently unsafe under concurrency - even without batching, it only\ntakes two concurrent sessions to get the same misbehavior.\n\nOfc, the concurrency can be prevented by locking the table in some way,\nbut the batching can be disabled with the same effect.\n\nAnyway, I was thinking about ways to improve / fix this, and I don't\nhave much. The first thought was that we could inspect the expressions\nand disable batching if any of the expressions is volatile.\n\nUnfortunately, the expressions can come from anywhere (it's not just\ndefault values as in the original report), and it's not even easily\naccessible from nodeModifyTable. Which is where we decide whether to do\nbatching, etc. It might be a subquery, values, ...\n\nThe other option that just occurred to me is that maybe it would be\npossible to flush the batches if we happen to query the same foreign\ntable from other parts of the plan. In this case it'd mean that every\ntime we call counter(), we'd flush data. But that doesn't seem very\nsimple either, because we need to find the nodeModifyTable nodes for the\nsame foreign table, and we'd need to trigger the flush.\n\n\nOpinions? 
Does this qualify as a bug, of just a manifestation of a\nfunction that doesn't work under concurrency?\n\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 19 Aug 2024 01:26:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "possible issue in postgres_fdw batching" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> Consider this simplified example:\n\n> CREATE TABLE t (a INT);\n> CREATE FOREIGN TABLE f (a INT) SERVER loopback\n> OPTIONS (table_name 't');\n> CREATE FUNCTION counter() RETURNS int LANGUAGE sql AS\n> $$ SELECT COUNT(*) FROM f $$;\n\n> And now do\n\n> INSERT INTO f SELECT counter() FROM generate_series(1,100);\n\n> With batch_size=1 this produces a nice sequence, with batching we start\n> to get duplicate values - which is not surprising, as the function is\n> oblivious to the data still in the buffer.\n\n> The first question is if this is a bug.\n\nI'd say no; this query is unduly chummy with implementation details.\nEven with no foreign table in the picture at all, we would be fully\nwithin our rights to execute the SELECT to completion before any\nof the insertions became visible. (Arguably, it's the current\nbehavior that is surprising, not that one.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Aug 2024 19:40:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: possible issue in postgres_fdw batching" }, { "msg_contents": "On 8/19/24 01:40, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> Consider this simplified example:\n> \n>> CREATE TABLE t (a INT);\n>> CREATE FOREIGN TABLE f (a INT) SERVER loopback\n>> OPTIONS (table_name 't');\n>> CREATE FUNCTION counter() RETURNS int LANGUAGE sql AS\n>> $$ SELECT COUNT(*) FROM f $$;\n> \n>> And now do\n> \n>> INSERT INTO f SELECT counter() FROM generate_series(1,100);\n> \n>> With batch_size=1 this produces a nice sequence, with batching we start\n>> to get duplicate values - which is not surprising, as the function is\n>> oblivious to the data still in the buffer.\n> \n>> The first question is if this is a bug.\n> \n> I'd say no; this query is unduly chummy with implementation details.\n> Even with no foreign table in the picture at all, we would be fully\n> within our rights to execute the SELECT to completion before any\n> of the insertions became visible. (Arguably, it's the current\n> behavior that is surprising, not that one.)\n> \n\nThanks, that's a great point. It didn't occur to me we're entitled to\njust run the SELECT to completion, before proceeding to the INSERT. That\nindeed means this function is fundamentally unsafe.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 19 Aug 2024 13:15:19 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: possible issue in postgres_fdw batching" } ]
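The "disable batching when an expression is volatile" idea floated in the thread above can be pictured with a small helper along the following lines. This is only a sketch against the PostgreSQL planner headers, not the thread's patch: the helper name is invented, and it deliberately sidesteps the hard part Tomas mentions, namely how nodeModifyTable would get hold of the source expressions in the first place.

-----------------------
#include "postgres.h"

#include "nodes/pg_list.h"
#include "optimizer/optimizer.h"

/*
 * Hypothetical helper: return false if any expression in the list
 * contains a volatile function, in which case batching the inserts
 * could change what such a function observes (as in the counter()
 * example above).
 */
static bool
exprs_safe_for_batching(List *exprs)
{
    ListCell   *lc;

    foreach(lc, exprs)
    {
        Node       *expr = (Node *) lfirst(lc);

        /* existing tree walker; checks the whole expression for volatility */
        if (contain_volatile_functions(expr))
            return false;
    }

    return true;
}
-----------------------

If the expressions were reachable at the point where the batch size is decided, a check like this would only need to run once per statement, not per row.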
[ { "msg_contents": "Hi all,\n\nWhile reading again the code of injection_stats_fixed.c that holds the\ntemplate for fixed-numbered pgstats, I got an idea to make the code a\nbit more elegant. Rather than using a single routine to increment the\ncounters, we could use a series of routines with its internals hidden\nin a macro like in pgstatfuncs.c.\n\nThe point of a template is to be as elegant as possible, and this\nis arguably a better style, still this comes down to personal taste\nperhaps. Please feel free to comment about the attached.\n\nThanks,\n--\nMichael", "msg_date": "Mon, 19 Aug 2024 09:33:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Some refactoring for fixed-numbered stats template in\n injection_points" } ]
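For readers who have not looked at pgstatfuncs.c, the macro style being proposed in the thread above looks roughly like the sketch below. Everything in it is invented for illustration (struct layout, counter names, macro and function names); the real template is the code in injection_stats_fixed.c that the patch rewrites.

-----------------------
#include "postgres.h"

/* illustrative only; not the real fixed-numbered stats entry layout */
typedef struct FixedInjStats
{
    int64       numattach;
    int64       numdetach;
    int64       numrun;
} FixedInjStats;

static FixedInjStats fixed_stats;

/*
 * Generate one small increment routine per counter, instead of a single
 * routine that takes an argument saying which counter to bump.
 */
#define INJ_FIXED_INC(stat) \
static void \
inj_stats_fixed_inc_##stat(void) \
{ \
    fixed_stats.stat++; \
}

INJ_FIXED_INC(numattach)
INJ_FIXED_INC(numdetach)
INJ_FIXED_INC(numrun)
-----------------------

The payoff is that each call site names the counter it touches, and adding a counter is one struct field plus one macro invocation.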
[ { "msg_contents": "https://www.postgresql.org/message-id/CANncwrJTse6WKkUS_Y8Wj2PHVRvaqPxMk_qtEPsf_+NVPYxzyg@mail.gmail.com\n\nAs the problem discussed in the above thread, I also run into that. Besides\nupdating the doc, I would like to report a error for it.\n\nIf the code in PG_TRY contains any non local control flow other than\nereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\nbe called, then the PG_exception_stack will point to the memory whose\nstack frame has been released. So after that, when the pg_re_throw\ncalled, __longjmp() will crash and report Segmentation fault error.\n\n In that case, to help developers to figure out the root cause easily, it is\n better to report that 'the sigjmp_buf is invalid' rather than letting\n the __longjmp report any error.\n\n Addition to sigjmp_buf, add another field 'int magic' which is next to\n the sigjum_buf in the local stack frame memory. The magic's value is always\n 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n the magic's value is still '0x12345678', if not, that means the memory\n where the 'PG_exception_stack' points to has been released, and the\n'sigbuf'\n must be invalid.\n\n The related code is in patch 0001\n\n ------------------------------\n I'm not sure if it is necessary to add a regress test for it. In patch\n0002, to test the\n patch can work correctly, I have added a function 'pg_re_throw_crash' in\nregress.c\n\n create function pg_re_throw_crash()\n RETURNS void\n AS :'regresslib', 'pg_re_throw_crash'\n LANGUAGE C STRICT STABLE PARALLEL SAFE;\n create above function and run 'select pg_re_throw_crash()', then will get\nthe error\n'FATAL: Invalid sigjum_buf, code in PG_TRY cannot contain any non local\ncontrol flow other than ereport'\n\n-- \nBest regards !\nXiaoran Wang", "msg_date": "Mon, 19 Aug 2024 14:17:17 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Improve pg_re_throw: check if sigjmp_buf is valid and report error" }, { "msg_contents": "On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]> wrote:\n> If the code in PG_TRY contains any non local control flow other than\n> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n> be called, then the PG_exception_stack will point to the memory whose\n> stack frame has been released. So after that, when the pg_re_throw\n> called, __longjmp() will crash and report Segmentation fault error.\n>\n> Addition to sigjmp_buf, add another field 'int magic' which is next to\n> the sigjum_buf in the local stack frame memory. The magic's value is always\n> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n> the magic's value is still '0x12345678', if not, that means the memory\n> where the 'PG_exception_stack' points to has been released, and the 'sigbuf'\n> must be invalid.\n>\n> The related code is in patch 0001\n\nThis is an interesting idea. I suspect if we do this we want a\ndifferent magic number and a different error message than what you\nchose here, but those are minor details.\n\nI'm not sure how reliable this would be at actually finding problems,\nthough. It doesn't seem guaranteed that the magic number will get\noverwritten if you do something wrong; it's just a possibility. Maybe\nthat's still useful enough that we should adopt this, but I'm not\nsure.\n\nPersonally, I don't think I've ever made this particular mistake. 
I\nthink a much more common usage error is exiting the catch-block\nwithout either throwing an error or rolling back a subtransaction. But\nYMMV, of course.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 09:23:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]> wrote:\n>> If the code in PG_TRY contains any non local control flow other than\n>> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n>> be called, then the PG_exception_stack will point to the memory whose\n>> stack frame has been released. So after that, when the pg_re_throw\n>> called, __longjmp() will crash and report Segmentation fault error.\n>> \n>> Addition to sigjmp_buf, add another field 'int magic' which is next to\n>> the sigjum_buf in the local stack frame memory. The magic's value is always\n>> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n>> the magic's value is still '0x12345678', if not, that means the memory\n>> where the 'PG_exception_stack' points to has been released, and the 'sigbuf'\n>> must be invalid.\n\n> This is an interesting idea. I suspect if we do this we want a\n> different magic number and a different error message than what you\n> chose here, but those are minor details.\n\nI would suggest just adding an Assert; I doubt this check would be\nuseful in production.\n\n> I'm not sure how reliable this would be at actually finding problems,\n> though.\n\nYeah, that's the big problem. I don't have any confidence at all\nthat this would detect misuse. It'd require that the old stack\nframe gets overwritten, which might not happen for a long time,\nand it'd require that somebody eventually do a longjmp, which again\nmight not happen for a long time --- and when those did happen, the\nerror would be detected in someplace far away from the actual bug,\nwith little evidence remaining to help you localize it.\n\nAlso, as soon as some outer level of PG_TRY is exited in the proper\nway, the dangling stack pointer will be fixed up. That means there's\na fairly narrow time frame in which the overwrite and longjmp must\nhappen for this to catch a bug.\n\nSo on the whole I doubt this'd be terribly useful in this form;\nand I don't like the amount of code churn required.\n\n> Personally, I don't think I've ever made this particular mistake. I\n> think a much more common usage error is exiting the catch-block\n> without either throwing an error or rolling back a subtransaction. 
But\n> YMMV, of course.\n\nWe have had multiple instances of code \"return\"ing out of a PG_TRY,\nso I fully agree that some better way to detect that would be good.\nBut maybe we ought to think about static analysis for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 10:12:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Tom Lane <[email protected]> 于2024年8月19日周一 22:12写道:\n\n> Robert Haas <[email protected]> writes:\n> > On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]>\n> wrote:\n> >> If the code in PG_TRY contains any non local control flow other than\n> >> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n> >> be called, then the PG_exception_stack will point to the memory whose\n> >> stack frame has been released. So after that, when the pg_re_throw\n> >> called, __longjmp() will crash and report Segmentation fault error.\n> >>\n> >> Addition to sigjmp_buf, add another field 'int magic' which is next to\n> >> the sigjum_buf in the local stack frame memory. The magic's value is\n> always\n> >> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n> >> the magic's value is still '0x12345678', if not, that means the memory\n> >> where the 'PG_exception_stack' points to has been released, and the\n> 'sigbuf'\n> >> must be invalid.\n>\n> > This is an interesting idea. I suspect if we do this we want a\n> > different magic number and a different error message than what you\n> > chose here, but those are minor details.\n>\n> I would suggest just adding an Assert; I doubt this check would be\n> useful in production.\n>\n\nAgree, an Assert is enough.\n\n>\n> > I'm not sure how reliable this would be at actually finding problems,\n> > though.\n>\n> Yeah, that's the big problem. I don't have any confidence at all\n> that this would detect misuse. It'd require that the old stack\n> frame gets overwritten, which might not happen for a long time,\n> and it'd require that somebody eventually do a longjmp, which again\n> might not happen for a long time --- and when those did happen, the\n> error would be detected in someplace far away from the actual bug,\n> with little evidence remaining to help you localize it.\n>\n\nExactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,\nbut to implement that is easy I think, recording the line num maybe.\n\nI think this is still useful, at least it tell you that the error is due\nto the\nPG_TRY.\n\n>\n> Also, as soon as some outer level of PG_TRY is exited in the proper\n> way, the dangling stack pointer will be fixed up. That means there's\n> a fairly narrow time frame in which the overwrite and longjmp must\n> happen for this to catch a bug.\n>\n>\nYes, if the outer level PG_TRY call pg_re_throw instead of ereport,\nthe dangling stack pointer will be fixed up. It's would be great to fix\nthat\nup in any case. But I have no idea how to implement that now.\n\nIn pg_re_throw, if we could use '_local_sigjmp_buf' instead of the\nglobal var PG_exception_stack, that would be great. We don't\nneed to worry about the invalid sigjum_buf.\n\nSo on the whole I doubt this'd be terribly useful in this form;\n> and I don't like the amount of code churn required.\n>\n> > Personally, I don't think I've ever made this particular mistake. 
I\n> > think a much more common usage error is exiting the catch-block\n> > without either throwing an error or rolling back a subtransaction. But\n> > YMMV, of course.\n>\n> We have had multiple instances of code \"return\"ing out of a PG_TRY,\n> so I fully agree that some better way to detect that would be good.\n> But maybe we ought to think about static analysis for that.\n>\n> regards, tom lane\n>\n\n\n-- \nBest regards !\nXiaoran Wang\n\nTom Lane <[email protected]> 于2024年8月19日周一 22:12写道:Robert Haas <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]> wrote:\n>> If the code in PG_TRY contains any non local control flow other than\n>> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n>> be called, then the PG_exception_stack will point to the memory whose\n>> stack frame has been released. So after that, when the pg_re_throw\n>> called, __longjmp() will crash and report Segmentation fault error.\n>> \n>> Addition to sigjmp_buf, add another field 'int magic' which is next to\n>> the sigjum_buf in the local stack frame memory. The magic's value is always\n>> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n>> the magic's value is still '0x12345678', if not, that means the memory\n>> where the 'PG_exception_stack' points to has been released, and the 'sigbuf'\n>> must be invalid.\n\n> This is an interesting idea. I suspect if we do this we want a\n> different magic number and a different error message than what you\n> chose here, but those are minor details.\n\nI would suggest just adding an Assert; I doubt this check would be\nuseful in production.Agree, an Assert is enough. \n\n> I'm not sure how reliable this would be at actually finding problems,\n> though.\n\nYeah, that's the big problem.  I don't have any confidence at all\nthat this would detect misuse.  It'd require that the old stack\nframe gets overwritten, which might not happen for a long time,\nand it'd require that somebody eventually do a longjmp, which again\nmight not happen for a long time --- and when those did happen, the\nerror would be detected in someplace far away from the actual bug,\nwith little evidence remaining to help you localize it.Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,but to implement that is easy I think, recording the line num maybe.I think this is still useful, at least  it tell you that the error is due to thePG_TRY.\n\nAlso, as soon as some outer level of PG_TRY is exited in the proper\nway, the dangling stack pointer will be fixed up.  That means there's\na fairly narrow time frame in which the overwrite and longjmp must\nhappen for this to catch a bug.\nYes, if the outer level PG_TRY call pg_re_throw instead of ereport, the dangling stack pointer will be fixed up.  It's would be great to fix thatup in any case. But I have no idea how to implement that now.In pg_re_throw, if we could use '_local_sigjmp_buf' instead of theglobal var PG_exception_stack, that would be great. We don'tneed to worry about the invalid sigjum_buf.\nSo on the whole I doubt this'd be terribly useful in this form;\nand I don't like the amount of code churn required.\n\n> Personally, I don't think I've ever made this particular mistake. I\n> think a much more common usage error is exiting the catch-block\n> without either throwing an error or rolling back a subtransaction. 
But\n> YMMV, of course.\n\nWe have had multiple instances of code \"return\"ing out of a PG_TRY,\nso I fully agree that some better way to detect that would be good.\nBut maybe we ought to think about static analysis for that.\n\n                        regards, tom lane\n-- Best regards !Xiaoran Wang", "msg_date": "Tue, 20 Aug 2024 11:32:40 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Xiaoran Wang <[email protected]> 于2024年8月20日周二 11:32写道:\n\n>\n>\n> Tom Lane <[email protected]> 于2024年8月19日周一 22:12写道:\n>\n>> Robert Haas <[email protected]> writes:\n>> > On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]>\n>> wrote:\n>> >> If the code in PG_TRY contains any non local control flow other than\n>> >> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n>> >> be called, then the PG_exception_stack will point to the memory whose\n>> >> stack frame has been released. So after that, when the pg_re_throw\n>> >> called, __longjmp() will crash and report Segmentation fault error.\n>> >>\n>> >> Addition to sigjmp_buf, add another field 'int magic' which is next to\n>> >> the sigjum_buf in the local stack frame memory. The magic's value is\n>> always\n>> >> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n>> >> the magic's value is still '0x12345678', if not, that means the memory\n>> >> where the 'PG_exception_stack' points to has been released, and the\n>> 'sigbuf'\n>> >> must be invalid.\n>>\n>> > This is an interesting idea. I suspect if we do this we want a\n>> > different magic number and a different error message than what you\n>> > chose here, but those are minor details.\n>>\n>> I would suggest just adding an Assert; I doubt this check would be\n>> useful in production.\n>>\n>\n> Agree, an Assert is enough.\n>\n>>\n>> > I'm not sure how reliable this would be at actually finding problems,\n>> > though.\n>>\n>> Yeah, that's the big problem. I don't have any confidence at all\n>> that this would detect misuse. It'd require that the old stack\n>> frame gets overwritten, which might not happen for a long time,\n>> and it'd require that somebody eventually do a longjmp, which again\n>> might not happen for a long time --- and when those did happen, the\n>> error would be detected in someplace far away from the actual bug,\n>> with little evidence remaining to help you localize it.\n>>\n>\n> Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,\n> but to implement that is easy I think, recording the line num maybe.\n>\n> I think this is still useful, at least it tell you that the error is due\n> to the\n> PG_TRY.\n>\n>>\n>> Also, as soon as some outer level of PG_TRY is exited in the proper\n>> way, the dangling stack pointer will be fixed up. That means there's\n>> a fairly narrow time frame in which the overwrite and longjmp must\n>> happen for this to catch a bug.\n>>\n>>\n> Yes, if the outer level PG_TRY call pg_re_throw instead of ereport,\n> the dangling stack pointer will be fixed up. It's would be great to fix\n> that\n> up in any case. But I have no idea how to implement that now.\n>\n>\nSorry, \"Yes, if the outer level PG_TRY call pg_re_throw instead of\nereport, \" should\nbe \"Yes, if the outer level PG_TRY reset the PG_exception_stack.\"\n\nIn pg_re_throw, if we could use '_local_sigjmp_buf' instead of the\n> global var PG_exception_stack, that would be great. 
We don't\n> need to worry about the invalid sigjum_buf.\n>\n> So on the whole I doubt this'd be terribly useful in this form;\n>> and I don't like the amount of code churn required.\n>>\n>> > Personally, I don't think I've ever made this particular mistake. I\n>> > think a much more common usage error is exiting the catch-block\n>> > without either throwing an error or rolling back a subtransaction. But\n>> > YMMV, of course.\n>>\n>> We have had multiple instances of code \"return\"ing out of a PG_TRY,\n>> so I fully agree that some better way to detect that would be good.\n>> But maybe we ought to think about static analysis for that.\n>>\n>> regards, tom lane\n>>\n>\n>\n> --\n> Best regards !\n> Xiaoran Wang\n>\n\n\n-- \nBest regards !\nXiaoran Wang\n\nXiaoran Wang <[email protected]> 于2024年8月20日周二 11:32写道:Tom Lane <[email protected]> 于2024年8月19日周一 22:12写道:Robert Haas <[email protected]> writes:\n> On Mon, Aug 19, 2024 at 2:17 AM Xiaoran Wang <[email protected]> wrote:\n>> If the code in PG_TRY contains any non local control flow other than\n>> ereport(ERROR) like goto, break etc., the PG_CATCH or PG_END_TRY cannot\n>> be called, then the PG_exception_stack will point to the memory whose\n>> stack frame has been released. So after that, when the pg_re_throw\n>> called, __longjmp() will crash and report Segmentation fault error.\n>> \n>> Addition to sigjmp_buf, add another field 'int magic' which is next to\n>> the sigjum_buf in the local stack frame memory. The magic's value is always\n>> 'PG_exception_magic 0x12345678'. And in 'pg_re_throw' routine, check if\n>> the magic's value is still '0x12345678', if not, that means the memory\n>> where the 'PG_exception_stack' points to has been released, and the 'sigbuf'\n>> must be invalid.\n\n> This is an interesting idea. I suspect if we do this we want a\n> different magic number and a different error message than what you\n> chose here, but those are minor details.\n\nI would suggest just adding an Assert; I doubt this check would be\nuseful in production.Agree, an Assert is enough. \n\n> I'm not sure how reliable this would be at actually finding problems,\n> though.\n\nYeah, that's the big problem.  I don't have any confidence at all\nthat this would detect misuse.  It'd require that the old stack\nframe gets overwritten, which might not happen for a long time,\nand it'd require that somebody eventually do a longjmp, which again\nmight not happen for a long time --- and when those did happen, the\nerror would be detected in someplace far away from the actual bug,\nwith little evidence remaining to help you localize it.Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,but to implement that is easy I think, recording the line num maybe.I think this is still useful, at least  it tell you that the error is due to thePG_TRY.\n\nAlso, as soon as some outer level of PG_TRY is exited in the proper\nway, the dangling stack pointer will be fixed up.  That means there's\na fairly narrow time frame in which the overwrite and longjmp must\nhappen for this to catch a bug.\nYes, if the outer level PG_TRY call pg_re_throw instead of ereport, the dangling stack pointer will be fixed up.  It's would be great to fix thatup in any case. 
But I have no idea how to implement that now.Sorry,  \"Yes, if the outer level PG_TRY call pg_re_throw instead of ereport, \" shouldbe \"Yes, if the outer level PG_TRY  reset the PG_exception_stack.\"In pg_re_throw, if we could use '_local_sigjmp_buf' instead of theglobal var PG_exception_stack, that would be great. We don'tneed to worry about the invalid sigjum_buf.\nSo on the whole I doubt this'd be terribly useful in this form;\nand I don't like the amount of code churn required.\n\n> Personally, I don't think I've ever made this particular mistake. I\n> think a much more common usage error is exiting the catch-block\n> without either throwing an error or rolling back a subtransaction. But\n> YMMV, of course.\n\nWe have had multiple instances of code \"return\"ing out of a PG_TRY,\nso I fully agree that some better way to detect that would be good.\nBut maybe we ought to think about static analysis for that.\n\n                        regards, tom lane\n-- Best regards !Xiaoran Wang\n-- Best regards !Xiaoran Wang", "msg_date": "Tue, 20 Aug 2024 11:39:42 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Xiaoran Wang <[email protected]> writes:\n>> Yeah, that's the big problem. I don't have any confidence at all\n>> that this would detect misuse. It'd require that the old stack\n>> frame gets overwritten, which might not happen for a long time,\n>> and it'd require that somebody eventually do a longjmp, which again\n>> might not happen for a long time --- and when those did happen, the\n>> error would be detected in someplace far away from the actual bug,\n>> with little evidence remaining to help you localize it.\n\n> Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,\n> but to implement that is easy I think, recording the line num maybe.\n\nI don't think you get to assume that the canary word gets overwritten\nbut debug data a few bytes away survives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 23:44:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Tom Lane <[email protected]> 于2024年8月20日周二 11:44写道:\n\n> Xiaoran Wang <[email protected]> writes:\n> >> Yeah, that's the big problem. I don't have any confidence at all\n> >> that this would detect misuse. It'd require that the old stack\n> >> frame gets overwritten, which might not happen for a long time,\n> >> and it'd require that somebody eventually do a longjmp, which again\n> >> might not happen for a long time --- and when those did happen, the\n> >> error would be detected in someplace far away from the actual bug,\n> >> with little evidence remaining to help you localize it.\n>\n> > Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,\n> > but to implement that is easy I think, recording the line num maybe.\n>\n> I don't think you get to assume that the canary word gets overwritten\n> but debug data a few bytes away survives.\n>\n\nWe can have a global var like 'PG_exception_debug' to save the debug info,\nnot saved in the local stack frame.\n\n1. Before PG_TRY, we can save the debug info as 'save_debug_info', just\nlike\n'_save_exception_stack', but not a pointer, memory copy the info into the\n'_save_debug_info'.\n2. In PG_TRY, set the new debug info into the global var\n'PG_exception_debug'\n3. 
And in PG_CATCH and PG_END_TRY, we can restore it.\nSo that the debug info will not be overwritten.\n\n> It doesn't seem guaranteed that the magic number will get\n> overwritten if you do something wrong;\n\nThat's my concern too. How about besides checking if the\n'PG_exception_stack->magic'\nis overwritten, also compare the address of 'PG_exception_stack->buf' and\ncurrent\nstack top address? if the address of 'PG_exception_stack->buf' is lower\nthan current\nstack top address, it must be invalid, otherwise , it must be overwritten.\nJust like below\n\nint stack_top;\nif (PG_exception_stack->magic <= &stack_top || PG_exception_stack->magic !=\nPG_exception_magic)\n ereport(FATAL,\n (errmsg(\"Invalid sigjum_buf, code in PG_TRY cannot contain\"\n \" any non local control flow other than\nereport\")));\n\nI'm not sure if this can work, any thoughts?\n\n\n\n\n> regards, tom lane\n>\n\n\n-- \nBest regards !\nXiaoran Wang\n\nTom Lane <[email protected]> 于2024年8月20日周二 11:44写道:Xiaoran Wang <[email protected]> writes:\n>> Yeah, that's the big problem.  I don't have any confidence at all\n>> that this would detect misuse.  It'd require that the old stack\n>> frame gets overwritten, which might not happen for a long time,\n>> and it'd require that somebody eventually do a longjmp, which again\n>> might not happen for a long time --- and when those did happen, the\n>> error would be detected in someplace far away from the actual bug,\n>> with little evidence remaining to help you localize it.\n\n> Exactly, it cannot tell you which PG_TRY left the invalid sigjmp_buf,\n> but to implement that is easy I think, recording the line num maybe.\n\nI don't think you get to assume that the canary word gets overwritten\nbut debug data a few bytes away survives.We can have a global var like 'PG_exception_debug'  to save the debug info,not saved in the  local stack frame. 1. Before PG_TRY, we can save the debug info as 'save_debug_info',  just like '_save_exception_stack', but not a pointer, memory copy the info into the'_save_debug_info'.  2. In PG_TRY, set the new debug info into the global var 'PG_exception_debug'3. And in PG_CATCH and PG_END_TRY, we can restore it.So that the debug info will not be overwritten.> It doesn't seem guaranteed that the magic number will get> overwritten if you do something wrong;That's my concern too.  How about besides checking  if the 'PG_exception_stack->magic'is overwritten, also compare the address of  'PG_exception_stack->buf' and currentstack top address? if the address of 'PG_exception_stack->buf' is lower than currentstack top address, it must be invalid, otherwise , it must be overwritten. 
Just like belowint  stack_top;if (PG_exception_stack->magic <= &stack_top || PG_exception_stack->magic != PG_exception_magic)    ereport(FATAL,                 (errmsg(\"Invalid sigjum_buf, code in PG_TRY cannot contain\"                                \" any non local control flow other than ereport\")));I'm not sure if this can work, any thoughts?\n\n                        regards, tom lane\n-- Best regards !Xiaoran Wang", "msg_date": "Tue, 20 Aug 2024 14:37:11 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Hi\n\nOn Mon, Aug 19, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n>\n> We have had multiple instances of code \"return\"ing out of a PG_TRY,\n> so I fully agree that some better way to detect that would be good.\n> But maybe we ought to think about static analysis for that.\n\nI have some static analysis scripts for detecting this kind of problem\n(of mis-using PG_TRY). Not sure if my scripts are helpful here but I\nwould like to share them.\n\n- A clang plugin for detecting unsafe control flow statements in\nPG_TRY. https://github.com/higuoxing/clang-plugins/blob/main/lib/ReturnInPgTryBlockChecker.cpp\n- Same as above, but in CodeQL[^1] script.\nhttps://github.com/higuoxing/postgres.ql/blob/main/return-in-PG_TRY.ql\n- A CodeQL script for detecting the missing of volatile qualifiers\n(objects have been changed between the setjmp invocation and longjmp\ncall should be qualified with volatile).\nhttps://github.com/higuoxing/postgres.ql/blob/main/volatile-in-PG_TRY.ql\n\nAndres also has some compiler hacking to detect return statements in PG_TRY[^2].\n\n[^1]: https://codeql.github.com/\n[^2]: https://www.postgresql.org/message-id/20230113054900.b7onkvwtkrykeu3z%40awork3.anarazel.de\n\n\n", "msg_date": "Tue, 20 Aug 2024 22:21:26 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" }, { "msg_contents": "Hi,\n\nI would like to update something about this idea.\nI attached a new\npatch 0003-Imporve-pg_re_throw-check-if-sigjmp_buf-is-valid-and.patch.\nNot too many updates in it:\n- replace the 'ereport' with Assert\n- besides checking the PG_exception_stack->magic, also check the address of\nPG_exception_stack,\nif it is lower than current stack, it means that it is an invalid\nlongjmp_buf.\n\nThere are 2 other things I would like to update with you:\n- As you are concerned with that this method is not reliable as the\nPG_exception_stack.magic might\nnot be rewritten even if the longjmp_buf is not invalid anymore. I have\ntested hat,\nyou are right, it is not reliable. 
I tested it with the flowing function on\nmy MacOS:\n-----------------------\nwrong_pg_try(int i)\n{\n if (i == 100)\n ereport(ERROR,(errmsg(\"called wrong_pg_try\")));\n if ( i == 0)\n {\n PG_TRY();\n {\n return;\n }\n PG_CATCH();\n {\n }\n PG_END_TRY();\n }\n else\n wrong_pg_try(i + 1) + j;\n}\n------------------------\nFirst call wrong_pg_try(0); then call wrong_pg_try(1);\nIt didn't report any error.\nI found that is due to the content of PG_exception_stack is not rewritten.\nThere is no variable saved on the \"wrong_pg_try()\" function stack, but the\nstacks of\nthe two call are not continuous, looks they are aligned.\n\nSure there are other cases that the PG_exception_stack.magic would not be\nrewritten\n\n- More details about the case that segmentfault occurs from __longjmp.\nI have a signal function added in PG, in it the PG_TRY called and returned.\nThen it left a invalid sigjmp_buf. It is a custom signal function handler,\ncan be\ntriggered by another process.\nThen another sql was running and then interrupted it. At that\ntime, ProcessInterrupts\n->ereport->pg_re_throw will called, it crashed\n\nXing Guo <[email protected]> 于2024年8月20日周二 22:21写道:\n\n> Hi\n>\n> On Mon, Aug 19, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n> >\n> > We have had multiple instances of code \"return\"ing out of a PG_TRY,\n> > so I fully agree that some better way to detect that would be good.\n> > But maybe we ought to think about static analysis for that.\n>\n> I have some static analysis scripts for detecting this kind of problem\n> (of mis-using PG_TRY). Not sure if my scripts are helpful here but I\n> would like to share them.\n>\n> - A clang plugin for detecting unsafe control flow statements in\n> PG_TRY.\n> https://github.com/higuoxing/clang-plugins/blob/main/lib/ReturnInPgTryBlockChecker.cpp\n> - Same as above, but in CodeQL[^1] script.\n> https://github.com/higuoxing/postgres.ql/blob/main/return-in-PG_TRY.ql\n> - A CodeQL script for detecting the missing of volatile qualifiers\n> (objects have been changed between the setjmp invocation and longjmp\n> call should be qualified with volatile).\n> https://github.com/higuoxing/postgres.ql/blob/main/volatile-in-PG_TRY.ql\n>\n> Andres also has some compiler hacking to detect return statements in\n> PG_TRY[^2].\n>\n> [^1]: https://codeql.github.com/\n> [^2]:\n> https://www.postgresql.org/message-id/20230113054900.b7onkvwtkrykeu3z%40awork3.anarazel.de\n>\n\n\n-- \nBest regards !\nXiaoran Wang", "msg_date": "Fri, 23 Aug 2024 18:00:47 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve pg_re_throw: check if sigjmp_buf is valid and report\n error" } ]
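Stripped of the elog.h machinery, the canary idea discussed in this thread reduces to something like the toy code below. None of it is actual PostgreSQL code: the struct, the global and the function are invented, and both checks are heuristics, since the canary word may survive a dangling frame and the address comparison assumes a downward-growing stack.

-----------------------
#include <setjmp.h>
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define EXCEPTION_MAGIC 0x12345678

typedef struct exception_frame
{
    sigjmp_buf  buf;        /* jump target registered by the try block */
    int         magic;      /* canary stored right after the buffer */
} exception_frame;

static exception_frame *exception_stack = NULL;

static void
re_throw_sketch(void)
{
    int         stack_probe;    /* roughly marks the current stack top */

    /* a frame must have been registered at all */
    assert(exception_stack != NULL);

    /*
     * A still-live registering frame must be an ancestor of the current
     * call, so on a downward-growing stack it sits at a higher address
     * than our local variable.  If it does not, or if the canary has
     * been overwritten by later calls, the frame is dangling.
     */
    assert((uintptr_t) exception_stack > (uintptr_t) &stack_probe);
    assert(exception_stack->magic == EXCEPTION_MAGIC);

    siglongjmp(exception_stack->buf, 1);
}
-----------------------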
[ { "msg_contents": "Hi -hackers,\n\n From time to time we hit some memory leaks in PostgreSQL and on one\noccasion Tomas wrote: \"I certainly agree it's annoying that when OOM\nhits, we end up with no information about what used the memory. Maybe\nwe could have a threshold triggering a call to MemoryContextStats?\" -\nsee [1]. I had this in my memory/TODO list for quite some time now,\nand attached is simplistic patch to provide new\n\"debug_palloc_context_threshold\" GUC with aim to help investigating\nsuch memory leaks in contexts (think: mostly on customer side of\nthings) and start discussion maybe if there are better alternatives.\nThe aim is to see which codepath contributes to the case of using e.g.\nmore than 5GB in a single context (it will start then dumping\nstats/stack traces every 100ms). Here the patch assumes that most\nmemory leaks are present in the same context, so it dumps memory into\na log, which can be later analyzed to build a stack trace profile.\nThis patch is just to get discussion as I found that:\n- pg_backend_memory_contexts view: we often cannot modify an\napplication to issue it from time to time.\n- Scripting gdb to call MemoryContextStats() or using 14+\npg_log_backend_memory_contexts() often seems impossible too as we do\nnot know the PID which is going to cause it (the usual pattern is like\nthis: some app starts some SQL, sometimes OOMs, rinse and repeat)\n\nMy doubts are following:\n\n1. Is dumping on per-context threshold enough? That means it is not\nper-backend-RSS threshold. My problem is that palloc() would have to\nrecursively collect context->mem_allocated for every context out there\nstarting from TopMemoryContext on every run, right? (or just some\nglobal form of memory accounting would have to be introduced to avoid\nsuch slowdown of palloc(). What's worse there's seems to be no\nstandardized OS-agnostic way of getting RSS in C/POSIX)\n\n2. Is such stack trace reporting - see example in [2] - enough? Right\nnow it is multi-line, but maybe we could have our own version of\ncertain routines so that it would report all in one line? Like, e.g.:\n2024-08-19 09:11:04.664 CEST [9317] LOG: dumping memory context stats\nfor context \"Caller tuples\": Grand total: 25165712 bytes in 13 blocks;\n1705672 free (0 chunks); 23460040 used; errbacktrace+0x46 <- +0x66c7e5\n<- palloc+0x4a <- datumCopy+0x30 <- tuplesort_putdatum+0x8f <-\n+0x37bf26 <- [..long list..]\n\nThe advantage here would be that - with that sampling every XXX ms, we\ncould extract and statistically investigate what's the code path\nrequesting most of the memory , even by using simple grep(1) / sort(1)\n/ uniq(1)\n\n3. In offlist discussion Tomas, pointed out to me that \"it's not very\nclear the place where we exceed the limit is necessarily the place\nwhere we leak memory\" (that palloc(), might be later pfree'ed())\n\n4. Another idea (by Tomas) is related to the static threshold. Maybe\nwe want it to be dynamic, so add e.g. +1 .. 50% on every such call, so\nto catch only growing mem usage, not just >\ndebug_palloc_context_threshold.\n\nAnything else? 
What other experiences may be relevant here?\n\n-J.\n\n[1]\nhttps://www.postgresql.org/message-id/9574e4b6-3334-777b-4287-29d81151963a%40enterprisedb.com\n\n[2] - sample use and log:\n\ncreate table zz as select 'AAAAAAAAAAA' || (i % 100) as id from\ngenerate_series(1000000, 3000000) i;\nset work_mem to '1GB';\nset max_parallel_workers_per_gather to 0;\nset enable_hashagg to off;\nset debug_palloc_context_threshold to '1 MB';\nselect id, count(*) from zz group by id having count(*) > 1;\n\n\n2024-08-19 09:11:04.664 CEST [9317] LOG: dumping memory context stats\n2024-08-19 09:11:04.664 CEST [9317] DETAIL: Context \"Caller tuples\".\n2024-08-19 09:11:04.664 CEST [9317] BACKTRACE:\n postgres: postgres postgres [local] SELECT(errbacktrace+0x46)\n[0x55ac9ab06e56]\n postgres: postgres postgres [local] SELECT(+0x66c7e5) [0x55ac9ab2f7e5]\n postgres: postgres postgres [local] SELECT(palloc+0x4a) [0x55ac9ab2fd3a]\n postgres: postgres postgres [local] SELECT(datumCopy+0x30)\n[0x55ac9aa10cf0]\n postgres: postgres postgres [local]\nSELECT(tuplesort_putdatum+0x8f) [0x55ac9ab4032f]\n postgres: postgres postgres [local] SELECT(+0x37bf26) [0x55ac9a83ef26]\n[..]\n2024-08-19 09:11:04.664 CEST [9317] STATEMENT: select id, count(*)\nfrom zz group by id having count(*) > 1;\n2024-08-19 09:11:04.664 CEST [9317] LOG: level: 0; Caller tuples:\n25165712 total in 13 blocks; 1705672 free; 23460040 used\n2024-08-19 09:11:04.664 CEST [9317] LOG: Grand total: 25165712 bytes\nin 13 blocks; 1705672 free (0 chunks); 23460040 used\n//100ms later:\n2024-08-19 09:11:04.764 CEST [9317] LOG: dumping memory context stats\n2024-08-19 09:11:04.764 CEST [9317] DETAIL: Context \"Caller tuples\".\n2024-08-19 09:11:04.764 CEST [9317] BACKTRACE:\n postgres: postgres postgres [local] SELECT(errbacktrace+0x46)\n[0x55ac9ab06e56]\n postgres: postgres postgres [local] SELECT(+0x66c7e5) [0x55ac9ab2f7e5]\n postgres: postgres postgres [local] SELECT(palloc+0x4a) [0x55ac9ab2fd3a]\n postgres: postgres postgres [local] SELECT(datumCopy+0x30)\n[0x55ac9aa10cf0]\n postgres: postgres postgres [local]\nSELECT(tuplesort_putdatum+0x8f) [0x55ac9ab4032f]\n postgres: postgres postgres [local] SELECT(+0x37bf26) [0x55ac9a83ef26]\n[..]\n2024-08-19 09:11:04.764 CEST [9317] STATEMENT: select id, count(*)\nfrom zz group by id having count(*) > 1;\n2024-08-19 09:11:04.765 CEST [9317] LOG: level: 0; Caller tuples:\n41942928 total in 15 blocks; 7015448 free; 34927480 used\n2024-08-19 09:11:04.765 CEST [9317] LOG: Grand total: 41942928 bytes\nin 15 blocks; 7015448 free (0 chunks); 34927480 used", "msg_date": "Mon, 19 Aug 2024 10:07:40 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": true, "msg_subject": "debug_palloc_context_threshold - dump code paths leading to memory\n leaks" } ]
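The hook the proposal above adds around the allocator boils down to a per-context comparison like the one below. The GUC variable here is a stand-in with an invented C name; the actual patch also rate-limits the dumps (every 100ms) and attaches a stack trace via errbacktrace(), as the sample log shows, and none of that is reproduced here.

-----------------------
#include "postgres.h"

#include "utils/memutils.h"

/* stand-in for the proposed GUC, in bytes; 0 disables the check */
static Size debug_palloc_context_threshold = 0;

/*
 * Called from the allocation path with the context that just grew.
 * Note the caveats from the thread: this tracks a single context, not
 * the backend's total memory, and the place where the threshold is
 * crossed is not necessarily the place that leaks.
 */
static void
maybe_dump_context_stats(MemoryContext context)
{
    if (debug_palloc_context_threshold == 0)
        return;

    if (context->mem_allocated >= debug_palloc_context_threshold)
        MemoryContextStats(context);    /* dumps this context tree's usage */
}
-----------------------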
[ { "msg_contents": "Hi, hackers,\n\nI've recently encountered an issue where a standby crashes when\nreconnecting to a new primary after a switchover under certain conditions.\nHere's a procedure of the crash scenario:\n\n\n1) We have three instances: one primary and two standbys (s1 and s2, both\nusing streaming replication).\n\n\n2) The primary crashed when the standby’s `flushed_lsn` was at the\nbeginning of a WAL segment (e.g., `0/22000000`). Both s1 and s2 logged the\nfollowing:\n\n ```\n\n FATAL: could not connect to the primary server...\n\n LOG: waiting for WAL to become available at 0/220000ED\n\n ```\n\n\n3) s1 was promoted to the new primary, s1 logged the following:\n\n ```\n\n LOG: received promote request\n\n LOG: redo done at 0/21FFFEE8\n\n LOG: selected new timeline ID: 2\n\n ```\n\n\n4) s2's `primary_conninfo` was updated to point to s1, s2 logged the\nfollowing:\n\n ```\n\n LOG: received SIGHUP, reloading configuration files\n\n LOG: parameter \"primary_conninfo\" changed to ...\n\n ```\n\n\n5) s2 began replication with s1 and attempted to fetch `0/22000000` on\ntimeline 2, s2 logged the following:\n\n ```\n\n LOG: fetching timeline history file for timeline 2 from primary server\n\n FATAL: could not start WAL streaming: ERROR: requested starting point\n0/22000000 on timeline 1 is not this server's history\n\n DETAIL: This server's history forked from timeline 1 at 0/21FFFFE0.\n\n LOG: started streaming WAL from primary at 0/22000000 on timeline 2\n\n ```\n\n\n6) WAL record mismatch caused the walreceiver process to terminate, s2\nlogged the following:\n\n ```\n\n LOG: invalid contrecord length 10 (expected 213) at 0/21FFFFE0\n\n FATAL: terminating walreceiver process due to administrator command\n\n ```\n\n\n7) s2 then attempted to fetch `0/21000000` on timeline 2. However, the\nstartup process failed to open the WAL file before it was created, leading\nto a crash:\n\n ```\n\n PANIC: could not open file \"pg_wal/000000020000000000000021\": No such\nfile or directory\n\n LOG: startup process was terminated by signal 6: Aborted\n\n ```\n\n\nIn this scenario, s2 attempts replication twice. First, it starts from\n`0/22000000` on timeline 2, setting `walrcv->flushedUpto` to `0/22000000`.\nBut when a mismatch occurs, the walreceiver process is terminated. On the\nsecond attempt, replication starts from `0/21000000` on timeline 2. The\nstartup process expects the WAL file to exist because WalRcvFlushRecPtr()\nreturn `0/22000000`, but the file is not found, causing s2's startup\nprocess to crash.\n\nI think it should check recptr and walrcv->flushedUpto when Request\nXLogStreming, so I create a patch for it.\n\n\nBest regards,\npixian shi", "msg_date": "Mon, 19 Aug 2024 16:43:09 +0800", "msg_from": "px shi <[email protected]>", "msg_from_op": true, "msg_subject": "[Bug Fix]standby may crash when switching-over in certain special\n cases" }, { "msg_contents": "On Mon, 19 Aug 2024 16:43:09 +0800\npx shi <[email protected]> wrote:\n\n> Hi, hackers,\n> \n> I've recently encountered an issue where a standby crashes when\n> reconnecting to a new primary after a switchover under certain conditions.\n> Here's a procedure of the crash scenario:\n> \n> \n> 1) We have three instances: one primary and two standbys (s1 and s2, both\n> using streaming replication).\n> \n> \n> 2) The primary crashed when the standby’s `flushed_lsn` was at the\n> beginning of a WAL segment (e.g., `0/22000000`). 
Both s1 and s2 logged the\n> following:\n> \n> ```\n> \n> FATAL: could not connect to the primary server...\n> \n> LOG: waiting for WAL to become available at 0/220000ED\n> \n> ```\n> \n> \n> 3) s1 was promoted to the new primary, s1 logged the following:\n> \n> ```\n> \n> LOG: received promote request\n> \n> LOG: redo done at 0/21FFFEE8\n> \n> LOG: selected new timeline ID: 2\n> \n> ```\n> \n> \n> 4) s2's `primary_conninfo` was updated to point to s1, s2 logged the\n> following:\n> \n> ```\n> \n> LOG: received SIGHUP, reloading configuration files\n> \n> LOG: parameter \"primary_conninfo\" changed to ...\n> \n> ```\n> \n> \n> 5) s2 began replication with s1 and attempted to fetch `0/22000000` on\n> timeline 2, s2 logged the following:\n> \n> ```\n> \n> LOG: fetching timeline history file for timeline 2 from primary server\n> \n> FATAL: could not start WAL streaming: ERROR: requested starting point\n> 0/22000000 on timeline 1 is not this server's history\n> \n> DETAIL: This server's history forked from timeline 1 at 0/21FFFFE0.\n> \n> LOG: started streaming WAL from primary at 0/22000000 on timeline 2\n> \n> ```\n> \n> \n> 6) WAL record mismatch caused the walreceiver process to terminate, s2\n> logged the following:\n> \n> ```\n> \n> LOG: invalid contrecord length 10 (expected 213) at 0/21FFFFE0\n> \n> FATAL: terminating walreceiver process due to administrator command\n> \n> ```\n> \n> \n> 7) s2 then attempted to fetch `0/21000000` on timeline 2. However, the\n> startup process failed to open the WAL file before it was created, leading\n> to a crash:\n> \n> ```\n> \n> PANIC: could not open file \"pg_wal/000000020000000000000021\": No such\n> file or directory\n> \n> LOG: startup process was terminated by signal 6: Aborted\n> \n> ```\n> \n> \n> In this scenario, s2 attempts replication twice. First, it starts from\n> `0/22000000` on timeline 2, setting `walrcv->flushedUpto` to `0/22000000`.\n> But when a mismatch occurs, the walreceiver process is terminated. On the\n> second attempt, replication starts from `0/21000000` on timeline 2. The\n> startup process expects the WAL file to exist because WalRcvFlushRecPtr()\n> return `0/22000000`, but the file is not found, causing s2's startup\n> process to crash.\n> \n> I think it should check recptr and walrcv->flushedUpto when Request\n> XLogStreming, so I create a patch for it.\n\nIs s1 a cascading standby of s2? If otherwise s1 and s2 is the standbys of\nthe primary server respectively, it is not surprising that s2 has progressed\nfar than s1 when the primary fails. I believe that this is the case you should\nuse pg_rewind. Even if flushedUpto is reset as proposed in your patch, s2 might\nalready have applied a WAL record that s1 has not processed yet, and there\nwould be no gurantee that subsecuent applys suceed.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Wed, 21 Aug 2024 01:49:20 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bug Fix]standby may crash when switching-over in certain\n special cases" }, { "msg_contents": "Yugo Nagata <[email protected]> 于2024年8月21日周三 00:49写道:\n\n>\n>\n> > Is s1 a cascading standby of s2? If otherwise s1 and s2 is the standbys\n> of\n> > the primary server respectively, it is not surprising that s2 has\n> progressed\n> > far than s1 when the primary fails. I believe that this is the case you\n> should\n> > use pg_rewind. 
Even if flushedUpto is reset as proposed in your patch,\n> s2 might\n> > already have applied a WAL record that s1 has not processed yet, and\n> there\n> > would be no gurantee that subsecuent applys suceed.\n>\n>\n Thank you for your response. In my scenario, s1 and s2 is the standbys of\nthe primary server respectively, and s1 a synchronous standby and s2 is an\nasynchronous standby. You mentioned that if s2's replay progress is ahead\nof s1, pg_rewind should be used. However, what I'm trying to address is an\nissue where s2 crashes during replay after s1 has been promoted to primary,\neven though s2's progress hasn't surpassed s1.\n\nRegards,\nPixian Shi\n\nYugo Nagata <[email protected]> 于2024年8月21日周三 00:49写道:> Is s1 a cascading standby of s2? If otherwise s1 and s2 is the standbys of> the primary server respectively, it is not surprising that s2 has progressed> far than s1 when the primary fails. I believe that this is the case you should> use pg_rewind. Even if flushedUpto is reset as proposed in your patch, s2 might> already have applied a WAL record that s1 has not processed yet, and there> would be no gurantee that subsecuent applys suceed. Thank you for your response. In my scenario, s1 and s2 is the standbys of the primary server respectively, and s1 a synchronous standby and s2 is an asynchronous standby. You mentioned that if s2's replay progress is ahead of s1, pg_rewind should be used. However, what I'm trying to address is an issue where s2 crashes during replay after s1 has been promoted to primary, even though s2's progress hasn't surpassed s1.Regards,Pixian Shi", "msg_date": "Wed, 21 Aug 2024 09:11:03 +0800", "msg_from": "px shi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Bug Fix]standby may crash when switching-over in certain special\n cases" }, { "msg_contents": "On Wed, 21 Aug 2024 09:11:03 +0800\npx shi <[email protected]> wrote:\n\n> Yugo Nagata <[email protected]> 于2024年8月21日周三 00:49写道:\n> \n> >\n> >\n> > > Is s1 a cascading standby of s2? If otherwise s1 and s2 is the standbys\n> > of\n> > > the primary server respectively, it is not surprising that s2 has\n> > progressed\n> > > far than s1 when the primary fails. I believe that this is the case you\n> > should\n> > > use pg_rewind. Even if flushedUpto is reset as proposed in your patch,\n> > s2 might\n> > > already have applied a WAL record that s1 has not processed yet, and\n> > there\n> > > would be no gurantee that subsecuent applys suceed.\n> >\n> >\n> Thank you for your response. In my scenario, s1 and s2 is the standbys of\n> the primary server respectively, and s1 a synchronous standby and s2 is an\n> asynchronous standby. You mentioned that if s2's replay progress is ahead\n> of s1, pg_rewind should be used. However, what I'm trying to address is an\n> issue where s2 crashes during replay after s1 has been promoted to primary,\n> even though s2's progress hasn't surpassed s1.\n\nI understood your point. 
It is odd that the standby server crashes when\nreplication fails because the standby would keep retrying to get the\nnext record even in such case.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Pixian Shi\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Mon, 30 Sep 2024 14:47:08 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bug Fix]standby may crash when switching-over in certain\n special cases" }, { "msg_contents": "Thanks for responding.\n\n\n> It is odd that the standby server crashes when\n\nreplication fails because the standby would keep retrying to get the\n\nnext record even in such case.\n\n\n As I mentioned earlier, when replication fails, it retries to establish\nstreaming replication. At this point, the value of *walrcv->flushedUpto *is\nnot necessarily the data actually flushed to disk. However, the startup\nprocess mistakenly believes that the latest flushed LSN is\n*walrcv->flushedUpto* and attempts to open the corresponding WAL file,\nwhich doesn't exist, leading to a file open failure and causing the startup\nprocess to PANIC.\n\nRegards,\nPixian Shi\n\nYugo NAGATA <[email protected]> 于2024年9月30日周一 13:47写道:\n\n> On Wed, 21 Aug 2024 09:11:03 +0800\n> px shi <[email protected]> wrote:\n>\n> > Yugo Nagata <[email protected]> 于2024年8月21日周三 00:49写道:\n> >\n> > >\n> > >\n> > > > Is s1 a cascading standby of s2? If otherwise s1 and s2 is the\n> standbys\n> > > of\n> > > > the primary server respectively, it is not surprising that s2 has\n> > > progressed\n> > > > far than s1 when the primary fails. I believe that this is the case\n> you\n> > > should\n> > > > use pg_rewind. Even if flushedUpto is reset as proposed in your\n> patch,\n> > > s2 might\n> > > > already have applied a WAL record that s1 has not processed yet, and\n> > > there\n> > > > would be no gurantee that subsecuent applys suceed.\n> > >\n> > >\n> > Thank you for your response. In my scenario, s1 and s2 is the standbys\n> of\n> > the primary server respectively, and s1 a synchronous standby and s2 is\n> an\n> > asynchronous standby. You mentioned that if s2's replay progress is ahead\n> > of s1, pg_rewind should be used. However, what I'm trying to address is\n> an\n> > issue where s2 crashes during replay after s1 has been promoted to\n> primary,\n> > even though s2's progress hasn't surpassed s1.\n>\n> I understood your point. It is odd that the standby server crashes when\n> replication fails because the standby would keep retrying to get the\n> next record even in such case.\n>\n> Regards,\n> Yugo Nagata\n>\n> >\n> > Regards,\n> > Pixian Shi\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n>\n\nThanks for responding. It is odd that the standby server crashes whenreplication fails because the standby would keep retrying to get thenext record even in such case. As I mentioned earlier, when replication fails, it retries to establish streaming replication. At this point, the value of walrcv->flushedUpto is not necessarily the data actually flushed to disk. However, the startup process mistakenly believes that the latest flushed LSN is walrcv->flushedUpto and attempts to open the corresponding WAL file, which doesn't exist, leading to a file open failure and causing the startup process to PANIC.Regards,Pixian ShiYugo NAGATA <[email protected]> 于2024年9月30日周一 13:47写道:On Wed, 21 Aug 2024 09:11:03 +0800\npx shi <[email protected]> wrote:\n\n> Yugo Nagata <[email protected]> 于2024年8月21日周三 00:49写道:\n> \n> >\n> >\n> > > Is s1 a cascading standby of s2? 
If otherwise s1 and s2 is the standbys\n> > of\n> > > the primary server respectively, it is not surprising that s2 has\n> > progressed\n> > > far than s1 when the primary fails. I believe that this is the case you\n> > should\n> > > use pg_rewind. Even if flushedUpto is reset as proposed in your patch,\n> > s2 might\n> > > already have applied a WAL record that s1 has not processed yet, and\n> > there\n> > > would be no gurantee that subsecuent applys suceed.\n> >\n> >\n>  Thank you for your response. In my scenario, s1 and s2 is the standbys of\n> the primary server respectively, and s1 a synchronous standby and s2 is an\n> asynchronous standby. You mentioned that if s2's replay progress is ahead\n> of s1, pg_rewind should be used. However, what I'm trying to address is an\n> issue where s2 crashes during replay after s1 has been promoted to primary,\n> even though s2's progress hasn't surpassed s1.\n\nI understood your point. It is odd that the standby server crashes when\nreplication fails because the standby would keep retrying to get the\nnext record even in such case.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Pixian Shi\n\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Mon, 30 Sep 2024 15:14:54 +0800", "msg_from": "px shi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Bug Fix]standby may crash when switching-over in certain special\n cases" } ]
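Paraphrasing the fix proposed upthread: when a new streaming request begins at a point older than what walrcv->flushedUpto still remembers from an aborted attempt, that remembered value must be discarded. The sketch below uses the real WalRcvData field names but is a paraphrase, not the actual patch; in walreceiverfuncs.c the equivalent logic would run while holding the spinlock that protects the shared struct.

-----------------------
#include "postgres.h"

#include "replication/walreceiver.h"

/*
 * Hypothetical illustration of the condition under which
 * RequestXLogStreaming() should (re)initialize the remembered flush
 * position before starting a new walreceiver.
 */
static void
reset_flushed_upto_if_needed(WalRcvData *walrcv,
                             TimeLineID tli, XLogRecPtr recptr)
{
    /*
     * First start, a timeline change, or a request that begins before
     * the position a previous (failed) attempt claimed to have flushed:
     * in all three cases flushedUpto cannot be trusted any more.
     */
    if (walrcv->receiveStart == 0 ||
        walrcv->receivedTLI != tli ||
        recptr < walrcv->flushedUpto)
    {
        walrcv->flushedUpto = recptr;
        walrcv->receivedTLI = tli;
    }
}
-----------------------

With the stale value cleared, the startup process no longer believes WAL up to the old flushedUpto already exists on disk, which is what led to the PANIC on the missing segment in the report above.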
[ { "msg_contents": "Hi everyone,\n\n\nIn the `clause_selectivity_ext()` function, there’s a comment regarding \na NULL clause check that asks, \"can this still happen?\". I decided to \ninvestigate whether this scenario is still valid.\n\nHere’s what I found after reviewing the relevant cases:\n\n- approx_tuple_count(): The function iterates over a loop of other clauses.\n- get_foreign_key_join_selectivity(): The function is invoked within an \n`if (clause)` condition.\n- consider_new_or_clause(): The clause is generated by \n`make_restrictinfo()`, which never returns NULL.\n- clauselist_selectivity_ext(): This function either checks \n`list_length(clauses) == 1` before being called, or it is called within \na loop of clauses.\n\nIn other cases, the function is called recursively from \n`clause_selectivity_ext()`.\n\nIf you are aware of any situations where a NULL clause could be passed \nor if I’ve missed anything, please let me know. I’m also open to any \nother suggestions.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Mon, 19 Aug 2024 12:38:05 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Remove redundant NULL check in clause_selectivity_ext() function" }, { "msg_contents": "Ilia Evdokimov <[email protected]> writes:\n> In the `clause_selectivity_ext()` function, there’s a comment regarding \n> a NULL clause check that asks, \"can this still happen?\". I decided to \n> investigate whether this scenario is still valid.\n\n> Here’s what I found after reviewing the relevant cases:\n\n> - approx_tuple_count(): The function iterates over a loop of other clauses.\n> - get_foreign_key_join_selectivity(): The function is invoked within an \n> `if (clause)` condition.\n> - consider_new_or_clause(): The clause is generated by \n> `make_restrictinfo()`, which never returns NULL.\n> - clauselist_selectivity_ext(): This function either checks \n> `list_length(clauses) == 1` before being called, or it is called within \n> a loop of clauses.\n\nThat list_length check doesn't prove anything about whether the list\nelement is NULL, though.\n\nWhile I suspect that you may be right that the case doesn't occur\nanymore (if it ever did), I'm inclined to leave this test in place.\nIt's cheap enough compared to what the rest of the function will do,\nand more importantly we can't assume that all interesting call sites\nare within Postgres core. There are definitely extensions calling\nclauselist_selectivity and related functions. It's possible that\nsome of them rely on clause_selectivity not crashing on a NULL.\nCertainly, such an assumption could be argued to be a bug they\nshould fix; but I'm disinclined to make them jump through that\nhoop for a vanishingly small performance improvement.\n\nAlso, there are boatloads of other places where the planner has\npossibly-redundant checks for null clause pointers. It's likely\nthat some of the other ones are more performance-critical than\nthis. But I wouldn't be in favor of retail removal of the others\neither. Maybe with a more holistic approach we could remove a\nwhole lot of them and make a measurable improvement; but it\nwould require some careful thought about what invariants we\nwant to assume. There's not really any design principles\nabout this right now, and where we've ended up is that most\nfunctions operating on expression trees assume they've got to\ndefend against NULL inputs. 
To remove those checks, we'd\nneed a clear understanding of where caller-side checks need\nto be added instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 11:02:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove redundant NULL check in clause_selectivity_ext() function" }, { "msg_contents": "On 19.8.24 18:02, Tom Lane wrote:\n> Ilia Evdokimov<[email protected]> writes:\n>> In the `clause_selectivity_ext()` function, there’s a comment regarding\n>> a NULL clause check that asks, \"can this still happen?\". I decided to\n>> investigate whether this scenario is still valid.\n>> Here’s what I found after reviewing the relevant cases:\n>> - approx_tuple_count(): The function iterates over a loop of other clauses.\n>> - get_foreign_key_join_selectivity(): The function is invoked within an\n>> `if (clause)` condition.\n>> - consider_new_or_clause(): The clause is generated by\n>> `make_restrictinfo()`, which never returns NULL.\n>> - clauselist_selectivity_ext(): This function either checks\n>> `list_length(clauses) == 1` before being called, or it is called within\n>> a loop of clauses.\n> That list_length check doesn't prove anything about whether the list\n> element is NULL, though.\n>\n> While I suspect that you may be right that the case doesn't occur\n> anymore (if it ever did), I'm inclined to leave this test in place.\n> It's cheap enough compared to what the rest of the function will do,\n> and more importantly we can't assume that all interesting call sites\n> are within Postgres core. There are definitely extensions calling\n> clauselist_selectivity and related functions. It's possible that\n> some of them rely on clause_selectivity not crashing on a NULL.\n> Certainly, such an assumption could be argued to be a bug they\n> should fix; but I'm disinclined to make them jump through that\n> hoop for a vanishingly small performance improvement.\n>\n> Also, there are boatloads of other places where the planner has\n> possibly-redundant checks for null clause pointers. It's likely\n> that some of the other ones are more performance-critical than\n> this. But I wouldn't be in favor of retail removal of the others\n> either. Maybe with a more holistic approach we could remove a\n> whole lot of them and make a measurable improvement; but it\n> would require some careful thought about what invariants we\n> want to assume. There's not really any design principles\n> about this right now, and where we've ended up is that most\n> functions operating on expression trees assume they've got to\n> defend against NULL inputs. To remove those checks, we'd\n> need a clear understanding of where caller-side checks need\n> to be added instead.\n>\n> \t\t\tregards, tom lane\n\nLet's assume that this check needs to remain, and the length check \ndoesn't guarantee anything. However, I'm a bit concerned that there's a \nNULL check here, but it's missing in the |clauselist_selectivity_ext()| \nfunction. For the reasons mentioned above, I would suggest the \nfollowing: either we perform the NULL check in both places, or we don't \nperform it in either.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.\n\n\n\n\n\n\n\n\nOn 19.8.24 18:02, Tom Lane wrote:\n\n\nIlia Evdokimov <[email protected]> writes:\n\n\nIn the `clause_selectivity_ext()` function, there’s a comment regarding \na NULL clause check that asks, \"can this still happen?\". 
I decided to \ninvestigate whether this scenario is still valid.\n\n\n\n\n\nHere’s what I found after reviewing the relevant cases:\n\n\n\n\n\n- approx_tuple_count(): The function iterates over a loop of other clauses.\n- get_foreign_key_join_selectivity(): The function is invoked within an \n`if (clause)` condition.\n- consider_new_or_clause(): The clause is generated by \n`make_restrictinfo()`, which never returns NULL.\n- clauselist_selectivity_ext(): This function either checks \n`list_length(clauses) == 1` before being called, or it is called within \na loop of clauses.\n\n\n\nThat list_length check doesn't prove anything about whether the list\nelement is NULL, though.\n\nWhile I suspect that you may be right that the case doesn't occur\nanymore (if it ever did), I'm inclined to leave this test in place.\nIt's cheap enough compared to what the rest of the function will do,\nand more importantly we can't assume that all interesting call sites\nare within Postgres core. There are definitely extensions calling\nclauselist_selectivity and related functions. It's possible that\nsome of them rely on clause_selectivity not crashing on a NULL.\nCertainly, such an assumption could be argued to be a bug they\nshould fix; but I'm disinclined to make them jump through that\nhoop for a vanishingly small performance improvement.\n\nAlso, there are boatloads of other places where the planner has\npossibly-redundant checks for null clause pointers. It's likely\nthat some of the other ones are more performance-critical than\nthis. But I wouldn't be in favor of retail removal of the others\neither. Maybe with a more holistic approach we could remove a\nwhole lot of them and make a measurable improvement; but it\nwould require some careful thought about what invariants we\nwant to assume. There's not really any design principles\nabout this right now, and where we've ended up is that most\nfunctions operating on expression trees assume they've got to\ndefend against NULL inputs. To remove those checks, we'd\nneed a clear understanding of where caller-side checks need\nto be added instead.\n\n\t\t\tregards, tom lane\n\n\n Let's assume that this check needs to remain, and the length check\n doesn't guarantee anything. However, I'm a bit concerned that\n there's a NULL check here, but it's missing in the clauselist_selectivity_ext()\n function. For the reasons mentioned above, I would suggest the\n following: either we perform the NULL check in both places, or we\n don't perform it in either.\n -- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Mon, 19 Aug 2024 18:48:19 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove redundant NULL check in clause_selectivity_ext() function" }, { "msg_contents": "On Tue, 20 Aug 2024 at 03:48, Ilia Evdokimov\n<[email protected]> wrote:\n> Let's assume that this check needs to remain, and the length check doesn't guarantee anything. However, I'm a bit concerned that there's a NULL check here, but it's missing in the clauselist_selectivity_ext() function. For the reasons mentioned above, I would suggest the following: either we perform the NULL check in both places, or we don't perform it in either.\n\nI don't follow this comparison. clauselist_selectivity_ext() is\nperfectly capable of accepting a NIL List of clauses.\n\nI agree with Tom that it's unlikely to be worth the risk removing the\nNULL check from clause_selectivity_ext(). 
From my point of view, the\nrisk-to-reward ratio is nowhere near the level of being worth it.\nThere'd just be no way to measure any sort of speedup from this change\nas there are far too many other things going on during planning. This\none is a drop in the ocean.\n\nHowever, I'd like to encourage you to look for other places that might\nhave a more meaningful impact on performance. For those, it's best to\ncome armed with a benchmark and results that demonstrate the speedup\nalong with your justification as to why you think the change is\nworthwhile. We've not received the former and you've not convinced two\ncommitters with your attempt on the latter.\n\nI suggest marking the CF entry for this patch as rejected.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 12:43:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove redundant NULL check in clause_selectivity_ext() function" } ]
[ { "msg_contents": "Hi!\n\nAfter rebasing one of my old patches, I'm hit to a problem with the\ninstallcheck test for 041_checkpoint_at_promote.pl.\nAt first, I thought it was something wrong with my patch, although it\ndoesn't relate to this part of the Postgres.\nThen I decided to do the same run but on current master and got the same\nresult.\n\nHere is my configure:\nSRC=\"../postgres\"\nTRG=`pwd`\n\nLINUX_CONFIGURE_FEATURES=\"\n --without-llvm\n --with-tcl --with-tclconfig=/usr/lib/tcl8.6/ --with-perl\n --with-python --with-gssapi --with-pam --with-ldap --with-selinux\n --with-systemd --with-uuid=ossp --with-libxml --with-libxslt\n--with-zstd\n --with-ssl=openssl\n\"\n\n$SRC/configure \\\n -C \\\n --prefix=$TRG/\"pgsql\" \\\n --enable-debug --enable-tap-tests --enable-depend --enable-cassert \\\n --enable-injection-points --enable-nls \\\n $LINUX_CONFIGURE_FEATURES \\\n CC=\"ccache clang\" CXX=\"ccache clang++\" \\\n CFLAGS=\"-Og -ggdb -fsanitize-recover=all\" \\\n CXXFLAGS=\"-Og -ggdb -fsanitize-recover=all\"\n\nAnd here is my run:\n$ time make PROVE_TESTS=t/041_checkpoint_at_promote.pl installcheck -C\nsrc/test/recovery\n...\n# Postmaster PID for node \"standby1\" is 820439\nerror running SQL: 'psql:<stdin>:1: ERROR: extension \"injection_points\" is\nnot available\nDETAIL: Could not open extension control file\n\"/home/omg/proj/build/pgsql/share/extension/injection_points.control\": No\nsuch file or directory.\nHINT: The extension must first be installed on the system where PostgreSQL\nis running.'\nwhile running 'psql -XAtq -d port=17154 host=/tmp/xkTLcw1tDb\ndbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CREATE EXTENSION\ninjection_points;' at\n/home/omg/proj/build/../postgres/src/test/perl/PostgreSQL/Test/Cluster.pm\nline 2140.\n# Postmaster PID for node \"master\" is 820423\n...\n\nCleary, Postgres can't find injection_points extension.\nAm I doing something wrong, or it is a problem with injection points extension\nitself?\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!After rebasing one of my old patches, I'm hit to a problem with the installcheck test for 041_checkpoint_at_promote.pl.At first, I thought it was something wrong with my patch, although it doesn't relate to this part of the Postgres.Then I decided to do the same run but on current master and got the same result.Here is my configure:SRC=\"../postgres\"TRG=`pwd`LINUX_CONFIGURE_FEATURES=\"        --without-llvm        --with-tcl --with-tclconfig=/usr/lib/tcl8.6/ --with-perl        --with-python --with-gssapi --with-pam --with-ldap --with-selinux        --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-zstd        --with-ssl=openssl\"$SRC/configure \\        -C \\        --prefix=$TRG/\"pgsql\" \\        --enable-debug --enable-tap-tests --enable-depend --enable-cassert \\        --enable-injection-points --enable-nls \\        $LINUX_CONFIGURE_FEATURES \\        CC=\"ccache clang\" CXX=\"ccache clang++\" \\        CFLAGS=\"-Og -ggdb -fsanitize-recover=all\" \\        CXXFLAGS=\"-Og -ggdb -fsanitize-recover=all\"And here is my run:$ time make PROVE_TESTS=t/041_checkpoint_at_promote.pl installcheck -C src/test/recovery...# Postmaster PID for node \"standby1\" is 820439error running SQL: 'psql:<stdin>:1: ERROR:  extension \"injection_points\" is not availableDETAIL:  Could not open extension control file \"/home/omg/proj/build/pgsql/share/extension/injection_points.control\": No such file or directory.HINT:  The extension must first be installed on the system where PostgreSQL is running.'while 
running 'psql -XAtq -d port=17154 host=/tmp/xkTLcw1tDb dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CREATE EXTENSION injection_points;' at /home/omg/proj/build/../postgres/src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.# Postmaster PID for node \"master\" is 820423...Cleary, Postgres can't find injection_points extension.Am I doing something wrong, or it is a problem with injection points extension itself?-- Best regards,Maxim Orlov.", "msg_date": "Mon, 19 Aug 2024 18:00:32 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "Maxim Orlov <[email protected]> writes:\n> After rebasing one of my old patches, I'm hit to a problem with the\n> installcheck test for 041_checkpoint_at_promote.pl.\n\nsrc/test/recovery/README points out that\n\n Also, to use \"make installcheck\", you must have built and installed\n contrib/pg_prewarm, contrib/pg_stat_statements and contrib/test_decoding\n in addition to the core code.\n\nI suspect this needs some additional verbiage about also installing\nsrc/test/modules/injection_points if you've enabled injection points.\n\n(I think we haven't noticed because most people just use \"make check\"\ninstead.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Aug 2024 12:10:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Mon, 19 Aug 2024 at 19:10, Tom Lane <[email protected]> wrote:\n\n> src/test/recovery/README points out that\n>\n> Also, to use \"make installcheck\", you must have built and installed\n> contrib/pg_prewarm, contrib/pg_stat_statements and contrib/test_decoding\n> in addition to the core code.\n>\n> I suspect this needs some additional verbiage about also installing\n> src/test/modules/injection_points if you've enabled injection points.\n>\n> (I think we haven't noticed because most people just use \"make check\"\n> instead.)\n>\n\nOK, many thanks for a comprehensive explanation!\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 19 Aug 2024 at 19:10, Tom Lane <[email protected]> wrote:\nsrc/test/recovery/README points out that\n\n  Also, to use \"make installcheck\", you must have built and installed\n  contrib/pg_prewarm, contrib/pg_stat_statements and contrib/test_decoding\n  in addition to the core code.\n\nI suspect this needs some additional verbiage about also installing\nsrc/test/modules/injection_points if you've enabled injection points.\n\n(I think we haven't noticed because most people just use \"make check\"\ninstead.)OK, many thanks for a comprehensive explanation!-- Best regards,Maxim Orlov.", "msg_date": "Tue, 20 Aug 2024 10:29:42 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "Shame on me, I didn't mention one thing in the original email. Actually,\nthe problem starts for me with \"make installcheck-world\".\nAnd only then I've run a specific test 041_checkpoint_at_promote.pl. I.e.\nthe whole sequence of the commands are:\nconfigure --enable_injection_points ...\nmake world\nmake install-world\ninitdb ...\npg_ctl ... 
start\nmake installcheck-world\n\nAnd all the tests pass perfectly, for example, in REL_15_STABLE because\nthere is no opt like enable_injection_points.\nBut this is not the case if we are dealing with the current master branch.\nSo, my point here: installcheck-world doesn't\nwork on the current master branch until I explicitly install\ninjection_points extension. In my view, it's a bit weird, since\nneither test_decoding, pg_stat_statements or pg_prewarm demand it.\n\n-- \nBest regards,\nMaxim Orlov.\n\n Shame on me, I didn't mention one thing in the original email.  Actually, the problem starts for me with \"make installcheck-world\".And only then I've run a specific test 041_checkpoint_at_promote.pl. I.e. the whole sequence of the commands are:configure --enable_injection_points ...make worldmake install-worldinitdb ... pg_ctl ... startmake installcheck-worldAnd all the tests pass perfectly, for example, in REL_15_STABLE because there is no opt like enable_injection_points.But this is not the case if we are dealing with the current master branch.  So, my point here: installcheck-world doesn't work on the current master branch until I explicitly install injection_points extension.  In my view, it's a bit weird, since neither test_decoding, pg_stat_statements or pg_prewarm demand it.-- Best regards,Maxim Orlov.", "msg_date": "Tue, 20 Aug 2024 14:40:10 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "Maxim Orlov <[email protected]> writes:\n> So, my point here: installcheck-world doesn't\n> work on the current master branch until I explicitly install\n> injection_points extension. In my view, it's a bit weird, since\n> neither test_decoding, pg_stat_statements or pg_prewarm demand it.\n\nUgh.  The basic issue here is that \"make install-world\" doesn't\ninstall anything from underneath src/test/modules, which I recall\nas being an intentional decision.  Rather than poking a hole in\nthat policy for injection_points, I wonder if we should move it\nto contrib.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:04:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "I wrote:\n> Ugh.  The basic issue here is that \"make install-world\" doesn't\n> install anything from underneath src/test/modules, which I recall\n> as being an intentional decision.  Rather than poking a hole in\n> that policy for injection_points, I wonder if we should move it\n> to contrib.\n\n... which would also imply writing documentation and so forth,\nand it'd mean that injection_points starts to show up in end-user\ninstallations.  (That would happen with the alternative choice of\nhacking install-world to include src/test/modules/injection_points,\ntoo.)  While you could argue that that'd be helpful for extension\nauthors who'd like to use injection_points in their own tests, I'm\nnot sure that it's where we want to go with that module.  It's only\nmeant as test scaffolding, and I don't think we've analyzed the\nimplications of some naive user installing it.\n\nWe do, however, need to preserve the property that installcheck\nworks after install-world.  I'm starting to think that maybe\nthe 041 test should be hacked to silently skip if it doesn't\nfind injection_points available. 
(We could then remove some of\nthe makefile hackery that's supporting the current behavior.)\nProbably the same needs to happen in each other test script\nthat's using injection_points --- I imagine that Maxim's\ntest is simply failing here first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:10:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On 2024-Aug-20, Tom Lane wrote:\n\n> We do, however, need to preserve the property that installcheck\n> works after install-world. I'm starting to think that maybe\n> the 041 test should be hacked to silently skip if it doesn't\n> find injection_points available.\n\nYeah, I like this option. Injection points require to be explicitly\nenabled in configure, so skipping that test when injection_points can't\nbe found seems reasonable.\n\nThis also suggests that EXTRA_INSTALL should have injection_points only\nwhen the option is enabled.\n\nI've been curious about what exactly does this Makefile line\n export enable_injection_points enable_injection_points\nachieve.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:30:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Tue, Aug 20, 2024 at 12:10:08PM -0400, Tom Lane wrote:\n> ... which would also imply writing documentation and so forth,\n> and it'd mean that injection_points starts to show up in end-user\n> installations. (That would happen with the alternative choice of\n> hacking install-world to include src/test/modules/injection_points,\n> too.) While you could argue that that'd be helpful for extension\n> authors who'd like to use injection_points in their own tests, I'm\n> not sure that it's where we want to go with that module. It's only\n> meant as test scaffolding, and I don't think we've analyzed the\n> implications of some naive user installing it.\n\nThe original line of thoughts here is that if I were to write a test\nthat relies on injection points for a critical bug fix, then I'd\nrather not have to worry about the hassle of maintaining user-facing\ndocumentation to get the work done. The second line is that we should\nbe able to tweak the module or even break its ABI as much as we want\nin stable branches to accomodate to the tests, which is what a test\nmodule is good for. The same ABI argument kind of stands as well for\nthe backend portion, but we'll see where it goes.\n\n> We do, however, need to preserve the property that installcheck\n> works after install-world. I'm starting to think that maybe\n> the 041 test should be hacked to silently skip if it doesn't\n> find injection_points available. 
(We could then remove some of\n> the makefile hackery that's supporting the current behavior.)\n> Probably the same needs to happen in each other test script\n> that's using injection_points --- I imagine that Maxim's\n> test is simply failing here first.\n\nYeah, we could do that.\n--\nMichael", "msg_date": "Wed, 21 Aug 2024 08:25:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Tue, Aug 20, 2024 at 12:30:35PM -0400, Alvaro Herrera wrote:\n> Yeah, I like this option. Injection points require to be explicitly\n> enabled in configure, so skipping that test when injection_points can't\n> be found seems reasonable.\n\nMy apologies for the delay in doing something here.\n\nThe simplest thing would be to scan pg_available_extensions after the\nfirst node is started, like in the attached. What do you think?\n\n> This also suggests that EXTRA_INSTALL should have injection_points only\n> when the option is enabled.\n\nIf the switch is disabled, the path is just ignored, so I don't see\nany harm in keeping it listed all the time.\n\n> I've been curious about what exactly does this Makefile line\n> export enable_injection_points enable_injection_points\n> achieve.\n\nWithout this line, the TAP tests would not be able to know if\ninjection points are enabled or not, no? Well, it is true that we\ncould also do something like a scan of pg_config.h in the installation\npath, but this is consistent with what ./configure uses.\n--\nMichael", "msg_date": "Fri, 23 Aug 2024 12:37:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On 2024-Aug-23, Michael Paquier wrote:\n\n> On Tue, Aug 20, 2024 at 12:30:35PM -0400, Alvaro Herrera wrote:\n> > Yeah, I like this option. Injection points require to be explicitly\n> > enabled in configure, so skipping that test when injection_points can't\n> > be found seems reasonable.\n> \n> My apologies for the delay in doing something here.\n> \n> The simplest thing would be to scan pg_available_extensions after the\n> first node is started, like in the attached. What do you think?\n\nHmm, I think you could test whether \"--enable-injection-points\" is in\nthe pg_config output in $node->config_data('--configure'). You don't\nneed to have started the node for that.\n\n> > This also suggests that EXTRA_INSTALL should have injection_points only\n> > when the option is enabled.\n> \n> If the switch is disabled, the path is just ignored, so I don't see\n> any harm in keeping it listed all the time.\n\nOkay.\n\n> > I've been curious about what exactly does this Makefile line\n> > export enable_injection_points enable_injection_points\n> > achieve.\n> \n> Without this line, the TAP tests would not be able to know if\n> injection points are enabled or not, no? 
Well, it is true that we\n> could also do something like a scan of pg_config.h in the installation\n> path, but this is consistent with what ./configure uses.\n\nRight, I figured out afterwards that what that does is export the\nmake-level variable as an environment variable.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 23 Aug 2024 18:25:41 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Aug-23, Michael Paquier wrote:\n>>> I've been curious about what exactly does this Makefile line\n>>> export enable_injection_points enable_injection_points\n>>> achieve.\n\n>> Without this line, the TAP tests would not be able to know if\n>> injection points are enabled or not, no?\n\n> Right, I figured out afterwards that what that does is export the\n> make-level variable as an environment variable.\n\nIt exports it twice, though, which is pretty confusing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2024 18:45:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "By the way, we have the same kind of \"problem\" with the meson build. As we\nare deliberately\nnot want to install src/test/modules, after b6a0d469cae and 0d237aeebaee we\nmust add step\n\"meson compile install-test-files\" in order to \"meson test -q --setup\nrunning\" to be successful.\n\nTo be honest, this step is not obvious. Especially than there was no such\nstep before. But\ndocs and https://wiki.postgresql.org/wiki/Meson are completely silenced\nabout it.\n\n-- \nBest regards,\nMaxim Orlov.\n\nBy the way, we have the same kind of \"problem\" with the meson build. As we are deliberately not want to install src/test/modules, after b6a0d469cae and 0d237aeebaee we must add step \"meson compile install-test-files\" in order to \"meson test -q --setup running\" to be successful.To be honest, this step is not obvious. Especially than there was no such step before. But docs and https://wiki.postgresql.org/wiki/Meson are completely silenced about it.-- Best regards,Maxim Orlov.", "msg_date": "Wed, 28 Aug 2024 09:14:35 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Fri, Aug 23, 2024 at 06:45:02PM -0400, Tom Lane wrote:\n> It exports it twice, though, which is pretty confusing.\n\nRight. I am not sure what was my state of mind back then, but this\npattern has spread a bit.\n\nREL_17_STABLE is frozen for a few more days, so I'll address all the\nitems of this thread that once the release of this week is tagged: the\nexport duplicates and the installcheck issue. 
These are staged on a\nlocal branch for now.\n--\nMichael", "msg_date": "Mon, 2 Sep 2024 09:53:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Mon, Sep 02, 2024 at 09:53:30AM +0900, Michael Paquier wrote:\n> REL_17_STABLE is frozen for a few more days, so I'll address all the\n> items of this thread that once the release of this week is tagged: the\n> export duplicates and the installcheck issue. These are staged on a\n> local branch for now.\n\nThere are two TAP tests in REL_17_STABLE that use the module\ninjection_points, and we have 6 of them on HEAD. For now, I have\napplied a patch that addresses the problems for v17 to avoid any\nproblems with the upcoming release, fixing the two tests that exist in\nREL_17_STABLE.\n\nFor HEAD, I'd like to be slightly more ambitious and propose a routine\nin Cluster.pm that scans for available extensions, as of the attached.\nThis simplifies the injection point test in libpq, as the\ninjection_point is one portion of the test so we don't need the check\nbased on the environment variable. There is no need for checks in the\nTAP tests injection_points's 001_stats.pl and test_slru's\n001_multixact.pl as these modules disable installcheck.\n\nAny thoughts about the attached? This makes installcheck work here\nwith and without the configure switch.\n--\nMichael", "msg_date": "Wed, 4 Sep 2024 09:40:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Wed, 4 Sept 2024 at 03:40, Michael Paquier <[email protected]> wrote:\n\n> Any thoughts about the attached? This makes installcheck work here\n> with and without the configure switch.\n>\n>\nWorks for me with configure build. Meson build, obviously, still need extra\n\"meson compile install-test-files\" step\nas expected. From your patch, I see that you used safe_psql call to check\nfor availability of the injection_points\nextension. Are there some hidden, non-obvious reasons for it? It's much\nsimpler to check output of the\npg_config as Álvaro suggested above, does it? And we don't need active node\nfor this. Or I miss something?\n\nAnd one other thing I must mention. I don't know why, but my pg_config from\nmeson build show empty configure\ndespite the fact, I make pass the same options in both cases.\n\nautotools:\n$ ./pg_config --configure\n'--enable-debug' '--enable-tap-tests' '--enable-depend' ....\n\nmeson:\n$ ./pg_config --configure\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Wed, 4 Sept 2024 at 03:40, Michael Paquier <[email protected]> wrote:\nAny thoughts about the attached?  This makes installcheck work here\nwith and without the configure switch.Works for me with configure build. Meson build, obviously, still need extra \"meson compile install-test-files\" stepas expected. From your patch, I see that you used safe_psql call to check for availability of the injection_pointsextension. Are there some hidden, non-obvious reasons for it? It's much simpler to check output of the pg_config as Álvaro suggested above, does it? And we don't need active node for this. Or I miss something?And one other thing I must mention. 
I don't know why, but my pg_config from meson build show empty configuredespite the fact, I make pass the same options in both cases.autotools:$ ./pg_config --configure'--enable-debug' '--enable-tap-tests' '--enable-depend' ....meson:$ ./pg_config --configure-- Best regards,Maxim Orlov.", "msg_date": "Wed, 4 Sep 2024 19:05:32 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Wed, Sep 04, 2024 at 07:05:32PM +0300, Maxim Orlov wrote:\n> Works for me with configure build. Meson build, obviously, still need extra\n> \"meson compile install-test-files\" step\n> as expected. From your patch, I see that you used safe_psql call to check\n> for availability of the injection_points\n> extension.\n\n> Are there some hidden, non-obvious reasons for it? It's much\n> simpler to check output of the\n> pg_config as Álvaro suggested above, does it? And we don't need active node\n> for this. Or I miss something?\n\nEven if the code is compiled with injection points enabled, it's still\ngoing to be necessary to check if the module exists in the\ninstallation or not. And it may or may not be around.\n\nOne thing that we could do, rather than relying on the environment\nvariable for the compile-time check, would be to scan pg_config.h for\n\"USE_INJECTION_POINTS 1\". If it is set, we could skip the use of these\nenvironment variables. That's really kind of the same thing for\nwith_ssl or similar depending on the dependencies that exist. So\nthat's switching one thing to the other. I am not sure that's worth\nswitching at this stage. It does not change the need of a runtime\ncheck to make sure that the module is installed, though.\n\nAnother thing that we could do, rather than query\npg_available_extension, is implementing an equivalent on the perl\nside, scanning an installation tree for a module .so or equivalent,\nbut I've never been much a fan of the extra maintenance burden these\nduplications introduce, copying what the backend is able to handle\nalready in a more transparent way for the TAP routines.\n\n> And one other thing I must mention. I don't know why, but my pg_config from\n> meson build show empty configure\n> despite the fact, I make pass the same options in both cases.\n> \n> autotools:\n> $ ./pg_config --configure\n> '--enable-debug' '--enable-tap-tests' '--enable-depend' ....\n> \n> meson:\n> $ ./pg_config --configure\n\nYes, ./configure does not apply to meson, so I'm not sure what we\nshould do here, except perhaps inventing a new option switch that\nreports the options that have been used with the meson command, or\nsomething like that.\n--\nMichael", "msg_date": "Thu, 5 Sep 2024 10:59:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" }, { "msg_contents": "On Thu, 5 Sept 2024 at 04:59, Michael Paquier <[email protected]> wrote:\n\n> Even if the code is compiled with injection points enabled, it's still\n> going to be necessary to check if the module exists in the\n> installation or not. 
And it may or may not be around.\n>\n\nOK, thanks for the explanation, I get it.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Thu, 5 Sept 2024 at 04:59, Michael Paquier <[email protected]> wrote:\nEven if the code is compiled with injection points enabled, it's still\ngoing to be necessary to check if the module exists in the\ninstallation or not.  And it may or may not be around.OK, thanks for the explanation, I get it.-- Best regards,Maxim Orlov.", "msg_date": "Thu, 5 Sep 2024 19:08:58 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 041_checkpoint_at_promote.pl faild in installcheck due to\n missing injection_points" } ]
[ { "msg_contents": "Hi all,\n\nWhile hacking on a different thing, I've noticed that\nldap_password_func had the idea to define _PG_fini(). This is\npointless, as library unloading is not supported in the backend, and\nsomething that has been cleaned up from the tree in ab02d702ef08.\nThat's not something to encourage, perhaps, as well..\n\nHow about removing it like in the attached to be consistent?\n\nThanks,\n--\nMichael", "msg_date": "Tue, 20 Aug 2024 13:46:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Remaining reference to _PG_fini() in ldap_password_func" }, { "msg_contents": "On 20/08/2024 07:46, Michael Paquier wrote:\n> Hi all,\n> \n> While hacking on a different thing, I've noticed that\n> ldap_password_func had the idea to define _PG_fini(). This is\n> pointless, as library unloading is not supported in the backend, and\n> something that has been cleaned up from the tree in ab02d702ef08.\n> That's not something to encourage, perhaps, as well..\n> \n> How about removing it like in the attached to be consistent?\n\n+1. There's also a prototype for _PG_fini() in fmgr.h, let's remove that \ntoo.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 09:59:12 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining reference to _PG_fini() in ldap_password_func" }, { "msg_contents": "On Tue, Aug 20, 2024 at 09:59:12AM +0300, Heikki Linnakangas wrote:\n> +1. There's also a prototype for _PG_fini() in fmgr.h, let's remove that\n> too.\n\nYes, I was wondering too whether we should force the hand of\nextensions to know that it is pointless to support unloading, telling\neverybody to delete this code.\n\nI'm OK with the agressive move and remove the whole. Any thoughts\nfrom others?\n--\nMichael", "msg_date": "Tue, 20 Aug 2024 17:53:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining reference to _PG_fini() in ldap_password_func" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 20/08/2024 07:46, Michael Paquier wrote:\n>> How about removing it like in the attached to be consistent?\n\n> +1. There's also a prototype for _PG_fini() in fmgr.h, let's remove that \n> too.\n\n+1. I think the fmgr.h prototype may have been left there\ndeliberately to avoid breaking extension code, but it's past\ntime to clean it up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Aug 2024 09:05:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remaining reference to _PG_fini() in ldap_password_func" }, { "msg_contents": "On Tue, Aug 20, 2024 at 09:05:54AM -0400, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> +1. There's also a prototype for _PG_fini() in fmgr.h, let's remove that \n>> too.\n> \n> +1. I think the fmgr.h prototype may have been left there\n> deliberately to avoid breaking extension code, but it's past\n> time to clean it up.\n\nOkay, I've removed all that then. Thanks for the feedback.\n--\nMichael", "msg_date": "Wed, 21 Aug 2024 07:26:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remaining reference to _PG_fini() in ldap_password_func" } ]
[ { "msg_contents": "Hello Postgres Hackers\n\nAn application that I am developing uses Postgresql, and includes a fairly large\nnumber of partitioned tables which are used to store time series data.\n\nThe tables are partitioned by time, and typically there is only one partition\nat a time - the current partition - that is actually being updated.\nOlder partitions\nare available for query and eventually dropped.\n\nAs per the documentation, partitioned tables are not analyzed by the autovacuum\nworkers, although their partitions are. Statistics are needed on the partitioned\ntable level for at least some query planning activities.\n\nThe problem is that giving an ANALYZE command targeting a partitioned table\ncauses it to update statistics for the partitioned table AND all the individual\npartitions. There is currently no option to prevent it from including the\npartitions.\n\nThis is wasteful for our application: for one thing the autovacuum\nhas already analyzed the individual partitions; for another most of\nthe partitions\nwill have had no changes, so they don't need to be analyzed repeatedly.\n\nI took some measurements when running ANALYZE on one of our tables. It\ntook approx\n4 minutes to analyze the partitioned table, then 29 minutes to analyze the\npartitions. We have hundreds of these tables, so the cost is very significant.\n\nFor my use case at least it would be fantastic if we could add an ONLY option\nto ANALYZE, which would cause it to analyze the named table only and not\ndescend into the partitions.\n\nI took a look at the source and it looks doable, but before launching into it\nI thought I would ask a few questions here.\n\n 1. Would such a feature be welcomed? Are there any traps I might not\nhave thought of?\n\n 2. The existing ANALYZE command has the following structure:\n\n ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n\n It would be easiest to add ONLY as another option, but that\ndoesn't look quite\n right to me - surely the ONLY should be attached to the table name?\n An alternative would be:\n\n ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n\nAny feedback or advice would be great.\n\nRegards\nMike.\n\n\n", "msg_date": "Tue, 20 Aug 2024 15:52:12 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "ANALYZE ONLY" }, { "msg_contents": "On Tue, 20 Aug 2024 at 07:52, Michael Harris <[email protected]> wrote:\n> 1. Would such a feature be welcomed? Are there any traps I might not\n> have thought of?\n\nI think this sounds like a reasonable feature.\n\n\n> 2. The existing ANALYZE command has the following structure:\n>\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n>\n> It would be easiest to add ONLY as another option, but that\n> doesn't look quite\n> right to me - surely the ONLY should be attached to the table name?\n> An alternative would be:\n>\n> ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n>\n> Any feedback or advice would be great.\n\nPersonally I'd prefer a new option to be added. But I agree ONLY isn't\na good name then. Maybe something like SKIP_PARTITIONS.\n\n\n", "msg_date": "Tue, 20 Aug 2024 09:42:43 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On 20.8.24 10:42, Jelte Fennema-Nio wrote:\n\n> On Tue, 20 Aug 2024 at 07:52, Michael Harris<[email protected]> wrote:\n>> 1. Would such a feature be welcomed? 
Are there any traps I might not\n>> have thought of?\n> I think this sounds like a reasonable feature.\n>\n>\n>> 2. The existing ANALYZE command has the following structure:\n>>\n>> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n>>\n>> It would be easiest to add ONLY as another option, but that\n>> doesn't look quite\n>> right to me - surely the ONLY should be attached to the table name?\n>> An alternative would be:\n>>\n>> ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n>>\n>> Any feedback or advice would be great.\n> Personally I'd prefer a new option to be added. But I agree ONLY isn't\n> a good name then. Maybe something like SKIP_PARTITIONS.\n>\n\nHi everyone,\n\nYour proposal is indeed interesting, but I have a question: can't your \nissue be resolved by properly configuring |autovacuum| instead of \ndeveloping a new feature for |ANALYZE|?\n\n From my perspective, |ANALYZE| is intended to forcibly gather \nstatistics from all partitioned tables. If the goal is to ensure that \nstatistics are updated at the right moment, adjusting the \n|autovacuum_analyze_threshold| and |autovacuum_analyze_scale_factor| \nparameters might be the solution.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.\n\n\n\n\n\n\nOn 20.8.24 10:42, Jelte Fennema-Nio wrote:\n\nOn Tue, 20 Aug 2024 at 07:52, Michael Harris <[email protected]> wrote:\n\n\n 1. Would such a feature be welcomed? Are there any traps I might not\nhave thought of?\n\n\n\nI think this sounds like a reasonable feature.\n\n\n\n\n 2. The existing ANALYZE command has the following structure:\n\n ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n\n It would be easiest to add ONLY as another option, but that\ndoesn't look quite\n right to me - surely the ONLY should be attached to the table name?\n An alternative would be:\n\n ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n\nAny feedback or advice would be great.\n\n\n\nPersonally I'd prefer a new option to be added. But I agree ONLY isn't\na good name then. Maybe something like SKIP_PARTITIONS.\n\n\n\n\n\nHi everyone,\nYour proposal is indeed interesting, but I have a question: can't\n your issue be resolved by properly configuring autovacuum\n instead of developing a new feature for ANALYZE?\nFrom my perspective, ANALYZE is intended to\n forcibly gather statistics from all partitioned tables. If the\n goal is to ensure that statistics are updated at the right moment,\n adjusting the autovacuum_analyze_threshold and autovacuum_analyze_scale_factor\n parameters might be the solution.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.", "msg_date": "Tue, 20 Aug 2024 14:25:32 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Tue, 20 Aug 2024 at 23:25, Ilia Evdokimov\n<[email protected]> wrote:\n> Your proposal is indeed interesting, but I have a question: can't your issue be resolved by properly configuring autovacuum instead of developing a new feature for ANALYZE?\n\nBasically, no. There's a \"tip\" in [1] which provides information on\nthe limitation, namely:\n\n\"The autovacuum daemon does not issue ANALYZE commands for partitioned\ntables. Inheritance parents will only be analyzed if the parent itself\nis changed - changes to child tables do not trigger autoanalyze on the\nparent table. 
If your queries require statistics on parent tables for\nproper planning, it is necessary to periodically run a manual ANALYZE\non those tables to keep the statistics up to date.\"\n\nThere is also some discussion about removing the limitation in [2].\nWhile I agree that it would be nice to have autovacuum handle this,\nit's not clear how exactly it would work. Additionally, if we had\nthat, it would still be useful if the ANALYZE command could be\ninstructed to just gather statistics for the partitioned table only.\n\nDavid\n\n[1] https://www.postgresql.org/docs/devel/routine-vacuuming.html#VACUUM-FOR-STATISTICS\n[2] https://www.postgresql.org/message-id/flat/CAKkQ508_PwVgwJyBY%3D0Lmkz90j8CmWNPUxgHvCUwGhMrouz6UA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 23:49:41 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi Michael,\n\nThanks for starting this thread. I've also spent a bit time on this after\nreading your first thread on this issue [1]\n\nMichael Harris <[email protected]>, 20 Ağu 2024 Sal, 08:52 tarihinde şunu\nyazdı:\n\n> The problem is that giving an ANALYZE command targeting a partitioned table\n> causes it to update statistics for the partitioned table AND all the\n> individual\n> partitions. There is currently no option to prevent it from including the\n> partitions.\n>\n> This is wasteful for our application: for one thing the autovacuum\n> has already analyzed the individual partitions; for another most of\n> the partitions\n> will have had no changes, so they don't need to be analyzed repeatedly.\n>\n\nI agree that it's a waste to analyze partitions when they're already\nanalyzed by autovacuum. It would be nice to have a way to run analyze only\non a partitioned table without its partitions.\n\n\n\n> I took some measurements when running ANALYZE on one of our tables. It\n> took approx\n> 4 minutes to analyze the partitioned table, then 29 minutes to analyze the\n> partitions. We have hundreds of these tables, so the cost is very\n> significant.\n>\n\nI quickly tweaked the code a bit to exclude partitions when a partitioned\ntable is being analyzed. I can confirm that there is a significant gain\neven on a simple case like a partitioned table with 10 partitions and 1M\nrows in each partition.\n\n 1. Would such a feature be welcomed? Are there any traps I might not\n> have thought of?\n>\n> 2. The existing ANALYZE command has the following structure:\n>\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n>\n> It would be easiest to add ONLY as another option, but that\n> doesn't look quite\n> right to me - surely the ONLY should be attached to the table name\n> An alternative would be:\n>\n> ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n>\n\nI feel closer to adding this as an option instead of a new keyword in\nANALYZE grammar. To me, it would be easier to have this option and then\ngive the names of partitioned tables as opposed to typing ONLY before each\npartition table.\nBut we should think of another name as ONLY is used differently (attached\nto the table name as you mentioned) in other contexts.\n\nI've been also thinking about how this new option should affect inheritance\ntables. Should it have just no impact on them or only analyze the parent\ntable without taking child tables into account? There are two records for\nan inheritance parent table in pg_statistic, one row for only the parent\ntable and a second row including children. 
We might only analyze the parent\ntable if this new \"ONLY\" option is specified. I'm not sure if that would be\nsomething users would need or not, but I think this option should behave\nsimilarly for both partitioned tables and inheritance tables.\n\nIf we decide to go with only partition tables and not care about\ninheritance, then naming this option to SKIP_PARTITIONS as Jelte suggested\nsounds fine. But that name wouldn't work if this option will affect\ninheritance tables.\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Michael,Thanks for starting this thread. I've also spent a bit time on this after reading your first thread on this issue [1] Michael Harris <[email protected]>, 20 Ağu 2024 Sal, 08:52 tarihinde şunu yazdı:\nThe problem is that giving an ANALYZE command targeting a partitioned table\ncauses it to update statistics for the partitioned table AND all the individual\npartitions. There is currently no option to prevent it from including the\npartitions.\n\nThis is wasteful for our application: for one thing the autovacuum\nhas already analyzed the individual partitions; for another most of\nthe partitions\nwill have had no changes, so they don't need to be analyzed repeatedly.I agree that it's a waste to analyze partitions when they're already analyzed by autovacuum. It would be nice to have a way to run analyze only on a partitioned table without its partitions. \nI took some measurements when running ANALYZE on one of our tables. It\ntook approx\n4 minutes to analyze the partitioned table, then 29 minutes to analyze the\npartitions. We have hundreds of these tables, so the cost is very significant.I quickly tweaked the code a bit to exclude partitions when a partitioned table is being analyzed. I can confirm that there is a significant gain even on a simple case like a partitioned table with 10 partitions and 1M rows in each partition.\n  1. Would such a feature be welcomed? Are there any traps I might not\nhave thought of?\n\n  2. The existing ANALYZE command has the following structure:\n\n     ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n\n     It would be easiest to add ONLY as another option, but that\ndoesn't look quite\n     right to me - surely the ONLY should be attached to the table name\n     An alternative would be:\n\n     ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]I feel closer to adding this as an option instead of a new keyword in ANALYZE grammar. To me, it would be easier to have this option and then give the names of partitioned tables as opposed to typing ONLY before each partition table.But we should think of another name as ONLY is used differently (attached to the table name as you mentioned) in other contexts.I've been also thinking about how this new option should affect inheritance tables. Should it have just no impact on them or only analyze the parent table without taking child tables into account? There are two records for an inheritance parent table in pg_statistic, one row for only the parent table and a second row including children. We might only analyze the parent table if this new \"ONLY\" option is specified. I'm not sure if that would be something users would need or not, but I think this option should behave similarly for both partitioned tables and inheritance tables.If we decide to go with only partition tables and not care about inheritance, then naming this option to SKIP_PARTITIONS as Jelte suggested sounds fine. 
But that name wouldn't work if this option will affect inheritance tables.Thanks,-- Melih MutluMicrosoft", "msg_date": "Tue, 20 Aug 2024 19:26:48 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Melih Mutlu <[email protected]>, 20 Ağu 2024 Sal, 19:26 tarihinde şunu\nyazdı:\n\n> Hi Michael,\n>\n> Thanks for starting this thread. I've also spent a bit time on this after\n> reading your first thread on this issue [1]\n>\nForgot to add the reference [1]\n\n[1]\nhttps://www.postgresql.org/message-id/CADofcAXVbD0yGp_EaC9chmzsOoSai3jcfBCnyva3j0RRdRvMVA@mail.gmail.com\n\nMelih Mutlu <[email protected]>, 20 Ağu 2024 Sal, 19:26 tarihinde şunu yazdı:Hi Michael,Thanks for starting this thread. I've also spent a bit time on this after reading your first thread on this issue [1] Forgot to add the reference [1][1] https://www.postgresql.org/message-id/CADofcAXVbD0yGp_EaC9chmzsOoSai3jcfBCnyva3j0RRdRvMVA@mail.gmail.com", "msg_date": "Tue, 20 Aug 2024 19:29:21 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Tue, Aug 20, 2024 at 1:52 AM Michael Harris <[email protected]> wrote:\n> 2. The existing ANALYZE command has the following structure:\n>\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n>\n> It would be easiest to add ONLY as another option, but that\n> doesn't look quite\n> right to me - surely the ONLY should be attached to the table name?\n> An alternative would be:\n>\n> ANALYZE [ ( option [, ...] ) ] [ONLY] [ table_and_columns [, ...] ]\n>\n> Any feedback or advice would be great.\n\nI like trying to use ONLY somehow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 14:40:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n> I like trying to use ONLY somehow.\n\nDo you mean as an ANALYZE command option, i.e. ANALYZE (only) table;\nor as a table modifier like gram.y's extended_relation_expr?\n\nMaking it a command option means that the option would apply to all\ntables listed, whereas if it was more like an extended_relation_expr,\nthe option would be applied per table listed in the command.\n\n1. ANALYZE ONLY ptab, ptab2; -- gather stats on ptab but not on its\npartitions but get stats on ptab2 and stats on its partitions too.\n2. ANALYZE ONLY ptab, ONLY ptab2; -- gather stats on ptab and ptab2\nwithout doing that on any of their partitions.\n\nWhereas: \"ANALYZE (ONLY) ptab, ptab2;\" would always give you the\nbehaviour of #2.\n\nIf we did it as a per-table option, then we'd need to consider what\nshould happen if someone did: \"VACUUM ONLY parttab;\". Probably\nsilently doing nothing wouldn't be good. Maybe a warning, akin to\nwhat's done in:\n\npostgres=# analyze (skip_locked) a;\nWARNING: skipping analyze of \"a\" --- lock not available\n\nDavid\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:53:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n>> I like trying to use ONLY somehow.\n\n> Do you mean as an ANALYZE command option, i.e. 
ANALYZE (only) table;\n> or as a table modifier like gram.y's extended_relation_expr?\n\n> Making it a command option means that the option would apply to all\n> tables listed, whereas if it was more like an extended_relation_expr,\n> the option would be applied per table listed in the command.\n\n> 1. ANALYZE ONLY ptab, ptab2; -- gather stats on ptab but not on its\n> partitions but get stats on ptab2 and stats on its partitions too.\n> 2. ANALYZE ONLY ptab, ONLY ptab2; -- gather stats on ptab and ptab2\n> without doing that on any of their partitions.\n\nFWIW, I think that's the right approach, for consistency with the\nway that ONLY works in DML.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Aug 2024 18:56:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Tue, Aug 20, 2024 at 6:53 PM David Rowley <[email protected]> wrote:\n> On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n> > I like trying to use ONLY somehow.\n>\n> Do you mean as an ANALYZE command option, i.e. ANALYZE (only) table;\n> or as a table modifier like gram.y's extended_relation_expr?\n\nThe table modifier idea seems nice to me.\n\n> If we did it as a per-table option, then we'd need to consider what\n> should happen if someone did: \"VACUUM ONLY parttab;\". Probably\n> silently doing nothing wouldn't be good. Maybe a warning, akin to\n> what's done in:\n>\n> postgres=# analyze (skip_locked) a;\n> WARNING: skipping analyze of \"a\" --- lock not available\n\nPerhaps. I'm not sure about this part.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 22:37:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "David Rowley <[email protected]>, 21 Ağu 2024 Çar, 01:53 tarihinde şunu\nyazdı:\n\n> On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n> > I like trying to use ONLY somehow.\n>\n> Do you mean as an ANALYZE command option, i.e. ANALYZE (only) table;\n> or as a table modifier like gram.y's extended_relation_expr?\n>\n> Making it a command option means that the option would apply to all\n> tables listed, whereas if it was more like an extended_relation_expr,\n> the option would be applied per table listed in the command.\n>\n> 1. ANALYZE ONLY ptab, ptab2; -- gather stats on ptab but not on its\n> partitions but get stats on ptab2 and stats on its partitions too.\n> 2. ANALYZE ONLY ptab, ONLY ptab2; -- gather stats on ptab and ptab2\n> without doing that on any of their partitions.\n>\n\nI believe we should go this route if we want this to be called \"ONLY\" so\nthat it would be consistent with other places too.\n\nWhereas: \"ANALYZE (ONLY) ptab, ptab2;\" would always give you the\n> behaviour of #2.\n>\n\nHavin it as an option would be easier to use when we have several partition\ntables. But I agree that if we call it \"ONLY \", it may be confusing and the\nother approach would be appropriate.\n\n\n> If we did it as a per-table option, then we'd need to consider what\n> should happen if someone did: \"VACUUM ONLY parttab;\". Probably\n> silently doing nothing wouldn't be good. Maybe a warning, akin to\n> what's done in:\n>\n> postgres=# analyze (skip_locked) a;\n> WARNING: skipping analyze of \"a\" --- lock not available\n>\n\n+1 to raising a warning message in that case instead of staying silent. 
We\nmight also not allow ONLY if ANALYZE is not present in VACUUM query and\nraise an error. But that would require changes in grams.y and could\ncomplicate things. So it may not be necessary and we may be fine with just\na warning.\n\nRegards,\n--\nMelih Mutlu\nMicrosoft\n\nDavid Rowley <[email protected]>, 21 Ağu 2024 Çar, 01:53 tarihinde şunu yazdı:On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n> I like trying to use ONLY somehow.\n\nDo you mean as an ANALYZE command option, i.e. ANALYZE (only) table;\nor as a table modifier like gram.y's extended_relation_expr?\n\nMaking it a command option means that the option would apply to all\ntables listed, whereas if it was more like an extended_relation_expr,\nthe option would be applied per table listed in the command.\n\n1. ANALYZE ONLY ptab, ptab2; -- gather stats on ptab but not on its\npartitions but get stats on ptab2 and stats on its partitions too.\n2. ANALYZE ONLY ptab, ONLY ptab2; -- gather stats on ptab and ptab2\nwithout doing that on any of their partitions.I believe we should go this route if we want this to be called \"ONLY\" so that it would be consistent with other places too.\nWhereas: \"ANALYZE (ONLY) ptab, ptab2;\" would always give you the\nbehaviour of #2.Havin it as an option would be easier to use when we have several partition tables. But I agree that if we call it \"ONLY \", it may be confusing and the other approach would be appropriate. \nIf we did it as a per-table option, then we'd need to consider what\nshould happen if someone did: \"VACUUM ONLY parttab;\". Probably\nsilently doing nothing wouldn't be good. Maybe a warning, akin to\nwhat's done in:\n\npostgres=# analyze (skip_locked) a;\nWARNING:  skipping analyze of \"a\" --- lock not available+1 to raising a warning message in that case instead of staying silent. We might also not allow ONLY if ANALYZE is not present in VACUUM query and raise an error. But that would require changes in grams.y and could complicate things. So it may not be necessary and we may be fine with just a warning.Regards,--Melih MutluMicrosoft", "msg_date": "Wed, 21 Aug 2024 13:04:16 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thank you all for the replies & discussion.\n\nIt sounds like more are in favour of using the ONLY syntax attached to\nthe tables is the best way to go, with the main advantages being:\n - consistency with other commands\n - flexibility in allowing to specify whether to include partitions\nfor individual tables when supplying a list of tables\n\nI will start working on an implementation along those lines. It looks\nlike we can simply replace qualified_name with relation_expr in the\nproduction for vacuum_relation within gram.y.\n\nOne other thing I noticed when reading the code. The function\nexpand_vacuum_rel in vacuum.c seems to be responsible for adding the\npartitions. If I am reading it correctly, it only adds child tables in\nthe case of a partitioned table, not in the case of an inheritance\nparent:\n\n include_parts = (classForm->relkind == RELKIND_PARTITIONED_TABLE);\n..\n if (include_parts)\n {\n .. 
add partitions ..\n\nThis is a little different to some other contexts where the ONLY\nkeyword is used, in that ONLY would be the default and only available\nmode of operation for an inheritance parent.\n\nRegards,\nMike\n\nOn Wed, 21 Aug 2024 at 20:04, Melih Mutlu <[email protected]> wrote:\n>\n> David Rowley <[email protected]>, 21 Ağu 2024 Çar, 01:53 tarihinde şunu yazdı:\n>>\n>> On Wed, 21 Aug 2024 at 06:41, Robert Haas <[email protected]> wrote:\n>> > I like trying to use ONLY somehow.\n>>\n>> Do you mean as an ANALYZE command option, i.e. ANALYZE (only) table;\n>> or as a table modifier like gram.y's extended_relation_expr?\n>>\n>> Making it a command option means that the option would apply to all\n>> tables listed, whereas if it was more like an extended_relation_expr,\n>> the option would be applied per table listed in the command.\n>>\n>> 1. ANALYZE ONLY ptab, ptab2; -- gather stats on ptab but not on its\n>> partitions but get stats on ptab2 and stats on its partitions too.\n>> 2. ANALYZE ONLY ptab, ONLY ptab2; -- gather stats on ptab and ptab2\n>> without doing that on any of their partitions.\n>\n>\n> I believe we should go this route if we want this to be called \"ONLY\" so that it would be consistent with other places too.\n>\n>> Whereas: \"ANALYZE (ONLY) ptab, ptab2;\" would always give you the\n>> behaviour of #2.\n>\n>\n> Havin it as an option would be easier to use when we have several partition tables. But I agree that if we call it \"ONLY \", it may be confusing and the other approach would be appropriate.\n>\n>>\n>> If we did it as a per-table option, then we'd need to consider what\n>> should happen if someone did: \"VACUUM ONLY parttab;\". Probably\n>> silently doing nothing wouldn't be good. Maybe a warning, akin to\n>> what's done in:\n>>\n>> postgres=# analyze (skip_locked) a;\n>> WARNING: skipping analyze of \"a\" --- lock not available\n>\n>\n> +1 to raising a warning message in that case instead of staying silent. We might also not allow ONLY if ANALYZE is not present in VACUUM query and raise an error. But that would require changes in grams.y and could complicate things. So it may not be necessary and we may be fine with just a warning.\n>\n> Regards,\n> --\n> Melih Mutlu\n> Microsoft\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:32:41 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Thu, 22 Aug 2024 at 11:32, Michael Harris <[email protected]> wrote:\n> One other thing I noticed when reading the code. The function\n> expand_vacuum_rel in vacuum.c seems to be responsible for adding the\n> partitions. If I am reading it correctly, it only adds child tables in\n> the case of a partitioned table, not in the case of an inheritance\n> parent:\n>\n> include_parts = (classForm->relkind == RELKIND_PARTITIONED_TABLE);\n> ..\n> if (include_parts)\n> {\n> .. add partitions ..\n>\n> This is a little different to some other contexts where the ONLY\n> keyword is used, in that ONLY would be the default and only available\n> mode of operation for an inheritance parent.\n\nThat's inconvenient and quite long-established behaviour. I had a look\nas far back as 9.2 and we only analyze parents there too. 
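(If anyone wants to see that quickly, something along these lines should
show it -- an untested sketch with made-up table names:

    CREATE TABLE inh_parent (a int);
    CREATE TABLE inh_child () INHERITS (inh_parent);
    INSERT INTO inh_child SELECT g FROM generate_series(1, 1000) g;
    ANALYZE inh_parent;
    SELECT starelid::regclass, stainherit
      FROM pg_statistic
     WHERE starelid IN ('inh_parent'::regclass, 'inh_child'::regclass);

Today that only returns rows for inh_parent; nothing gets stored for
inh_child unless it is analyzed directly.)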
I'm keen on\nthe ONLY syntax, but it would be strange if ONLY did the same thing as\nnot using ONLY for inheritance parents.\n\nI feel like we might need to either bite the bullet and make ONLY work\nconsistently with both, or think of another way to have ANALYZE not\nrecursively gather stats for each partition on partitioned tables.\nCould we possibly get away with changing inheritance parent behaviour?\n\nDavid\n\n\n", "msg_date": "Thu, 22 Aug 2024 12:53:19 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I feel like we might need to either bite the bullet and make ONLY work\n> consistently with both, or think of another way to have ANALYZE not\n> recursively gather stats for each partition on partitioned tables.\n> Could we possibly get away with changing inheritance parent behaviour?\n\nYeah, my thought was to change the behavior so it's consistent in\nthat case too. This doesn't seem to me like a change that'd\nreally cause anybody serious problems: ANALYZE without ONLY would\ndo more than before, but it's work you probably want done anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Aug 2024 21:12:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> Yeah, my thought was to change the behavior so it's consistent in\n> that case too. This doesn't seem to me like a change that'd\n> really cause anybody serious problems: ANALYZE without ONLY would\n> do more than before, but it's work you probably want done anyway.\n\nWould we want to apply that change to VACUUM too? That might be a\nbit drastic, especially if you had a multi-level inheritance structure featuring\nlarge tables.\n\nIt feels a bit like VACUUM and ANALYZE have opposite natural defaults here.\nFor VACUUM it does not make much sense to vacuum only at the partitioned\ntable level and not include the partitions, since it would do nothing\n- that might\nbe why the existing code always adds the partitions.\n\n\n", "msg_date": "Thu, 22 Aug 2024 11:37:52 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Thu, 22 Aug 2024 at 13:38, Michael Harris <[email protected]> wrote:\n> Would we want to apply that change to VACUUM too? That might be a\n> bit drastic, especially if you had a multi-level inheritance structure featuring\n> large tables.\n\nI think they'd need to work the same way as for \"VACUUM (ANALYZE)\", it\nwould be strange to analyze some tables that you didn't vacuum. 
It's\njust a much bigger pill to swallow in terms of the additional effort.\n\n> It feels a bit like VACUUM and ANALYZE have opposite natural defaults here.\n> For VACUUM it does not make much sense to vacuum only at the partitioned\n> table level and not include the partitions, since it would do nothing\n> - that might\n> be why the existing code always adds the partitions.\n\nYeah, I suspect that's exactly why it was coded that way.\n\nDavid\n\n\n", "msg_date": "Thu, 22 Aug 2024 14:09:12 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi All,\n\nHere is a first draft of a patch to implement the ONLY option for\nVACUUM and ANALYZE.\n\nI'm a little nervous about the implications of changing the behaviour of VACUUM\nfor inheritance structures; I can imagine people having regularly\nexecuted scripts\nthat currently vacuum all the tables in their inheritance structure;\nafter this change\nthey might get more vacuuming than they bargained for.\n\nIt's my first attempt to submit a patch here so apologies if I've\nmissed any part\nof the process.\n\nCheers\nMike\n\nOn Thu, 22 Aug 2024 at 12:09, David Rowley <[email protected]> wrote:\n>\n> On Thu, 22 Aug 2024 at 13:38, Michael Harris <[email protected]> wrote:\n> > Would we want to apply that change to VACUUM too? That might be a\n> > bit drastic, especially if you had a multi-level inheritance structure featuring\n> > large tables.\n>\n> I think they'd need to work the same way as for \"VACUUM (ANALYZE)\", it\n> would be strange to analyze some tables that you didn't vacuum. It's\n> just a much bigger pill to swallow in terms of the additional effort.\n>\n> > It feels a bit like VACUUM and ANALYZE have opposite natural defaults here.\n> > For VACUUM it does not make much sense to vacuum only at the partitioned\n> > table level and not include the partitions, since it would do nothing\n> > - that might\n> > be why the existing code always adds the partitions.\n>\n> Yeah, I suspect that's exactly why it was coded that way.\n>\n> David", "msg_date": "Thu, 22 Aug 2024 18:52:30 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi Michael,\n\nThanks for the patch.\nI quickly tried running some ANALYZE ONLY queries, it seems like it works\nfine.\n\n-ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] )\n> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> +ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] )\n> ] [ [ ONLY ] <replaceable class=\"parameter\">table_and_columns</replaceable>\n> [, ...] ]\n\n\nIt seems like extended_relation_expr allows \"tablename *\" syntax too. That\nshould be added in docs as well. (Same for VACUUM doc)\n\n <para>\n> For partitioned tables, <command>ANALYZE</command> gathers statistics\n> by\n> sampling rows from all partitions; in addition, it will recurse into\n> each\n> partition and update its statistics. Each leaf partition is analyzed\n> only\n> once, even with multi-level partitioning. 
No statistics are collected\n> for\n> only the parent table (without data from its partitions), because with\n> partitioning it's guaranteed to be empty.\n> </para>\n\n\nWe may also want to update the above note in ANALYZE doc.\n\n+-- ANALYZE ONLY / VACUUM ONLY on partitioned table\n> +CREATE TABLE only_parted (a int, b char) PARTITION BY LIST (a);\n> +CREATE TABLE only_parted1 PARTITION OF vacparted FOR VALUES IN (1);\n\n+INSERT INTO only_parted1 VALUES (1, 'a');\n>\n\nTests don't seem right to me.\nI believe the above should be \" PARTITION OF only_parted \" instead of\nvacparted.\nIt may better to insert into partitioned table (only_parted) instead of the\npartition (only_parted1);\n\nAlso it may be a good idea to test VACUUM ONLY for inheritance tables the\nsame way you test ANALYZE ONLY.\n\nLastly, the patch includes an unrelated file (compile_flags.txt) and has\nwhitespace errors when I apply it.\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Michael,Thanks for the patch.I quickly tried running some ANALYZE ONLY queries, it seems like it works fine.-ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]+ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ [ ONLY ] <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]It seems like extended_relation_expr allows \"tablename *\" syntax too. That should be added in docs as well. (Same for VACUUM doc) <para>    For partitioned tables, <command>ANALYZE</command> gathers statistics by    sampling rows from all partitions; in addition, it will recurse into each    partition and update its statistics.  Each leaf partition is analyzed only    once, even with multi-level partitioning.  No statistics are collected for    only the parent table (without data from its partitions), because with    partitioning it's guaranteed to be empty.  </para>We may also want to update the above note in ANALYZE doc.+-- ANALYZE ONLY / VACUUM ONLY on partitioned table+CREATE TABLE only_parted (a int, b char) PARTITION BY LIST (a);+CREATE TABLE only_parted1 PARTITION OF vacparted FOR VALUES IN (1);+INSERT INTO only_parted1 VALUES (1, 'a');Tests don't seem right to me.I believe the above should be \" PARTITION OF only_parted \" instead of vacparted.It may better to insert into partitioned table (only_parted) instead of the partition (only_parted1);Also it may be a good idea to test VACUUM ONLY for inheritance tables the same way you test ANALYZE ONLY.Lastly, the patch includes an unrelated file (compile_flags.txt) and has whitespace errors when I apply it.Regards,-- Melih MutluMicrosoft", "msg_date": "Thu, 22 Aug 2024 14:26:11 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thanks for the feedback Melih,\n\nOn Thu, 22 Aug 2024 at 21:26, Melih Mutlu <[email protected]> wrote:\n> It seems like extended_relation_expr allows \"tablename *\" syntax too. That should be added in docs as well. (Same for VACUUM doc)\n\nI had included it in the parameter description but had missed it from\nthe synopsis. I've fixed that now.\n\n> We may also want to update the above note in ANALYZE doc.\n\nWell spotted. 
I have updated that now.\n\n> Tests don't seem right to me.\n> I believe the above should be \" PARTITION OF only_parted \" instead of vacparted.\n> It may better to insert into partitioned table (only_parted) instead of the partition (only_parted1);\n>\n> Also it may be a good idea to test VACUUM ONLY for inheritance tables the same way you test ANALYZE ONLY.\n\nWell spotted again. I have fixed the incorrect table names and added\ntests as suggested.\n\n> Lastly, the patch includes an unrelated file (compile_flags.txt) and has whitespace errors when I apply it.\n\nOops! Fixed,\n\nV2 of the patch is attached.\n\nCheers\nMike", "msg_date": "Fri, 23 Aug 2024 20:01:33 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi Michael,\n\nMichael Harris <[email protected]>, 23 Ağu 2024 Cum, 13:01 tarihinde şunu\nyazdı:\n\n> V2 of the patch is attached.\n>\n\nThanks for updating the patch. I have a few more minor feedbacks.\n\n-ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] )\n> ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n> +ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] )\n> ] [ [ ONLY ] <replaceable class=\"parameter\">table_and_columns</replaceable>\n> [, ...] ]\n\n\nI believe moving \"[ ONLY ]\" to the definitions of table_and_columns below,\nas you did with \"[ * ]\", might be better to be consistent with other places\n(see [1])\n\n+ if ((options & VACOPT_VACUUM) && is_partitioned_table && ! i\n> nclude_children)\n\n\nThere are also some issues with coding conventions in some places (e.g. the\nspace between \"!\" and \"include_children\" abode). I think running pgindent\nwould resolve such issue in most places.\n\n\n[1] https://www.postgresql.org/docs/16/sql-createpublication.html\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft\n\nHi Michael,Michael Harris <[email protected]>, 23 Ağu 2024 Cum, 13:01 tarihinde şunu yazdı:\nV2 of the patch is attached.Thanks for updating the patch. I have a few more minor feedbacks.-ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]+ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ [ ONLY ] <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]I believe moving \"[ ONLY ]\" to the definitions of table_and_columns below, as you did with \"[ * ]\", might be better to be consistent with other places (see [1])+\t\tif ((options & VACOPT_VACUUM) && is_partitioned_table && ! include_children)There are also some issues with coding conventions in some places (e.g. the space between \"!\" and \"include_children\" abode). I think running pgindent would resolve such issue in most places. 
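(To be concrete about the first point, I had roughly this shape in mind --
just a sketch:

    ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]

    where table_and_columns is:

        [ ONLY ] table_name [ * ] [ ( column_name [, ...] ) ]

i.e. the top line stays as it is and ONLY lives with the per-table syntax.)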
[1] https://www.postgresql.org/docs/16/sql-createpublication.htmlRegards,-- Melih MutluMicrosoft", "msg_date": "Thu, 29 Aug 2024 14:30:27 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi Michael,\n\nOn 2024-08-23 19:01, Michael Harris wrote:\n\n> V2 of the patch is attached.\n\nThanks for the proposal and the patch.\n\nYou said this patch was a first draft version, so these may be too minor \ncomments, but I will share them:\n\n\n-- https://www.postgresql.org/docs/devel/progress-reporting.html\n> Note that when ANALYZE is run on a partitioned table, all of its \n> partitions are also recursively analyzed.\n\nShould we also note this is the default, i.e. not using ONLY option \nbehavior here?\n\n\n-- https://www.postgresql.org/docs/devel/ddl-partitioning.html\n> If you are using manual VACUUM or ANALYZE commands, don't forget that \n> you need to run them on each child table individually. A command like:\n> \n> ANALYZE measurement;\n> will only process the root table.\n\nThis part also should be modified, shouldn't it?\n\n\nWhen running ANALYZE VERBOSE ONLY on a partition table, the INFO message \nis like this:\n\n =# ANALYZE VERBOSE ONLY only_parted;\n INFO: analyzing \"public.only_parted\" inheritance tree\n\nI may be wrong, but 'inheritance tree' makes me feel it includes child \ntables.\nRemoving 'inheritance tree' and just describing the table name as below \nmight be better:\n\n INFO: analyzing \"public.only_parted\"\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Fri, 30 Aug 2024 15:45:55 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Thu, Aug 29, 2024 at 7:31 PM Melih Mutlu <[email protected]> wrote:\n>\n> Hi Michael,\n>\n> Michael Harris <[email protected]>, 23 Ağu 2024 Cum, 13:01 tarihinde şunu yazdı:\n>>\n>> V2 of the patch is attached.\n>\n>\n> Thanks for updating the patch. I have a few more minor feedbacks.\n>\n>> -ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n>> +ANALYZE [ ( <replaceable class=\"parameter\">option</replaceable> [, ...] ) ] [ [ ONLY ] <replaceable class=\"parameter\">table_and_columns</replaceable> [, ...] ]\n>\n>\n> I believe moving \"[ ONLY ]\" to the definitions of table_and_columns below, as you did with \"[ * ]\", might be better to be consistent with other places (see [1])\n>\n>> + if ((options & VACOPT_VACUUM) && is_partitioned_table && ! include_children)\n>\n>\n\nI think you are right.\n\nANALYZE [ ( option [, ...] ) ] [ [ ONLY ] table_and_columns [, ...] ]\n\nseems not explain commands like:\n\nANALYZE ONLY only_parted(a), ONLY only_parted(b);\n\n\n", "msg_date": "Fri, 30 Aug 2024 16:19:12 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hi Michael,\n\nOn Fri, 23 Aug 2024 at 22:01, Michael Harris <[email protected]> wrote:\n> V2 of the patch is attached.\n\n(I understand this is your first PostgreSQL patch)\n\nI just wanted to make sure you knew about the commitfest cycle we use\nfor the development of PostgreSQL. If you've not got one already, can\nyou register a community account. That'll allow you to include this\npatch in the September commitfest that starts on the 1st. 
See\nhttps://commitfest.postgresql.org/49/\n\nI understand there's now some sort of cool-off period for new\ncommunity accounts, so if you have trouble adding the patch before the\nCF starts, let me know and I should be able to do it for you. The\ncommitfest normally closes for new patches around 12:00 UTC on the 1st\nof the month.\n\nThere's a bit more information in\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch#Patch_submission\n\nDavid\n\n\n", "msg_date": "Fri, 30 Aug 2024 20:43:56 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Hello Atsushi & Melih\n\nThank you both for your further feedback.\n\nOn Thu, 29 Aug 2024 at 21:31, Melih Mutlu <[email protected]> wrote:\n> I believe moving \"[ ONLY ]\" to the definitions of table_and_columns below, as you did with \"[ * ]\", might be better to be consistent with other places (see [1])\n\nAgreed. I have updated this.\n\n>> + if ((options & VACOPT_VACUUM) && is_partitioned_table && ! include_children)\n>\n> There are also some issues with coding conventions in some places (e.g. the space between \"!\" and \"include_children\" abode). I think running pgindent would resolve such issue in most places.\n\nI fixed that extra space, and then ran pgindent. It did not report any\nmore problems.\n\nOn Fri, 30 Aug 2024 at 16:45, torikoshia <[email protected]> wrote:\n> -- https://www.postgresql.org/docs/devel/progress-reporting.html\n> > Note that when ANALYZE is run on a partitioned table, all of its\n> > partitions are also recursively analyzed.\n>\n> Should we also note this is the default, i.e. not using ONLY option\n> behavior here?\n\n> -- https://www.postgresql.org/docs/devel/ddl-partitioning.html\n> > If you are using manual VACUUM or ANALYZE commands, don't forget that\n> > you need to run them on each child table individually. A command like:\n> >\n> > ANALYZE measurement;\n> > will only process the root table.\n>\n> This part also should be modified, shouldn't it?\n\nAgreed. I have updated both of these.\n\n> When running ANALYZE VERBOSE ONLY on a partition table, the INFO message\n> is like this:\n>\n> =# ANALYZE VERBOSE ONLY only_parted;\n> INFO: analyzing \"public.only_parted\" inheritance tree\n>\n> I may be wrong, but 'inheritance tree' makes me feel it includes child\n> tables.\n> Removing 'inheritance tree' and just describing the table name as below\n> might be better:\n>\n> INFO: analyzing \"public.only_parted\"\n\nI'm not sure about that one. 
If I understand the source correctly,\nthat particular progress\nmessage tells the user that the system is gathering stats from the inheritance\ntree in order to update the stats of the given table, not that it is\nactually updating\nthe stats of the descendant tables.\n\nWhen analyzing an inheritance structure with the ONLY you see\nsomething like this:\n\n=> ANALYZE VERBOSE ONLY only_inh_parent;\nINFO: analyzing \"public.only_inh_parent\"\nINFO: \"only_inh_parent\": scanned 0 of 0 pages, containing 0 live rows\nand 0 dead rows; 0 rows in sample, 0 estimated total rows\nINFO: analyzing \"public.only_inh_parent\" inheritance tree\nINFO: \"only_inh_child\": scanned 1 of 1 pages, containing 3 live rows\nand 0 dead rows; 3 rows in sample, 3 estimated total rows\nANALYZE\n\nThe reason you don't see the first one for partitioned tables is that\nit corresponds\nto sampling the contents of the parent table itself, which in the case\nof a partitioned\ntable is guaranteed to be empty, so it is skipped.\n\nI agree the text could be confusing, and in fact is probably confusing\neven today\nwithout the ONLY keyword, but I'm not sure what would be the best\nwording to cover both the partitioned and inherited cases.\n\nv3 of the patch is attached, and I will submit it to the commitfest.", "msg_date": "Sun, 1 Sep 2024 11:31:48 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thanks David\n\nI had not read that wiki page well enough, so many thanks for alerting\nme that I had to create a commitfest entry. I already had a community\naccount so I was able to login and create it here:\n\n https://commitfest.postgresql.org/49/5226/\n\nI was not sure what to put for some of the fields - for 'reviewer' should\nI have put the people who have provided feedback on this thread?\n\nIs there anything else I need to do?\n\nThanks again\n\nCheers\nMike\n\nOn Fri, 30 Aug 2024 at 18:44, David Rowley <[email protected]> wrote:\n>\n> Hi Michael,\n>\n> On Fri, 23 Aug 2024 at 22:01, Michael Harris <[email protected]> wrote:\n> > V2 of the patch is attached.\n>\n> (I understand this is your first PostgreSQL patch)\n>\n> I just wanted to make sure you knew about the commitfest cycle we use\n> for the development of PostgreSQL. If you've not got one already, can\n> you register a community account. That'll allow you to include this\n> patch in the September commitfest that starts on the 1st. See\n> https://commitfest.postgresql.org/49/\n>\n> I understand there's now some sort of cool-off period for new\n> community accounts, so if you have trouble adding the patch before the\n> CF starts, let me know and I should be able to do it for you. The\n> commitfest normally closes for new patches around 12:00 UTC on the 1st\n> of the month.\n>\n> There's a bit more information in\n> https://wiki.postgresql.org/wiki/Submitting_a_Patch#Patch_submission\n>\n> David\n\n\n", "msg_date": "Sun, 1 Sep 2024 11:41:45 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Sun, 1 Sept 2024 at 13:41, Michael Harris <[email protected]> wrote:\n> https://commitfest.postgresql.org/49/5226/\n>\n> I was not sure what to put for some of the fields - for 'reviewer' should\n> I have put the people who have provided feedback on this thread?\n\nI think it's fairly common that only reviewers either add themselves\nor don't bother. I don't think it's common that patch authors add\nreviewers. 
Some reviewers like their reviews to be more informal and\nputting themselves as a reviewer can put other people off reviewing as\nthey think it's already handled by someone else. That may result in\nthe patch being neglected by reviewers.\n\n> Is there anything else I need to do?\n\nI see you set the target version as 17. That should be blank or \"18\".\nPG17 is about to go into release candidate, so it is too late for\nthat. It's been fairly consistent that we open for new patches in\nearly July and close before the 2nd week of April the following year.\nWe're currently 2 months into PG18 dev. We don't back-patch new\nfeatures.\n\nNothing else aside from continuing to address reviews, as you are already.\n\nDavid\n\n\n", "msg_date": "Sun, 1 Sep 2024 21:31:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On 2024-09-01 10:31, Michael Harris wrote:\n> Hello Atsushi & Melih\n> \n> Thank you both for your further feedback.\n> \n> On Thu, 29 Aug 2024 at 21:31, Melih Mutlu <[email protected]> \n> wrote:\n>> I believe moving \"[ ONLY ]\" to the definitions of table_and_columns \n>> below, as you did with \"[ * ]\", might be better to be consistent with \n>> other places (see [1])\n> \n> Agreed. I have updated this.\n> \n>>> + if ((options & VACOPT_VACUUM) && is_partitioned_table && ! \n>>> include_children)\n>> \n>> There are also some issues with coding conventions in some places \n>> (e.g. the space between \"!\" and \"include_children\" abode). I think \n>> running pgindent would resolve such issue in most places.\n> \n> I fixed that extra space, and then ran pgindent. It did not report any\n> more problems.\n> \n> On Fri, 30 Aug 2024 at 16:45, torikoshia <[email protected]> \n> wrote:\n>> -- https://www.postgresql.org/docs/devel/progress-reporting.html\n>> > Note that when ANALYZE is run on a partitioned table, all of its\n>> > partitions are also recursively analyzed.\n>> \n>> Should we also note this is the default, i.e. not using ONLY option\n>> behavior here?\n> \n>> -- https://www.postgresql.org/docs/devel/ddl-partitioning.html\n>> > If you are using manual VACUUM or ANALYZE commands, don't forget that\n>> > you need to run them on each child table individually. A command like:\n>> >\n>> > ANALYZE measurement;\n>> > will only process the root table.\n>> \n>> This part also should be modified, shouldn't it?\n> \n> Agreed. I have updated both of these.\n\nThanks!\n\n>> When running ANALYZE VERBOSE ONLY on a partition table, the INFO \n>> message\n>> is like this:\n>> \n>> =# ANALYZE VERBOSE ONLY only_parted;\n>> INFO: analyzing \"public.only_parted\" inheritance tree\n>> \n>> I may be wrong, but 'inheritance tree' makes me feel it includes child\n>> tables.\n>> Removing 'inheritance tree' and just describing the table name as \n>> below\n>> might be better:\n>> \n>> INFO: analyzing \"public.only_parted\"\n> \n> I'm not sure about that one. 
If I understand the source correctly,\n> that particular progress\n> message tells the user that the system is gathering stats from the \n> inheritance\n> tree in order to update the stats of the given table, not that it is\n> actually updating\n> the stats of the descendant tables.\n\nThat makes sense.\n\n> When analyzing an inheritance structure with the ONLY you see\n> something like this:\n> \n> => ANALYZE VERBOSE ONLY only_inh_parent;\n> INFO: analyzing \"public.only_inh_parent\"\n> INFO: \"only_inh_parent\": scanned 0 of 0 pages, containing 0 live rows\n> and 0 dead rows; 0 rows in sample, 0 estimated total rows\n> INFO: analyzing \"public.only_inh_parent\" inheritance tree\n> INFO: \"only_inh_child\": scanned 1 of 1 pages, containing 3 live rows\n> and 0 dead rows; 3 rows in sample, 3 estimated total rows\n> ANALYZE\n> \n> The reason you don't see the first one for partitioned tables is that\n> it corresponds\n> to sampling the contents of the parent table itself, which in the case\n> of a partitioned\n> table is guaranteed to be empty, so it is skipped.\n> \n> I agree the text could be confusing, and in fact is probably confusing\n> even today\n> without the ONLY keyword,\n\nYeah, it seems this isn't dependent on your proposal.\n(BTW I'm also wondering if the expression “inheritance\" is appropriate \nwhen the target is a partitioned table, but this should be discussed in \nanother thread)\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Mon, 02 Sep 2024 12:29:37 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Sun, 1 Sept 2024 at 13:32, Michael Harris <[email protected]> wrote:\n> v3 of the patch is attached, and I will submit it to the commitfest.\n\nI think this patch is pretty good.\n\nI only have a few things I'd point out:\n\n1. The change to ddl.sgml, technically you're removing the caveat, but\nI think the choice you've made to mention the updated behaviour is\nlikely a good idea still. So I agree with this change, but just wanted\nto mention it as someone else might think otherwise.\n\n2. I think there's some mixing of tense in the changes to analyze.sgml:\n\n\"If <literal>ONLY</literal> was not specified, it will also recurse\ninto each partition and update its statistics.\"\n\nYou've written \"was\" (past tense), but then the existing text uses\n\"will\" (future tense). I guess if the point in time is after parse and\nbefore work has been done, then that's correct, but I think using \"is\"\ninstead of \"was\" is better.\n\n3. In vacuum.sgml;\n\n\"If <literal>ONLY</literal> is not specified, the table and all its\ndescendant tables or partitions (if any) are vacuumed\"\n\nMaybe \"are also vacuumed\" instead of \"are vacuumed\" is more clear?\nIt's certainly true for inheritance parents, but I guess you could\nargue that's wrong for partitioned tables.\n\n4. A very minor detail, but I think in vacuum.c the WARNING you've\nadded should use RelationGetRelationName(). We seem to be very\ninconsistent with using that macro and I see it's not used just above\nfor the lock warning, which I imagine you copied.\n\nAside from those, that just leaves me with the behavioural change. I\nnoted Tom was ok with the change in behaviour for ANALYZE (mentioned\nin [1]). Tom, wondering if you feel the same for VACUUM too? 
If we're\ndoing this, I think we'd need to be quite clear about it on the\nrelease notes.\n\nDavid\n\n[1] https://postgr.es/m/[email protected]\n\n\n", "msg_date": "Mon, 9 Sep 2024 13:27:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thanks for the feedback David.\n\nOn Mon, 9 Sept 2024 at 11:27, David Rowley <[email protected]> wrote:\n> You've written \"was\" (past tense), but then the existing text uses\n> \"will\" (future tense). I guess if the point in time is after parse and\n> before work has been done, then that's correct, but I think using \"is\"\n> instead of \"was\" is better.\n\n> Maybe \"are also vacuumed\" instead of \"are vacuumed\" is more clear?\n\nAgreed. I have updated my patch with both of these suggestions.\n\n> 4. A very minor detail, but I think in vacuum.c the WARNING you've\n> added should use RelationGetRelationName(). We seem to be very\n> inconsistent with using that macro and I see it's not used just above\n> for the lock warning, which I imagine you copied.\n\nAs far as I can tell RelationGetRelationName is for extracting the name\nfrom a Relation struct, but in this case we have a RangeVar so it doesn't appear\nto be applicable. I could not find an equivalent access macro for RangeVar.\n\nThanks again.\n\nCheers\nMike", "msg_date": "Mon, 9 Sep 2024 17:56:54 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On 2024-09-09 16:56, Michael Harris wrote:\n\nThanks for updating the patch.\nHere is a minor comment.\n\n> @@ -944,20 +948,32 @@ expand_vacuum_rel(VacuumRelation *vrel, \n> MemoryContext vac_context,\n> MemoryContextSwitchTo(oldcontext);\n> }\n..\n> + * Unless the user has specified ONLY, make relation list \n> entries for\n> + * its partitions and/or descendant tables.\n\nRegarding the addition of partition descendant tables, should we also \nupdate the below comment on expand_vacuum_rel? Currently it refers only \npartitions:\n\n| * Given a VacuumRelation, fill in the table OID if it wasn't \nspecified,\n| * and optionally add VacuumRelations for partitions of the table.\n\nOther than this and the following, it looks good to me.\n\nOn Mon, Sep 9, 2024 at 10:27 AM David Rowley <[email protected]> \nwrote:\n> Aside from those, that just leaves me with the behavioural change. I\n> noted Tom was ok with the change in behaviour for ANALYZE (mentioned\n> in [1]). Tom, wondering if you feel the same for VACUUM too? If we're\n> doing this, I think we'd need to be quite clear about it on the\n> release notes.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Tue, 10 Sep 2024 21:03:40 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thanks for the feedback.\n\nOn Tue, 10 Sept 2024 at 22:03, torikoshia <[email protected]> wrote:\n> Regarding the addition of partition descendant tables, should we also\n> update the below comment on expand_vacuum_rel? Currently it refers only\n> partitions:\n>\n> | * Given a VacuumRelation, fill in the table OID if it wasn't\n> specified,\n> | * and optionally add VacuumRelations for partitions of the table.\n>\n\nWell spotted! 
I have updated the comment to read:\n\n * Given a VacuumRelation, fill in the table OID if it wasn't specified,\n * and optionally add VacuumRelations for partitions or descendant tables\n * of the table.\n\nCheers\nMike.", "msg_date": "Wed, 11 Sep 2024 17:47:09 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On 2024-09-11 16:47, Michael Harris wrote:\n> Thanks for the feedback.\n> \n> On Tue, 10 Sept 2024 at 22:03, torikoshia <[email protected]> \n> wrote:\n>> Regarding the addition of partition descendant tables, should we also\n>> update the below comment on expand_vacuum_rel? Currently it refers \n>> only\n>> partitions:\n>> \n>> | * Given a VacuumRelation, fill in the table OID if it wasn't\n>> specified,\n>> | * and optionally add VacuumRelations for partitions of the table.\n>> \n> \n> Well spotted! I have updated the comment to read:\n> \n> * Given a VacuumRelation, fill in the table OID if it wasn't \n> specified,\n> * and optionally add VacuumRelations for partitions or descendant \n> tables\n> * of the table.\n\nThanks for the update!\n\nI've switched the status on the commitfest to 'ready for committer'.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Fri, 13 Sep 2024 10:50:39 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "hi.\n\nin https://www.postgresql.org/docs/current/ddl-inherit.html\n<<<\nCommands that do database maintenance and tuning (e.g., REINDEX,\nVACUUM) typically only work on individual, physical tables and do not\nsupport recursing over inheritance hierarchies. The respective\nbehavior of each individual command is documented in its reference\npage (SQL Commands).\n<<<\ndoes the above para need some tweaks?\n\n\nin section, your patch output is\n<<<<<<<<<\nand table_and_columns is:\n [ ONLY ] table_name [ * ] [ ( column_name [, ...] ) ]\n<<<<<<<<<\n\nANALYZE ONLY only_parted*;\nwill fail.\nMaybe we need a sentence saying ONLY and * are mutually exclusive?\n\n\n>>>\nIf the table being analyzed has inheritance children, ANALYZE gathers\ntwo sets of statistics: one on the rows of the parent table only, and\na second including rows of both the parent table and all of its\nchildren. This second set of statistics is needed when planning\nqueries that process the inheritance tree as a whole. 
The child tables\nthemselves are not individually analyzed in this case.\n>>>\nis this still true for table inheritance.\n\nfor example:\ndrop table if exists only_inh_parent,only_inh_child;\nCREATE TABLE only_inh_parent (a int , b INT) with (autovacuum_enabled = false);\nCREATE TABLE only_inh_child () INHERITS (only_inh_parent) with\n(autovacuum_enabled = false);\nINSERT INTO only_inh_child(a,b) select g % 80, (g + 1) % 200 from\ngenerate_series(1,1000) g;\nselect pg_table_size('only_inh_parent');\n\nANALYZE ONLY only_inh_parent;\n\nhere, will \"ANALYZE ONLY only_inh_parent;\" will have minimal \"effects\".\nsince the physical table only_inh_parent has zero storage.\nbut\nselect stadistinct,starelid::regclass,staattnum, stainherit\nfrom pg_statistic\nwhere starelid = ANY ('{only_inh_parent, only_inh_child}'::regclass[]);\n\noutput is\n stadistinct | starelid | staattnum | stainherit\n-------------+-----------------+-----------+------------\n 80 | only_inh_parent | 1 | t\n -0.2 | only_inh_parent | 2 | t\n\nI am not sure if this output and the manual description about table\ninheritance is consistent.\n\n\n", "msg_date": "Mon, 16 Sep 2024 10:29:47 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "Thanks for the feedback, and sorry it has taken a few days to respond.\n\nOn Mon, 16 Sept 2024 at 12:29, jian he <[email protected]> wrote:\n> in https://www.postgresql.org/docs/current/ddl-inherit.html\n> <<<\n> Commands that do database maintenance and tuning (e.g., REINDEX,\n> VACUUM) typically only work on individual, physical tables and do not\n> support recursing over inheritance hierarchies. The respective\n> behavior of each individual command is documented in its reference\n> page (SQL Commands).\n> <<<\n> does the above para need some tweaks?\n\nYes, good catch.\n\n> ANALYZE ONLY only_parted*;\n> will fail.\n> Maybe we need a sentence saying ONLY and * are mutually exclusive?\n\nI used the same text that appears on other pages that are describing the\noperation of ONLY / *, eg. the page for TRUNCATE\n(https://www.postgresql.org/docs/current/sql-truncate.html).\n\nI think it would be good to keep them all consistent if possible.\n\n> >>>\n> If the table being analyzed has inheritance children, ANALYZE gathers\n> two sets of statistics: one on the rows of the parent table only, and\n> a second including rows of both the parent table and all of its\n> children. This second set of statistics is needed when planning\n> queries that process the inheritance tree as a whole. The child tables\n> themselves are not individually analyzed in this case.\n> >>>\n> is this still true for table inheritance.\n\nThe way I interpret this is that when you analyze an inheritance parent,\nit will sample the rows of the parent and all its children. 
It will\nprepare two sets\nof stats, one based on the samples of any rows in the parent itself,\nthe other based on samples of rows in both parent and child tables.\nThis is distinct from the activity of updating the stats on the child\ntables themselves.\n\n> I am not sure if this output and the manual description about table\n> inheritance is consistent.\n\nWith the test you performed, the result was a set of statistics for the\nparent table which included samples from the child tables\n(stainherit=t) but no entries for the parent table only\n(stainherit=f) as there is no data in the parent table itself.\nThere are no statistics for only_inh_child.\n\nIf you add some records to the parent table and re-analyze, you\ndo get statistics with stainherit=f:\n\npostgres=# insert into only_inh_parent(a,b) select g % 10, 100 from\ngenerate_series(1,1000) g;\nINSERT 0 1000\npostgres=# select pg_table_size('only_inh_parent');\n pg_table_size\n---------------\n 65536\n(1 row)\n\npostgres=# ANALYZE ONLY only_inh_parent;\nANALYZE\npostgres=# select stadistinct,starelid::regclass,staattnum, stainherit\nfrom pg_statistic\nwhere starelid = ANY ('{only_inh_parent, only_inh_child}'::regclass[]);\n stadistinct | starelid | staattnum | stainherit\n-------------+-----------------+-----------+------------\n 80 | only_inh_parent | 1 | t\n 200 | only_inh_parent | 2 | t\n 10 | only_inh_parent | 1 | f\n 1 | only_inh_parent | 2 | f\n(4 rows)\n\nand if you perform ANALYZE without ONLY:\n\npostgres=# ANALYZE only_inh_parent;\nANALYZE\npostgres=# select stadistinct,starelid::regclass,staattnum, stainherit\nfrom pg_statistic\nwhere starelid = ANY ('{only_inh_parent, only_inh_child}'::regclass[]);\n stadistinct | starelid | staattnum | stainherit\n-------------+-----------------+-----------+------------\n 80 | only_inh_parent | 1 | t\n 10 | only_inh_parent | 1 | f\n 1 | only_inh_parent | 2 | f\n 200 | only_inh_parent | 2 | t\n 80 | only_inh_child | 1 | f\n -0.2 | only_inh_child | 2 | f\n(6 rows)\n\nNow we also have statistics for only_inh_child.\n\nAt least for me, that is consistent with the para you\nquoted, except perhaps for this sentence:\n\n>> The child tables themselves are not individually analyzed in this case.\n\nThis should probably read:\n\n>> If the ONLY keyword is used, the child tables themselves are not individually analyzed.\n\nI have attached a new version of the patch with this feedback incorporated.\n\nThanks again!\n\nCheers\nMike.", "msg_date": "Fri, 20 Sep 2024 11:20:24 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Fri, 20 Sept 2024 at 13:20, Michael Harris <[email protected]> wrote:\n> I have attached a new version of the patch with this feedback incorporated.\n\nI looked over the v6 patch and I don't have any complaints. However, I\ndid make some minor adjustments:\n\n* A few places said things like \"and possibly partitions and/or child\ntables thereof\". It's not possible for a partition to have inheritance\nchildren, so \"and/\" is wrong there, only \"or\" is possible.\n\n* I spent a bit of time trying to massage the new text into the\ndocuments. I just felt that sometimes the new text was a little bit\nclumsy, for example:\n\n <command>ANALYZE</command> gathers two sets of statistics: one on the rows\n of the parent table only, and a second including rows of both the parent\n table and all of its children. 
This second set of statistics is\nneeded when\n- planning queries that process the inheritance tree as a whole. The child\n- tables themselves are not individually analyzed in this case.\n- The autovacuum daemon, however, will only consider inserts or\n- updates on the parent table itself when deciding whether to trigger an\n- automatic analyze for that table. If that table is rarely inserted into\n- or updated, the inheritance statistics will not be up to date unless you\n- run <command>ANALYZE</command> manually.\n+ planning queries that process the inheritance tree as a whole. If the\n+ <literal>ONLY</literal> keyword is used, child tables themselves are not\n+ individually analyzed. The autovacuum daemon, however, will only consider\n+ inserts or updates on the parent table itself when deciding whether to\n+ trigger an automatic analyze for that table. If that table is rarely\n+ inserted into or updated, the inheritance statistics will not be up to\n+ date unless you run <command>ANALYZE</command> manually.\n\nI kinda blame the existing text which reads \"The child tables\nthemselves are not individually analyzed in this case.\" as that seems\nto have led you to detail the new behaviour in the same place, but I\nthink that we really need to finish talking about autovacuum maybe not\nanalyzing the inheritance parent unless it receives enough changes\nitself before we talk about what ONLY does.\n\n* Very minor adjustments to the tests to upper case all the keywords\n\"is not null as\" was all lowercase. I wrapped some longer lines too.\nAlso, I made some comment adjustments and I dropped the partitioned\ntable directly after we're done with it instead of waiting until after\nthe inheritance parent tests.\n\n* A bunch of uses of \"descendant tables\" when you meant \"inheritance children\"\n\nv7-0001 is the same as your v6 patch, but I adjusted the commit\nmessage, which I'm happy to take feedback on. Nobody I've spoken to\nseems to be concerned about VACUUM on inheritance parents now being\nrecursive by default. I included \"release notes\" and \"incompatibility\"\nin the commit message in the hope that Bruce will stumble upon it and\nwrite about this when he does the release notes for v18.\n\nv7-0002 is all my changes.\n\nI'd like to push this soon, so if anyone has any last-minute feedback,\nplease let me know in the next few days.\n\nDavid", "msg_date": "Mon, 23 Sep 2024 01:09:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Sun, 22 Sept 2024 at 23:09, David Rowley <[email protected]> wrote:\n> I'd like to push this soon, so if anyone has any last-minute feedback,\n> please let me know in the next few days.\n\nMany thanks for your updates and assistance.\n\nLooks good. 
Agreed, I was probably too conservative in some of the\ndoc updates.\n\nThanks to all reviewers for helping to get it to this stage!\n\nCheers\nMike\n\n\n", "msg_date": "Mon, 23 Sep 2024 11:41:52 +1000", "msg_from": "Michael Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Sun, Sep 22, 2024 at 9:09 PM David Rowley <[email protected]> wrote:\n>\n> v7-0002 is all my changes.\n>\n> I'd like to push this soon, so if anyone has any last-minute feedback,\n> please let me know in the next few days.\n>\n\n\ndrop table if exists only_inh_parent,only_inh_child;\nCREATE TABLE only_inh_parent (a int , b INT) with (autovacuum_enabled = false);\nCREATE TABLE only_inh_child () INHERITS (only_inh_parent) with\n(autovacuum_enabled = false);\nINSERT INTO only_inh_child(a,b) select g % 80, (g + 1) % 200 from\ngenerate_series(1,1000) g;\nselect pg_table_size('only_inh_parent');\nANALYZE ONLY only_inh_parent;\n\nselect stadistinct,starelid::regclass,staattnum, stainherit\nfrom pg_statistic\nwhere starelid = ANY ('{only_inh_parent, only_inh_child}'::regclass[]);\n\n stadistinct | starelid | staattnum | stainherit\n-------------+-----------------+-----------+------------\n 80 | only_inh_parent | 1 | t\n -0.2 | only_inh_parent | 2 | t\n\n---------------\ncatalog-pg-statistic.html\nstainherit bool\nIf true, the stats include values from child tables, not just the\nvalues in the specified relation\n\nNormally there is one entry, with stainherit = false, for each table\ncolumn that has been analyzed. If the table has inheritance children\nor partitions, a second entry with stainherit = true is also created.\nThis row represents the column's statistics over the inheritance tree,\ni.e., statistics for the data you'd see with SELECT column FROM\ntable*, whereas the stainherit = false row represents the results of\nSELECT column FROM ONLY table.\n\n\nI do understand what Michael Harris said in [1]\n---------------\n\n\nGiven the above context, I am still confused with this sentence in\nsql-analyze.html.\n\"If ONLY is specified before the table name, only that table is analyzed.\"\n\nlike in the above sql example, only_inh_parent's child is also being analyzed.\n\n\n[1] https://www.postgresql.org/message-id/CADofcAW43AD%3Dqqtj1cLkTyRpPM6JB5ZALUK7CA1KZZqpcouoYw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 23 Sep 2024 11:28:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, 23 Sept 2024 at 15:29, jian he <[email protected]> wrote:\n> Given the above context, I am still confused with this sentence in\n> sql-analyze.html.\n> \"If ONLY is specified before the table name, only that table is analyzed.\"\n>\n> like in the above sql example, only_inh_parent's child is also being analyzed.\n\nI guess it depends what you define \"analyzed\" to mean. 
In this\ncontext, it means gathering statistics specifically for a certain\ntable.\n\nWould it be more clear if \"only that table is analyzed.\" was changed\nto \"then statistics are only gathered specifically for that table.\"?\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 16:46:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, Sep 23, 2024 at 12:46 PM David Rowley <[email protected]> wrote:\n>\n> On Mon, 23 Sept 2024 at 15:29, jian he <[email protected]> wrote:\n> > Given the above context, I am still confused with this sentence in\n> > sql-analyze.html.\n> > \"If ONLY is specified before the table name, only that table is analyzed.\"\n> >\n> > like in the above sql example, only_inh_parent's child is also being analyzed.\n>\n> I guess it depends what you define \"analyzed\" to mean. In this\n> context, it means gathering statistics specifically for a certain\n> table.\n>\n> Would it be more clear if \"only that table is analyzed.\" was changed\n> to \"then statistics are only gathered specifically for that table.\"?\n>\n\nlooking at expand_vacuum_rel, analyze_rel.\nif we\n---------\nif (onerel->rd_rel->relhassubclass)\ndo_analyze_rel(onerel, params, va_cols, acquirefunc, relpages,\n true, in_outer_xact, elevel);\n\nchange to\n\n if (onerel->rd_rel->relhassubclass && ((!relation ||\nrelation->inh) || onerel->rd_rel->relkind ==\nRELKIND_PARTITIONED_TABLE))\n do_analyze_rel(onerel, params, va_cols, acquirefunc, relpages,\n true, in_outer_xact, elevel);\n\n\nthen the inheritance table will behave the way the doc says.\n\nfor example:\ndrop table if exists only_inh_parent,only_inh_child;\nCREATE TABLE only_inh_parent (a int , b INT) with (autovacuum_enabled = false);\nCREATE TABLE only_inh_child () INHERITS (only_inh_parent) with\n(autovacuum_enabled = false);\nINSERT INTO only_inh_child(a,b) select g % 80, (g + 1) % 200 from\ngenerate_series(1,1000) g;\nANALYZE ONLY only_inh_parent;\nselect stadistinct,starelid::regclass,staattnum, stainherit\nfrom pg_statistic\nwhere starelid = ANY ('{only_inh_parent, only_inh_child}'::regclass[]);\n\nwill return zero rows, since the physical table only_inh_parent has no storage.\n\n\n\n\nsql-analyze.html\nFor partitioned tables, ANALYZE gathers statistics by sampling rows\nfrom all partitions. Each leaf partition is also recursively analyzed\nand the statistics updated. This recursive part may be disabled by\nusing the ONLY keyword, otherwise, leaf partitions are analyzed only\nonce, even with multi-level partitioning. 
No statistics are collected\nfor only the parent table (without data from its partitions), because\nwith partitioning it's guaranteed to be empty.\n\nallow me to ask anenglish language question.\nhere \"otherwise\" means specify ONLY or not?\nAs far as i understand.\nif you specify ONLY, postgres will only do \"For partitioned tables,\nANALYZE gathers statistics by sampling rows from all partitions\"\nif you not specify ONLY, postgres will do \"For partitioned tables,\nANALYZE gathers statistics by sampling rows from all partitions *AND*\nalso recursively analyze each leaf partition\"\n\nIs my understanding correct?\n\n\nI think there is a whitespace error in \"ANALYZE ONLY vacparted(a,b); \"\nin vacuum.out.\n\n\n", "msg_date": "Mon, 23 Sep 2024 15:39:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, 23 Sept 2024 at 19:39, jian he <[email protected]> wrote:\n> sql-analyze.html\n> For partitioned tables, ANALYZE gathers statistics by sampling rows\n> from all partitions. Each leaf partition is also recursively analyzed\n> and the statistics updated. This recursive part may be disabled by\n> using the ONLY keyword, otherwise, leaf partitions are analyzed only\n> once, even with multi-level partitioning. No statistics are collected\n> for only the parent table (without data from its partitions), because\n> with partitioning it's guaranteed to be empty.\n>\n> allow me to ask anenglish language question.\n> here \"otherwise\" means specify ONLY or not?\n> As far as i understand.\n> if you specify ONLY, postgres will only do \"For partitioned tables,\n> ANALYZE gathers statistics by sampling rows from all partitions\"\n> if you not specify ONLY, postgres will do \"For partitioned tables,\n> ANALYZE gathers statistics by sampling rows from all partitions *AND*\n> also recursively analyze each leaf partition\"\n>\n> Is my understanding correct?\n\nThe \"Otherwise\" case applies when \"ONLY\" isn't used.\n\nIf this is confusing, I think there's a bunch of detail that I tried\nto keep that's just not that useful. The part about analyzing\npartitions just once and the part about not collecting non-inheritance\nstats for the partitioned table seems like extra detail that's either\nobvious or just not that important.\n\nCan you have a look at the attached and let me know if it's easier to\nunderstand now?\n\nDavid", "msg_date": "Mon, 23 Sep 2024 22:04:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, Sep 23, 2024 at 6:04 PM David Rowley <[email protected]> wrote:\n>\n>\n> If this is confusing, I think there's a bunch of detail that I tried\n> to keep that's just not that useful. The part about analyzing\n> partitions just once and the part about not collecting non-inheritance\n> stats for the partitioned table seems like extra detail that's either\n> obvious or just not that important.\n>\n> Can you have a look at the attached and let me know if it's easier to\n> understand now?\n>\n\nNow the regress test passed.\n\n <para>\n For partitioned tables, <command>ANALYZE</command> gathers statistics by\n sampling rows from all partitions. By default,\n <command>ANALYZE</command> will also recursively collect and update the\n statistics for each partition. 
The <literal>ONLY</literal> keyword may be\n used to disable this.\n </para>\n\nis very clear to me!\n\n\n <para>\n If the table being analyzed has inheritance children,\n <command>ANALYZE</command> gathers two sets of statistics: one on the rows\n of the parent table only, and a second including rows of both the parent\n table and all of its children. This second set of statistics is needed when\n planning queries that process the inheritance tree as a whole. The\n autovacuum daemon, however, will only consider inserts or updates on the\n parent table itself when deciding whether to trigger an automatic analyze\n for that table. If that table is rarely inserted into or updated, the\n inheritance statistics will not be up to date unless you run\n <command>ANALYZE</command> manually. By default,\n <command>ANALYZE</command> will also recursively collect and update the\n statistics for each inheritance child table. The <literal>ONLY</literal>\n keyword may be used to disable this.\n </para>\n\n\nlooks fine. but maybe we can add the following information\n\"if The <literal>ONLY</literal> is specified, the second set of\nstatistics won't include each children individual statistics\"\nI think that's the main difference between specifying ONLY or not?\n\ncatalog-pg-statistic.html second paragraph seems very clear to me.\nMaybe we can link it somehow\n\nOther than that, it looks good to me.\n\n\n", "msg_date": "Mon, 23 Sep 2024 19:23:18 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, 23 Sept 2024 at 23:23, jian he <[email protected]> wrote:\n> looks fine. but maybe we can add the following information\n> \"if The <literal>ONLY</literal> is specified, the second set of\n> statistics won't include each children individual statistics\"\n> I think that's the main difference between specifying ONLY or not?\n\nOk, I think you're not understanding this yet and I'm not sure what I\ncan make more clear in the documents.\n\nLet me explain... For inheritance parent tables, ANALYZE ONLY will\ngather inheritance and non-inheritance statistics for ONLY the parent.\n\nHere's an example of that:\n\ndrop table if exists parent,child;\ncreate table parent(a int);\ncreate table child () inherits (parent);\ninsert into parent values(1);\ninsert into child values(1);\n\nanalyze ONLY parent;\nselect starelid::regclass,stainherit,stadistinct from pg_statistic\nwhere starelid in ('parent'::regclass,'child'::regclass);\n starelid | stainherit | stadistinct\n----------+------------+-------------\n parent | f | -1 <- this is the distinct estimate\nfor SELECT * FROM ONLY parent;\n parent | t | -0.5 <- this is the distinct estimate\nfor SELECT * FROM parent;\n(2 rows)\n\nFor the stainherit==false stats, only 1 row is sampled here as that's\nthe only row directly located in the \"parent\" table.\nFor the stainherit==true stats, 2 rows are sampled, both of them have\n\"a\" == 1. The stadistinct reflects that fact.\n\nNote there have been no statistics recorded for \"child\". 
However,\nanalyze did sample rows in that table as part of gathering sample rows\nfor \"parent\" for the stainherit==true row.\n\nNow let's try again without ONLY.\n\nanalyze parent;\n\nselect starelid::regclass,stainherit,stadistinct from pg_statistic\nwhere starelid in ('parent'::regclass,'child'::regclass);\n starelid | stainherit | stadistinct\n----------+------------+-------------\n parent | f | -1\n parent | t | -0.5\n child | f | -1\n(3 rows)\n\nAll of the above rows were re-calculated with the \"analyze parent\"\ncommand, the first two rows have the same values as nothing changed in\nthe table, however, there are now statistics stored for the \"child\"\ntable.\n\n> catalog-pg-statistic.html second paragraph seems very clear to me.\n> Maybe we can link it somehow\n\nI don't know what \"link it\" means in this context.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 23:53:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, Sep 23, 2024 at 7:53 PM David Rowley <[email protected]> wrote:\n>\n> On Mon, 23 Sept 2024 at 23:23, jian he <[email protected]> wrote:\n> > looks fine. but maybe we can add the following information\n> > \"if The <literal>ONLY</literal> is specified, the second set of\n> > statistics won't include each children individual statistics\"\n> > I think that's the main difference between specifying ONLY or not?\n>\n> Ok, I think you're not understanding this yet and I'm not sure what I\n> can make more clear in the documents.\n>\n> Let me explain... For inheritance parent tables, ANALYZE ONLY will\n> gather inheritance and non-inheritance statistics for ONLY the parent.\n>\n> Here's an example of that:\n>\n> drop table if exists parent,child;\n> create table parent(a int);\n> create table child () inherits (parent);\n> insert into parent values(1);\n> insert into child values(1);\n>\n> analyze ONLY parent;\n> select starelid::regclass,stainherit,stadistinct from pg_statistic\n> where starelid in ('parent'::regclass,'child'::regclass);\n> starelid | stainherit | stadistinct\n> ----------+------------+-------------\n> parent | f | -1 <- this is the distinct estimate\n> for SELECT * FROM ONLY parent;\n> parent | t | -0.5 <- this is the distinct estimate\n> for SELECT * FROM parent;\n> (2 rows)\n>\n> For the stainherit==false stats, only 1 row is sampled here as that's\n> the only row directly located in the \"parent\" table.\n> For the stainherit==true stats, 2 rows are sampled, both of them have\n> \"a\" == 1. The stadistinct reflects that fact.\n>\n> Note there have been no statistics recorded for \"child\". 
However,\n> analyze did sample rows in that table as part of gathering sample rows\n> for \"parent\" for the stainherit==true row.\n>\n> Now let's try again without ONLY.\n>\n> analyze parent;\n>\n> select starelid::regclass,stainherit,stadistinct from pg_statistic\n> where starelid in ('parent'::regclass,'child'::regclass);\n> starelid | stainherit | stadistinct\n> ----------+------------+-------------\n> parent | f | -1\n> parent | t | -0.5\n> child | f | -1\n> (3 rows)\n>\n> All of the above rows were re-calculated with the \"analyze parent\"\n> command, the first two rows have the same values as nothing changed in\n> the table, however, there are now statistics stored for the \"child\"\n> table.\n\nthanks for your explanation!\nnow I don't have any questions about this patch.\n\n\n\n> > catalog-pg-statistic.html second paragraph seems very clear to me.\n> > Maybe we can link it somehow\n>\n> I don't know what \"link it\" means in this context.\n>\n\ni mean, change to:\n\nBy default,\n <command>ANALYZE</command> will also recursively collect and update the\n statistics for each inheritance child table. The <literal>ONLY</literal>\n keyword may be used to disable this.\n You may also refer to catalog <link\nlinkend=\"catalog-pg-statistic\"><structname>pg_statistic</structname></link>\n description about <literal>stainherit</literal>.\n\n\nbut <link linkend=\"catalog-pg-statistic\"><structname>pg_statistic</structname></link>\nalready mentioned once.\nmaybe not a good idea.\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:00:15 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" }, { "msg_contents": "On Mon, 23 Sept 2024 at 22:04, David Rowley <[email protected]> wrote:\n> Can you have a look at the attached and let me know if it's easier to\n> understand now?\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Sep 2024 18:05:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ANALYZE ONLY" } ]
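A short sketch of the partitioned-table case described by the committed wording in this thread (the table names pt and pt1 are invented for illustration, the query is the same pg_statistic lookup David uses for the inheritance example, and the expected results are inferred from the discussion rather than copied from a run; it assumes a server built with the ANALYZE ONLY patch):

create table pt (a int) partition by list (a);
create table pt1 partition of pt for values in (1);
insert into pt values (1);

analyze only pt;   -- samples pt1's rows to build pt's statistics,
                   -- but stores nothing for pt1 itself

select starelid::regclass, stainherit, stadistinct
from pg_statistic
where starelid in ('pt'::regclass, 'pt1'::regclass);

analyze pt;        -- the default recursive form also analyzes pt1

After the ONLY form, only rows for pt (with stainherit = t) would be expected, since the partitioned parent has no storage of its own; after the plain form, rows for pt1 should appear as well, mirroring what David shows for the inheritance case.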
[ { "msg_contents": "I’ve recently started exploring PostgreSQL implementation. I used to\nbe a MySQL InnoDB developer, and I find the PostgreSQL community feels\na bit strange.\n\nThere are some areas where they’ve done really well, but there are\nalso some obvious issues that haven’t been improved.\n\nFor example, the B-link tree implementation in PostgreSQL is\nparticularly elegant, and the code is very clean.\nBut there are some clear areas that could be improved but haven’t been\naddressed, like the double memory problem where the buffer pool and\npage cache store the same page, using full-page writes to deal with\ntorn page writes instead of something like InnoDB’s double write\nbuffer.\n\nIt seems like these issues have clear solutions, such as using\nDirectIO like InnoDB instead of buffered IO, or using a double write\nbuffer instead of relying on the full-page write approach.\nCan anyone replay why?\n\nHowever, the PostgreSQL community’s mailing list is truly a treasure\ntrove, where you can find really interesting discussions. For\ninstance, this discussion on whether lock coupling is needed for\nB-link trees, etc.\nhttps://www.postgresql.org/message-id/flat/CALJbhHPiudj4usf6JF7wuCB81fB7SbNAeyG616k%2Bm9G0vffrYw%40mail.gmail.com\n\n-- \n---\nBlog: https://baotiao.github.io/\nTwitter: https://twitter.com/baotiao\nGit: https://github.com/baotiao\n\n\n", "msg_date": "Tue, 20 Aug 2024 16:46:37 +0800", "msg_from": "=?UTF-8?B?6ZmI5a6X5b+X?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" }, { "msg_contents": "On Tue, Aug 20, 2024 at 8:55 PM 陈宗志 <[email protected]> wrote:\n> It seems like these issues have clear solutions, such as using\n> DirectIO like InnoDB instead of buffered IO,\n\nFor this part: we recently added an experimental option to use direct\nI/O (debug_io_direct). We are working on the infrastructure needed to\nmake it work efficiently before removing the \"debug_\" prefix:\nprediction of future I/O through a \"stream\" abstraction which we have\nsome early pieces of already, I/O combining (see new io_combine_limit\nsetting), and asynchronous I/O (work in progress, basically I/O worker\nprocesses or io_uring or other OS-specific APIs).\n\n\n", "msg_date": "Tue, 20 Aug 2024 22:09:27 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" }, { "msg_contents": "On 20/08/2024 11:46, 陈宗志 wrote:\n> I’ve recently started exploring PostgreSQL implementation. I used to\n> be a MySQL InnoDB developer, and I find the PostgreSQL community feels\n> a bit strange.\n> \n> There are some areas where they’ve done really well, but there are\n> also some obvious issues that haven’t been improved.\n> \n> For example, the B-link tree implementation in PostgreSQL is\n> particularly elegant, and the code is very clean.\n> But there are some clear areas that could be improved but haven’t been\n> addressed, like the double memory problem where the buffer pool and\n> page cache store the same page, using full-page writes to deal with\n> torn page writes instead of something like InnoDB’s double write\n> buffer.\n> \n> It seems like these issues have clear solutions, such as using\n> DirectIO like InnoDB instead of buffered IO, or using a double write\n> buffer instead of relying on the full-page write approach.\n> Can anyone replay why?\n\nThere are pros and cons. 
With direct I/O, you cannot take advantage of \nthe kernel page cache anymore, so it becomes important to tune \nshared_buffers more precisely. That's a downside: the system requires \nmore tuning. For many applications, squeezing the last ounce of \nperformance just isn't that important. There are also scaling issues \nwith the Postgres buffer cache, which might need to be addressed first.\n\nWith double write buffering, there are also pros and cons. It also \nrequires careful tuning. And replaying WAL that contains full-page \nimages can be much faster, because you can write new page images \n\"blindly\" without reading the old pages first. We have WAL prefetching \nnow, which alleviates that, but it's no panacea.\n\nIn summary, those are good solutions but they're not obviously better in \nall circumstances.\n\n> However, the PostgreSQL community’s mailing list is truly a treasure\n> trove, where you can find really interesting discussions. For\n> instance, this discussion on whether lock coupling is needed for\n> B-link trees, etc.\n> https://www.postgresql.org/message-id/flat/CALJbhHPiudj4usf6JF7wuCB81fB7SbNAeyG616k%2Bm9G0vffrYw%40mail.gmail.com\n\nYep, there are old threads and patches for double write buffers and \ndirect IO too :-).\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 16:46:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" }, { "msg_contents": "On Tue, Aug 20, 2024 at 04:46:54PM +0300, Heikki Linnakangas wrote:\n> There are pros and cons. With direct I/O, you cannot take advantage of the\n> kernel page cache anymore, so it becomes important to tune shared_buffers\n> more precisely. That's a downside: the system requires more tuning. For many\n> applications, squeezing the last ounce of performance just isn't that\n> important. There are also scaling issues with the Postgres buffer cache,\n> which might need to be addressed first.\n> \n> With double write buffering, there are also pros and cons. It also requires\n> careful tuning. And replaying WAL that contains full-page images can be much\n> faster, because you can write new page images \"blindly\" without reading the\n> old pages first. We have WAL prefetching now, which alleviates that, but\n> it's no panacea.\n\n陈宗志, you mimght find this blog post helpful:\n\n\thttps://momjian.us/main/blogs/pgblog/2017.html#June_5_2017\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:55:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some questions about =?utf-8?Q?Postgre?=\n =?utf-8?B?U1FM4oCZcw==?= design." }, { "msg_contents": "For other approaches, such as whether to use an LRU list to manage the\nshared_buffer or to use a clock sweep for management, both methods\nhave their pros and cons. But for these two issues, there is a clearly\nbetter solution. For example, using DirectIO avoids the problem of\ndouble-copying data, and the OS’s page cache LRU list is optimized for\ngeneral scenarios, while the database kernel should use its own\neviction algorithm. 
Regarding the other issue, full-page writes don’t\nactually reduce the number of page reads—it’s just a matter of whether\nthose page reads come from data files or from the redo log; the amount\nof data read is essentially the same. However, the problem it\nintroduces is significant write amplification on the critical write\npath, which severely impacts performance. As a result, PostgreSQL has\nto minimize the frequency of checkpoints as much as possible.\n\nI thought someone could write a demo to show it..\n\nOn Tue, Aug 20, 2024 at 9:46 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 20/08/2024 11:46, 陈宗志 wrote:\n> > I’ve recently started exploring PostgreSQL implementation. I used to\n> > be a MySQL InnoDB developer, and I find the PostgreSQL community feels\n> > a bit strange.\n> >\n> > There are some areas where they’ve done really well, but there are\n> > also some obvious issues that haven’t been improved.\n> >\n> > For example, the B-link tree implementation in PostgreSQL is\n> > particularly elegant, and the code is very clean.\n> > But there are some clear areas that could be improved but haven’t been\n> > addressed, like the double memory problem where the buffer pool and\n> > page cache store the same page, using full-page writes to deal with\n> > torn page writes instead of something like InnoDB’s double write\n> > buffer.\n> >\n> > It seems like these issues have clear solutions, such as using\n> > DirectIO like InnoDB instead of buffered IO, or using a double write\n> > buffer instead of relying on the full-page write approach.\n> > Can anyone replay why?\n>\n> There are pros and cons. With direct I/O, you cannot take advantage of\n> the kernel page cache anymore, so it becomes important to tune\n> shared_buffers more precisely. That's a downside: the system requires\n> more tuning. For many applications, squeezing the last ounce of\n> performance just isn't that important. There are also scaling issues\n> with the Postgres buffer cache, which might need to be addressed first.\n>\n> With double write buffering, there are also pros and cons. It also\n> requires careful tuning. And replaying WAL that contains full-page\n> images can be much faster, because you can write new page images\n> \"blindly\" without reading the old pages first. We have WAL prefetching\n> now, which alleviates that, but it's no panacea.\n>\n> In summary, those are good solutions but they're not obviously better in\n> all circumstances.\n>\n> > However, the PostgreSQL community’s mailing list is truly a treasure\n> > trove, where you can find really interesting discussions. For\n> > instance, this discussion on whether lock coupling is needed for\n> > B-link trees, etc.\n> > https://www.postgresql.org/message-id/flat/CALJbhHPiudj4usf6JF7wuCB81fB7SbNAeyG616k%2Bm9G0vffrYw%40mail.gmail.com\n>\n> Yep, there are old threads and patches for double write buffers and\n> direct IO too :-).\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n\n\n-- \n---\nBlog: http://www.chenzongzhi.info\nTwitter: https://twitter.com/baotiao\nGit: https://github.com/baotiao\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:49:55 +0800", "msg_from": "=?UTF-8?B?6ZmI5a6X5b+X?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Re=3A_Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" }, { "msg_contents": "I disagree with the point made in the article. 
The article mentions\nthat ‘prevents the kernel from reordering reads and writes to optimize\nperformance,’ which might be referring to the file system’s IO\nscheduling and merging. However, this can be handled within the\ndatabase itself, where IO scheduling and merging can be done even\nbetter.\n\nRegarding ‘does not allow free memory to be used as kernel cache,’ I\nbelieve the database itself should manage memory well, and most of the\nmemory should be managed by the database rather than handed over to\nthe operating system. Additionally, the database’s use of the page\ncache should be restricted.\n\nOn Wed, Aug 21, 2024 at 12:55 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Tue, Aug 20, 2024 at 04:46:54PM +0300, Heikki Linnakangas wrote:\n> > There are pros and cons. With direct I/O, you cannot take advantage of the\n> > kernel page cache anymore, so it becomes important to tune shared_buffers\n> > more precisely. That's a downside: the system requires more tuning. For many\n> > applications, squeezing the last ounce of performance just isn't that\n> > important. There are also scaling issues with the Postgres buffer cache,\n> > which might need to be addressed first.\n> >\n> > With double write buffering, there are also pros and cons. It also requires\n> > careful tuning. And replaying WAL that contains full-page images can be much\n> > faster, because you can write new page images \"blindly\" without reading the\n> > old pages first. We have WAL prefetching now, which alleviates that, but\n> > it's no panacea.\n>\n> 陈宗志, you mimght find this blog post helpful:\n>\n> https://momjian.us/main/blogs/pgblog/2017.html#June_5_2017\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n\n-- \n---\nBlog: http://www.chenzongzhi.info\nTwitter: https://twitter.com/baotiao\nGit: https://github.com/baotiao\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:50:16 +0800", "msg_from": "=?UTF-8?B?6ZmI5a6X5b+X?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Re=3A_Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" }, { "msg_contents": "On 8/22/24 10:50 AM, 陈宗志 wrote:\n> I disagree with the point made in the article. The article mentions\n> that ‘prevents the kernel from reordering reads and writes to optimize\n> performance,’ which might be referring to the file system’s IO\n> scheduling and merging. However, this can be handled within the\n> database itself, where IO scheduling and merging can be done even\n> better.\n\nThe database does not have all the information that the OS has, but that \nsaid I suspect that the advantages of direct IO outweigh the \ndisadvantages in this regard. But the only way to know for sure would be \nfore someone to provide a benchmark.\n\n> Regarding ‘does not allow free memory to be used as kernel cache,’ I\n> believe the database itself should manage memory well, and most of the\n> memory should be managed by the database rather than handed over to\n> the operating system. Additionally, the database’s use of the page\n> cache should be restricted.\n\nThat all depends on you use case. If the database is running alone or \nalmost alone on a machine direct IO is likely the optional strategy but \nif more services are running on the same machine (e.g. 
if you run \nPostgreSQL on your personal laptop) you want to use buffered IO.\n\nBut as far as I know the long term plan of the async IO project is to \nsupport both direct and buffered IO so people can pick the right choice \nfor their workload.\n\nAndreas\n\n\n", "msg_date": "Thu, 22 Aug 2024 14:10:55 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_Some_questions_about_PostgreSQL=E2=80=99s_design=2E?=" } ]
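For anyone who wants to try the direct I/O work Thomas mentions earlier in this thread, a rough sketch of the knobs involved (assuming a PostgreSQL 17-era build; debug_io_direct is explicitly experimental, availability is platform dependent, and the values below are illustrative rather than recommendations):

alter system set debug_io_direct = 'data';    -- bypass the kernel page cache for relation data; takes effect after a restart
alter system set io_combine_limit = '128kB';  -- cap on how far neighbouring block reads are combined into one I/O
alter system set shared_buffers = '8GB';      -- has to be sized much more deliberately once the kernel cache no longer backs it
-- restart the server afterwards; of these, only io_combine_limit can change without one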
[ { "msg_contents": "Hi. I noticed chipmunk is failing in configure:\n\nchecking whether the C compiler works... no\nconfigure: error: in `/home/pgbfarm/buildroot/HEAD/pgsql.build':\nconfigure: error: C compiler cannot create executables\n\nYou may want to give it a look.\n\nThanks!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Tue, 20 Aug 2024 17:48:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "configure failures on chipmunk" }, { "msg_contents": "On Wed, Aug 21, 2024 at 9:48 AM Alvaro Herrera <[email protected]> wrote:\n> Hi. I noticed chipmunk is failing in configure:\n>\n> checking whether the C compiler works... no\n> configure: error: in `/home/pgbfarm/buildroot/HEAD/pgsql.build':\n> configure: error: C compiler cannot create executables\n\nOne of the runs shows:\n\n./configure: line 4202: 28268 Bus error $CC -c $CFLAGS\n$CPPFLAGS conftest.$ac_ext 1>&5\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:42:55 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure failures on chipmunk" }, { "msg_contents": "On 21/08/2024 01:42, Thomas Munro wrote:\n> On Wed, Aug 21, 2024 at 9:48 AM Alvaro Herrera <[email protected]> wrote:\n>> Hi. I noticed chipmunk is failing in configure:\n>>\n>> checking whether the C compiler works... no\n>> configure: error: in `/home/pgbfarm/buildroot/HEAD/pgsql.build':\n>> configure: error: C compiler cannot create executables\n> \n> One of the runs shows:\n> \n> ./configure: line 4202: 28268 Bus error $CC -c $CFLAGS\n> $CPPFLAGS conftest.$ac_ext 1>&5\n\nYeah, I've been slowly upgrading it to a new debian/raspbian version, \nand \"ccache\" started segfaulting, not sure why. I compiled ccache from \nsources, and now it seems to work when I run it on its own. Not sure why \nthe buildfarm run still failed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 09:29:39 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure failures on chipmunk" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Yeah, I've been slowly upgrading it to a new debian/raspbian version, \n> and \"ccache\" started segfaulting, not sure why. I compiled ccache from \n> sources, and now it seems to work when I run it on its own. Not sure why \n> the buildfarm run still failed.\n\nMaybe nuking the animal's ccache cache would help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:43:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure failures on chipmunk" }, { "msg_contents": "Hello Heikki,\n\n21.08.2024 09:29, Heikki Linnakangas wrote:\n> Yeah, I've been slowly upgrading it to a new debian/raspbian version, and \"ccache\" started segfaulting, not sure why. \n> I compiled ccache from sources, and now it seems to work when I run it on its own. Not sure why the buildfarm run \n> still failed.\n>\n\nCould you also take a look at the kerberos setup on chipmunk? 
Now that\nchipmunk goes beyond configure (on HEAD and REL_17_STABLE), it fails on the\nkerberosCheck stage, with the following error:\nCan't exec \"/usr/sbin/kdb5_util\": No such file or directory at \n/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/kerberos/../../../src/test/perl/PostgreSQL/Test/Utils.pm line 349.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 26 Aug 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure failures on chipmunk" }, { "msg_contents": "Hello Heikki,\n\n26.08.2024 07:00, Alexander Lakhin wrote:\n> Could you also take a look at the kerberos setup on chipmunk? Now that\n> chipmunk goes beyond configure (on HEAD and REL_17_STABLE), it fails on the\n> kerberosCheck stage, ...\n\nI see that chipmunk turned green.\n\nThank you for taking care of that!\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 30 Aug 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configure failures on chipmunk" } ]
[ { "msg_contents": "Avoid unnecessary form and deform tuple.\n\nIn the TPCH test, HashJoin speed up to ~2x.", "msg_date": "Wed, 21 Aug 2024 09:49:37 +0800", "msg_from": "\"bucoo\" <[email protected]>", "msg_from_op": true, "msg_subject": "optimize hashjoin" }, { "msg_contents": "Hi,\n\nOn 8/21/24 03:49, bucoo wrote:\n> Avoid unnecessary form and deform tuple.\n> \n\nThanks for the patch. Generally speaking, it's usually a good idea to\nbriefly explain what the patch does, why is that beneficial, in what\ncircumstances, etc. It's sometimes not quite obvious from the patch\nitself (even if the patch is simple). Plus it really helps new\ncontributors who want to review the patch, but even for experienced\npeople it's a huge time saver ...\n\nAnyway, I took a look and the basic idea is simple - when shuffling\ntuples between batches in a hash join, we're currently deforming the\ntuple (->slot) we just read from a batch, only to immediately form it\n(slot->) again and write it to the \"correct\" batch.\n\nI think the idea to skip this unnecessary deform/form step is sound, and\nI don't see any particular argument against doing that. The code looks\nreasonable too, I think.\n\nA couple minor comments:\n\n0) The patch does not apply anymore, thanks to David committing a patch\nyesterday. Attached is a patch rebased on top of current master.\n\n1) Wouldn't it be easier (and just as efficient) to use slots with\nTTSOpsMinimalTuple, i.e. backed by a minimal tuple?\n\n2) I find the naming of the new functions a bit confusing. We now have\nthe \"original\" functions working with slots, and then also functions\nwith \"Tuple\" working with tuples. Seems asymmetric.\n\n3) The code in ExecHashJoinOuterGetTuple is hard to understand, it'd\nvery much benefit from some comments. I'm a bit unsure if the likely()\nand unlikely() hints really help.\n\n4) Is the hj_outerTupleBuffer buffer really needed / effective? I'd bet\njust using palloc() will work just as well, thanks to the AllocSet\ncaching etc.\n\n5) You might want to add the patch to the 2024-09 CF, to keep track of\nit: https://commitfest.postgresql.org/49/\n\n> In the TPCH test, HashJoin speed up to ~2x.\n> \nCan you provide more information about the benchmark you did? What\nhardware, what scale, PostgreSQL configuration, which of the 22 queries\nare improved, etc.\n\nI ran TPC-H with 1GB and 10GB scales on two machines, and I see pretty\nmuch no difference compared to master. However, it occurred to me the\npatch only ever helps if we increase the number of batches during\nexecution, in which case we need to move tuples to the right batch.\n\nPerhaps that's not happening in my testing, but it was happening in your\nruns? 
Which just means we really need to know more about how you did the\ntesting.\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Wed, 21 Aug 2024 18:31:49 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" }, { "msg_contents": "On Wed, Aug 21, 2024 at 12:31 PM Tomas Vondra <[email protected]> wrote:\n> Anyway, I took a look and the basic idea is simple - when shuffling\n> tuples between batches in a hash join, we're currently deforming the\n> tuple (->slot) we just read from a batch, only to immediately form it\n> (slot->) again and write it to the \"correct\" batch.\n\nDoes skipping this cause any problem if some attributes are toasted?\n\nI suppose not, just something to think about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 13:17:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" }, { "msg_contents": "On Aug 21, 2024 19:17, Robert Haas <[email protected]> wrote:On Wed, Aug 21, 2024 at 12:31 PM Tomas Vondra <[email protected]> wrote:\n\n> Anyway, I took a look and the basic idea is simple - when shuffling\n\n> tuples between batches in a hash join, we're currently deforming the\n\n> tuple (->slot) we just read from a batch, only to immediately form it\n\n> (slot->) again and write it to the \"correct\" batch.\n\n\nDoes skipping this cause any problem if some attributes are toasted?\n\n\nI suppose not, just something to think about.\n\nI don't see why would this cause any such problems - if anything has to be done when forming the tuples, it had to be done the first time. Shuffling tuples to a different batch may happen, but AFAIK it's really just a copy.--Tomas Vondra", "msg_date": "Wed, 21 Aug 2024 21:51:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" } ]
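A sketch of how one might provoke the code path under discussion, since the effect can only show up when a hash join spills to batch files (none of this is from the thread itself; it is only meant as a starting point for the benchmarking question above, and the join is deliberately tiny):

set work_mem = '64kB';
set enable_mergejoin = off;
set enable_nestloop = off;

explain (analyze, costs off, summary off)
select count(*)
from generate_series(1, 200000) a(x)
join generate_series(1, 200000) b(x) using (x);

The Hash node in the output reports the batch count, and adds an "originally N" figure when the count had to grow at run time. As Tomas notes, the patch only comes into play when tuples are moved between batches, so a useful benchmark probably needs the batch count to increase during execution (for instance via a row-count misestimate), not merely to start above one.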
[ { "msg_contents": "A USING clause when altering the type of a generated column does not \nmake sense. It would write the output of the USING clause into the \nconverted column, which would violate the generation expression.\n\nThis patch adds a check to error out if this is specified.\n\nThere was a test for this, but that test errored out for a different \nreason, so it was not effective.\n\ndiscovered by Jian He at [0]\n\n[0]: \nhttps://www.postgresql.org/message-id/CACJufxEGPYtFe79hbsMeOBOivfNnPRsw7Gjvk67m1x2MQggyiQ@mail.gmail.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 08:17:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Disallow USING clause when altering type of generated column" }, { "msg_contents": "On Wed, 21 Aug 2024 08:17:45 +0200\nPeter Eisentraut <[email protected]> wrote:\n\n> A USING clause when altering the type of a generated column does not \n> make sense. It would write the output of the USING clause into the \n> converted column, which would violate the generation expression.\n> \n> This patch adds a check to error out if this is specified.\n\nI’m afraid you forgot to attach the patch.\nIt seems for me that this fix is reasonable though.\n\nRegards,\nYugo Nagata\n\n> \n> There was a test for this, but that test errored out for a different \n> reason, so it was not effective.\n> \n> discovered by Jian He at [0]\n> \n> [0]: \n> https://www.postgresql.org/message-id/CACJufxEGPYtFe79hbsMeOBOivfNnPRsw7Gjvk67m1x2MQggyiQ@mail.gmail.com\n> \n> \n\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:14:02 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On 21.08.24 09:14, Yugo Nagata wrote:\n> On Wed, 21 Aug 2024 08:17:45 +0200\n> Peter Eisentraut <[email protected]> wrote:\n> \n>> A USING clause when altering the type of a generated column does not\n>> make sense. 
It would write the output of the USING clause into the\n>> converted column, which would violate the generation expression.\n>>\n>> This patch adds a check to error out if this is specified.\n> \n> I’m afraid you forgot to attach the patch.\n> It seems for me that this fix is reasonable though.\n\nThanks, here is the patch.", "msg_date": "Wed, 21 Aug 2024 10:57:48 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On Wed, Aug 21, 2024 at 4:57 PM Peter Eisentraut <[email protected]> wrote:\n>\n\n+ /*\n+ * Cannot specify USING when altering type of a generated column, because\n+ * that would violate the generation expression.\n+ */\n+ if (attTup->attgenerated && def->cooked_default)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n+ errmsg(\"cannot specify USING when altering type of generated column\"),\n+ errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n+\n\nerrcode should be ERRCODE_FEATURE_NOT_SUPPORTED?\n\nalso\nCREATE TABLE gtest27 (\n a int,\n b text collate \"C\",\n x text GENERATED ALWAYS AS ( b || '_2') STORED\n);\n\nALTER TABLE gtest27 ALTER COLUMN x TYPE int;\nERROR: column \"x\" cannot be cast automatically to type integer\nHINT: You might need to specify \"USING x::integer\".\n\nshould we do something for the errhint, since this specific errhint is wrong?\n\n\n", "msg_date": "Thu, 22 Aug 2024 11:38:49 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On Thu, 22 Aug 2024 11:38:49 +0800\njian he <[email protected]> wrote:\n\n> On Wed, Aug 21, 2024 at 4:57 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> \n> + /*\n> + * Cannot specify USING when altering type of a generated column, because\n> + * that would violate the generation expression.\n> + */\n> + if (attTup->attgenerated && def->cooked_default)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> + errmsg(\"cannot specify USING when altering type of generated column\"),\n> + errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n> +\n> \n> errcode should be ERRCODE_FEATURE_NOT_SUPPORTED?\n\n\nAlthough ERRCODE_INVALID_TABLE_DEFINITION is used for en error on changing\ntype of inherited column, I guess that is because it prevents from breaking\nconsistency between inherited and inheriting tables as a result of the command. \nIn this sense, maybe, ERRCODE_INVALID_COLUMN_DEFINITION is proper here, because\nthis check is to prevent inconsistency between columns in a tuple.\n\n> also\n> CREATE TABLE gtest27 (\n> a int,\n> b text collate \"C\",\n> x text GENERATED ALWAYS AS ( b || '_2') STORED\n> );\n> \n> ALTER TABLE gtest27 ALTER COLUMN x TYPE int;\n> ERROR: column \"x\" cannot be cast automatically to type integer\n> HINT: You might need to specify \"USING x::integer\".\n> \n> should we do something for the errhint, since this specific errhint is wrong?\n\nYes. I think we don't have to output the hint message if we disallow USING\nfor generated columns. 
Or, it may be useful to allow only a simple cast\nfor the generated column instead of completely prohibiting USING.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Thu, 22 Aug 2024 15:15:31 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On 22.08.24 08:15, Yugo Nagata wrote:\n> On Thu, 22 Aug 2024 11:38:49 +0800\n> jian he <[email protected]> wrote:\n> \n>> On Wed, Aug 21, 2024 at 4:57 PM Peter Eisentraut <[email protected]> wrote:\n>>>\n>>\n>> + /*\n>> + * Cannot specify USING when altering type of a generated column, because\n>> + * that would violate the generation expression.\n>> + */\n>> + if (attTup->attgenerated && def->cooked_default)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> + errmsg(\"cannot specify USING when altering type of generated column\"),\n>> + errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n>> +\n>>\n>> errcode should be ERRCODE_FEATURE_NOT_SUPPORTED?\n> \n> \n> Although ERRCODE_INVALID_TABLE_DEFINITION is used for en error on changing\n> type of inherited column, I guess that is because it prevents from breaking\n> consistency between inherited and inheriting tables as a result of the command.\n> In this sense, maybe, ERRCODE_INVALID_COLUMN_DEFINITION is proper here, because\n> this check is to prevent inconsistency between columns in a tuple.\n\nYes, that was my thinking. I think of ERRCODE_FEATURE_NOT_SUPPORTED as \n\"we could add it in the future\", but that does not seem to apply here.\n\n>> also\n>> CREATE TABLE gtest27 (\n>> a int,\n>> b text collate \"C\",\n>> x text GENERATED ALWAYS AS ( b || '_2') STORED\n>> );\n>>\n>> ALTER TABLE gtest27 ALTER COLUMN x TYPE int;\n>> ERROR: column \"x\" cannot be cast automatically to type integer\n>> HINT: You might need to specify \"USING x::integer\".\n>>\n>> should we do something for the errhint, since this specific errhint is wrong?\n> \n> Yes. I think we don't have to output the hint message if we disallow USING\n> for generated columns. Or, it may be useful to allow only a simple cast\n> for the generated column instead of completely prohibiting USING.\n\nGood catch. Here is an updated patch that fixes this.\n\nThis got me thinking whether disallowing USING here would actually \nprevent some useful cases, like if you want to change the type of a \ngenerated column and need to supply an explicit cast, as the hint \nsuggested. But this actually wouldn't work anyway, because later on it \nwill try to cast the generation expression, and that will fail in the \nsame way because it uses the same coerce_to_target_type() parameters. \nSo I think this patch won't break anything. 
Maybe what I'm describing \nhere could be implemented as a new feature, but I'm looking at this as a \nbug fix right now.", "msg_date": "Thu, 22 Aug 2024 09:10:52 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On Thu, 22 Aug 2024 09:10:52 +0200\nPeter Eisentraut <[email protected]> wrote:\n\n> On 22.08.24 08:15, Yugo Nagata wrote:\n> > On Thu, 22 Aug 2024 11:38:49 +0800\n> > jian he <[email protected]> wrote:\n> > \n> >> On Wed, Aug 21, 2024 at 4:57 PM Peter Eisentraut <[email protected]> wrote:\n> >>>\n> >>\n> >> + /*\n> >> + * Cannot specify USING when altering type of a generated column, because\n> >> + * that would violate the generation expression.\n> >> + */\n> >> + if (attTup->attgenerated && def->cooked_default)\n> >> + ereport(ERROR,\n> >> + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> >> + errmsg(\"cannot specify USING when altering type of generated column\"),\n> >> + errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n> >> +\n> >>\n> >> errcode should be ERRCODE_FEATURE_NOT_SUPPORTED?\n> > \n> > \n> > Although ERRCODE_INVALID_TABLE_DEFINITION is used for en error on changing\n> > type of inherited column, I guess that is because it prevents from breaking\n> > consistency between inherited and inheriting tables as a result of the command.\n> > In this sense, maybe, ERRCODE_INVALID_COLUMN_DEFINITION is proper here, because\n> > this check is to prevent inconsistency between columns in a tuple.\n> \n> Yes, that was my thinking. I think of ERRCODE_FEATURE_NOT_SUPPORTED as \n> \"we could add it in the future\", but that does not seem to apply here.\n\n+\t\t\t\t(errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n+\t\t\t\t errmsg(\"cannot specify USING when altering type of generated column\"),\n+\t\t\t\t errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n\nDo you thnik ERRCODE_INVALID_TABLE_DEFINITION is more proper than \nERRCODE_INVALID_COLUMN_DEFINITION in this case?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Thu, 22 Aug 2024 16:59:47 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On 22.08.24 09:59, Yugo NAGATA wrote:\n>>> Although ERRCODE_INVALID_TABLE_DEFINITION is used for en error on changing\n>>> type of inherited column, I guess that is because it prevents from breaking\n>>> consistency between inherited and inheriting tables as a result of the command.\n>>> In this sense, maybe, ERRCODE_INVALID_COLUMN_DEFINITION is proper here, because\n>>> this check is to prevent inconsistency between columns in a tuple.\n>>\n>> Yes, that was my thinking. 
I think of ERRCODE_FEATURE_NOT_SUPPORTED as\n>> \"we could add it in the future\", but that does not seem to apply here.\n> \n> +\t\t\t\t(errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +\t\t\t\t errmsg(\"cannot specify USING when altering type of generated column\"),\n> +\t\t\t\t errdetail(\"Column \\\"%s\\\" is a generated column.\", colName)));\n> \n> Do you thnik ERRCODE_INVALID_TABLE_DEFINITION is more proper than\n> ERRCODE_INVALID_COLUMN_DEFINITION in this case?\n\nCOLUMN seems better here.\n\nI copied TABLE from the \"cannot alter system column\" above, but maybe \nthat is a different situation.\n\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:49:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disallow USING clause when altering type of generated column" }, { "msg_contents": "On 22.08.24 10:49, Peter Eisentraut wrote:\n> On 22.08.24 09:59, Yugo NAGATA wrote:\n>>>> Although ERRCODE_INVALID_TABLE_DEFINITION is used for en error on \n>>>> changing\n>>>> type of inherited column, I guess that is because it prevents from \n>>>> breaking\n>>>> consistency between inherited and inheriting tables as a result of \n>>>> the command.\n>>>> In this sense, maybe, ERRCODE_INVALID_COLUMN_DEFINITION is proper \n>>>> here, because\n>>>> this check is to prevent inconsistency between columns in a tuple.\n>>>\n>>> Yes, that was my thinking.  I think of ERRCODE_FEATURE_NOT_SUPPORTED as\n>>> \"we could add it in the future\", but that does not seem to apply here.\n>>\n>> +                (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> +                 errmsg(\"cannot specify USING when altering type of \n>> generated column\"),\n>> +                 errdetail(\"Column \\\"%s\\\" is a generated column.\", \n>> colName)));\n>>\n>> Do you thnik ERRCODE_INVALID_TABLE_DEFINITION is more proper than\n>> ERRCODE_INVALID_COLUMN_DEFINITION in this case?\n> \n> COLUMN seems better here.\n\nCommitted and backpatched, with that adjustment.\n\n\n\n", "msg_date": "Thu, 29 Aug 2024 09:15:51 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disallow USING clause when altering type of generated column" } ]
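A small sketch of the behaviour after the committed fix (the table here is invented; the error and detail text are the ones from the patch quoted earlier in the thread):

create table gtest (a int, b text, x text generated always as (b || '_2') stored);

alter table gtest alter column x type int using length(b);
-- ERROR:  cannot specify USING when altering type of generated column
-- DETAIL:  Column "x" is a generated column.

Without USING, the command still goes through the usual automatic-cast path, which is why the stray "You might need to specify USING" hint discussed above also had to be suppressed for generated columns.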
[ { "msg_contents": "When testing the json_table function, it was discovered that specifying FORMAT JSON in the column definition clause and applying this column to the JSON_OBJECT function results in an output that differs from Oracle's output.\n\n\nThe sql statement is as follows:\n\n\nSELECT JSON_OBJECT('config' VALUE config) \nFROM JSON_TABLE(\n '[{\"type\":1, \"order\":1, \"config\":{\"empno\":1001, \"ename\":\"Smith\", \"job\":\"CLERK\", \"sal\":1000}}]',\n '$[*]' COLUMNS (\n config varchar(100) FORMAT JSON PATH '$.config'\n )\n);\n\n\nThe execution results of postgresql are as follows:\n\n\n json_object\n-------------------------------------------------------------------------------------------\n {\"config\" : \"{\\\"job\\\": \\\"CLERK\\\", \\\"sal\\\": 1000, \\\"empno\\\": 1001, \\\"ename\\\": \\\"Smith\\\"}\"}\n(1 row)\n\n\nThe execution results of oracle are as follows:\n\n\nJSON_OBJECT('CONFIG'VALUECONFIG)\n---------------------------------------------------------------------\n{\"config\":{\"empno\":1001,\"ename\":\"Smith\",\"job\":\"CLERK\",\"sal\":1000}}\n\n\n1 row selected.\n\n\nElapsed: 00:00:00.00\n\n\nIn PostgreSQL, the return value of the json_table function is treated as plain text, and double quotes are escaped with a backslash. In Oracle, the return value of the json_table function is treated as a JSON document, and the double quotes within it are not escaped with a backslash.\nBased on the above observation, if the FORMAT JSON option is specified in the column definition clause of the json_table function, the return type should be JSON, rather than a specified type like VARCHAR(100).\nWhen testing the json_table function, it was discovered that specifying FORMAT JSON in the column definition clause and applying this column to the JSON_OBJECT function results in an output that differs from Oracle's output.The sql statement is as follows:SELECT JSON_OBJECT('config' VALUE config)  FROM JSON_TABLE(    '[{\"type\":1, \"order\":1, \"config\":{\"empno\":1001, \"ename\":\"Smith\", \"job\":\"CLERK\", \"sal\":1000}}]',    '$[*]' COLUMNS (        config varchar(100) FORMAT JSON PATH '$.config'    ));The execution results of postgresql are as follows:                                        json_object------------------------------------------------------------------------------------------- {\"config\" : \"{\\\"job\\\": \\\"CLERK\\\", \\\"sal\\\": 1000, \\\"empno\\\": 1001, \\\"ename\\\": \\\"Smith\\\"}\"}(1 row)The execution results of oracle are as follows:JSON_OBJECT('CONFIG'VALUECONFIG)---------------------------------------------------------------------{\"config\":{\"empno\":1001,\"ename\":\"Smith\",\"job\":\"CLERK\",\"sal\":1000}}1 row selected.Elapsed: 00:00:00.00In PostgreSQL, the return value of the json_table function is treated as plain text, and double quotes are escaped with a backslash. 
In Oracle, the return value of the json_table function is treated as a JSON document, and the double quotes within it are not escaped with a backslash.Based on the above observation, if the FORMAT JSON option is specified in the column definition clause of the json_table function, the return type should be JSON, rather than a specified type like VARCHAR(100).", "msg_date": "Wed, 21 Aug 2024 14:48:28 +0800 (CST)", "msg_from": "zfmohz <[email protected]>", "msg_from_op": true, "msg_subject": "The json_table function returns an incorrect column type" }, { "msg_contents": "Hi\n\nThe JSON_OBJECT is by default formatting as text, adding explicit format\ntype to JSON_OBJECT will solve the problem.\n\nFor example\n\npostgres=# SELECT json_object('configd' value item format json) FROM\nJSON_TABLE('{\"empno\":1001}', '$' COLUMNS (item text FORMAT JSON PATH '$'));\n json_object\n-------------------------------\n {\"configd\" : {\"empno\": 1001}}\n(1 row)\n\npostgres=# SELECT json_object('configd' value item) FROM\nJSON_TABLE('{\"empno\":1001}', '$' COLUMNS (item text FORMAT JSON PATH '$'));\n json_object\n-----------------------------------\n {\"configd\" : \"{\\\"empno\\\": 1001}\"}\n(1 row)\n\n\nI changed the default_format for JSON_OBJECT here[1].\n\nNode *val = transformJsonValueExpr(pstate, \"JSON_OBJECT()\",\nkv->value,\nJS_FORMAT_JSON,\nInvalidOid, false);\n\nThis solves the problem but some tests are still failing. Don't know\nwhether the default format should be JSON(looks like oracle did something\nlike this ) or text However, just sharing some findings here.\n\nThanks\nImran Zaheer\n\n[1]:\nhttps://github.com/postgres/postgres/blob/4baff5013277a61f6d5e1e3369ae3f878cb48d0a/src/backend/parser/parse_expr.c#L3723\n\n\nOn Wed, Aug 21, 2024 at 3:48 PM zfmohz <[email protected]> wrote:\n>\n> When testing the json_table function, it was discovered that specifying\nFORMAT JSON in the column definition clause and applying this column to the\nJSON_OBJECT function results in an output that differs from Oracle's output.\n>\n> The sql statement is as follows:\n>\n> SELECT JSON_OBJECT('config' VALUE config)\n> FROM JSON_TABLE(\n> '[{\"type\":1, \"order\":1, \"config\":{\"empno\":1001, \"ename\":\"Smith\",\n\"job\":\"CLERK\", \"sal\":1000}}]',\n> '$[*]' COLUMNS (\n> config varchar(100) FORMAT JSON PATH '$.config'\n> )\n> );\n>\n> The execution results of postgresql are as follows:\n>\n> json_object\n>\n-------------------------------------------------------------------------------------------\n> {\"config\" : \"{\\\"job\\\": \\\"CLERK\\\", \\\"sal\\\": 1000, \\\"empno\\\": 1001,\n\\\"ename\\\": \\\"Smith\\\"}\"}\n> (1 row)\n>\n> The execution results of oracle are as follows:\n>\n> JSON_OBJECT('CONFIG'VALUECONFIG)\n> ---------------------------------------------------------------------\n> {\"config\":{\"empno\":1001,\"ename\":\"Smith\",\"job\":\"CLERK\",\"sal\":1000}}\n>\n> 1 row selected.\n>\n> Elapsed: 00:00:00.00\n>\n> In PostgreSQL, the return value of the json_table function is treated as\nplain text, and double quotes are escaped with a backslash. 
In Oracle, the\nreturn value of the json_table function is treated as a JSON document, and\nthe double quotes within it are not escaped with a backslash.\n> Based on the above observation, if the FORMAT JSON option is specified in\nthe column definition clause of the json_table function, the return type\nshould be JSON, rather than a specified type like VARCHAR(100).", "msg_date": "Wed, 21 Aug 2024 22:45:41 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The json_table function returns an incorrect column type" } ]
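One more variation on Imran's workaround, sketched here rather than taken from the thread: give the column a JSON type in the COLUMNS clause, so the value is carried as SQL/JSON and the later JSON_OBJECT call does not quote it as text. The output is what one would expect by analogy with the examples above, not a verified run.

SELECT JSON_OBJECT('config' VALUE config)
FROM JSON_TABLE(
    '[{"config":{"empno":1001, "ename":"Smith"}}]',
    '$[*]' COLUMNS (config jsonb PATH '$.config')
);

The nested object should come back without backslash-escaped quotes, matching the FORMAT JSON variant Imran shows.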
[ { "msg_contents": "Hello hackers,\n\nAs a recent failure, produced by drongo [1], shows, pg_ctl stop/start\nsequence may break on Windows due to the transient DELETE PENDING state of\nposmaster.pid.\n\nPlease look at the excerpt from the failure log:\n...\npg_createsubscriber: stopping the subscriber\n2024-08-19 18:02:47.608 UTC [6988:4] LOG:  received fast shutdown request\n2024-08-19 18:02:47.608 UTC [6988:5] LOG:  aborting any active transactions\n2024-08-19 18:02:47.612 UTC [5884:2] FATAL:  terminating walreceiver process due to administrator command\n2024-08-19 18:02:47.705 UTC [7036:1] LOG:  shutting down\npg_createsubscriber: server was stopped\n### the server instance (1) emitted only \"shutting down\" yet, but pg_ctl\n### considered it stopped and returned 0 to pg_createsubscriber\n[18:02:47.900](2.828s) ok 29 - run pg_createsubscriber without --databases\n...\npg_createsubscriber: starting the standby with command-line options\npg_createsubscriber: pg_ctl command is: ...\n2024-08-19 18:02:48.163 UTC [5284:1] FATAL:  could not create lock file \"postmaster.pid\": File exists\npg_createsubscriber: server was started\npg_createsubscriber: checking settings on subscriber\n### pg_createsubscriber attempts to start new server instance (2), but\n### it fails due to \"postmaster.pid\" still found on disk\n2024-08-19 18:02:48.484 UTC [6988:6] LOG:  database system is shut down\n### the server instance (1) is finally stopped and postmaster.pid unlinked\n\nWith extra debug logging and the ntries limit decreased to 10 (in\nCreateLockFile()), I reproduced the failure easily (when running 20 tests\nin parallel) and got additional information (see attached).\n\nIIUC, the issue is caused by inconsistent checks for postmaster.pid\nexistence:\n\"pg_ctl stop\" ... -> get_pgpid() calls fopen(pid_file, \"r\"),\n  which fails with ENOENT for the DELETE_PENDING state (see\n  pgwin32_open_handle()).\n\n\"pg_ctl start\" ... 
-> CreateLockFile() calls\n     fd = open(filename, O_RDWR | O_CREAT | O_EXCL, pg_file_create_mode);\nwhich fails with EEXIST for the same state of postmaster.pid.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-19%2017%3A32%3A54\n\nBest regards,\nAlexander", "msg_date": "Wed, 21 Aug 2024 13:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE PENDING strikes back, via pg_ctl stop/start" }, { "msg_contents": "21.08.2024 13:00, Alexander Lakhin wrote:\n> As a recent failure, produced by drongo [1], shows, pg_ctl stop/start\n> sequence may break on Windows due to the transient DELETE PENDING state of\n> posmaster.pid.\n>\n\nlorikeet didn't lose its chance to add two cents to the conversation and\nfailed on \"pg_ctl stop\" [1]:\nwaiting for server to shut down........pg_ctl: could not open PID file \"data-C/postmaster.pid\": Permission denied\n\nI find it less interesting, because Cygwin-targeted code doesn't try to\nhandle the DELETE PENDING state at all.\n\nI've made a simple test (see attached), which merely executes stop/start\nin a loop, and observed that running 10 test jobs in parallel is enough to\nget:\n### Stopping node \"node\" using mode fast\n# Running: pg_ctl -D .../tmp_check/t_099_pg_ctl_stop+start_node_data/pgdata -m fast stop\nwaiting for server to shut down....pg_ctl: could not open PID file \n\".../tmp_check/t_099_pg_ctl_stop+start_node_data/pgdata/postmaster.pid\": Permission denied\n# pg_ctl stop failed: 256\n\nor\n# Running: pg_ctl -D .../tmp_check/t_099_pg_ctl_stop+start_node_data/pgdata -m fast stop\nwaiting for server to shut down....pg_ctl: could not open PID file \n\".../tmp_check/t_099_pg_ctl_stop+start_node_data/pgdata/postmaster.pid\": Device or resource busy\n# pg_ctl stop failed: 256\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-08-22%2009%3A52%3A46\n\nBest regards,\nAlexander", "msg_date": "Sat, 24 Aug 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE PENDING strikes back, via pg_ctl stop/start" } ]
[ { "msg_contents": "In many jit related bug reports, one of the first questions is often\n\"which llvm version is used\". How about adding it into the\nPG_VERSION_STR, similar to the gcc version?", "msg_date": "Wed, 21 Aug 2024 12:19:49 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Add llvm version into the version string" }, { "msg_contents": "st 21. 8. 2024 v 12:20 odesílatel Dmitry Dolgov <[email protected]>\nnapsal:\n\n> In many jit related bug reports, one of the first questions is often\n> \"which llvm version is used\". How about adding it into the\n> PG_VERSION_STR, similar to the gcc version?\n>\n\n\n+1\n\nPave\n\nst 21. 8. 2024 v 12:20 odesílatel Dmitry Dolgov <[email protected]> napsal:In many jit related bug reports, one of the first questions is often\n\"which llvm version is used\". How about adding it into the\nPG_VERSION_STR, similar to the gcc version?+1Pave", "msg_date": "Wed, 21 Aug 2024 13:29:58 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "Dmitry Dolgov <[email protected]> writes:\n> In many jit related bug reports, one of the first questions is often\n> \"which llvm version is used\". How about adding it into the\n> PG_VERSION_STR, similar to the gcc version?\n\nI'm pretty down on this idea as presented, first because our version\nstrings are quite long already, and second because llvm is an external\nlibrary. So I don't have any faith that what llvm-config said at\nconfigure time is necessarily the exact version we're using at run\ntime. (Probably, the major version must be the same, but that doesn't\nprove anything about minor versions.)\n\nIs there a way to get the llvm library's version at run time? If so\nwe could consider building a new function to return that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Sep 2024 17:25:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "> On Sat, Sep 21, 2024 at 05:25:30PM GMT, Tom Lane wrote:\n>\n> Is there a way to get the llvm library's version at run time? If so\n> we could consider building a new function to return that.\n\nYes, there is a C api LLVMGetVersion to get the major, minor and patch\nnumbers. The jit provider could be extended to return this information.\n\nAbout a new function, I think that the llvm runtime version has to be\nshown in some form by pgsql_version. The idea is to skip an emails\nexchange like \"here is the bug report\" -> \"can you attach the llvm\nversion?\". If it's going to be a new separate function, I guess it won't\nmake much difference to ask either to call the new function or the\nllvm-config, right?\n\n\n", "msg_date": "Sun, 22 Sep 2024 13:15:54 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "> On Sun, Sep 22, 2024 at 01:15:54PM GMT, Dmitry Dolgov wrote:\n> > On Sat, Sep 21, 2024 at 05:25:30PM GMT, Tom Lane wrote:\n> >\n> > Is there a way to get the llvm library's version at run time? If so\n> > we could consider building a new function to return that.\n>\n> Yes, there is a C api LLVMGetVersion to get the major, minor and patch\n> numbers. 
The jit provider could be extended to return this information.\n>\n> About a new function, I think that the llvm runtime version has to be\n> shown in some form by pgsql_version. The idea is to skip an emails\n> exchange like \"here is the bug report\" -> \"can you attach the llvm\n> version?\". If it's going to be a new separate function, I guess it won't\n> make much difference to ask either to call the new function or the\n> llvm-config, right?\n\nHere is what I had in mind. It turns out the LLVMGetVersion API is\navailable only starting from LLVM 16, so there is need to indicate if\nthe version string is available (otherwise llvmjit would have to fully\nformat the version string).", "msg_date": "Mon, 23 Sep 2024 20:04:30 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "Dmitry Dolgov <[email protected]> writes:\n> On Sun, Sep 22, 2024 at 01:15:54PM GMT, Dmitry Dolgov wrote:\n>> About a new function, I think that the llvm runtime version has to be\n>> shown in some form by pgsql_version. The idea is to skip an emails\n>> exchange like \"here is the bug report\" -> \"can you attach the llvm\n>> version?\".\n\nI'm not in favor of that, for a couple of reasons:\n\n* People already have expectations about what version() returns.\nSome distro and forks even modify it (see eg --extra-version).\nI think we risk breaking obscure use-cases if we add stuff onto that.\nHaving version() return something different from the PG_VERSION_STR\nconstant could cause other problems too, perhaps.\n\n* Believing that it'll save us questioning a bug submitter seems\nfairly naive to me. Nobody includes the result of version() unless\nspecifically asked for it.\n\n* I'm not in favor of overloading version() with subsystem-related\nversion numbers, because it doesn't scale. Who's to say we're not\ngoing to need the version of ICU, or libxml2, to take a couple of\nobvious examples? When you take that larger view, multiple\nsubsystem-specific version functions seem to me to make more sense.\n\nMaybe another idea could be a new system view?\n\n=> select * from pg_system_version;\n property | value\n----------------------------------------\n core version | 18.1\n architecture | x86_64-pc-linux-gnu\n word size | 64 bit\n compiler | gcc (GCC) 12.5.0\n ICU version | 60.3\n LLVM version | 18.1.0\n ...\n\nAdding rows to a view over time seems way less likely to cause\nproblems than messing with a string people probably have crufty\nparsing code for.\n\n>> If it's going to be a new separate function, I guess it won't\n>> make much difference to ask either to call the new function or the\n>> llvm-config, right?\n\nI do think that if we can get a version number out of the loaded\nlibrary, that is worth doing. 
I don't trust the llvm-config that\nhappens to be in somebody's PATH to be the same version that their\nPG is actually built with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Sep 2024 14:45:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "> On Mon, Sep 23, 2024 at 02:45:18PM GMT, Tom Lane wrote:\n> Maybe another idea could be a new system view?\n>\n> => select * from pg_system_version;\n> property | value\n> ----------------------------------------\n> core version | 18.1\n> architecture | x86_64-pc-linux-gnu\n> word size | 64 bit\n> compiler | gcc (GCC) 12.5.0\n> ICU version | 60.3\n> LLVM version | 18.1.0\n> ...\n>\n> Adding rows to a view over time seems way less likely to cause\n> problems than messing with a string people probably have crufty\n> parsing code for.\n\nAgree, the idea with a new system view sounds interesting. I'll try to\nexperiment in this direction, thanks.\n\n\n", "msg_date": "Tue, 24 Sep 2024 10:39:45 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "On Mon, 23 Sept 2024 at 19:45, Tom Lane <[email protected]> wrote:\n...\n\n> Maybe another idea could be a new system view?\n>\n> => select * from pg_system_version;\n> property | value\n> ----------------------------------------\n> core version | 18.1\n> architecture | x86_64-pc-linux-gnu\n> word size | 64 bit\n> compiler | gcc (GCC) 12.5.0\n> ICU version | 60.3\n> LLVM version | 18.1.0\n> ...\n>\n>\nA view like that sounds very useful.\n\n\n> Adding rows to a view over time seems way less likely to cause\n> problems than messing with a string people probably have crufty\n> parsing code for.\n>\n> >> If it's going to be a new separate function, I guess it won't\n> >> make much difference to ask either to call the new function or the\n> >> llvm-config, right?\n>\n> I do think that if we can get a version number out of the loaded\n> library, that is worth doing. I don't trust the llvm-config that\n> happens to be in somebody's PATH to be the same version that their\n> PG is actually built with.\n>\n>\nSince the build and runtime versions may differ for some things like llvm,\nlibxml2 and the interpreters behind some of the PLs, it may be valuable to\nexpand the view and show two values - a build time (or configure time)\nvalue and a runtime value for these.\n\nRegards\n\nAlastair\n\nOn Mon, 23 Sept 2024 at 19:45, Tom Lane <[email protected]> wrote:...\nMaybe another idea could be a new system view?\n\n=> select * from pg_system_version;\n     property     |     value\n----------------------------------------\n core version     | 18.1\n architecture     | x86_64-pc-linux-gnu\n word size        | 64 bit\n compiler         | gcc (GCC) 12.5.0\n ICU version      | 60.3\n LLVM version     | 18.1.0\n ...\n A view like that sounds very useful. \nAdding rows to a view over time seems way less likely to cause\nproblems than messing with a string people probably have crufty\nparsing code for.\n\n>> If it's going to be a new separate function, I guess it won't\n>> make much difference to ask either to call the new function or the\n>> llvm-config, right?\n\nI do think that if we can get a version number out of the loaded\nlibrary, that is worth doing.  
I don't trust the llvm-config that\nhappens to be in somebody's PATH to be the same version that their\nPG is actually built with.\nSince the build and runtime versions may differ for some things like llvm, libxml2 and the interpreters behind some of the PLs, it may be valuable to expand the view and show two values - a build time (or configure time) value and a runtime value for these.RegardsAlastair", "msg_date": "Tue, 24 Sep 2024 13:53:49 +0100", "msg_from": "Alastair Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "Hi,\n\nOn 2024-09-24 13:53:49 +0100, Alastair Turner wrote:\n> Since the build and runtime versions may differ for some things like llvm,\n> libxml2 and the interpreters behind some of the PLs, it may be valuable to\n> expand the view and show two values - a build time (or configure time)\n> value and a runtime value for these.\n\n+1\n\nSomewhat orthogonal: I've wondered before whether it'd be useful to have a\nview showing the file path and perhaps the soversion of libraries dynamically\nloaded into postgres. That's currently hard to figure out over a connection\n(which is all a lot of our users have access to).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:52:16 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" }, { "msg_contents": "On 9/24/24 09:52, Andres Freund wrote:\n> Hi,\n> \n> On 2024-09-24 13:53:49 +0100, Alastair Turner wrote:\n>> Since the build and runtime versions may differ for some things like llvm,\n>> libxml2 and the interpreters behind some of the PLs, it may be valuable to\n>> expand the view and show two values - a build time (or configure time)\n>> value and a runtime value for these.\n> \n> +1\n> \n> Somewhat orthogonal: I've wondered before whether it'd be useful to have a\n> view showing the file path and perhaps the soversion of libraries dynamically\n> loaded into postgres. That's currently hard to figure out over a connection\n> (which is all a lot of our users have access to).\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:57:47 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add llvm version into the version string" } ]
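For reference, the runtime lookup discussed above could look roughly like the following sketch. It leans on the LLVM-C LLVMGetVersion() call, which, as noted in the thread, only exists from LLVM 16 onward; the function name pg_jit_runtime_version is invented here and is not part of any existing API:

#include <llvm-c/Core.h>
#include <llvm/Config/llvm-config.h>
#include <stdio.h>

/* Sketch only: report the LLVM version of the library loaded at run time. */
const char *
pg_jit_runtime_version(void)
{
	static char buf[32];

#if LLVM_VERSION_MAJOR >= 16
	unsigned	major,
				minor,
				patch;

	LLVMGetVersion(&major, &minor, &patch);
	snprintf(buf, sizeof(buf), "%u.%u.%u", major, minor, patch);
#else
	/* No runtime call before LLVM 16; fall back to the build-time value. */
	snprintf(buf, sizeof(buf), "%d (build time)", LLVM_VERSION_MAJOR);
#endif
	return buf;
}

Whether such a value ends up in a system view, a separate function, or the jit provider itself is exactly the open question in the thread.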
[ { "msg_contents": "Hi,\n\n Oversight in 7ff9afbbd; I think we can do the same way for the\nATExecAddColumn().\n-- \nTender Wang", "msg_date": "Wed, 21 Aug 2024 18:25:01 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Small code simplification" }, { "msg_contents": "On Wed, Aug 21, 2024 at 6:25 PM Tender Wang <[email protected]> wrote:\n> Oversight in 7ff9afbbd; I think we can do the same way for the ATExecAddColumn().\n\nLGTM. Pushed.\n\nThanks\nRichard\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:53:07 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Small code simplification" }, { "msg_contents": "Richard Guo <[email protected]> 于2024年8月22日周四 10:53写道:\n\n> On Wed, Aug 21, 2024 at 6:25 PM Tender Wang <[email protected]> wrote:\n> > Oversight in 7ff9afbbd; I think we can do the same way for the\n> ATExecAddColumn().\n>\n> LGTM. Pushed.\n>\n\nThanks for pushing.\n\n-- \nTender Wang\n\nRichard Guo <[email protected]> 于2024年8月22日周四 10:53写道:On Wed, Aug 21, 2024 at 6:25 PM Tender Wang <[email protected]> wrote:\n>    Oversight in 7ff9afbbd; I think we can do the same way for the ATExecAddColumn().\n\nLGTM. Pushed.Thanks for pushing. -- Tender Wang", "msg_date": "Thu, 22 Aug 2024 10:55:29 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Small code simplification" } ]
[ { "msg_contents": "Hackers,\n\nThe index access method API mostly encapsulates the functionality of in-core index types, with some lingering oversights and layering violations. There has been an ongoing discussion for several release cycles concerning how the API might be improved to allow interesting additional functionality. That discussion has frequently included patch proposals to support peculiar needs of esoteric bespoke access methods, which have little interest for the rest of the community.\n\nFor your consideration, here is a patch series that takes a different approach. It addresses many of the limitations and layering violations, along with introducing test infrastructure to validate the changes. Nothing in this series is intended to introduce new functionality to the API. Any such, \"wouldn't it be great if...\" type suggestions for the API are out of scope for this work. On the other hand, this patch set does not purport to fix all such problems; it merely moves the project in that direction.\n\nFor validation purposes, the first patch creates shallow copies of hash and btree named \"xash\" and \"xtree\" and introduces some infrastructure to run the src/test/regress and src/test/isolation tests against them without needing to duplicate those tests. Numerous failures like \"unexpected non-btree AM\" can be observed in the test results.\n\nAlso for validation purposes, the second patch creates a deep copy of btree named \"treeb\" which uses modified copies of the btree implementation rather than using the btree implementation by reference. This allows for more experimentation, but might be more code than the community wants. Since this is broken into its own patch, it can be excluded from what eventually gets committed. Even if we knew a priori that this \"treeb\" test would surely never be committed, it still serves to help anybody reviewing the patch series to experiment with those other changes without having to construct such a test index AM individually.\n\nThe next twenty patches are a mix of fixes of various layering violations, such as not allowing non-core index AMs from use in replica identity full, or for speculative insertion, or for foreign key constraints, or as part of merge join; with updates to the \"treeb\" code as needed. The changes to \"treeb\" are broken out so that they can also easily be excluded from whatever gets committed.\n\nThe final commit changes the ordering of the strategy numbers in treeb. The name \"treeb\" is a rotation of \"btree\", and indeed the strategy numbers 1,2,3,4,5 are rotated to 5,1,2,3,4. The fact that treeb indexes work properly after this change is meant to demonstrate that the core changes have been sufficient to address the prior dependency on btree strategy number ordering. Again, this doesn't need to be committed; it might only serve to help reviewers in determining if the functional changes are correct.\n \nNot to harp on this too heavily, but please note that running the core regression and isolation tests against xash, xtree, and treeb are known not to pass. That's the point. But by the end of the patch series, the failures are limited to EXPLAIN output changes; the query results themselves are intended to be consistent with the expected test output. To avoid breaking `make check-world`, these test modules are not added to the test schedule. They are also, at least for now, only useable from make, not from meson.\n\nInternal development versions 1..16 not included. 
Andrew, Peter, and Alex have all provided reviews internally and are cc'd here. Patch by me. Here is v17 for the community:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Aug 2024 12:25:02 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Index AM API cleanup" }, { "msg_contents": "On Thu, 22 Aug 2024 at 00:25, Mark Dilger <[email protected]> wrote:\n>\n> Hackers,\n>\n> The index access method API mostly encapsulates the functionality of in-core index types, with some lingering oversights and layering violations. There has been an ongoing discussion for several release cycles concerning how the API might be improved to allow interesting additional functionality. That discussion has frequently included patch proposals to support peculiar needs of esoteric bespoke access methods, which have little interest for the rest of the community.\n>\n> For your consideration, here is a patch series that takes a different approach. It addresses many of the limitations and layering violations, along with introducing test infrastructure to validate the changes. Nothing in this series is intended to introduce new functionality to the API. Any such, \"wouldn't it be great if...\" type suggestions for the API are out of scope for this work. On the other hand, this patch set does not purport to fix all such problems; it merely moves the project in that direction.\n>\n> For validation purposes, the first patch creates shallow copies of hash and btree named \"xash\" and \"xtree\" and introduces some infrastructure to run the src/test/regress and src/test/isolation tests against them without needing to duplicate those tests. Numerous failures like \"unexpected non-btree AM\" can be observed in the test results.\n>\n> Also for validation purposes, the second patch creates a deep copy of btree named \"treeb\" which uses modified copies of the btree implementation rather than using the btree implementation by reference. This allows for more experimentation, but might be more code than the community wants. Since this is broken into its own patch, it can be excluded from what eventually gets committed. Even if we knew a priori that this \"treeb\" test would surely never be committed, it still serves to help anybody reviewing the patch series to experiment with those other changes without having to construct such a test index AM individually.\n>\n> The next twenty patches are a mix of fixes of various layering violations, such as not allowing non-core index AMs from use in replica identity full, or for speculative insertion, or for foreign key constraints, or as part of merge join; with updates to the \"treeb\" code as needed. The changes to \"treeb\" are broken out so that they can also easily be excluded from whatever gets committed.\n>\n> The final commit changes the ordering of the strategy numbers in treeb. The name \"treeb\" is a rotation of \"btree\", and indeed the strategy numbers 1,2,3,4,5 are rotated to 5,1,2,3,4. The fact that treeb indexes work properly after this change is meant to demonstrate that the core changes have been sufficient to address the prior dependency on btree strategy number ordering. 
Again, this doesn't need to be committed; it might only serve to help reviewers in determining if the functional changes are correct.\n>\n> Not to harp on this too heavily, but please note that running the core regression and isolation tests against xash, xtree, and treeb are known not to pass. That's the point. But by the end of the patch series, the failures are limited to EXPLAIN output changes; the query results themselves are intended to be consistent with the expected test output. To avoid breaking `make check-world`, these test modules are not added to the test schedule. They are also, at least for now, only useable from make, not from meson.\n>\n> Internal development versions 1..16 not included. Andrew, Peter, and Alex have all provided reviews internally and are cc'd here. Patch by me. Here is v17 for the community:\n>\n>\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\nHi! Why is the patch attached as .tar.bz2? Usually raw patches are sent here...\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Thu, 22 Aug 2024 00:34:40 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "\n\n> On Aug 21, 2024, at 12:34 PM, Kirill Reshke <[email protected]> wrote:\n> \n> Hi! Why is the patch attached as .tar.bz2? Usually raw patches are sent here...\n\nI worried the patch set, being greater than 1 MB, might bounce or be held up in moderation.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 13:09:52 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On 2024-08-21 We 4:09 PM, Mark Dilger wrote:\n>\n>> On Aug 21, 2024, at 12:34 PM, Kirill Reshke<[email protected]> wrote:\n>>\n>> Hi! Why is the patch attached as .tar.bz2? Usually raw patches are sent here...\n> I worried the patch set, being greater than 1 MB, might bounce or be held up in moderation.\n\n\nYes, it would have required moderation AIUI. It is not at all \nunprecedented to send a compressed tar of patches, and is explicitly \nprovided for by the cfbot: see \n<https://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-08-21 We 4:09 PM, Mark Dilger\n wrote:\n\n\n\n\n\n\nOn Aug 21, 2024, at 12:34 PM, Kirill Reshke <[email protected]> wrote:\n\nHi! Why is the patch attached as .tar.bz2? Usually raw patches are sent here...\n\n\n\nI worried the patch set, being greater than 1 MB, might bounce or be held up in moderation.\n\n\n\nYes, it would have required moderation AIUI. It is not at all\n unprecedented to send a compressed tar of patches, and is\n explicitly provided for by the cfbot: see\n<https://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F>\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 21 Aug 2024 16:16:50 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "Mark Dilger <[email protected]> writes:\n>> On Aug 21, 2024, at 12:34 PM, Kirill Reshke <[email protected]> wrote:\n>> Hi! Why is the patch attached as .tar.bz2? 
Usually raw patches are sent here...\n\n> I worried the patch set, being greater than 1 MB, might bounce or be held up in moderation.\n\nI'm +1 for doing it like this with such a large group of patches.\nSeparate attachments are nice up to say half a dozen attachments,\nbut beyond that they're kind of a pain to deal with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Aug 2024 17:25:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "Hi Mark,\n\nOn Wed, Aug 21, 2024 at 2:25 PM Mark Dilger\n<[email protected]> wrote:\n>\n>\n> For validation purposes, the first patch creates shallow copies of hash and btree named \"xash\" and \"xtree\" and introduces some infrastructure to run the src/test/regress and src/test/isolation tests against them without needing to duplicate those tests. Numerous failures like \"unexpected non-btree AM\" can be observed in the test results.\n>\n> Also for validation purposes, the second patch creates a deep copy of btree named \"treeb\" which uses modified copies of the btree implementation rather than using the btree implementation by reference. This allows for more experimentation, but might be more code than the community wants. Since this is broken into its own patch, it can be excluded from what eventually gets committed. Even if we knew a priori that this \"treeb\" test would surely never be committed, it still serves to help anybody reviewing the patch series to experiment with those other changes without having to construct such a test index AM individually.\n\nThank you for providing an infrastructure that allows modules to test\nby reference and re-use existing regression and isolation tests. I\nbelieve this approach ensures great coverage for the API cleanup.\nI was very excited to compare the “make && make check” results in the\ntest modules - ‘xash,’ ‘xtree,’ and ‘treeb’ - before and after the\nseries of AM API fixes. Here are my results:\n\nBefore the fixes:\n- xash: # 20 of 223 tests failed.\n- xtree: # 95 of 223 tests failed.\n- treeb: # 47 of 223 tests failed.\n\nAfter the fixes:\n- xash: # 21 of 223 tests failed.\n- xtree: # 58 of 223 tests failed.\n- treeb: # 58 of 223 tests failed.\n\nI expected the series of fixes to eliminate all failed tests, but that\nwasn’t the case. It's nice to see the failures for ‘xtree’ have significantly\ndecreased as the ‘unexpected non-btree AM’ errors have been resolved.\nI noticed some of the remaining test failures are due to trivial index\nname changes, like hash -> xash and btree -> treeb.\n\nIf we keep xtree and xash for testing, is there a way to ensure all\ntests pass instead of excluding them from \"make check-world\"? On that\nnote, I ran ‘make check-world’ on the final patch, and everything\nlooks good.\n\nI had to make some changes to the first two patches in order to run\n\"make check\" and compile the treeb code on my machine. I’ve attached\nmy changes.\n\n\"make installcheck\" for treeb is causing issues on my end. I can\ninvestigate further if it’s not a problem for others.\n\nBest,\nAlex", "msg_date": "Thu, 22 Aug 2024 03:36:31 -0500", "msg_from": "Alexandra Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "> On Aug 22, 2024, at 1:36 AM, Alexandra Wang <[email protected]> wrote:\n> \n> I had to make some changes to the first two patches in order to run\n> \"make check\" and compile the treeb code on my machine. 
I’ve attached\n> my changes.\n\nThank you for the review, and the patches!\n\n\n> \"make installcheck\" for treeb is causing issues on my end. I can\n> investigate further if it’s not a problem for others.\n\nThe test module index AMs are not intended for use in any installed database, so 'make installcheck' is unnecessary. A mere 'make check' should suffice. However, if you want to run it, you can install the modules, edit postgresql.conf to add 'treeb' to shared_preload_libraries, restart the server, and run 'make installcheck'. This is necessary for 'treeb' because it requests shared memory, and that needs to be done at startup.\n\nThe v18 patch set includes the changes your patches suggest, though I modified the approach a bit. Specifically, rather than standardizing on '1.0.0' for the module versions, as your patches do, I went with '1.0', as is standard in other modules in neighboring directories. The '1.0.0' numbering was something I had been using in versions 1..16 of this patch, and I only partially converted to '1.0' before posting v17, so sorry about that. The v18 patch also has some whitespace fixes.\n\nTo address your comments about the noise in the test failures, v18 modifies the clone_tests.pl script to do a little more work translating the expected output to expect the module's AM name (\"xash\", \"xtree\", \"treeb\", or whatnot) beyond what that script did in v17.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 22 Aug 2024 11:28:39 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On 21.08.24 21:25, Mark Dilger wrote:\n> The next twenty patches are a mix of fixes of various layering\n> violations, such as not allowing non-core index AMs from use in replica\n> identity full, or for speculative insertion, or for foreign key\n> constraints, or as part of merge join; with updates to the \"treeb\" code\n> as needed. The changes to \"treeb\" are broken out so that they can also\n> easily be excluded from whatever gets committed.\n\nI made a first pass through this patch set. I think the issues it aims \nto address are mostly legitimate. In a few cases, we might need some \nmore discussion and perhaps will end up slicing the APIs a bit \ndifferently. The various patches that generalize the strategy numbers \nappear to overlap with things being discussed at [0], so we should see \nthat the solution covers all the use cases.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n\nTo make a dent, I picked out something that should be mostly harmless: \nStop calling directly into _bt_getrootheight() (patch 0004). I think \nthis patch is ok, but I might call the API function amgettreeheight \ninstead of amgetrootheight. The former seems more general.\n\nAlso, a note for us all in this thread, changes to the index AM API need \nupdates to the corresponding documentation in doc/src/sgml/indexam.sgml.\n\nI notice that _bt_getrootheight() is called only to fill in the \nIndexOptInfo tree_height field, which is only used by btcostestimate(), \nso in some sense this is btree-internal data. 
But making it so that \nbtcostestimate() calls _bt_getrootheight() directly to avoid all that \nintermediate business seems too complicated, and there was probably a \nreason that the cost estimation functions don't open the index.\n\nInterestingly, the cost estimation functions for gist and spgist also \nlook at the tree_height field but nothing ever fills it on. So with \nyour API restructuring, someone could provide the missing API functions \nfor those index types. Might be interesting.\n\nThat said, there might be value in generalizing this a bit. If you look \nat the cost estimation functions in pgvector (hnswcostestimate() and \nivfflatcostestimate()), they both have this pattern that \nbtcostestimate() tries to avoid: They open the index, look up some \nnumber, close the index, then make a cost estimate computation with the \nnumber looked up. So another idea would be to generalize the \ntree_height field to some \"index size data\" or even \"internal data for \ncost estimation\". This wouldn't need to change the API much, since \nthese are all just integer values, but we'd label the functions and \nfields a bit differently.\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:21:26 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "\n\n> On Aug 26, 2024, at 5:21 AM, Peter Eisentraut <[email protected]> wrote:\n> \n> On 21.08.24 21:25, Mark Dilger wrote:\n>> The next twenty patches are a mix of fixes of various layering\n>> violations, such as not allowing non-core index AMs from use in replica\n>> identity full, or for speculative insertion, or for foreign key\n>> constraints, or as part of merge join; with updates to the \"treeb\" code\n>> as needed. The changes to \"treeb\" are broken out so that they can also\n>> easily be excluded from whatever gets committed.\n> \n> I made a first pass through this patch set.\n\nPeter, thanks for the review!\n\n> I think the issues it aims to address are mostly legitimate. In a few cases, we might need some more discussion and perhaps will end up slicing the APIs a bit differently. The various patches that generalize the strategy numbers appear to overlap with things being discussed at [0], so we should see that the solution covers all the use cases.\n> \n> [0]: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n\nPaul, it seems what you are doing in v39-0001-Add-stratnum-GiST-support-function.patch is similar to what I am doing in v17-0012-Convert-strategies-to-and-from-row-compare-types.patch. In particular, your function\n\n+\n+/*\n+ * Returns the btree number for supported operators, otherwise invalid.\n+ */\n+Datum\n+gist_stratnum_btree(PG_FUNCTION_ARGS)\n+{\n+ StrategyNumber strat = PG_GETARG_UINT16(0);\n+\n+ switch (strat)\n+ {\n+ case RTEqualStrategyNumber:\n+ PG_RETURN_UINT16(BTEqualStrategyNumber);\n+ case RTLessStrategyNumber:\n+ PG_RETURN_UINT16(BTLessStrategyNumber);\n+ case RTLessEqualStrategyNumber:\n+ PG_RETURN_UINT16(BTLessEqualStrategyNumber);\n+ case RTGreaterStrategyNumber:\n+ PG_RETURN_UINT16(BTGreaterStrategyNumber);\n+ case RTGreaterEqualStrategyNumber:\n+ PG_RETURN_UINT16(BTGreaterEqualStrategyNumber);\n+ default:\n+ PG_RETURN_UINT16(InvalidStrategy);\n+ }\n+}\n\nlooks similar to the implementation of an amtranslate_rctype_function. 
Do you have any interest in taking a look?\n\n\n\n> To make a dent, I picked out something that should be mostly harmless: Stop calling directly into _bt_getrootheight() (patch 0004). I think this patch is ok, but I might call the API function amgettreeheight instead of amgetrootheight. The former seems more general.\n\nPeter, your proposed rename seems fine for the current implementation, but your suggestion below might indicate a different naming.\n\n> I notice that _bt_getrootheight() is called only to fill in the IndexOptInfo tree_height field, which is only used by btcostestimate(), so in some sense this is btree-internal data. But making it so that btcostestimate() calls _bt_getrootheight() directly to avoid all that intermediate business seems too complicated, and there was probably a reason that the cost estimation functions don't open the index.\n> \n> Interestingly, the cost estimation functions for gist and spgist also look at the tree_height field but nothing ever fills it on. So with your API restructuring, someone could provide the missing API functions for those index types. Might be interesting.\n> \n> That said, there might be value in generalizing this a bit. If you look at the cost estimation functions in pgvector (hnswcostestimate() and ivfflatcostestimate()), they both have this pattern that btcostestimate() tries to avoid: They open the index, look up some number, close the index, then make a cost estimate computation with the number looked up. So another idea would be to generalize the tree_height field to some \"index size data\" or even \"internal data for cost estimation\". This wouldn't need to change the API much, since these are all just integer values, but we'd label the functions and fields a bit differently.\n\nWould they be just integers? They could also be void*, with amgetrootheight_function returning data allocated in the current memory context. For btree, that would just be a four byte integer, but other indexes could return whatever they like. If you like that idea, I can code that up for v18, naming the field something like amgetcostestimateinfo_function.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 08:10:34 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On 2024-08-22 Th 2:28 PM, Mark Dilger wrote:\n>\n>> On Aug 22, 2024, at 1:36 AM, Alexandra Wang<[email protected]> wrote:\n>>\n>> I had to make some changes to the first two patches in order to run\n>> \"make check\" and compile the treeb code on my machine. I’ve attached\n>> my changes.\n> Thank you for the review, and the patches!\n>\n>\n>> \"make installcheck\" for treeb is causing issues on my end. I can\n>> investigate further if it’s not a problem for others.\n> The test module index AMs are not intended for use in any installed database, so 'make installcheck' is unnecessary. A mere 'make check' should suffice. However, if you want to run it, you can install the modules, edit postgresql.conf to add 'treeb' to shared_preload_libraries, restart the server, and run 'make installcheck'. This is necessary for 'treeb' because it requests shared memory, and that needs to be done at startup.\n>\n> The v18 patch set includes the changes your patches suggest, though I modified the approach a bit. 
Specifically, rather than standardizing on '1.0.0' for the module versions, as your patches do, I went with '1.0', as is standard in other modules in neighboring directories. The '1.0.0' numbering was something I had been using in versions 1..16 of this patch, and I only partially converted to '1.0' before posting v17, so sorry about that. The v18 patch also has some whitespace fixes.\n>\n> To address your comments about the noise in the test failures, v18 modifies the clone_tests.pl script to do a little more work translating the expected output to expect the module's AM name (\"xash\", \"xtree\", \"treeb\", or whatnot) beyond what that script did in v17.\n\n\n\nSmall addition to your clone script, taking into account the existence \nof alternative result files:\n\n\ndiff --git a/src/tools/clone_tests.pl b/src/tools/clone_tests.pl\nindex c1c50ad90b..d041f93f87 100755\n--- a/src/tools/clone_tests.pl\n+++ b/src/tools/clone_tests.pl\n@@ -183,6 +183,12 @@ foreach my $regress (@regress)\n     push (@dst_regress, \"$dst_sql/$regress.sql\");\n     push (@src_expected, \"$src_expected/$regress.out\");\n     push (@dst_expected, \"$dst_expected/$regress.out\");\n+   foreach my $alt (1,2)\n+   {\n+       next unless -f \"$src_expected/${regress}_$alt.out\";\n+       push (@src_expected, \"$src_expected/${regress}_$alt.out\");\n+       push (@dst_expected, \"$dst_expected/${regress}_$alt.out\");\n+   }\n  }\n  my @isolation = grep /\\w+/, split(/\\s+/, $isolation);\n  foreach my $isolation (@isolation)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-08-22 Th 2:28 PM, Mark Dilger\n wrote:\n\n\n\n\n\n\nOn Aug 22, 2024, at 1:36 AM, Alexandra Wang <[email protected]> wrote:\n\nI had to make some changes to the first two patches in order to run\n\"make check\" and compile the treeb code on my machine. I’ve attached\nmy changes.\n\n\n\nThank you for the review, and the patches!\n\n\n\n\n\"make installcheck\" for treeb is causing issues on my end. I can\ninvestigate further if it’s not a problem for others.\n\n\n\nThe test module index AMs are not intended for use in any installed database, so 'make installcheck' is unnecessary. A mere 'make check' should suffice. However, if you want to run it, you can install the modules, edit postgresql.conf to add 'treeb' to shared_preload_libraries, restart the server, and run 'make installcheck'. This is necessary for 'treeb' because it requests shared memory, and that needs to be done at startup.\n\nThe v18 patch set includes the changes your patches suggest, though I modified the approach a bit. Specifically, rather than standardizing on '1.0.0' for the module versions, as your patches do, I went with '1.0', as is standard in other modules in neighboring directories. The '1.0.0' numbering was something I had been using in versions 1..16 of this patch, and I only partially converted to '1.0' before posting v17, so sorry about that. 
The v18 patch also has some whitespace fixes.\n\nTo address your comments about the noise in the test failures, v18 modifies the clone_tests.pl script to do a little more work translating the expected output to expect the module's AM name (\"xash\", \"xtree\", \"treeb\", or whatnot) beyond what that script did in v17.\n\n\n\n\n\nSmall addition to your clone script, taking into account the\n existence of alternative result files:\n\n\ndiff --git a/src/tools/clone_tests.pl b/src/tools/clone_tests.pl\n index c1c50ad90b..d041f93f87 100755\n --- a/src/tools/clone_tests.pl\n +++ b/src/tools/clone_tests.pl\n @@ -183,6 +183,12 @@ foreach my $regress (@regress)\n     push (@dst_regress, \"$dst_sql/$regress.sql\");\n     push (@src_expected, \"$src_expected/$regress.out\");\n     push (@dst_expected, \"$dst_expected/$regress.out\");\n +   foreach my $alt (1,2)\n +   {\n +       next unless -f \"$src_expected/${regress}_$alt.out\";\n +       push (@src_expected, \"$src_expected/${regress}_$alt.out\");\n +       push (@dst_expected, \"$dst_expected/${regress}_$alt.out\");\n +   }\n  }\n  my @isolation = grep /\\w+/, split(/\\s+/, $isolation);\n  foreach my $isolation (@isolation)\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 26 Aug 2024 15:54:33 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On 26.08.24 17:10, Mark Dilger wrote:\n>> To make a dent, I picked out something that should be mostly harmless: Stop calling directly into _bt_getrootheight() (patch 0004). I think this patch is ok, but I might call the API function amgettreeheight instead of amgetrootheight. The former seems more general.\n> Peter, your proposed rename seems fine for the current implementation, but your suggestion below might indicate a different naming.\n> \n>> I notice that _bt_getrootheight() is called only to fill in the IndexOptInfo tree_height field, which is only used by btcostestimate(), so in some sense this is btree-internal data. But making it so that btcostestimate() calls _bt_getrootheight() directly to avoid all that intermediate business seems too complicated, and there was probably a reason that the cost estimation functions don't open the index.\n>>\n>> Interestingly, the cost estimation functions for gist and spgist also look at the tree_height field but nothing ever fills it on. So with your API restructuring, someone could provide the missing API functions for those index types. Might be interesting.\n>>\n>> That said, there might be value in generalizing this a bit. If you look at the cost estimation functions in pgvector (hnswcostestimate() and ivfflatcostestimate()), they both have this pattern that btcostestimate() tries to avoid: They open the index, look up some number, close the index, then make a cost estimate computation with the number looked up. So another idea would be to generalize the tree_height field to some \"index size data\" or even \"internal data for cost estimation\". This wouldn't need to change the API much, since these are all just integer values, but we'd label the functions and fields a bit differently.\n> Would they be just integers? They could also be void*, with amgetrootheight_function returning data allocated in the current memory context. For btree, that would just be a four byte integer, but other indexes could return whatever they like. 
If you like that idea, I can code that up for v18, naming the field something like amgetcostestimateinfo_function.\n\nHere is a cleaned-up version of the v17-0004 patch. I have applied the \nrenaming discussed above. I have also made a wrapper function \nbtgettreeheight() that calls _bt_getrootheight(). That way, if we ever \nwant to change the API, we don't have to change _bt_getrootheight(), or \ndisentangle it then. I've also added documentation and put in a source \ncode comment that the API could be expanded for additional uses. Also, \nI have removed the addition to the IndexOptInfo struct; that didn't seem \nnecessary.", "msg_date": "Tue, 3 Sep 2024 18:52:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "\n\n> On Sep 3, 2024, at 9:52 AM, Peter Eisentraut <[email protected]> wrote:\n> \n> Here is a cleaned-up version of the v17-0004 patch. I have applied the renaming discussed above. I have also made a wrapper function btgettreeheight() that calls _bt_getrootheight(). That way, if we ever want to change the API, we don't have to change _bt_getrootheight(), or disentangle it then. I've also added documentation and put in a source code comment that the API could be expanded for additional uses.\n\nOk.\n\n> Also, I have removed the addition to the IndexOptInfo struct; that didn't seem necessary.\n\nGood catch. I agree with the change. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 10:26:11 -0700", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On Thu, Aug 22, 2024 at 11:28 AM Mark Dilger\n<[email protected]> wrote:\n> > On Aug 22, 2024, at 1:36 AM, Alexandra Wang <[email protected]> wrote:\n> > \"make installcheck\" for treeb is causing issues on my end. I can\n> > investigate further if it’s not a problem for others.\n>\n> The test module index AMs are not intended for use in any installed database, so 'make installcheck' is unnecessary. A mere 'make check' should suffice. However, if you want to run it, you can install the modules, edit postgresql.conf to add 'treeb' to shared_preload_libraries, restart the server, and run 'make installcheck'. This is necessary for 'treeb' because it requests shared memory, and that needs to be done at startup.\n\nThanks, Mark. This works, and I can run treeb now.\n\n> The v18 patch set includes the changes your patches suggest, though I modified the approach a bit. Specifically, rather than standardizing on '1.0.0' for the module versions, as your patches do, I went with '1.0', as is standard in other modules in neighboring directories. The '1.0.0' numbering was something I had been using in versions 1..16 of this patch, and I only partially converted to '1.0' before posting v17, so sorry about that. The v18 patch also has some whitespace fixes.\n>\n> To address your comments about the noise in the test failures, v18 modifies the clone_tests.pl script to do a little more work translating the expected output to expect the module's AM name (\"xash\", \"xtree\", \"treeb\", or whatnot) beyond what that script did in v17.\n\nThanks! 
I see fewer failures now.\n\nThere are a few occurrences of the following error when I run \"make\ncheck\" in the xtree module:\n\n+ERROR: bogus RowCompare index qualification\n\nI needed to specify `amroutine->amcancrosscompare = true;` in xtree.c\nto eliminate them, as below:\n\ndiff --git a/src/test/modules/xtree/access/xtree.c\nb/src/test/modules/xtree/access/xtree.c\nindex bd472edb04..960966801d 100644\n--- a/src/test/modules/xtree/access/xtree.c\n+++ b/src/test/modules/xtree/access/xtree.c\n@@ -33,6 +33,7 @@ xtree_indexam_handler(PG_FUNCTION_ARGS)\n amroutine->amoptsprocnum = BTOPTIONS_PROC;\n amroutine->amcanorder = true;\n amroutine->amcanhash = false;\n+ amroutine->amcancrosscompare = true;\n amroutine->amcanorderbyop = false;\n amroutine->amcanbackward = true;\n amroutine->amcanunique = true;\n\nAfter adding that, the regression.diffs for the xtree and treeb\nmodules are identical in content, with only plan diffs remaining. I\nthink this change should be either part of\nv18-0008-Generalize-hash-and-ordering-support-in-amapi.patch or a\nseparate commit, similar to\nv18-0009-Adjust-treeb-to-use-amcanhash-and-amcancrosscomp.patch.\n\nBest,\nAlex\n\n\n", "msg_date": "Wed, 4 Sep 2024 07:15:06 -0700", "msg_from": "Alexandra Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "On 03.09.24 19:26, Mark Dilger wrote:\n>> On Sep 3, 2024, at 9:52 AM, Peter Eisentraut <[email protected]> wrote:\n>>\n>> Here is a cleaned-up version of the v17-0004 patch. I have applied the renaming discussed above. I have also made a wrapper function btgettreeheight() that calls _bt_getrootheight(). That way, if we ever want to change the API, we don't have to change _bt_getrootheight(), or disentangle it then. I've also added documentation and put in a source code comment that the API could be expanded for additional uses.\n> \n> Ok.\n> \n>> Also, I have removed the addition to the IndexOptInfo struct; that didn't seem necessary.\n> \n> Good catch. I agree with the change.\n\nI have committed this patch. (It needed another small change to silence \na compiler warning about an uninitialized variable.)\n\n\n\n", "msg_date": "Tue, 10 Sep 2024 10:10:31 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "Next, I have reviewed patches\n\nv17-0010-Track-sort-direction-in-SortGroupClause.patch\nv17-0011-Track-scan-reversals-in-MergeJoin.patch\n\nBoth of these seem ok and sensible to me.\n\nThey take the concept of the \"reverse\" flag that already exists in the \naffected code and just apply it more consistently throughout the various \ncode layers, instead of relying on strategy numbers as intermediate \nstorage. 
This is both helpful for your ultimate goal in this patch \nseries, and it also makes the affected code areas simpler and more \nconsistent and robust.\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 10:50:32 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index AM API cleanup" }, { "msg_contents": "\n\n> On Sep 24, 2024, at 10:50 AM, Peter Eisentraut <[email protected]> wrote:\n> \n> Next, I have reviewed patches\n> \n> v17-0010-Track-sort-direction-in-SortGroupClause.patch\n> v17-0011-Track-scan-reversals-in-MergeJoin.patch\n> \n> Both of these seem ok and sensible to me.\n> \n> They take the concept of the \"reverse\" flag that already exists in the affected code and just apply it more consistently throughout the various code layers, instead of relying on strategy numbers as intermediate storage. This is both helpful for your ultimate goal in this patch series, and it also makes the affected code areas simpler and more consistent and robust.\n> \n\nThanks for the review!\n\nYes, I found the existing use of a btree strategy number rather than a boolean \"reverse\" flag made using the code from other index AMs needlessly harder. I am glad you see it the same way.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 11:09:41 +0200", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index AM API cleanup" } ]
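For readers following along, the callback shape discussed above boils down to something like this sketch (names follow the discussion; the committed code may differ in detail, and the registration line is shown only as a comment):

#include "postgres.h"

#include "access/nbtree.h"
#include "utils/rel.h"

/*
 * Sketch: expose btree's tree height through the index AM API instead of
 * having the planner call _bt_getrootheight() directly.
 */
int
btgettreeheight(Relation rel)
{
	return _bt_getrootheight(rel);
}

/*
 * In btree's handler function this would then be registered as:
 *     amroutine->amgettreeheight = btgettreeheight;
 * so that other index AMs can supply their own estimate (or none at all).
 */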
[ { "msg_contents": "\nHello,\n\nSometimes user want to measure the server's IO performance during some\ntroubleshooting *without* the access to postgresql server. Is there any\ngood way to measure it in such sistuation?\n\nMeasuring it with SELECT .. FROM t WHERE xxx, both shared_buffer in PG\nand file system buffer bothers. pg_read_binary_file is better, but file\nsystem cache still there. should we expose a direct-io option for\npg_read_binary_file? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:18:56 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Measure the servers's IO performance" } ]
[ { "msg_contents": "I have had hard times understanding what RIR rules are while reading\nsome threads in pgsql-hackers. One example is 'Virtual generated\ncolumns'. I was trying to grep 'RIR rules', when I just started to\nread kernel sources. And I only today I finally bumped into this note\nin rewriteHandler.c\n\n```\n* NOTES\n* Some of the terms used in this file are of historic nature: \"retrieve\"\n* was the PostQUEL keyword for what today is SELECT. \"RIR\" stands for\n* \"Retrieve-Instead-Retrieve\", that is an ON SELECT DO INSTEAD SELECT rule\n* (which has to be unconditional and where only one rule can exist on each\n* relation).\n*\n\n```\n\nMaybe I'm really bad at searching things and nobody before had\nproblems understanding what RIR stands for. If not, should we enhance\ndocumentation in some way? If yes, what is the proper place?\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:51:10 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "Better documentation for RIR term?" } ]
[ { "msg_contents": "Hi!\n\nI have an instance that started to consistently crash with segfault or\nbus error and most of the generated coredumps had corrupted stacks.\nSome salvageable frames showed the error happening within\nExecRunCompiledExpr. Sure enough, running the query with jit disabled\nstopped the crashes. The issue happens with the following setup:\n\nUbuntu jammy on arm64, 30G\npostgresql-14 14.12-1.pgdg22.04+1\nlibllvm15 1:15.0.7-0ubuntu0.22.04.3\n\nI was able to isolate the impacted database the db (pg_dump of the\ntable was not enough, a base backup had to be used) and reproduce the\nissue on a debug build of PostgresSQL. This time, there's no crash but\nit was stuck in an infinite loop within jit tuple deforming:\n\n#0 0x0000ec53660aa14c in deform_0_1 ()\n#1 0x0000ec53660aa064 in evalexpr_0_0 ()\n#2 0x0000ab8f9b322948 in ExecEvalExprSwitchContext\n(isNull=0xfffff47c3c87, econtext=0xab8fd0f13878, state=0xab8fd0f13c50)\nat executor/./build/../src/include/executor/executor.h:342\n#3 ExecProject (projInfo=0xab8fd0f13c48) at\nexecutor/./build/../src/include/executor/executor.h:376\n\nLooking at the generated assembly, the infinite loop happens between\ndeform_0_1+140 and deform_0_1+188\n\n// Store address page in x11 register\n0xec53660aa130 <deform_0_1+132> adrp x11, 0xec53fd308000\n// Start of the infinite loop\n0xec53660aa138 <deform_0_1+140> adr x8, 0xec53660aa138 <deform_0_1+140>\n// Load the content of 0xec53fd308000[x12] in x10, x12 was 0 at that time\n0xec53660aa13c <deform_0_1+144> ldrsw x10, [x11, x12, lsl #2]\n// Add the loaded offset to x8\n0xec53660aa140 <deform_0_1+148> add x8, x8, x10\n...\n// Branch to address in x8. Since x10 was 0, x8 has the value\ndeform_0_1+140, creating the infinite loop\n0xec53660aa168 <deform_0_1+188> br x8\n\nLooking at the content of 0xec53fd308000, We only see 0 values stored\nat the address.\n\nx/6 0xec53fd308000\n0xec53fd308000: 0x00000000 0x00000000 0x00000000 0x00000000\n0xec53fd308010: 0x00000000 0x00000000\n\nThe assembly matches the code for the find_start switch case in\nllvmjit_deform[1]. The content at the address 0xec53fd308000 should\ncontain the offset table from the PC to branch to the correct\nattcheckattnoblocks block. As a comparison, if I execute a query not\nimpacted by the issue (the size of the jit compiled module seems to be\na factor), I can see that the offset table was correctly filled.\n\nx/6 0xec55fd30700\n0xec55fd307000: 0x00000060 0x00000098 0x000000e8 0x00000170\n0xec55fd307010: 0x0000022c 0x000002e8\n\nI was suspecting something was erasing the content of the offset table\nso I've checked with rr. However, it was only initialized and nothing\nwas written at this memory address. I was starting to suspect a\npossible LLVM issue and ran the query against a debug build of\nllvm_jit. 
It immediately triggered the following assertion[2]:\n\nvoid llvm::RuntimeDyldELF::resolveAArch64Relocation(const\nllvm::SectionEntry &, uint64_t, uint64_t, uint32_t, int64_t):\nAssertion `isInt<33>(Result) && \"overflow check failed for\nrelocation\"' failed.\n\nThis happens when LLVM is resolving relocations.\n\n#5 __GI___assert_fail (assertion=0xf693f214771a \"isInt<33>(Result) &&\n\\\"overflow check failed for relocation\\\"\", file=0xf693f2147269\n\"/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp\",\nline=507, function=0xf693f214754f \"void\nllvm::RuntimeDyldELF::resolveAArch64Relocation(const\nllvm::SectionEntry &, uint64_t, uint64_t, uint32_t, int64_t)\") at\n./assert/assert.c:101\n#6 llvm::RuntimeDyldELF::resolveAArch64Relocation () at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp:507\n#7 llvm::RuntimeDyldELF::resolveRelocation () at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp:1044\n#8 llvm::RuntimeDyldELF::resolveRelocation () at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp:1026\n#9 llvm::RuntimeDyldImpl::resolveRelocationList () at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp:1112\n#10 llvm::RuntimeDyldImpl::resolveLocalRelocations () at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp:157\n#11 llvm::RuntimeDyldImpl::finalizeAsync() at\n/var/lib/postgresql/llvm-project/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp:1247\n\nDuring the assertion failure, I have the following values:\nValue: 0xfbc84fab9000\nFinalAddress: 0xfbc5b9cea12c\nAddend: 0x0\nResult: 0x295dcf000\n\nThe result is indeed greater than an int32, triggering the assert.\nLooking at the sections created by LLVM in allocateSection[3], we have\n3 sections created:\n.text {Address = 0xfbc5b9cea000, AllocatedSize = 90112}\n.rodata {Address = 0xfbc84fab9000, AllocatedSize = 4096}\n.eh_frame {Address = 0xfbc84fab7000, AllocatedSize = 8192}\n\nWhen resolving relocation, the difference between the rodata section\nand the PC is computed and stored in the ADRP instruction. However,\nwhen a new section is allocated, LLVM will request a new memory block\nfrom the memory allocator[4]. The MemGroup.Near is passed as the start\nhint of mmap but that's only a hint and the kernel doesn't provide any\nguarantee that the new allocated block will be near. With the impacted\nquery, there are more than 10GB of gap between the .text section and\nthe .rodata section, making it impossible for the code in the .text\nsection to correctly fetch data from the .rodata section as the\naddress in ADRP is limited to a +/-4GB range.\n\nThere are mentions about this in the ABI that the GOT section should\nbe within 4GB from the text section[5]. Though in this case, there's\nno GOT section as the offsets are stored in the .rodata section but\nthe constraint is going to be similar. This is a known LLVM issue[6]\nthat impacted Impala, Numba and Julia. There's an open PR[7] to fix\nthe issue by allocating all sections as a single memory block,\navoiding the gaps between sections. There's also a related discussion\non this on llvm-rtdyld discourse[8].\n\nA possible mitigation is to switch from RuntimeDyld to JITLinking but\nthis requires at least LLVM15 as LLVM14 doesn't have any significant\nrelocation support for aarch64[9]. 
I did test using JITLinking on my\nimpacted db and it seems to fix the issue. JITLinking has no exposed C\ninterface though so it requires additional wrapping.\n\nI don't necessarily have a good answer for this issue. I've tried to\ntweak relocation settings or the jit code to avoid relocation without\ntoo much success. Ideally, the llvm fix will be merged and backported\nin llvm but the PR has been open for some time now. I've seen multiple\nsegfault reports that look similar to this issue (example: [10], [11])\nbut I don't think it was linked to the LLVM bug so I figured I would\nat least share my findings.\n\n[1] https://github.com/postgres/postgres/blob/REL_14_STABLE/src/backend/jit/llvm/llvmjit_deform.c#L364-L382\n[2] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp#L501-L513\n[3] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/ExecutionEngine/SectionMemoryManager.cpp#L41C32-L41C47\n[4] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/ExecutionEngine/SectionMemoryManager.cpp#L94-L110\n[5] https://github.com/ARM-software/abi-aa/blob/main/sysvabi64/sysvabi64.rst#7code-models\n[6] https://github.com/llvm/llvm-project/issues/71963\n[7] https://github.com/llvm/llvm-project/pull/71968\n[8] https://discourse.llvm.org/t/llvm-rtdyld-aarch64-abi-relocation-restrictions/74616\n[9] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/ExecutionEngine/JITLink/ELF_aarch64.cpp#L75-L84\n[10] https://www.postgresql.org/message-id/flat/CABa%2BnRvwZy_5t1QF9NJNGwAf03tv_PO_Sg1FsN1%2B-3Odb1XgBA%40mail.gmail.com\n[11] https://www.postgresql.org/message-id/flat/CADAf1kavcN-kY%3DvEm3MYxhUa%2BrtGFs7tym5d7Ee6Ni2cwwxGqQ%40mail.gmail.com\n\nRegards,\nAnthonin Bonnefoy\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:21:39 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Thu, Aug 22, 2024 at 7:22 PM Anthonin Bonnefoy\n<[email protected]> wrote:\n> Ideally, the llvm fix will be merged and backported\n> in llvm but the PR has been open for some time now.\n\nI fear that back-porting, for the LLVM project, would mean \"we fix it\nin main/20.x, and also back-port it to 19.x\". Do distros back-port\nfurther?\n\nNice detective work!\n\nThe JITLINK change sounds interesting, and like something we need to\ndo sooner or later.\n\n\n", "msg_date": "Thu, 22 Aug 2024 22:32:41 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Thu, Aug 22, 2024 at 12:33 PM Thomas Munro <[email protected]> wrote:\n> I fear that back-porting, for the LLVM project, would mean \"we fix it\n> in main/20.x, and also back-port it to 19.x\". Do distros back-port\n> further?\n\nThat's also my fear, I'm not familiar with distros back-port policy\nbut eyeballing ubuntu package changelog[1], it seems to be mostly\nbuild fixes.\n\nGiven that there's no visible way to fix the relocation issue, I\nwonder if jit shouldn't be disabled for arm64 until either the\nRuntimeDyld fix is merged or the switch to JITLink is done. 
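(On an affected installation, the immediate stopgap is on the configuration side, e.g. something along the lines of:\n\nALTER SYSTEM SET jit = off;\nSELECT pg_reload_conf();\n\nor the narrower jit_tuple_deforming = off.) 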
Disabling\njit tuple deforming may be enough but I'm not confident the issue\nwon't happen in a different part.\n\n[1] https://launchpad.net/ubuntu/+source/llvm-toolchain-16/+changelog\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:22:25 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Sat, Aug 24, 2024 at 12:22 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n> On Thu, Aug 22, 2024 at 12:33 PM Thomas Munro <[email protected]> wrote:\n> > I fear that back-porting, for the LLVM project, would mean \"we fix it\n> > in main/20.x, and also back-port it to 19.x\". Do distros back-port\n> > further?\n>\n> That's also my fear, I'm not familiar with distros back-port policy\n> but eyeballing ubuntu package changelog[1], it seems to be mostly\n> build fixes.\n>\n> Given that there's no visible way to fix the relocation issue, I\n> wonder if jit shouldn't be disabled for arm64 until either the\n> RuntimeDyld fix is merged or the switch to JITLink is done. Disabling\n> jit tuple deforming may be enough but I'm not confident the issue\n> won't happen in a different part.\n\nWe've experienced something a little similar before: In the early days\nof PostgreSQL LLVM, it didn't work at all on ARM or POWER. We sent a\ntrivial fix[1] upstream that landed in LLVM 7; since it was a small\nand obvious problem and it took a long time for some distros to ship\nLLVM 7, we even contemplated hot-patching that LLVM function with our\nown copy (but, ugh, only for about 7 nanoseconds). That was before we\nturned JIT on by default, and was also easier to deal with because it\nwas an obvious consistent failure in basic tests, so packagers\nprobably just disabled the build option on those architectures. IIUC\nthis one is a random and rare crash depending on malloc() and perhaps\nalso the working size of your virtual memory dart board. (Annoyingly,\nI had tried to reproduce this quite a few times on small ARM systems\nwhen earlier reports came in, d'oh!).\n\nThis degree of support window mismatch is probably what triggered RHEL\nto develop their new rolling LLVM version policy. Unfortunately, it's\nthe other distros that tell *us* which versions to support, and not\nthe reverse (for example CF #4920 is about to drop support for LLVM <\n14, but that will only be for PostgreSQL 18+).\n\nUltimately, if it doesn't work, and doesn't get fixed, it's hard for\nus to do much about it. But hmm, this is probably madness... I wonder\nif it would be feasible to detect address span overflow ourselves at a\nuseful time, as a kind of band-aid defence...\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D39F_B3Ou8S3OrUw%2BhJEUP3p%3DwCu0ug-TTW67qKN53g3w%40mail.gmail.com\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:32:34 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Mon, Aug 26, 2024 at 4:33 AM Thomas Munro <[email protected]> wrote:\n> IIUC this one is a random and rare crash depending on malloc() and\n> perhaps also the working size of your virtual memory dart board.\n> (Annoyingly, I had tried to reproduce this quite a few times on small ARM\n> systems when earlier reports came in, d'oh!).\n\nallocateMappedMemory used when creating sections will eventually call\nmmap[1], not malloc. 
So the amount of shared memory configured may be\na factor in triggering the issue.\n\nMy first attempts to reproduce the issue from scratch weren't\nsuccessful either. However, trying again with different values of\nshared_buffers, I've managed to trigger the issue somewhat reliably.\n\nOn a clean Ubuntu jammy, I've compiled the current PostgreSQL\nREL_14_STABLE (6bc2bfc3) with the following options:\nCLANG=clang-14 ../configure --enable-cassert --enable-debug --prefix\n~/.local/ --with-llvm\n\nSet \"shared_buffers = '4GB'\" in the configuration. More may be needed\nbut 4GB was enough for me.\n\nCreate a table with multiple partitions with pgbench. The goal is to\nhave a jit module big enough to trigger the issue.\npgbench -i --partitions=64\n\nThen run the following query with jit forcefully enabled:\npsql options=-cjit_above_cost=0 -c 'SELECT count(bid) from pgbench_accounts;'\n\nIf the issue was successfully triggered, it should segfault or be\nstuck in an infinite loop.\n\n> Ultimately, if it doesn't work, and doesn't get fixed, it's hard for\n> us to do much about it. But hmm, this is probably madness... I wonder\n> if it would be feasible to detect address span overflow ourselves at a\n> useful time, as a kind of band-aid defence...\n\nThere's a possible alternative, but it's definitely in the same\ncategory as the hot-patching idea. llvmjit uses\nLLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager to\ncreate the ObjectLinkingLayer and it will be created with the default\nSectionMemoryManager[2]. It should be possible to provide a modified\nSectionMemoryManager with the change to allocate sections in a single\nblock and it could be restricted to arm64 architecture. A part of me\ntells me this is probably a bad idea but on the other hand, LLVM\nprovides this way to plug a custom allocator and it would fix the\nissue...\n\n[1] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/Support/Unix/Memory.inc#L115-L117\n[2] https://github.com/llvm/llvm-project/blob/release/14.x/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp#L967-L973\n\n\n", "msg_date": "Mon, 26 Aug 2024 16:16:41 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Tue, Aug 27, 2024 at 2:16 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n> There's a possible alternative, but it's definitely in the same\n> category as the hot-patching idea. llvmjit uses\n> LLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager to\n> create the ObjectLinkingLayer and it will be created with the default\n> SectionMemoryManager[2]. It should be possible to provide a modified\n> SectionMemoryManager with the change to allocate sections in a single\n> block and it could be restricted to arm64 architecture. A part of me\n> tells me this is probably a bad idea but on the other hand, LLVM\n> provides this way to plug a custom allocator and it would fix the\n> issue...\n\nInteresting. Here is a quick hack to experiment with injecting a new\nmemory manager. 
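Roughly this shape, if you want to picture it (untested sketch from memory; the attached patch is the real thing):\n\nclass LoggingSectionMemoryManager : public llvm::SectionMemoryManager\n{\n    uint8_t *allocateCodeSection(uintptr_t size, unsigned alignment,\n                                 unsigned id, llvm::StringRef name) override\n    {\n        uint8_t *p = SectionMemoryManager::allocateCodeSection(size, alignment, id, name);\n        /* log name, p and size here, e.g. with elog(DEBUG1, ...) */\n        return p;\n    }\n\n    /* same idea for allocateDataSection() */\n};\n\nwith an instance of that handed to the ObjectLinkingLayer instead of the stock SectionMemoryManager. 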
This one just wraps the normal one and logs the\naddresses it allocates, but from here, you're right, we could try to\nconstraint its address range somehow (or perhaps just check its range\nand fail gracefully).", "msg_date": "Tue, 27 Aug 2024 09:06:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "Here is an experimental attempt to steal the SectorMemoryManager from\nhttps://github.com/llvm/llvm-project/pull/71968, rename it to\nSafeSectorMemoryManager, and inject it as shown in the previous patch.\nAnother approach might be to try to make a new class that derives from\nSectorMemoryManager and adjusts minimal bits and pieces, but I figured\nit would be easier to diff against their code if we take the whole\nfile. Hmm, I guess if \"diff\" convenience is the driving factor, it\nmight be better to use a different namespace instead of a different\nname...\n\nI am sure this requires changes for various LLVM versions. I tested\nit with LLVM 14 on a Mac where I've never managed to reproduce the\noriginal complaint, but ... ooooh, this might be exacerbated by ASLR,\nand macOS only has a small ALSR slide window (16M or 256M apparently,\naccording to me in another thread), so I'd probably have to interpose\nmy own mmap() to choose some more interesting addresses, or run some\nother OS, but that's quite enough rabbit holes for one morning.", "msg_date": "Tue, 27 Aug 2024 11:32:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Tue, Aug 27, 2024 at 11:32 AM Thomas Munro <[email protected]> wrote:\n> SectorMemoryManager\n\nErm, \"Section\". (I was working on some file system stuff at the\nweekend, and apparently my fingers now auto-complete \"sector\".)\n\n\n", "msg_date": "Tue, 27 Aug 2024 11:34:46 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Tue, Aug 27, 2024 at 1:33 AM Thomas Munro <[email protected]> wrote:\n> I am sure this requires changes for various LLVM versions. I tested\n> it with LLVM 14 on a Mac where I've never managed to reproduce the\n> original complaint, but ... ooooh, this might be exacerbated by ASLR,\n> and macOS only has a small ALSR slide window (16M or 256M apparently,\n> according to me in another thread), so I'd probably have to interpose\n> my own mmap() to choose some more interesting addresses, or run some\n> other OS, but that's quite enough rabbit holes for one morning.\n\nI've tested the patch. I had to make sure the issue was triggered on\nmaster first. The issue didn't happen with 4GB shared_buffers and 64\npartitions. However, increasing to 6GB and 128 partitions triggered\nthe issue.\n\nThe architecture check in the patch was incorrect (__arch64__ instead\nof __aarch64__, glad to see I'm not the only one being confused with\naarch64 and arm64 :)) but once fixed, it worked and avoided the\nsegfault.\n\nI've run some additional tests to try to test different parameters:\n- I've tried disabling randomize_va_space, the issue still happened\neven with ASLR disabled.\n- I've tested different PG versions. With 14 and 15, 4GB and 64\npartitions were enough. Starting PG 16, I had to increase\nshared_buffers to 6GB and partitions to 128. 
I've been able to trigger\nthe issue on all versions from 14 to master (which was expected but I\nwanted confirmation)\n- I haven't been able to reproduce this on a macOS either. I've tried\nto remove MemGroup.Near hint so mmap addresses would be more random\nand played with different shared_buffers and partition values without\nsuccess\n\nI've modified the patch with 3 changes:\n- meson.build was using SectionMemoryManager.cpp file name, I've\nreplaced with SafeSectionMemoryManager.cpp\n- Use __aarch64__ instead of __arch64__\n- Moved the architecture switch to llvm_create_object_layer and go\nthrough the normal\nLLVMOrcCreateRTDyldObjectLinkingLayerWithSectionMemoryManager on non\narm64 architectures. There's no need to use the custom memory manager\nfor non arm64 so it looked better to avoid it entirely if there's no\nneed for the reserve allocation.", "msg_date": "Tue, 27 Aug 2024 11:24:20 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "Thanks! And that's great news. Do you want to report this experience\nto the PR, in support of committing it? That'd make it seem easier to\nconsider shipping a back-ported copy...\n\n\n", "msg_date": "Tue, 27 Aug 2024 22:03:31 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Tue, Aug 27, 2024 at 12:01 PM Thomas Munro <[email protected]> wrote:\n>\n> Thanks! And that's great news. Do you want to report this experience\n> to the PR, in support of committing it? That'd make it seem easier to\n> consider shipping a back-ported copy...\n\nYes, I will do that.\n\n\n", "msg_date": "Tue, 27 Aug 2024 14:07:02 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Wed, Aug 28, 2024 at 12:07 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n> On Tue, Aug 27, 2024 at 12:01 PM Thomas Munro <[email protected]> wrote:\n> > Thanks! And that's great news. Do you want to report this experience\n> > to the PR, in support of committing it? That'd make it seem easier to\n> > consider shipping a back-ported copy...\n>\n> Yes, I will do that.\n\nThanks. Here's a slightly tidied up version:\n\n1. I used namespace llvm::backport, instead of a different class\nname. That minimises the diff against their code.\n\n2. I tested against LLVM 10-18, and found that 10 and 11 lack some\nneeded symbols. So I just hid this code from them. Even though our\nstable branches support those and even older versions, I am not sure\nif it's worth trying to do something about that for EOL'd distros that\nno one has ever complained about. I am willing to try harder if\nsomeone thinks that's important...\n\nOne little problem I am aware of is that if you make an empty .o,\nmacOS's new linker issues a warning, but I think I could live with\nthat. I guess I could put a dummy symbol in there... FWIW those old\nLLVM versions spit out tons of other warnings from the headers on\nnewer compilers too, so *shrug*, don't use them? But then if this\ncode lands in LLVM 19 we'll also be hiding it for 19+ too.\n\nNext, I think we should wait to see if the LLVM project commits that\nPR, this so that we can sync with their 19.x stable branch, instead of\nusing code from a PR. 
Our next minor release is in November, so we\nhave some time. If they don't commit it, we can consider it anyway: I\nmean, it's crashing all over the place in production, and we see that\nother projects are shipping this code already.", "msg_date": "Wed, 28 Aug 2024 10:23:55 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "Slightly better version, which wraps the conditional code in #ifdef\nUSE_LLVM_BACKPORT_SECTION_MEMORY_MANAGER.", "msg_date": "Wed, 28 Aug 2024 12:36:56 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "On Wed, Aug 28, 2024 at 12:24 AM Thomas Munro <[email protected]> wrote:\n> 2. I tested against LLVM 10-18, and found that 10 and 11 lack some\n> needed symbols. So I just hid this code from them. Even though our\n> stable branches support those and even older versions, I am not sure\n> if it's worth trying to do something about that for EOL'd distros that\n> no one has ever complained about. I am willing to try harder if\n> someone thinks that's important...\n\nI would also assume that people using arm64 are more likely to use\nrecent versions than not.\n\nI've done some additional tests on different LLVM versions with both\nthe unpatched version (to make sure the crash was triggered) and the\npatched version. I'm joining the test scripts I've used as reference.\nThey target a kubernetes pod since it was the easiest way for me to\nget a test ubuntu Jammy:\n- setup_pod.sh: Install necessary packages, get multiple llvm\nversions, fetch and compile master and patched version of postgres on\ndifferent LLVM version\n- run_test.sh: go through all LLVM versions for both unpatched and\npatched postgres to run the test_script.sh\n- test_script.sh: ran inside the pod to setup the db with the\nnecessary tables and check if the crash happens\n\nThis generated the following output:\nTest unpatched version on LLVM 19, : Crash triggered\nTest unpatched version on LLVM 18, libLLVM-18.so.18.1: Crash triggered\nTest unpatched version on LLVM 17, libLLVM-17.so.1: Crash triggered\nTest unpatched version on LLVM 16, libLLVM-16.so.1: Crash triggered\nTest unpatched version on LLVM 15, libLLVM-15.so.1: Crash triggered\nTest unpatched version on LLVM 14, libLLVM-14.so.1: Crash triggered\nTest unpatched version on LLVM 13, libLLVM-13.so.1: Crash triggered\n\nTest patched version on LLVM 19, : Query ran successfully\nTest patched version on LLVM 18, libLLVM-18.so.18.1: Query ran successfully\nTest patched version on LLVM 17, libLLVM-17.so.1: Query ran successfully\nTest patched version on LLVM 16, libLLVM-16.so.1: Query ran successfully\nTest patched version on LLVM 15, libLLVM-15.so.1: Query ran successfully\nTest patched version on LLVM 14, libLLVM-14.so.1: Query ran successfully\nTest patched version on LLVM 13, libLLVM-13.so.1: Query ran successfully\n\nI try to print the libLLVM linked to llvm.jit in the output to double\ncheck whether I test on the correct version. The LLVM 19 package only\nprovides static libraries (probably because it's still a release\ncandidate?) so it shows as empty in the output. There was no LLVM 12\navailable when using the llvm.sh script so I couldn't test it. 
As for\nthe result, prepatch PG all crashed as expected while the patched\nversion was able to run the query successfully.\n\n> Next, I think we should wait to see if the LLVM project commits that\n> PR, this so that we can sync with their 19.x stable branch, instead of\n> using code from a PR. Our next minor release is in November, so we\n> have some time. If they don't commit it, we can consider it anyway: I\n> mean, it's crashing all over the place in production, and we see that\n> other projects are shipping this code already.\n\nThe PR[1] just received an approval and it sounds like they are ok to\neventually merge it.\n\n[1] https://github.com/llvm/llvm-project/pull/71968", "msg_date": "Thu, 29 Aug 2024 14:18:01 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" }, { "msg_contents": "I created a commitfest entry[1] to have the CI test the patch. There\nwas a failure in headerscheck and cpluspluscheck when the include of\nSectionMemoryManager.h is checked[2]\n\nIn file included from /usr/include/llvm/ADT/SmallVector.h:18,\nfrom /tmp/cirrus-ci-build/src/include/jit/SectionMemoryManager.h:23,\nfrom /tmp/headerscheck.4b1i5C/test.c:2:\n/usr/include/llvm/Support/type_traits.h:17:10: fatal error:\ntype_traits: No such file or directory\n 17 | #include <type_traits>\n\nSince the SmallVector.h include type_traits, this file can't be\ncompiled with a C compiler so I've just excluded it from headerscheck.\n\nLoosely related to headerscheck, running it locally was failing as it\ncouldn't find the <llvm-c/Core.h> file. This is because headerscheck\nexcept llvm include files to be in /usr/include and don't rely on\nllvm-config. I created a second patch to use the LLVM_CPPFLAGS as\nextra flags when testing the src/include/jit/* files.\n\nLastly, I've used www.github.com instead of github.com link to stop\nspamming the llvm-project's PR with reference to the commit every time\nit is pushed somewhere (which seems to be the unofficial hack[3]).\n\n[1] https://commitfest.postgresql.org/49/5220/\n[2] https://cirrus-ci.com/task/4646639124611072?logs=headers_headerscheck#L42-L46\n[3] https://github.com/orgs/community/discussions/23123#discussioncomment-3239240", "msg_date": "Fri, 30 Aug 2024 16:34:03 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Segfault in jit tuple deforming on arm64 due to LLVM issue" } ]
[ { "msg_contents": "I ran into a query plan where the Result node seems redundant to me:\n\ncreate table t (a int, b int, c int);\ninsert into t select i%10, i%10, i%10 from generate_series(1,100)i;\ncreate index on t (a, b);\nanalyze t;\n\nset enable_hashagg to off;\nset enable_seqscan to off;\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n QUERY PLAN\n---------------------------------------------------------\n Result\n Output: b, a\n -> Unique\n Output: a, b\n -> Index Only Scan using t_a_b_idx on public.t\n Output: a, b\n(6 rows)\n\nWhat I expect is that both the Scan node and the Unique node output\n'b, a', and we do not need an additional projection step, something\nlike:\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n QUERY PLAN\n---------------------------------------------------\n Unique\n Output: b, a\n -> Index Only Scan using t_a_b_idx on public.t\n Output: b, a\n(4 rows)\n\nI looked into this a little bit and found that in function\ncreate_ordered_paths, we decide whether a projection step is needed\nbased on a simple pointer comparison between sorted_path->pathtarget\nand final_target.\n\n /* Add projection step if needed */\n if (sorted_path->pathtarget != target)\n sorted_path = apply_projection_to_path(root, ordered_rel,\n sorted_path, target);\n\nThis does not seem right to me, as PathTargets are not canonical, so\nwe cannot guarantee that two identical PathTargets will have the same\npointer. Actually, for the query above, the two PathTargets are\nidentical but have different pointers.\n\nI wonder if we need to invent a function to compare two PathTargets.\nAlternatively, in this case, would it suffice to simply compare\nPathTarget.exprs?\n\nThanks\nRichard\n\n\n", "msg_date": "Thu, 22 Aug 2024 15:34:05 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Redundant Result node" }, { "msg_contents": "On Thu, 22 Aug 2024 at 19:34, Richard Guo <[email protected]> wrote:\n> /* Add projection step if needed */\n> if (sorted_path->pathtarget != target)\n> sorted_path = apply_projection_to_path(root, ordered_rel,\n> sorted_path, target);\n>\n> This does not seem right to me, as PathTargets are not canonical, so\n> we cannot guarantee that two identical PathTargets will have the same\n> pointer. Actually, for the query above, the two PathTargets are\n> identical but have different pointers.\n>\n> I wonder if we need to invent a function to compare two PathTargets.\n> Alternatively, in this case, would it suffice to simply compare\n> PathTarget.exprs?\n\nI think tlist.c would be a good home for such a function. 
If you go\nwith the function route, then it's easier to add optimisations such as\nchecking if the pointers are equal before going to the trouble of\nchecking if the exprs match.\n\nDavid\n\n\n", "msg_date": "Thu, 22 Aug 2024 23:15:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On 22.08.24 09:34, Richard Guo wrote:\n> I looked into this a little bit and found that in function\n> create_ordered_paths, we decide whether a projection step is needed\n> based on a simple pointer comparison between sorted_path->pathtarget\n> and final_target.\n> \n> /* Add projection step if needed */\n> if (sorted_path->pathtarget != target)\n> sorted_path = apply_projection_to_path(root, ordered_rel,\n> sorted_path, target);\n> \n> This does not seem right to me, as PathTargets are not canonical, so\n> we cannot guarantee that two identical PathTargets will have the same\n> pointer. Actually, for the query above, the two PathTargets are\n> identical but have different pointers.\n> \n> I wonder if we need to invent a function to compare two PathTargets.\n\nWouldn't the normal node equal() work?\n\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:32:43 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Thu, 22 Aug 2024 at 23:33, Peter Eisentraut <[email protected]> wrote:\n> > I wonder if we need to invent a function to compare two PathTargets.\n>\n> Wouldn't the normal node equal() work?\n\nIt might. I think has_volatile_expr might be missing a\npg_node_attr(equal_ignore).\n\nDavid\n\n\n", "msg_date": "Fri, 23 Aug 2024 00:02:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "Hi.\n\nEm qui., 22 de ago. de 2024 às 04:34, Richard Guo <[email protected]>\nescreveu:\n\n> I ran into a query plan where the Result node seems redundant to me:\n>\n> create table t (a int, b int, c int);\n> insert into t select i%10, i%10, i%10 from generate_series(1,100)i;\n> create index on t (a, b);\n> analyze t;\n>\n> set enable_hashagg to off;\n> set enable_seqscan to off;\n>\n> explain (verbose, costs off)\n> select distinct b, a from t order by a, b;\n> QUERY PLAN\n> ---------------------------------------------------------\n> Result\n> Output: b, a\n> -> Unique\n> Output: a, b\n> -> Index Only Scan using t_a_b_idx on public.t\n> Output: a, b\n> (6 rows)\n>\n> What I expect is that both the Scan node and the Unique node output\n> 'b, a', and we do not need an additional projection step, something\n> like:\n>\n> explain (verbose, costs off)\n> select distinct b, a from t order by a, b;\n> QUERY PLAN\n> ---------------------------------------------------\n> Unique\n> Output: b, a\n> -> Index Only Scan using t_a_b_idx on public.t\n> Output: b, a\n> (4 rows)\n>\n> I looked into this a little bit and found that in function\n> create_ordered_paths, we decide whether a projection step is needed\n> based on a simple pointer comparison between sorted_path->pathtarget\n> and final_target.\n>\n> /* Add projection step if needed */\n> if (sorted_path->pathtarget != target)\n> sorted_path = apply_projection_to_path(root, ordered_rel,\n> sorted_path, target);\n>\n> This does not seem right to me, as PathTargets are not canonical, so\n> we cannot guarantee that two identical PathTargets will have the same\n> pointer. 
Actually, for the query above, the two PathTargets are\n> identical but have different pointers.\n>\nCould memcmp solve this?\n\nWith patch attached, using memcmp to compare the pointers.\n\nselect distinct b, a from t order by a, b;\n QUERY PLAN\n----------------------------------\n Sort\n Output: b, a\n Sort Key: t.a, t.b\n -> HashAggregate\n Output: b, a\n Group Key: t.a, t.b\n -> Seq Scan on public.t\n Output: a, b, c\n(8 rows)\n\nattached patch for consideration.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 22 Aug 2024 10:02:20 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Thu, 22 Aug 2024 at 15:02, Ranier Vilela <[email protected]> wrote:\n\n> Hi.\n>\n> Em qui., 22 de ago. de 2024 às 04:34, Richard Guo <[email protected]>\n> escreveu:\n>\n>> I ran into a query plan where the Result node seems redundant to me:\n>>\n>> create table t (a int, b int, c int);\n>> insert into t select i%10, i%10, i%10 from generate_series(1,100)i;\n>> create index on t (a, b);\n>> analyze t;\n>>\n>> set enable_hashagg to off;\n>> set enable_seqscan to off;\n>>\n>> explain (verbose, costs off)\n>> select distinct b, a from t order by a, b;\n>> QUERY PLAN\n>> ---------------------------------------------------------\n>> Result\n>> Output: b, a\n>> -> Unique\n>> Output: a, b\n>> -> Index Only Scan using t_a_b_idx on public.t\n>> Output: a, b\n>> (6 rows)\n>>\n>> What I expect is that both the Scan node and the Unique node output\n>> 'b, a', and we do not need an additional projection step, something\n>> like:\n>>\n>> explain (verbose, costs off)\n>> select distinct b, a from t order by a, b;\n>> QUERY PLAN\n>> ---------------------------------------------------\n>> Unique\n>> Output: b, a\n>> -> Index Only Scan using t_a_b_idx on public.t\n>> Output: b, a\n>> (4 rows)\n>>\n>> I looked into this a little bit and found that in function\n>> create_ordered_paths, we decide whether a projection step is needed\n>> based on a simple pointer comparison between sorted_path->pathtarget\n>> and final_target.\n>>\n>> /* Add projection step if needed */\n>> if (sorted_path->pathtarget != target)\n>> sorted_path = apply_projection_to_path(root, ordered_rel,\n>> sorted_path, target);\n>>\n>> This does not seem right to me, as PathTargets are not canonical, so\n>> we cannot guarantee that two identical PathTargets will have the same\n>> pointer. 
Actually, for the query above, the two PathTargets are\n>> identical but have different pointers.\n>>\n> Could memcmp solve this?\n>\n> With patch attached, using memcmp to compare the pointers.\n>\n> select distinct b, a from t order by a, b;\n> QUERY PLAN\n> ----------------------------------\n> Sort\n> Output: b, a\n> Sort Key: t.a, t.b\n> -> HashAggregate\n> Output: b, a\n> Group Key: t.a, t.b\n> -> Seq Scan on public.t\n> Output: a, b, c\n> (8 rows)\n>\n> attached patch for consideration.\n>\n> best regards,\n> Ranier Vilela\n>\n\n+1 for the idea of removing this redundant node.\nI had a look in this patch, and I was wondering if we still need\nsorted_path->pathtarget != target in the condition.\n\nApart from that,\n- if (sorted_path->pathtarget != target)\n+ if (sorted_path->pathtarget != target &&\n+ memcmp(sorted_path->pathtarget, target,\nsizeof(PathTarget)) != 0)\nAn extra space is there, please fix it.\n\nSome regression tests should be added for this.\n\n-- \nRegards,\nRafia Sabih\n\nOn Thu, 22 Aug 2024 at 15:02, Ranier Vilela <[email protected]> wrote:Hi.Em qui., 22 de ago. de 2024 às 04:34, Richard Guo <[email protected]> escreveu:I ran into a query plan where the Result node seems redundant to me:\n\ncreate table t (a int, b int, c int);\ninsert into t select i%10, i%10, i%10 from generate_series(1,100)i;\ncreate index on t (a, b);\nanalyze t;\n\nset enable_hashagg to off;\nset enable_seqscan to off;\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n                       QUERY PLAN\n---------------------------------------------------------\n Result\n   Output: b, a\n   ->  Unique\n         Output: a, b\n         ->  Index Only Scan using t_a_b_idx on public.t\n               Output: a, b\n(6 rows)\n\nWhat I expect is that both the Scan node and the Unique node output\n'b, a', and we do not need an additional projection step, something\nlike:\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n                    QUERY PLAN\n---------------------------------------------------\n Unique\n   Output: b, a\n   ->  Index Only Scan using t_a_b_idx on public.t\n         Output: b, a\n(4 rows)\n\nI looked into this a little bit and found that in function\ncreate_ordered_paths, we decide whether a projection step is needed\nbased on a simple pointer comparison between sorted_path->pathtarget\nand final_target.\n\n    /* Add projection step if needed */\n    if (sorted_path->pathtarget != target)\n        sorted_path = apply_projection_to_path(root, ordered_rel,\n                                               sorted_path, target);\n\nThis does not seem right to me, as PathTargets are not canonical, so\nwe cannot guarantee that two identical PathTargets will have the same\npointer.  
Actually, for the query above, the two PathTargets are\nidentical but have different pointers.Could memcmp solve this?With patch attached, using memcmp to compare the pointers.select distinct b, a from t order by a, b;            QUERY PLAN---------------------------------- Sort   Output: b, a   Sort Key: t.a, t.b   ->  HashAggregate         Output: b, a         Group Key: t.a, t.b         ->  Seq Scan on public.t               Output: a, b, c(8 rows)attached patch for consideration.best regards,Ranier Vilela +1 for the idea of removing this redundant node.I had a look in this patch, and I was wondering if we still need sorted_path->pathtarget != target in the condition.Apart from that,-               if (sorted_path->pathtarget != target)+               if (sorted_path->pathtarget != target &&+                   memcmp(sorted_path->pathtarget, target, sizeof(PathTarget)) != 0)An extra space is there, please fix it.Some regression tests should be added for this.-- Regards,Rafia Sabih", "msg_date": "Thu, 22 Aug 2024 22:17:13 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:03 PM David Rowley <[email protected]> wrote:\n> On Thu, 22 Aug 2024 at 23:33, Peter Eisentraut <[email protected]> wrote:\n> > > I wonder if we need to invent a function to compare two PathTargets.\n> >\n> > Wouldn't the normal node equal() work?\n>\n> It might. I think has_volatile_expr might be missing a\n> pg_node_attr(equal_ignore).\n\nYeah, maybe we can make the node equal() work for PathTarget. We'll\nneed to remove the no_equal attribute in PathTarget. I think we also\nneed to specify pg_node_attr(equal_ignore) for PathTarget.cost.\n\nBTW, I'm wondering why we specify no_copy for PathTarget, while\nmeanwhile implementing a separate function copy_pathtarget() in\ntlist.c to copy a PathTarget. Can't we support copyObject() for\nPathTarget?\n\nAlso the pg_node_attr(array_size(exprs)) attribute for\nPathTarget.sortgrouprefs does not seem right to me. In a lot of cases\nsortgrouprefs would just be NULL. Usually it is valid only for\nupper-level Paths. Hmm, maybe this is why we do not support\ncopyObject() for PathTarget?\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:31:14 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Thu, Aug 22, 2024 at 3:34 PM Richard Guo <[email protected]> wrote:\n> /* Add projection step if needed */\n> if (sorted_path->pathtarget != target)\n> sorted_path = apply_projection_to_path(root, ordered_rel,\n> sorted_path, target);\n>\n> This does not seem right to me, as PathTargets are not canonical, so\n> we cannot guarantee that two identical PathTargets will have the same\n> pointer. Actually, for the query above, the two PathTargets are\n> identical but have different pointers.\n\nFWIW, the redundant-projection issue is more common in practice than I\ninitially thought. For a simple query as below:\n\nexplain (verbose, costs off) select a from t order by 1;\n QUERY PLAN\n----------------------------\n Sort\n Output: a\n Sort Key: t.a\n -> Seq Scan on public.t\n Output: a\n(5 rows)\n\n... we'll always make a separate ProjectionPath on top of the SortPath\nin create_ordered_paths. It’s only when we create the plan node for\nthe projection step in createplan.c that we realize a separate Result\nis unnecessary. 
This is not efficient.\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 23 Aug 2024 11:08:47 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> ... we'll always make a separate ProjectionPath on top of the SortPath\n> in create_ordered_paths. It’s only when we create the plan node for\n> the projection step in createplan.c that we realize a separate Result\n> is unnecessary. This is not efficient.\n\nI'm not sure you're considering \"efficiency\" in the right light.\nIn my mind, any time we can postpone work from path-creation time\nto plan-creation time, we're probably winning because we create\nmany more paths than plans. Perhaps that's wrong in this case,\nbut it's not anywhere near as obvious as you suggest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 23:19:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Fri, Aug 23, 2024 at 11:19 AM Tom Lane <[email protected]> wrote:\n> Richard Guo <[email protected]> writes:\n> > ... we'll always make a separate ProjectionPath on top of the SortPath\n> > in create_ordered_paths. It’s only when we create the plan node for\n> > the projection step in createplan.c that we realize a separate Result\n> > is unnecessary. This is not efficient.\n>\n> I'm not sure you're considering \"efficiency\" in the right light.\n> In my mind, any time we can postpone work from path-creation time\n> to plan-creation time, we're probably winning because we create\n> many more paths than plans. Perhaps that's wrong in this case,\n> but it's not anywhere near as obvious as you suggest.\n\nI agree that it’s always desirable to postpone work from path-creation\ntime to plan-creation time. In this case, however, it’s a little\ndifferent. The projection step could actually be avoided from the\nstart if we perform the correct check in create_ordered_paths.\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 23 Aug 2024 11:48:37 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Fri, Aug 23, 2024 at 11:19 AM Tom Lane <[email protected]> wrote:\n>> I'm not sure you're considering \"efficiency\" in the right light.\n\n> I agree that it’s always desirable to postpone work from path-creation\n> time to plan-creation time. In this case, however, it’s a little\n> different. The projection step could actually be avoided from the\n> start if we perform the correct check in create_ordered_paths.\n\nWell, the question is how expensive is the \"correct check\" compared\nto what we're doing now. It might be cheaper than creating an extra\nlevel of path node, or it might not. An important factor here is\nthat we'd pay the extra cost of a more complex check every time,\nwhether it avoids creation of an extra path node or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 23:56:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Fri, Aug 23, 2024 at 11:56 AM Tom Lane <[email protected]> wrote:\n> Richard Guo <[email protected]> writes:\n> > I agree that it’s always desirable to postpone work from path-creation\n> > time to plan-creation time. In this case, however, it’s a little\n> > different. 
The projection step could actually be avoided from the\n> > start if we perform the correct check in create_ordered_paths.\n>\n> Well, the question is how expensive is the \"correct check\" compared\n> to what we're doing now. It might be cheaper than creating an extra\n> level of path node, or it might not. An important factor here is\n> that we'd pay the extra cost of a more complex check every time,\n> whether it avoids creation of an extra path node or not.\n\nFair point. After looking at the code for a while, I believe it is\nsufficient to compare PathTarget.exprs after we've checked that the\ntwo targets have different pointers.\n\nThe sorted_path here should have projected the correct target required\nby the preceding steps of sort, i.e. sort_input_target. We need to\ndetermine whether this target matches final_target. If this target is\nthe same pointer as sort_input_target, a simple pointer comparison, as\nthe current code does, is sufficient, because if no post-sort\nprojection is needed, sort_input_target will always be equal to\nfinal_target.\n\nHowever, sorted_path's target might not be the same pointer as\nsort_input_target, because in apply_scanjoin_target_to_paths, if the\ntarget to be applied has the same expressions as the existing\nreltarget, we only inject the sortgroupref info into the existing\npathtargets, rather than create projection paths. As a result,\npointer comparison in create_ordered_paths is not reliable.\n\nInstead, we can compare PathTarget.exprs to determine whether a\nprojection step is needed. If the expressions match, we can be\nconfident that a post-sort projection is not required.\n\nIf this conclusion is correct, I think the extra cost of comparing\nPathTarget.exprs only when pointer comparison fails should be\nacceptable. We have already done this for\napply_scanjoin_target_to_paths, and I think the rationale there\napplies here as well:\n\n* ... By avoiding the creation of\n* projection paths we save effort both immediately and at plan creation time.\n\nBesides, it can help avoid a Result node in the final plan in some\ncases, as shown by my initial example.\n\nHence, I propose the attached fix. There are two ensuing plan diffs\nin the regression tests, but they look reasonable and are exactly what\nwe are fixing here.\n\nThanks\nRichard", "msg_date": "Fri, 23 Aug 2024 16:27:42 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "Hi Rafia.\n\nEm qui., 22 de ago. de 2024 às 17:17, Rafia Sabih <[email protected]>\nescreveu:\n\n>\n>\n> On Thu, 22 Aug 2024 at 15:02, Ranier Vilela <[email protected]> wrote:\n>\n>> Hi.\n>>\n>> Em qui., 22 de ago. 
de 2024 às 04:34, Richard Guo <[email protected]>\n>> escreveu:\n>>\n>>> I ran into a query plan where the Result node seems redundant to me:\n>>>\n>>> create table t (a int, b int, c int);\n>>> insert into t select i%10, i%10, i%10 from generate_series(1,100)i;\n>>> create index on t (a, b);\n>>> analyze t;\n>>>\n>>> set enable_hashagg to off;\n>>> set enable_seqscan to off;\n>>>\n>>> explain (verbose, costs off)\n>>> select distinct b, a from t order by a, b;\n>>> QUERY PLAN\n>>> ---------------------------------------------------------\n>>> Result\n>>> Output: b, a\n>>> -> Unique\n>>> Output: a, b\n>>> -> Index Only Scan using t_a_b_idx on public.t\n>>> Output: a, b\n>>> (6 rows)\n>>>\n>>> What I expect is that both the Scan node and the Unique node output\n>>> 'b, a', and we do not need an additional projection step, something\n>>> like:\n>>>\n>>> explain (verbose, costs off)\n>>> select distinct b, a from t order by a, b;\n>>> QUERY PLAN\n>>> ---------------------------------------------------\n>>> Unique\n>>> Output: b, a\n>>> -> Index Only Scan using t_a_b_idx on public.t\n>>> Output: b, a\n>>> (4 rows)\n>>>\n>>> I looked into this a little bit and found that in function\n>>> create_ordered_paths, we decide whether a projection step is needed\n>>> based on a simple pointer comparison between sorted_path->pathtarget\n>>> and final_target.\n>>>\n>>> /* Add projection step if needed */\n>>> if (sorted_path->pathtarget != target)\n>>> sorted_path = apply_projection_to_path(root, ordered_rel,\n>>> sorted_path, target);\n>>>\n>>> This does not seem right to me, as PathTargets are not canonical, so\n>>> we cannot guarantee that two identical PathTargets will have the same\n>>> pointer. Actually, for the query above, the two PathTargets are\n>>> identical but have different pointers.\n>>>\n>> Could memcmp solve this?\n>>\n>> With patch attached, using memcmp to compare the pointers.\n>>\n>> select distinct b, a from t order by a, b;\n>> QUERY PLAN\n>> ----------------------------------\n>> Sort\n>> Output: b, a\n>> Sort Key: t.a, t.b\n>> -> HashAggregate\n>> Output: b, a\n>> Group Key: t.a, t.b\n>> -> Seq Scan on public.t\n>> Output: a, b, c\n>> (8 rows)\n>>\n>> attached patch for consideration.\n>>\n>> best regards,\n>> Ranier Vilela\n>>\n>\n> +1 for the idea of removing this redundant node.\n> I had a look in this patch, and I was wondering if we still need\n> sorted_path->pathtarget != target in the condition.\n>\nAlthough the test is unnecessary, it is cheap and avoids a possible call to\nmemcmp.\n\nThanks.\n\nbest regards,\nRanier Vilela\n\nHi Rafia.Em qui., 22 de ago. de 2024 às 17:17, Rafia Sabih <[email protected]> escreveu:On Thu, 22 Aug 2024 at 15:02, Ranier Vilela <[email protected]> wrote:Hi.Em qui., 22 de ago. 
de 2024 às 04:34, Richard Guo <[email protected]> escreveu:I ran into a query plan where the Result node seems redundant to me:\n\ncreate table t (a int, b int, c int);\ninsert into t select i%10, i%10, i%10 from generate_series(1,100)i;\ncreate index on t (a, b);\nanalyze t;\n\nset enable_hashagg to off;\nset enable_seqscan to off;\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n                       QUERY PLAN\n---------------------------------------------------------\n Result\n   Output: b, a\n   ->  Unique\n         Output: a, b\n         ->  Index Only Scan using t_a_b_idx on public.t\n               Output: a, b\n(6 rows)\n\nWhat I expect is that both the Scan node and the Unique node output\n'b, a', and we do not need an additional projection step, something\nlike:\n\nexplain (verbose, costs off)\nselect distinct b, a from t order by a, b;\n                    QUERY PLAN\n---------------------------------------------------\n Unique\n   Output: b, a\n   ->  Index Only Scan using t_a_b_idx on public.t\n         Output: b, a\n(4 rows)\n\nI looked into this a little bit and found that in function\ncreate_ordered_paths, we decide whether a projection step is needed\nbased on a simple pointer comparison between sorted_path->pathtarget\nand final_target.\n\n    /* Add projection step if needed */\n    if (sorted_path->pathtarget != target)\n        sorted_path = apply_projection_to_path(root, ordered_rel,\n                                               sorted_path, target);\n\nThis does not seem right to me, as PathTargets are not canonical, so\nwe cannot guarantee that two identical PathTargets will have the same\npointer.  Actually, for the query above, the two PathTargets are\nidentical but have different pointers.Could memcmp solve this?With patch attached, using memcmp to compare the pointers.select distinct b, a from t order by a, b;            QUERY PLAN---------------------------------- Sort   Output: b, a   Sort Key: t.a, t.b   ->  HashAggregate         Output: b, a         Group Key: t.a, t.b         ->  Seq Scan on public.t               Output: a, b, c(8 rows)attached patch for consideration.best regards,Ranier Vilela +1 for the idea of removing this redundant node.I had a look in this patch, and I was wondering if we still need sorted_path->pathtarget != target in the condition.Although the test is unnecessary, it is cheap and avoids a possible call to memcmp.Thanks.best regards,Ranier Vilela", "msg_date": "Fri, 23 Aug 2024 09:29:03 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On 23.08.24 10:27, Richard Guo wrote:\n> On Fri, Aug 23, 2024 at 11:56 AM Tom Lane <[email protected]> wrote:\n>> Richard Guo <[email protected]> writes:\n>>> I agree that it’s always desirable to postpone work from path-creation\n>>> time to plan-creation time. In this case, however, it’s a little\n>>> different. The projection step could actually be avoided from the\n>>> start if we perform the correct check in create_ordered_paths.\n>>\n>> Well, the question is how expensive is the \"correct check\" compared\n>> to what we're doing now. It might be cheaper than creating an extra\n>> level of path node, or it might not. An important factor here is\n>> that we'd pay the extra cost of a more complex check every time,\n>> whether it avoids creation of an extra path node or not.\n> \n> Fair point. 
After looking at the code for a while, I believe it is\n> sufficient to compare PathTarget.exprs after we've checked that the\n> two targets have different pointers.\n\n-\t\tif (sorted_path->pathtarget != target)\n+\t\tif (sorted_path->pathtarget != target &&\n+\t\t\t!equal(sorted_path->pathtarget->exprs, target->exprs))\n \t\t\tsorted_path = apply_projection_to_path(root, ordered_rel,\n\nequal() already checks whether both pointers are equal, so I think this \ncould be simplified to just\n\n if (!equal(sorted_path->pathtarget->exprs, target->exprs))\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 14:58:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Mon, Aug 26, 2024 at 8:58 PM Peter Eisentraut <[email protected]> wrote:\n> On 23.08.24 10:27, Richard Guo wrote:\n> > Fair point. After looking at the code for a while, I believe it is\n> > sufficient to compare PathTarget.exprs after we've checked that the\n> > two targets have different pointers.\n>\n> - if (sorted_path->pathtarget != target)\n> + if (sorted_path->pathtarget != target &&\n> + !equal(sorted_path->pathtarget->exprs, target->exprs))\n> sorted_path = apply_projection_to_path(root, ordered_rel,\n>\n> equal() already checks whether both pointers are equal, so I think this\n> could be simplified to just\n>\n> if (!equal(sorted_path->pathtarget->exprs, target->exprs))\n\nIndeed. If the target pointers are equal, the PathTarget.exprs\npointers must be equal too.\n\nAttached is the updated patch with this change.\n\nThanks\nRichard", "msg_date": "Tue, 27 Aug 2024 11:29:52 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "On Thu, Aug 22, 2024 at 9:02 PM Ranier Vilela <[email protected]> wrote:\n> Em qui., 22 de ago. de 2024 às 04:34, Richard Guo <[email protected]> escreveu:\n>> This does not seem right to me, as PathTargets are not canonical, so\n>> we cannot guarantee that two identical PathTargets will have the same\n>> pointer. Actually, for the query above, the two PathTargets are\n>> identical but have different pointers.\n>\n> Could memcmp solve this?\n\nHmm, I don't think memcmp works for nodes that contain pointers.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 27 Aug 2024 11:43:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant Result node" }, { "msg_contents": "Em ter., 27 de ago. de 2024 às 00:43, Richard Guo <[email protected]>\nescreveu:\n\n> On Thu, Aug 22, 2024 at 9:02 PM Ranier Vilela <[email protected]> wrote:\n> > Em qui., 22 de ago. de 2024 às 04:34, Richard Guo <\n> [email protected]> escreveu:\n> >> This does not seem right to me, as PathTargets are not canonical, so\n> >> we cannot guarantee that two identical PathTargets will have the same\n> >> pointer. Actually, for the query above, the two PathTargets are\n> >> identical but have different pointers.\n> >\n> > Could memcmp solve this?\n>\n> Hmm, I don't think memcmp works for nodes that contain pointers.\n>\nThe first case which memcmp can fail is if both pointers are null.\nBut considering the current behavior, the cost vs benefit favors memcmp.\n\nbest regards,\nRanier Vilela\n\nEm ter., 27 de ago. de 2024 às 00:43, Richard Guo <[email protected]> escreveu:On Thu, Aug 22, 2024 at 9:02 PM Ranier Vilela <[email protected]> wrote:\n> Em qui., 22 de ago. 
de 2024 às 04:34, Richard Guo <[email protected]> escreveu:\n>> This does not seem right to me, as PathTargets are not canonical, so\n>> we cannot guarantee that two identical PathTargets will have the same\n>> pointer.  Actually, for the query above, the two PathTargets are\n>> identical but have different pointers.\n>\n> Could memcmp solve this?\n\nHmm, I don't think memcmp works for nodes that contain pointers.The first case which memcmp can fail is if both pointers are null.But considering the current behavior, the cost vs benefit favors memcmp.best regards,Ranier Vilela", "msg_date": "Tue, 27 Aug 2024 08:26:15 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant Result node" } ]
[ { "msg_contents": "Hello,\n\nI think that pgstattuple should use PageGetExactFreeSpace() instead of \nPageGetHeapFreeSpace() or PageGetFreeSpace(). The latter two compute the \nfree space minus the space of a line pointer. They are used like this in \nthe rest of the code (heapam.c):\n\npagefree = PageGetHeapFreeSpace(page);\n\nif (newtupsize > pagefree) { we need a another page for the tuple }\n\n... so it makes sense to take the line pointer into account in this context.\n\nBut it in the pgstattuple context, I think we want the exact free space.\n\nI have attached a patch.\n\nBest regards,\nFrédéric", "msg_date": "Thu, 22 Aug 2024 10:10:53 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "pgstattuple: fix free space calculation" }, { "msg_contents": "On Thu, 22 Aug 2024 at 10:11, Frédéric Yhuel <[email protected]>\nwrote:\n\n> Hello,\n>\n> I think that pgstattuple should use PageGetExactFreeSpace() instead of\n> PageGetHeapFreeSpace() or PageGetFreeSpace(). The latter two compute the\n> free space minus the space of a line pointer. They are used like this in\n> the rest of the code (heapam.c):\n>\n> pagefree = PageGetHeapFreeSpace(page);\n>\n> if (newtupsize > pagefree) { we need a another page for the tuple }\n>\n> ... so it makes sense to take the line pointer into account in this\n> context.\n>\n> But it in the pgstattuple context, I think we want the exact free space.\n>\n> I have attached a patch.\n>\n> Best regards,\n> Frédéric\n\n\nI agree with the approach here.\nA minor comment here is to change the comments in code referring to the\nPageGetHeapFreeSpace.\n\n--- a/contrib/pgstattuple/pgstatapprox.c\n+++ b/contrib/pgstattuple/pgstatapprox.c\n@@ -111,7 +111,7 @@ statapprox_heap(Relation rel, output_type *stat)\n * treat them as being free space for our purposes.\n */\n if (!PageIsNew(page))\n- stat->free_space += PageGetHeapFreeSpace(page);\n+ stat->free_space += PageGetExactFreeSpace(page);\n-- \nRegards,\nRafia Sabih\n\nOn Thu, 22 Aug 2024 at 10:11, Frédéric Yhuel <[email protected]> wrote:Hello,\n\nI think that pgstattuple should use PageGetExactFreeSpace() instead of \nPageGetHeapFreeSpace() or PageGetFreeSpace(). The latter two compute the \nfree space minus the space of a line pointer. They are used like this in \nthe rest of the code (heapam.c):\n\npagefree = PageGetHeapFreeSpace(page);\n\nif (newtupsize > pagefree) { we need a another page for the tuple }\n\n... so it makes sense to take the line pointer into account in this context.\n\nBut it in the pgstattuple context, I think we want the exact free space.\n\nI have attached a patch.\n\nBest regards,\nFrédéric I agree with the approach here.A minor comment here is to change the comments in code referring to the PageGetHeapFreeSpace.\n--- a/contrib/pgstattuple/pgstatapprox.c+++ b/contrib/pgstattuple/pgstatapprox.c@@ -111,7 +111,7 @@ statapprox_heap(Relation rel, output_type *stat)                 * treat them as being free space for our purposes.                 
*/                if (!PageIsNew(page))-                       stat->free_space += PageGetHeapFreeSpace(page);+                       stat->free_space += PageGetExactFreeSpace(page);-- Regards,Rafia Sabih", "msg_date": "Thu, 22 Aug 2024 21:56:10 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On 8/22/24 21:56, Rafia Sabih wrote:\n> I agree with the approach here.\n> A minor comment here is to change the comments in code referring to the \n> PageGetHeapFreeSpace.\n\nThank you Rafia. Here is a v2 patch.\n\nI've also added this to the commit message:\n\nAlso, PageGetHeapFreeSpace() will return zero if there are already \nMaxHeapTuplesPerPage line pointers in the page and none are free. We \ndon't want that either, because here we want to keep track of the free \nspace after a page pruning operation even in the (very unlikely) case \nthat there are MaxHeapTuplesPerPage line pointers in the page.", "msg_date": "Fri, 23 Aug 2024 11:01:27 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On Fri, 23 Aug 2024 at 11:01, Frédéric Yhuel <[email protected]>\nwrote:\n\n>\n>\n> On 8/22/24 21:56, Rafia Sabih wrote:\n> > I agree with the approach here.\n> > A minor comment here is to change the comments in code referring to the\n> > PageGetHeapFreeSpace.\n>\n> Thank you Rafia. Here is a v2 patch.\n>\n> I've also added this to the commit message:\n>\n> Also, PageGetHeapFreeSpace() will return zero if there are already\n> MaxHeapTuplesPerPage line pointers in the page and none are free. We\n> don't want that either, because here we want to keep track of the free\n> space after a page pruning operation even in the (very unlikely) case\n> that there are MaxHeapTuplesPerPage line pointers in the page.\n\n\nOn the other hand, this got me thinking about the purpose of this space\ninformation.\nIf we want to understand that there's still some space for the tuples in a\npage, then using PageGetExactFreeSpace is not doing justice in case of heap\npage, because we will not be able to add any more tuples there if there are\nalready MaxHeapTuplesPerPage tuples there.\n\n-- \nRegards,\nRafia Sabih\n\nOn Fri, 23 Aug 2024 at 11:01, Frédéric Yhuel <[email protected]> wrote:\n\nOn 8/22/24 21:56, Rafia Sabih wrote:\n> I agree with the approach here.\n> A minor comment here is to change the comments in code referring to the \n> PageGetHeapFreeSpace.\n\nThank you Rafia. Here is a v2 patch.\n\nI've also added this to the commit message:\n\nAlso, PageGetHeapFreeSpace() will return zero if there are already \nMaxHeapTuplesPerPage line pointers in the page and none are free. We \ndon't want that either, because here we want to keep track of the free \nspace after a page pruning operation even in the (very unlikely) case \nthat there are MaxHeapTuplesPerPage line pointers in the page.On the other hand, this got me thinking about the purpose of this space information. 
If we want to understand that there's still some space for the tuples in a page, then using PageGetExactFreeSpace is not doing justice in case of heap page, because we will not be able to add any more tuples there if there are already MaxHeapTuplesPerPage tuples there.-- Regards,Rafia Sabih", "msg_date": "Fri, 23 Aug 2024 12:02:37 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "\n\nOn 8/23/24 12:02, Rafia Sabih wrote:\n> On the other hand, this got me thinking about the purpose of this space \n> information.\n> If we want to understand that there's still some space for the tuples in \n> a page, then using PageGetExactFreeSpace is not doing justice in case of \n> heap page, because we will not be able to add any more tuples there if \n> there are already MaxHeapTuplesPerPage tuples there.\n\nWe won't be able to add, but we will be able to update a tuple in this \npage. It's hard to test, because I can't fit more than 226 tuples on a \nsingle page, while MaxHeapTuplesPerPage = 291 on my machine :-)\n\nIn any case, IMVHO, pgstattuple shouldn't answer to the question \"can I \nadd more tuples?\". The goal is for educational, introspection or \ndebugging purposes, and we want the exact amount of free space.\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Fri, 23 Aug 2024 12:51:15 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On 8/23/24 12:02 PM, Rafia Sabih wrote:> On the other hand, this got me \nthinking about the purpose of this space > information.\n> If we want to understand that there's still some space for the tuples in \n> a page, then using PageGetExactFreeSpace is not doing justice in case of \n> heap page, because we will not be able to add any more tuples there if \n> there are already MaxHeapTuplesPerPage tuples there.\n\nI think the new behavior is the more useful one since what if someone \nwants to know the free space since they want to insert two tuples and \nnot just one? I do not think the function should assume that the only \nreason someone would want to know the size is because they want to \ninsert exactly one new tuple.\n\nI am less certain about what the right behavior on pages where we are \nout of line pointers should be but I am leaning towards that the new \nbehvior is better than the old but can see a case for either.\n\nTested the patch and it works as advertised.\n\nAndreas\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 13:11:38 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On 8/23/24 12:51, Frédéric Yhuel wrote:\n> \n> \n> On 8/23/24 12:02, Rafia Sabih wrote:\n>> On the other hand, this got me thinking about the purpose of this \n>> space information.\n>> If we want to understand that there's still some space for the tuples \n>> in a page, then using PageGetExactFreeSpace is not doing justice in \n>> case of heap page, because we will not be able to add any more tuples \n>> there if there are already MaxHeapTuplesPerPage tuples there.\n> \n> We won't be able to add, but we will be able to update a tuple in this \n> page.\n\nSorry, that's not true.\n\nSo in this marginal case we have free space that's unusable in practice. 
\nNo INSERT or UPDATE (HOT or not) is possible inside the page.\n\nI don't know what pgstattuple should do in this case.\n\nHowever, we should never encounter this case in practice (maybe on some \nexotic architectures with strange alignment behavior?). As I said, I \ncan't fit more than 226 tuples per page on my machine, while \nMaxHeapTuplesPerPage is 291. Am I missing something?\n\nBesides, pgstattuple isn't mission critical, is it?\n\nSo I think we should just use PageGetExactFreeSpace().\n\nHere is a v3 patch. It's the same as v2, I only removed the last \nparagraph in the commit message.\n\nThank you Rafia and Andreas for your review and test.\n\nBest regards,\nFrédéric", "msg_date": "Thu, 29 Aug 2024 16:53:04 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On 8/29/24 4:53 PM, Frédéric Yhuel wrote:\n> So I think we should just use PageGetExactFreeSpace().\n\nI agree, I feel that is the least surprising behavior because we \ncurrently sum tiny amounts of free space that is unusable anyway. E.g. \nimagine one million pages with 10 free bytes each, that looks like 10 \nfree MB so I do not see why we should treat the max tuples per page case \nwith any special logic.\n\nAndreas\n\n\n", "msg_date": "Fri, 30 Aug 2024 14:06:14 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On Thu, 29 Aug 2024 at 16:53, Frédéric Yhuel <[email protected]>\nwrote:\n\n>\n>\n> On 8/23/24 12:51, Frédéric Yhuel wrote:\n> >\n> >\n> > On 8/23/24 12:02, Rafia Sabih wrote:\n> >> On the other hand, this got me thinking about the purpose of this\n> >> space information.\n> >> If we want to understand that there's still some space for the tuples\n> >> in a page, then using PageGetExactFreeSpace is not doing justice in\n> >> case of heap page, because we will not be able to add any more tuples\n> >> there if there are already MaxHeapTuplesPerPage tuples there.\n> >\n> > We won't be able to add, but we will be able to update a tuple in this\n> > page.\n>\n> Sorry, that's not true.\n>\n> So in this marginal case we have free space that's unusable in practice.\n> No INSERT or UPDATE (HOT or not) is possible inside the page.\n>\n> I don't know what pgstattuple should do in this case.\n>\n> However, we should never encounter this case in practice (maybe on some\n> exotic architectures with strange alignment behavior?). As I said, I\n> can't fit more than 226 tuples per page on my machine, while\n> MaxHeapTuplesPerPage is 291. Am I missing something?\n>\n> Besides, pgstattuple isn't mission critical, is it?\n>\n\nYes, also as stated before I am not sure of the utility of this field in\nreal-world scenarios.\nSo, I can not comment more on that. That was just one thought that popped\ninto my head.\nOtherwise, the idea seems fine to me.\n\n>\n> So I think we should just use PageGetExactFreeSpace().\n>\n> Here is a v3 patch. It's the same as v2, I only removed the last\n> paragraph in the commit message.\n>\n\nThanks for the new patch. 
LGTM.\n\n\n>\n> Thank you Rafia and Andreas for your review and test.\n>\nThanks to you too.\n\n>\n> Best regards,\n> Frédéric\n\n\n\n-- \nRegards,\nRafia Sabih\n\nOn Thu, 29 Aug 2024 at 16:53, Frédéric Yhuel <[email protected]> wrote:\n\nOn 8/23/24 12:51, Frédéric Yhuel wrote:\n> \n> \n> On 8/23/24 12:02, Rafia Sabih wrote:\n>> On the other hand, this got me thinking about the purpose of this \n>> space information.\n>> If we want to understand that there's still some space for the tuples \n>> in a page, then using PageGetExactFreeSpace is not doing justice in \n>> case of heap page, because we will not be able to add any more tuples \n>> there if there are already MaxHeapTuplesPerPage tuples there.\n> \n> We won't be able to add, but we will be able to update a tuple in this \n> page.\n\nSorry, that's not true.\n\nSo in this marginal case we have free space that's unusable in practice. \nNo INSERT or UPDATE (HOT or not) is possible inside the page.\n\nI don't know what pgstattuple should do in this case.\n\nHowever, we should never encounter this case in practice (maybe on some \nexotic architectures with strange alignment behavior?). As I said, I \ncan't fit more than 226 tuples per page on my machine, while \nMaxHeapTuplesPerPage is 291. Am I missing something?\n\nBesides, pgstattuple isn't mission critical, is it? Yes, also as stated before I am not sure of the utility of this field in real-world scenarios.So, I can not comment more on that. That was just one thought that popped into my head.Otherwise, the idea seems fine to me.\n\nSo I think we should just use PageGetExactFreeSpace().\n\nHere is a v3 patch. It's the same as v2, I only removed the last \nparagraph in the commit message.Thanks for the new patch. LGTM. \n\nThank you Rafia and Andreas for your review and test.Thanks to you too.\n\nBest regards,\nFrédéric-- Regards,Rafia Sabih", "msg_date": "Fri, 6 Sep 2024 13:18:39 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "Rafia Sabih <[email protected]> writes:\n> On Thu, 29 Aug 2024 at 16:53, Frédéric Yhuel <[email protected]>\n> wrote:\n>> So I think we should just use PageGetExactFreeSpace().\n>> \n>> Here is a v3 patch. It's the same as v2, I only removed the last\n>> paragraph in the commit message.\n\n> Thanks for the new patch. LGTM.\n\nI looked at this patch. I agree with making the change. However,\nI don't agree with the CF entry's marking of \"target version: stable\"\n(i.e., requesting back-patch). I think this falls somewhere in the\ngray area between a bug fix and a definitional change. Also, people\nare unlikely to be happy if they suddenly get new, not-comparable\nnumbers after a minor version update. So I think we should just fix\nit in HEAD.\n\nAs far as the patch itself goes, the one thing that is bothering me\nis this comment change\n\n /*\n- * It's not safe to call PageGetHeapFreeSpace() on new pages, so we\n+ * It's not safe to call PageGetExactFreeSpace() on new pages, so we\n * treat them as being free space for our purposes.\n */\n\nwhich looks like it wasn't made with a great deal of thought.\nNow it seems to me that the comment was already bogus when written:\nthere isn't anything uncertain about what will happen if you call\neither of these functions on a \"new\" page. 
PageIsNew checks for\n\n return ((PageHeader) page)->pd_upper == 0;\n\nIf pd_upper is 0, PageGet[Exact]FreeSpace is absolutely guaranteed\nto return zero, even if pd_lower contains garbage. And then\nPageGetHeapFreeSpace will likewise return zero. Perhaps there\ncould be trouble if we got into the line-pointer-checking part\nof PageGetHeapFreeSpace, but we can't. So this comment is wrong,\nand is even more obviously wrong after the above change. I thought\nfor a moment about removing the PageIsNew test altogether, but\nthen I decided that it probably *is* what we want and is just\nmis-explained. I think the comment should read more like\n\n /*\n * PageGetExactFreeSpace() will return zero for a \"new\" page,\n * but it's actually usable free space, so count it that way.\n */\n\nNow alternatively you could argue that a \"new\" page isn't usable free\nspace yet and so we should count it as zero, just as we don't count\ndead tuples as usable free space. You need VACUUM to turn either of\nthose things into real free space. But that'd be a bigger definitional\nchange, and I'm not sure we want it. Thoughts?\n\nAlso, do we need any documentation change for this? I looked through\nhttps://www.postgresql.org/docs/devel/pgstattuple.html\nand didn't see anything that was being very specific about what\n\"free space\" means, so maybe it's fine as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Sep 2024 16:10:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "I wrote:\n> Now alternatively you could argue that a \"new\" page isn't usable free\n> space yet and so we should count it as zero, just as we don't count\n> dead tuples as usable free space. You need VACUUM to turn either of\n> those things into real free space. But that'd be a bigger definitional\n> change, and I'm not sure we want it. Thoughts?\n\nOn the third hand: the code in question is in statapprox_heap, which\nis presumably meant to deliver numbers comparable to pgstat_heap.\nAnd pgstat_heap takes no special care for \"new\" pages, it just applies\nPageGetHeapFreeSpace (or PageGetExactFreeSpace after this patch).\nSo that leaves me feeling pretty strongly that this whole stanza\nis wrong and we should just do PageGetExactFreeSpace here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Sep 2024 16:45:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "Hi Tom, thanks for your review.\n\nOn 9/7/24 22:10, Tom Lane wrote:\n> I looked at this patch. I agree with making the change. However,\n> I don't agree with the CF entry's marking of \"target version: stable\"\n> (i.e., requesting back-patch). I think this falls somewhere in the\n> gray area between a bug fix and a definitional change. Also, people\n> are unlikely to be happy if they suddenly get new, not-comparable\n> numbers after a minor version update. 
So I think we should just fix\n> it in HEAD.\n>\n\nOK, I did the change.\n\n> As far as the patch itself goes, the one thing that is bothering me\n> is this comment change\n> \n> /*\n> - * It's not safe to call PageGetHeapFreeSpace() on new pages, so we\n> + * It's not safe to call PageGetExactFreeSpace() on new pages, so we\n> * treat them as being free space for our purposes.\n> */\n> \n> which looks like it wasn't made with a great deal of thought.\n> Now it seems to me that the comment was already bogus when written:\n> there isn't anything uncertain about what will happen if you call\n> either of these functions on a \"new\" page. PageIsNew checks for\n> \n> return ((PageHeader) page)->pd_upper == 0;\n> \n> If pd_upper is 0, PageGet[Exact]FreeSpace is absolutely guaranteed\n> to return zero, even if pd_lower contains garbage. And then\n\nIndeed. I failed to notice that LocationIndex was an unsigned int, so I \nthought that pg_upper - pd_upper could be positive with garbage in pg_upper.\n\n> PageGetHeapFreeSpace will likewise return zero. Perhaps there\n> could be trouble if we got into the line-pointer-checking part\n> of PageGetHeapFreeSpace, but we can't. So this comment is wrong,\n> and is even more obviously wrong after the above change. I thought\n> for a moment about removing the PageIsNew test altogether, but\n> then I decided that it probably*is* what we want and is just\n> mis-explained. I think the comment should read more like\n> \n> /*\n> * PageGetExactFreeSpace() will return zero for a \"new\" page,\n> * but it's actually usable free space, so count it that way.\n> */\n> \n> Now alternatively you could argue that a \"new\" page isn't usable free\n> space yet and so we should count it as zero, just as we don't count\n> dead tuples as usable free space. You need VACUUM to turn either of\n> those things into real free space. But that'd be a bigger definitional\n> change, and I'm not sure we want it. Thoughts?\n> \n> Also, do we need any documentation change for this? I looked through\n> https://www.postgresql.org/docs/devel/pgstattuple.html\n> and didn't see anything that was being very specific about what\n> \"free space\" means, so maybe it's fine as-is.\n\nIt's not easy. Maybe something like this?\n\n\"For any initialized page, free space refers to anything that isn't page \nmetadata (header and special), a line pointer or a tuple pointed to by a \nvalid line pointer. In particular, a dead tuple is not free space \nbecause there's still a valid line pointer pointer pointing to it, until \nVACUUM or some other maintenance mechanism (e.g. page pruning) cleans up \nthe page. A dead line pointer is not free space either, but the tuple it \npoints to has become free space. An unused line pointer could be \nconsidered free space, but pgstattuple doesn't take it into account.\"\n\n\n", "msg_date": "Mon, 9 Sep 2024 15:47:49 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "On 9/7/24 22:45, Tom Lane wrote:\n> I wrote:\n>> Now alternatively you could argue that a \"new\" page isn't usable free\n>> space yet and so we should count it as zero, just as we don't count\n>> dead tuples as usable free space. You need VACUUM to turn either of\n>> those things into real free space. But that'd be a bigger definitional\n>> change, and I'm not sure we want it. 
Thoughts?\n> \n> On the third hand: the code in question is in statapprox_heap, which\n> is presumably meant to deliver numbers comparable to pgstat_heap.\n> And pgstat_heap takes no special care for \"new\" pages, it just applies\n> PageGetHeapFreeSpace (or PageGetExactFreeSpace after this patch).\n> So that leaves me feeling pretty strongly that this whole stanza\n> is wrong and we should just do PageGetExactFreeSpace here.\n> \n\n+1\n\nv4 patch attached.\n\nBest regards,\nFrédéric", "msg_date": "Mon, 9 Sep 2024 15:49:54 +0200", "msg_from": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgstattuple: fix free space calculation" }, { "msg_contents": "=?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Yhuel?= <[email protected]> writes:\n> v4 patch attached.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Sep 2024 14:35:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgstattuple: fix free space calculation" } ]
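Editor's note (not part of the archived thread above): for readers following the PageGetHeapFreeSpace() vs. PageGetExactFreeSpace() distinction the patch hinges on, here is a simplified C paraphrase written from the descriptions in the messages above. The names SketchPageHeader and SKETCH_LP_SIZE are invented; this is not the actual bufpage.c source, and the MaxHeapTuplesPerPage check is only described in a comment.

#include <stdio.h>

/* Invented, simplified stand-in for the two page-header fields that matter here. */
typedef struct SketchPageHeader
{
	unsigned short pd_lower;	/* offset to end of the line-pointer array */
	unsigned short pd_upper;	/* offset to start of tuple data; 0 on a "new" page */
} SketchPageHeader;

#define SKETCH_LP_SIZE 4		/* stands in for sizeof(ItemIdData) */

/* What pgstattuple reports after the patch: the raw hole between the pointers. */
static unsigned int
sketch_exact_free_space(const SketchPageHeader *hdr)
{
	int			space = (int) hdr->pd_upper - (int) hdr->pd_lower;

	/* A "new" page (pd_upper == 0) yields 0, matching Tom's observation. */
	return (space < 0) ? 0 : (unsigned int) space;
}

/* What it reported before: the hole minus room for one more line pointer. */
static unsigned int
sketch_heap_free_space(const SketchPageHeader *hdr)
{
	unsigned int space = sketch_exact_free_space(hdr);

	/*
	 * The real PageGetHeapFreeSpace() additionally returns 0 when the page
	 * already holds MaxHeapTuplesPerPage line pointers and none are
	 * reusable; that extra check is omitted from this sketch.
	 */
	return (space <= SKETCH_LP_SIZE) ? 0 : space - SKETCH_LP_SIZE;
}

int
main(void)
{
	SketchPageHeader normal = {.pd_lower = 40, .pd_upper = 8000};
	SketchPageHeader newpage = {.pd_lower = 0, .pd_upper = 0};

	printf("normal page: exact=%u heap=%u\n",
		   sketch_exact_free_space(&normal), sketch_heap_free_space(&normal));
	printf("new page:    exact=%u heap=%u\n",
		   sketch_exact_free_space(&newpage), sketch_heap_free_space(&newpage));
	return 0;
}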
[ { "msg_contents": "Hello hackers,\n\nWhile investigating a recent copperhead failure [1] with the following\ndiagnostics:\n2024-08-20 20:56:47.318 CEST [2179731:95] LOG:  server process (PID 2184722) was terminated by signal 11: Segmentation fault\n2024-08-20 20:56:47.318 CEST [2179731:96] DETAIL:  Failed process was running: COPY hash_f8_heap FROM \n'/home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/data/hash.data';\n\nCore was generated by `postgres: pgbf regression [local] COPY                                        '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  0x0000002ac8e62674 in heap_multi_insert (relation=0x3f9525c890, slots=0x2ae68a5b30, ntuples=<optimized out>, \ncid=<optimized out>, options=<optimized out>, bistate=0x2ae6891c18) at heapam.c:2296\n2296            tuple->t_tableOid = slots[i]->tts_tableOid;\n#0  0x0000002ac8e62674 in heap_multi_insert (relation=0x3f9525c890, slots=0x2ae68a5b30, ntuples=<optimized out>, \ncid=<optimized out>, options=<optimized out>, bistate=0x2ae6891c18) at heapam.c:2296\n#1  0x0000002ac8f41656 in table_multi_insert (bistate=<optimized out>, options=<optimized out>, cid=<optimized out>, \nnslots=1000, slots=0x2ae68a5b30, rel=<optimized out>) at ../../../src/include/access/tableam.h:1460\n#2  CopyMultiInsertBufferFlush (miinfo=miinfo@entry=0x3ff87bceb0, buffer=0x2ae68a5b30, \nprocessed=processed@entry=0x3ff87bce90) at copyfrom.c:415\n#3  0x0000002ac8f41f6c in CopyMultiInsertInfoFlush (processed=0x3ff87bce90, curr_rri=0x2ae67eacf8, miinfo=0x3ff87bceb0) \nat copyfrom.c:532\n#4  CopyFrom (cstate=cstate@entry=0x2ae6897fc0) at copyfrom.c:1242\n...\n$1 = {si_signo = 11,  ... _sigfault = {si_addr = 0x2ae600cbcc}, ...\n\nI discovered a similarly looking failure, [2]:\n2023-02-11 18:33:09.222 CET [2591215:73] LOG:  server process (PID 2596066) was terminated by signal 11: Segmentation fault\n2023-02-11 18:33:09.222 CET [2591215:74] DETAIL:  Failed process was running: COPY bt_i4_heap FROM \n'/home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/data/desc.data';\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  0x0000002adc9bc61a in heap_multi_insert (relation=0x3fa3bd53a8, slots=0x2b098a13c0, ntuples=<optimized out>, \ncid=<optimized out>, options=<optimized out>, bistate=0x2b097eda10) at heapam.c:2095\n2095            tuple->t_tableOid = slots[i]->tts_tableOid;\n\nBut then I found also different failures on copperhead, all looking like\nmemory-related anomalies:\n[3]\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  fixempties (f=0x0, nfa=0x2b02a59410) at regc_nfa.c:2246\n2246                for (a = inarcsorig[s2->no]; a != NULL; a = a->inchain)\n\n[4]\npgsql.build/src/bin/pg_rewind/tmp_check/log/regress_log_004_pg_xlog_symlink\nmalloc(): memory corruption (fast)\n\n[5]\n2022-11-22 20:22:48.907 CET [1364156:4] LOG:  server process (PID 1364221) was terminated by signal 11: Segmentation fault\n2022-11-22 20:22:48.907 CET [1364156:5] DETAIL:  Failed process was running: BASE_BACKUP LABEL 'pg_basebackup base \nbackup' PROGRESS NOWAIT  TABLESPACE_MAP  MANIFEST 'yes'\n\n[6]\npsql exited with signal 11 (core dumped): '' while running 'psql -XAtq -d port=60743 host=/tmp/zHq9Kzn2b5 \ndbname='postgres' -f - -v ON_ERROR_STOP=1' at \n/home/pgbf/buildroot/REL_14_STABLE/pgsql.build/contrib/bloom/../../src/test/perl/PostgresNode.pm line 1855.\n\n[7]\n- locktype | classid | objid | objsubid |     mode      | granted\n+ locktype | classid | objid | objsubid |     mode      | gr_nted\n(the most mysterious 
case)\n\n[8]\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  GetMemoryChunkContext (pointer=0x2b21bca1f8) at ../../../../src/include/utils/memutils.h:128\n128        context = *(MemoryContext *) (((char *) pointer) - sizeof(void *));\n...\n$1 = {si_signo = 11, ... _sigfault = {si_addr = 0x2b21bca1f0}, ...\n\n[9]\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  fixempties (f=0x0, nfa=0x2ac0bf4c60) at regc_nfa.c:2246\n2246                for (a = inarcsorig[s2->no]; a != NULL; a = a->inchain)\n\n\nMoreover, the other RISC-V animal, boomslang produced weird failures too:\n[10]\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  0x0000002ae6b50abe in ExecInterpExpr (state=0x2b20ca0040, econtext=0x2b20c9fba8, isnull=<optimized out>) at \nexecExprInterp.c:678\n678                resultslot->tts_values[resultnum] = state->resvalue;\n\n[11]\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  0x0000002addf22728 in ExecInterpExpr (state=0x2ae0af8848, econtext=0x2ae0b16028, isnull=<optimized out>) at \nexecExprInterp.c:666\n666                resultslot->tts_values[resultnum] = scanslot->tts_values[attnum];\n\n[12]\nINSERT INTO ftable SELECT * FROM generate_series(1, 70000) i;\n\nCore was generated by `postgres: buildfarm contrib_regression_postgres_fdw [local] INS'.\nProgram terminated with signal SIGABRT, Aborted.\n\nAs far as I can see, these animals run on Debian 10 with the kernel\nversion 5.15.5-2~bpo11+1 (2022-01-10), but RISC-V was declared an\nofficial Debian architecture on 2023-07-23 [14]. So maybe the OS\nversion installed is not stable enough for testing...\n(I've tried running the regression tests on a RISC-V machine emulated with\nqemu, running Debian trixie, kernel version 6.8.12-1 (2024-05-31), and got\nno failures.)\n\nDear copperhead, boomslang owner, could you consider upgrading OS on\nthese animals to rule out effects of OS anomalies that might be fixed\nalready? 
If it's not an option, couldn't you perform stress testing of\nthese machines, say, with stress-ng?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2024-08-20%2017%3A59%3A12\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2023-02-11%2016%3A41%3A58\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2023-02-09%2001%3A25%3A06\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2023-03-21%2022%3A58%3A43\n[5] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2022-11-22%2019%3A00%3A19\n[6] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2022-11-24%2018%3A45%3A45\n[7] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2023-03-19%2017%3A21%3A17\n[8] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2023-03-11%2016%3A54%3A52\n[9] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2022-11-11%2021%3A39%3A04\n[10] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boomslang&dt=2023-03-12%2008%3A32%3A48\n[11] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boomslang&dt=2022-09-22%2007%3A38%3A42\n[12] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boomslang&dt=2022-10-18%2006%3A51%3A13\n[13] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=boomslang&dt=2022-09-27%2006%3A57%3A38\n[14] https://lists.debian.org/debian-riscv/2023/07/msg00053.html\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 22 Aug 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "RISC-V animals sporadically produce weird memory-related failures" } ]
[ { "msg_contents": "Hi hackers,\nCc people involved in the related work.\n\nIn the original conflict resolution thread[1], we have decided to split the\nconflict resolution work into multiple patches to facilitate incremental\nprogress towards supporting conflict resolution in logical replication, and one\nof the work is statistics collection for the conflicts.\n\nFollowing the discussions in the conflict resolution thread, the collection of\nlogical replication conflicts is important independently, which can help user\nunderstand conflict stats (e.g., conflict rates) and potentially identify\nportions of the application and other parts of the system to optimize. So, I am\nstarting a new thread for this feature.\n\nThe idea is to add columns(insert_exists_count, update_differ_count,\nupdate_exists_count, update_missing_count, delete_differ_count,\ndelete_missing_count) in view pg_stat_subscription_stats to shows information\nabout the conflict which occur during the application of logical replication\nchanges. The conflict types originate from the committed work which is to\nreport additional information for each conflict in logical replication.\n\nThe patch for this feature is attached.\n\nSuggestions and comments are highly appreciated.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1LgPyzPr_Vrvvr4syrde4hyT%3DQQnGjdRUNP-tz3eYa%3DGQ%40mail.gmail.com\n\nBest Regards,\nHou Zhijie", "msg_date": "Thu, 22 Aug 2024 10:01:27 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Collect statistics about conflicts in logical replication" }, { "msg_contents": "Hi Hou-san. Here are some review comments for your patch v1-0001.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\nnit - added a comma.\n\n======\ndoc/src/sgml/monitoring.sgml\n\nnit - use <literal> for 'apply_error_count'.\nnit - added a period when there are multiple sentences.\nnit - adjusted field descriptions using Chat-GPT clarification suggestions\n\n======\nsrc/include/pgstat.h\n\nnit - change the param to 'type' -- ie. same as the implementation calls it\n\n======\nsrc/include/replication/conflict.h\n\nnit - defined 'NUM_CONFLICT_TYPES' inside the enum (I think this way\nis often used in other PG source enums)\n\n======\nsrc/test/subscription/t/026_stats.pl\n\n1.\n+ # Delete data from the test table on the publisher. 
This delete operation\n+ # should be skipped on the subscriber since the table is already empty.\n+ $node_publisher->safe_psql($db, qq(DELETE FROM $table_name;));\n+\n+ # Wait for the subscriber to report tuple missing conflict.\n+ $node_subscriber->poll_query_until(\n+ $db,\n+ qq[\n+ SELECT update_missing_count > 0 AND delete_missing_count > 0\n+ FROM pg_stat_subscription_stats\n+ WHERE subname = '$sub_name'\n+ ])\n+ or die\n+ qq(Timed out while waiting for tuple missing conflict for\nsubscription '$sub_name');\n\nCan you write a comment to explain why the replicated DELETE is\nexpected to increment both the 'update_missing_count' and the\n'delete_missing_count'?\n\n~\nnit - update several \"Apply and Sync errors...\" comments that did not\nmention conflicts\nnit - tweak comments wording for update_differ and delete_differ\nnit - /both > 0/> 0/\nnit - /both 0/0/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 26 Aug 2024 17:29:56 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Monday, August 26, 2024 3:30 PM Peter Smith <[email protected]> wrote:\r\n> \r\n> ======\r\n> src/include/replication/conflict.h\r\n> \r\n> nit - defined 'NUM_CONFLICT_TYPES' inside the enum (I think this way is\r\n> often used in other PG source enums)\r\n\r\nI think we have recently tended to avoid doing that, as it has been commented\r\nthat this style is somewhat deceptive and can cause confusion. See a previous\r\nsimilar comment[1]. The current style follows the other existing examples like:\r\n\r\n#define IOOBJECT_NUM_TYPES (IOOBJECT_TEMP_RELATION + 1)\r\n#define IOCONTEXT_NUM_TYPES (IOCONTEXT_VACUUM + 1)\r\n#define IOOP_NUM_TYPES (IOOP_WRITEBACK + 1)\r\n#define BACKEND_NUM_TYPES (B_LOGGER + 1)\r\n...\r\n\r\n\r\n> ======\r\n> src/test/subscription/t/026_stats.pl\r\n> \r\n> 1.\r\n> + # Delete data from the test table on the publisher. This delete\r\n> + operation # should be skipped on the subscriber since the table is already\r\n> empty.\r\n> + $node_publisher->safe_psql($db, qq(DELETE FROM $table_name;));\r\n> +\r\n> + # Wait for the subscriber to report tuple missing conflict.\r\n> + $node_subscriber->poll_query_until(\r\n> + $db,\r\n> + qq[\r\n> + SELECT update_missing_count > 0 AND delete_missing_count > 0 FROM\r\n> + pg_stat_subscription_stats WHERE subname = '$sub_name'\r\n> + ])\r\n> + or die\r\n> + qq(Timed out while waiting for tuple missing conflict for\r\n> subscription '$sub_name');\r\n> \r\n> Can you write a comment to explain why the replicated DELETE is\r\n> expected to increment both the 'update_missing_count' and the\r\n> 'delete_missing_count'?\r\n\r\nI think the comments several lines above the wait explained the reason[2]. 
I\r\nslightly modified the comments to make it clear.\r\n\r\nOther changes look good to me and have been merged, thanks!\r\n\r\nHere is the V2 patch.\r\n\r\n[1] https://www.postgresql.org/message-id/202201130922.izanq4hkkqnx%40alvherre.pgsql\r\n\r\n[2]\r\n..\r\n\t# Truncate test table to ensure the upcoming update operation is skipped\r\n\t# and the test can continue.\r\n\t$node_subscriber->safe_psql($db, qq(TRUNCATE $table_name));\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 26 Aug 2024 12:12:53 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Mon, Aug 26, 2024 at 10:13 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, August 26, 2024 3:30 PM Peter Smith <[email protected]> wrote:\n> >\n> > ======\n> > src/include/replication/conflict.h\n> >\n> > nit - defined 'NUM_CONFLICT_TYPES' inside the enum (I think this way is\n> > often used in other PG source enums)\n>\n> I think we have recently tended to avoid doing that, as it has been commented\n> that this style is somewhat deceptive and can cause confusion. See a previous\n> similar comment[1]. The current style follows the other existing examples like:\n>\n> #define IOOBJECT_NUM_TYPES (IOOBJECT_TEMP_RELATION + 1)\n> #define IOCONTEXT_NUM_TYPES (IOCONTEXT_VACUUM + 1)\n> #define IOOP_NUM_TYPES (IOOP_WRITEBACK + 1)\n> #define BACKEND_NUM_TYPES (B_LOGGER + 1)\n\nOK.\n\n>\n>\n> > ======\n> > src/test/subscription/t/026_stats.pl\n> >\n> > 1.\n> > + # Delete data from the test table on the publisher. This delete\n> > + operation # should be skipped on the subscriber since the table is already\n> > empty.\n> > + $node_publisher->safe_psql($db, qq(DELETE FROM $table_name;));\n> > +\n> > + # Wait for the subscriber to report tuple missing conflict.\n> > + $node_subscriber->poll_query_until(\n> > + $db,\n> > + qq[\n> > + SELECT update_missing_count > 0 AND delete_missing_count > 0 FROM\n> > + pg_stat_subscription_stats WHERE subname = '$sub_name'\n> > + ])\n> > + or die\n> > + qq(Timed out while waiting for tuple missing conflict for\n> > subscription '$sub_name');\n> >\n> > Can you write a comment to explain why the replicated DELETE is\n> > expected to increment both the 'update_missing_count' and the\n> > 'delete_missing_count'?\n>\n> I think the comments several lines above the wait explained the reason[2]. I\n> slightly modified the comments to make it clear.\n>\n\n1.\nRight, but it still was not obvious to me what caused the\n'update_missing_count'. On further study, I see it was a hangover from\nthe earlier UPDATE test case which was still stuck in an ERROR loop\nattempting to do the update operation. e.g. before it was giving the\nexpected 'update_exists' conflicts but after the subscriber table\nTRUNCATE the update conflict changes to give a 'update_missing'\nconflict instead. I've updated the comment to reflect my\nunderstanding. Please have a look to see if you agree.\n\n~~~~\n\n2.\nI separated the tests for 'update_missing' and 'delete_missing',\nputting the update_missing test *before* the DELETE. I felt the\nexpected results were much clearer when each test did just one thing.\nPlease have a look to see if you agree.\n\n~~~\n\n3.\n+# Enable track_commit_timestamp to detect origin-differ conflicts in logical\n+# replication. 
Reduce wal_retrieve_retry_interval to 1ms to accelerate the\n+# restart of the logical replication worker after encountering a conflict.\n+$node_subscriber->append_conf(\n+ 'postgresql.conf', q{\n+track_commit_timestamp = on\n+wal_retrieve_retry_interval = 1ms\n+});\n\nLater, after CDR resolvers are implemented, it might be good to\nrevisit these conflict test cases and re-write them to use some\nconflict resolvers like 'skip'. Then the subscriber won't give ERRORs\nand restart apply workers all the time behind the scenes, so you won't\nneed the above configuration for accelerating the worker restarts. In\nother words, running these tests might be more efficient if you can\navoid restarting workers all the time.\n\nI suggest putting an XXX comment here as a reminder that these tests\nshould be revisited to make use of conflict resolvers in the future.\n\n~~~\n\nnit - not caused by this patch, but other comment inconsistencies\nabout \"stats_reset timestamp\" can be fixed in passing too.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 27 Aug 2024 12:58:58 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Tuesday, August 27, 2024 10:59 AM Peter Smith <[email protected]> wrote:\r\n> \r\n> ~~~\r\n> \r\n> 3.\r\n> +# Enable track_commit_timestamp to detect origin-differ conflicts in\r\n> +logical # replication. Reduce wal_retrieve_retry_interval to 1ms to\r\n> +accelerate the # restart of the logical replication worker after encountering a\r\n> conflict.\r\n> +$node_subscriber->append_conf(\r\n> + 'postgresql.conf', q{\r\n> +track_commit_timestamp = on\r\n> +wal_retrieve_retry_interval = 1ms\r\n> +});\r\n> \r\n> Later, after CDR resolvers are implemented, it might be good to revisit these\r\n> conflict test cases and re-write them to use some conflict resolvers like 'skip'.\r\n> Then the subscriber won't give ERRORs and restart apply workers all the time\r\n> behind the scenes, so you won't need the above configuration for accelerating\r\n> the worker restarts. In other words, running these tests might be more efficient\r\n> if you can avoid restarting workers all the time.\r\n> \r\n> I suggest putting an XXX comment here as a reminder that these tests should\r\n> be revisited to make use of conflict resolvers in the future.\r\n\r\nI think it would be too early to mention the resolution implementation detail\r\nin the comments considering that the resolution is still not RFC. Also, I think\r\nreducing wal_retrieve_retry_interval is a reasonable way to speed up the test\r\nin this case because the test is not letting the worker to restart all the time, the\r\nerror causes the restart will be resolved immediately after the stats check. So, I\r\nthink adding XXX is not very appropriate.\r\n\r\nOther comments look good to me.\r\nI slightly adjusted few words and merged the changes. 
Thanks !\r\n\r\nHere is V3 patch.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 27 Aug 2024 09:50:44 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Tue, Aug 27, 2024 at 3:21 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Tuesday, August 27, 2024 10:59 AM Peter Smith <[email protected]> wrote:\n> >\n> > ~~~\n> >\n> > 3.\n> > +# Enable track_commit_timestamp to detect origin-differ conflicts in\n> > +logical # replication. Reduce wal_retrieve_retry_interval to 1ms to\n> > +accelerate the # restart of the logical replication worker after encountering a\n> > conflict.\n> > +$node_subscriber->append_conf(\n> > + 'postgresql.conf', q{\n> > +track_commit_timestamp = on\n> > +wal_retrieve_retry_interval = 1ms\n> > +});\n> >\n> > Later, after CDR resolvers are implemented, it might be good to revisit these\n> > conflict test cases and re-write them to use some conflict resolvers like 'skip'.\n> > Then the subscriber won't give ERRORs and restart apply workers all the time\n> > behind the scenes, so you won't need the above configuration for accelerating\n> > the worker restarts. In other words, running these tests might be more efficient\n> > if you can avoid restarting workers all the time.\n> >\n> > I suggest putting an XXX comment here as a reminder that these tests should\n> > be revisited to make use of conflict resolvers in the future.\n>\n> I think it would be too early to mention the resolution implementation detail\n> in the comments considering that the resolution is still not RFC. Also, I think\n> reducing wal_retrieve_retry_interval is a reasonable way to speed up the test\n> in this case because the test is not letting the worker to restart all the time, the\n> error causes the restart will be resolved immediately after the stats check. So, I\n> think adding XXX is not very appropriate.\n>\n> Other comments look good to me.\n> I slightly adjusted few words and merged the changes. Thanks !\n>\n> Here is V3 patch.\n>\n\nThanks for the patch. Just thinking out loud, since we have names like\n'apply_error_count', 'sync_error_count' which tells that they are\nactually error-count, will it be better to have something similar in\nconflict-count cases, like 'insert_exists_conflict_count',\n'delete_missing_conflict_count' and so on. Thoughts?\n\nI noticed that now we do mention this (as I suggested earlier):\n+ Note that any conflict resulting in an apply error will be counted\nin both apply_error_count and the corresponding conflict count.\n\nBut we do not mention clearly which ones are conflict-counts. As an\nexample, we have this:\n\n+ insert_exists_count bigint:\n+ Number of times a row insertion violated a NOT DEFERRABLE unique\nconstraint during the application of changes\n\nIt does not mention that it is a conflict count. So we need to either\nchange names or mention clearly against each that it is a conflict\ncount.\n\nthanks\nsHveta\n\n\n", "msg_date": "Wed, 28 Aug 2024 11:43:15 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Wed, Aug 28, 2024 at 11:43 AM shveta malik <[email protected]> wrote:\n>\n> Thanks for the patch. 
Just thinking out loud, since we have names like\n> 'apply_error_count', 'sync_error_count' which tells that they are\n> actually error-count, will it be better to have something similar in\n> conflict-count cases, like 'insert_exists_conflict_count',\n> 'delete_missing_conflict_count' and so on. Thoughts?\n>\n\nIt would be better to have conflict in the names but OTOH it will make\nthe names a bit longer. The other alternatives could be (a)\ninsert_exists_confl_count, etc. (b) confl_insert_exists_count, etc.\n(c) confl_insert_exists, etc. These are based on the column names in\nthe existing view pg_stat_database_conflicts [1]. The (c) looks better\nthan other options but it will make the conflict-related columns\ndifferent from error-related columns.\n\nYet another option is to have a different view like\npg_stat_subscription_conflicts but that sounds like going too far.\n\n[1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Aug 2024 16:48:49 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Wed, Aug 28, 2024 at 9:19 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 11:43 AM shveta malik <[email protected]> wrote:\n> >\n> > Thanks for the patch. Just thinking out loud, since we have names like\n> > 'apply_error_count', 'sync_error_count' which tells that they are\n> > actually error-count, will it be better to have something similar in\n> > conflict-count cases, like 'insert_exists_conflict_count',\n> > 'delete_missing_conflict_count' and so on. Thoughts?\n> >\n>\n> It would be better to have conflict in the names but OTOH it will make\n> the names a bit longer. The other alternatives could be (a)\n> insert_exists_confl_count, etc. (b) confl_insert_exists_count, etc.\n> (c) confl_insert_exists, etc. These are based on the column names in\n> the existing view pg_stat_database_conflicts [1]. The (c) looks better\n> than other options but it will make the conflict-related columns\n> different from error-related columns.\n>\n> Yet another option is to have a different view like\n> pg_stat_subscription_conflicts but that sounds like going too far.\n>\n> [1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW\n\nOption (c) looked good to me.\n\nRemoving the suffix \"_count\" is OK. For example, try searching all of\nChapter 27 (\"The Cumulative Statistics System\") [1] for columns\ndescribed as \"Number of ...\" and you will find that a \"_count\" suffix\nis used only rarely.\n\nAdding the prefix \"confl_\" is OK. As mentioned, there is a precedent\nfor this. See \"pg_stat_database_conflicts\" [2].\n\nMixing column names where some have and some do not have \"_count\"\nsuffixes may not be ideal, but I see no problem because there are\nprecedents for that too. E.g. 
see \"pg_stat_replication_slots\" [3], and\n\"pg_stat_all_tables\" [4].\n\n======\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html\n[2] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW\n[3] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n[4] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 29 Aug 2024 09:29:16 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "Hi Hou-San.\n\nI tried an experiment where I deliberately violated a primary key\nduring initial table synchronization.\n\nFor example:\n\ntest_sub=# create table t1(a int primary key);\nCREATE TABLE\n\ntest_sub=# insert into t1 values(1);\nINSERT 0 1\n\ntest_sub=# create subscription sub1 connection 'dbname=test_pub'\npublication pub1 with (enabled=false);\n2024-08-29 09:53:21.172 AEST [24186] WARNING: subscriptions created\nby regression test cases should have names starting with \"regress_\"\nWARNING: subscriptions created by regression test cases should have\nnames starting with \"regress_\"\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\n\ntest_sub=# select * from pg_stat_subscription_stats;\n subid | subname | apply_error_count | sync_error_count |\ninsert_exists_count | update_differ_count | update_exists_count |\nupdate_missing_count | de\nlete_differ_count | delete_missing_count | stats_reset\n-------+---------+-------------------+------------------+---------------------+---------------------+---------------------+----------------------+---\n------------------+----------------------+-------------\n 16390 | sub1 | 0 | 0 |\n 0 | 0 | 0 |\n 0 |\n 0 | 0 |\n(1 row)\n\ntest_sub=# alter subscription sub1 enable;\nALTER SUBSCRIPTION\n\ntest_sub=# 2024-08-29 09:53:57.245 AEST [4345] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2024-08-29 09:53:57.258 AEST [4347] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has started\n2024-08-29 09:53:57.311 AEST [4347] ERROR: duplicate key value\nviolates unique constraint \"t1_pkey\"\n2024-08-29 09:53:57.311 AEST [4347] DETAIL: Key (a)=(1) already exists.\n2024-08-29 09:53:57.311 AEST [4347] CONTEXT: COPY t1, line 1\n2024-08-29 09:53:57.312 AEST [23501] LOG: background worker \"logical\nreplication tablesync worker\" (PID 4347) exited with exit code 1\n2024-08-29 09:54:02.385 AEST [4501] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has started\n2024-08-29 09:54:02.462 AEST [4501] ERROR: duplicate key value\nviolates unique constraint \"t1_pkey\"\n2024-08-29 09:54:02.462 AEST [4501] DETAIL: Key (a)=(1) already exists.\n2024-08-29 09:54:02.462 AEST [4501] CONTEXT: COPY t1, line 1\n2024-08-29 09:54:02.463 AEST [23501] LOG: background worker \"logical\nreplication tablesync worker\" (PID 4501) exited with exit code 1\n2024-08-29 09:54:07.512 AEST [4654] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has started\n2024-08-29 09:54:07.580 AEST [4654] ERROR: duplicate key value\nviolates unique constraint \"t1_pkey\"\n2024-08-29 09:54:07.580 AEST [4654] DETAIL: Key (a)=(1) already exists.\n2024-08-29 09:54:07.580 AEST [4654] 
CONTEXT: COPY t1, line 1\n...\n\ntest_sub=# alter subscription sub1 disable;'\nALTER SUBSCRIPTION\n2024-08-29 09:55:10.329 AEST [4345] LOG: logical replication worker\nfor subscription \"sub1\" will stop because the subscription was\ndisabled\n\ntest_sub=# select * from pg_stat_subscription_stats;\n subid | subname | apply_error_count | sync_error_count |\ninsert_exists_count | update_differ_count | update_exists_count |\nupdate_missing_count | de\nlete_differ_count | delete_missing_count | stats_reset\n-------+---------+-------------------+------------------+---------------------+---------------------+---------------------+----------------------+---\n------------------+----------------------+-------------\n 16390 | sub1 | 0 | 15 |\n 0 | 0 | 0 |\n 0 |\n 0 | 0 |\n(1 row)\n\n~~~\n\nNotice how after a while there were multiple (15) 'sync_error_count' recorded.\n\nAccording to the docs: 'insert_exists' happens when \"Inserting a row\nthat violates a NOT DEFERRABLE unique constraint.\". So why are there\nnot the same number of 'insert_exists_count' recorded in\npg_stat_subscription_stats?\n\nThe 'insert_exists' is either not happening or is not being counted\nduring table synchronization. Either way, it was not what I was\nexpecting. If it is not a bug, maybe the docs need to explain more\nabout the rules for 'insert_exists' during the initial table sync.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 29 Aug 2024 10:31:18 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Thursday, August 29, 2024 8:31 AM Peter Smith <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> I tried an experiment where I deliberately violated a primary key during initial\r\n> table synchronization.\r\n> \r\n> For example:\r\n...\r\n> test_sub=# 2024-08-29 09:53:57.245 AEST [4345] LOG: logical replication\r\n> apply worker for subscription \"sub1\" has started\r\n> 2024-08-29 09:53:57.258 AEST [4347] LOG: logical replication table\r\n> synchronization worker for subscription \"sub1\", table \"t1\" has started\r\n> 2024-08-29 09:53:57.311 AEST [4347] ERROR: duplicate key value violates\r\n> unique constraint \"t1_pkey\"\r\n> 2024-08-29 09:53:57.311 AEST [4347] DETAIL: Key (a)=(1) already exists.\r\n> 2024-08-29 09:53:57.311 AEST [4347] CONTEXT: COPY t1, line 1\r\n> ~~~\r\n> \r\n> Notice how after a while there were multiple (15) 'sync_error_count' recorded.\r\n> \r\n> According to the docs: 'insert_exists' happens when \"Inserting a row that\r\n> violates a NOT DEFERRABLE unique constraint.\". So why are there not the\r\n> same number of 'insert_exists_count' recorded in pg_stat_subscription_stats?\r\n\r\nBecause this error was caused by COPY instead of an INSERT (e.g., CONTEXT: COPY\r\nt1, line 1), so this is as expected. The doc of conflict counts(\r\ninsert_exists_count) has already mentioned that it counts the conflict only *during the\r\napplication of changes* which is clear to me that it doesn't count the ones in\r\ninitial table synchronization. See the existing apply_error_count where we also\r\nhas similar wording(e.g. 
\"an error occurred while applying changes\").\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 29 Aug 2024 02:05:49 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Thu, Aug 29, 2024 at 4:59 AM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 9:19 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Aug 28, 2024 at 11:43 AM shveta malik <[email protected]> wrote:\n> > >\n> > > Thanks for the patch. Just thinking out loud, since we have names like\n> > > 'apply_error_count', 'sync_error_count' which tells that they are\n> > > actually error-count, will it be better to have something similar in\n> > > conflict-count cases, like 'insert_exists_conflict_count',\n> > > 'delete_missing_conflict_count' and so on. Thoughts?\n> > >\n> >\n> > It would be better to have conflict in the names but OTOH it will make\n> > the names a bit longer. The other alternatives could be (a)\n> > insert_exists_confl_count, etc. (b) confl_insert_exists_count, etc.\n> > (c) confl_insert_exists, etc. These are based on the column names in\n> > the existing view pg_stat_database_conflicts [1]. The (c) looks better\n> > than other options but it will make the conflict-related columns\n> > different from error-related columns.\n> >\n> > Yet another option is to have a different view like\n> > pg_stat_subscription_conflicts but that sounds like going too far.\n\nYes, I think we are good with pg_stat_subscription_stats for the time being.\n\n> >\n> > [1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW\n>\n> Option (c) looked good to me.\n\n+1 for option c. it should be okay to not have '_count' in the name.\n\n> Removing the suffix \"_count\" is OK. For example, try searching all of\n> Chapter 27 (\"The Cumulative Statistics System\") [1] for columns\n> described as \"Number of ...\" and you will find that a \"_count\" suffix\n> is used only rarely.\n>\n> Adding the prefix \"confl_\" is OK. As mentioned, there is a precedent\n> for this. See \"pg_stat_database_conflicts\" [2].\n>\n> Mixing column names where some have and some do not have \"_count\"\n> suffixes may not be ideal, but I see no problem because there are\n> precedents for that too. E.g. see \"pg_stat_replication_slots\" [3], and\n> \"pg_stat_all_tables\" [4].\n>\n> ======\n> [1] https://www.postgresql.org/docs/devel/monitoring-stats.html\n> [2] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW\n> [3] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW\n> [4] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\n\n", "msg_date": "Thu, 29 Aug 2024 08:47:30 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Thursday, August 29, 2024 11:18 AM shveta malik <[email protected]> wrote:\r\n> \r\n> On Thu, Aug 29, 2024 at 4:59 AM Peter Smith <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Wed, Aug 28, 2024 at 9:19 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Aug 28, 2024 at 11:43 AM shveta malik\r\n> <[email protected]> wrote:\r\n> > > >\r\n> > > > Thanks for the patch. 
Just thinking out loud, since we have names\r\n> > > > like 'apply_error_count', 'sync_error_count' which tells that they\r\n> > > > are actually error-count, will it be better to have something\r\n> > > > similar in conflict-count cases, like\r\n> > > > 'insert_exists_conflict_count', 'delete_missing_conflict_count' and so\r\n> on. Thoughts?\r\n> > > >\r\n> > >\r\n> > > It would be better to have conflict in the names but OTOH it will\r\n> > > make the names a bit longer. The other alternatives could be (a)\r\n> > > insert_exists_confl_count, etc. (b) confl_insert_exists_count, etc.\r\n> > > (c) confl_insert_exists, etc. These are based on the column names in\r\n> > > the existing view pg_stat_database_conflicts [1]. The (c) looks\r\n> > > better than other options but it will make the conflict-related\r\n> > > columns different from error-related columns.\r\n> > >\r\n> > > Yet another option is to have a different view like\r\n> > > pg_stat_subscription_conflicts but that sounds like going too far.\r\n> \r\n> Yes, I think we are good with pg_stat_subscription_stats for the time being.\r\n> \r\n> > >\r\n> > > [1] -\r\n> > >\r\n> https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORI\r\n> > > NG-PG-STAT-DATABASE-CONFLICTS-VIEW\r\n> >\r\n> > Option (c) looked good to me.\r\n> \r\n> +1 for option c. it should be okay to not have '_count' in the name.\r\n\r\nAgreed. Here is new version patch which change the names as suggested. I also\r\nrebased the patch based on another renaming commit 640178c9.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 29 Aug 2024 05:36:40 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Thu, Aug 29, 2024 at 11:06 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n>\n> Agreed. Here is new version patch which change the names as suggested. I also\n> rebased the patch based on another renaming commit 640178c9.\n>\n\nThanks for the patch. Few minor things:\n\n1)\nconflict.h:\n * This enum is used in statistics collection (see\n * PgStat_StatSubEntry::conflict_count) as well, therefore, when adding new\n * values or reordering existing ones, ensure to review and potentially adjust\n * the corresponding statistics collection codes.\n\n--We shall mention PgStat_BackendSubEntry as well.\n\n026_stats.pl:\n2)\n# Now that the table is empty, the\n# update conflict (update_existing) ERRORs will stop happening.\n\n--Shall it be update_exists instead of update_existing here:\n\n3)\nThis is an existing comment above insert_exists conflict capture:\n# Wait for the apply error to be reported.\n\n--Shall we change to:\n# Wait for the subscriber to report apply error and insert_exists conflict.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 30 Aug 2024 09:40:23 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "Hi Hou-San. Here are my review comments for v4-0001.\n\n======\n\n1. 
Add links in the docs\n\nIMO it would be good for all these confl_* descriptions (in\ndoc/src/sgml/monitoring.sgml) to include links back to where each of\nthose conflict types was defined [1].\n\nIndeed, when links are included to the original conflict type\ninformation then I think you should remove mentioning\n\"track_commit_timestamp\":\n+ counted only when the\n+ <link linkend=\"guc-track-commit-timestamp\"><varname>track_commit_timestamp</varname></link>\n+ option is enabled on the subscriber.\n\nIt should be obvious that you cannot count a conflict if the conflict\ndoes not happen, but I don't think we should scatter/duplicate those\nrules in different places saying when certain conflicts can/can't\nhappen -- we should just link everywhere back to the original\ndescription for those rules.\n\n~~~\n\n2. Arrange all the counts into an intuitive/natural order\n\nThere is an intuitive/natural ordering for these counts. For example,\nthe 'confl_*' count fields are in the order insert -> update ->\ndelete, which LGTM.\n\nMeanwhile, the 'apply_error_count' and the 'sync_error_count' are not\nin a good order.\n\nIMO it makes more sense if everything is ordered as:\n'sync_error_count', then 'apply_error_count', then all the 'confl_*'\ncounts.\n\nThis comment applies to lots of places, e.g.:\n- docs (doc/src/sgml/monitoring.sgml)\n- function pg_stat_get_subscription_stats (pg_proc.dat)\n- view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n- TAP test SELECTs (test/subscription/t/026_stats.pl)\n\nAs all those places are already impacted by this patch, I think it\nwould be good if (in passing) we (if possible) also swapped the\nsync/apply counts so everything is ordered intuitively top-to-bottom\nor left-to-right.\n\n======\n[1] https://www.postgresql.org/docs/devel/logical-replication-conflicts.html#LOGICAL-REPLICATION-CONFLICTS\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 30 Aug 2024 15:23:04 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 9:40 AM shveta malik <[email protected]> wrote:\n>\n> On Thu, Aug 29, 2024 at 11:06 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> >\n> > Agreed. Here is new version patch which change the names as suggested. I also\n> > rebased the patch based on another renaming commit 640178c9.\n> >\n>\n> Thanks for the patch. 
Few minor things:\n>\n> 1)\n> conflict.h:\n> * This enum is used in statistics collection (see\n> * PgStat_StatSubEntry::conflict_count) as well, therefore, when adding new\n> * values or reordering existing ones, ensure to review and potentially adjust\n> * the corresponding statistics collection codes.\n>\n> --We shall mention PgStat_BackendSubEntry as well.\n>\n> 026_stats.pl:\n> 2)\n> # Now that the table is empty, the\n> # update conflict (update_existing) ERRORs will stop happening.\n>\n> --Shall it be update_exists instead of update_existing here:\n>\n> 3)\n> This is an existing comment above insert_exists conflict capture:\n> # Wait for the apply error to be reported.\n>\n> --Shall we change to:\n> # Wait for the subscriber to report apply error and insert_exists conflict.\n>\n\n1) I have tested the patch, works well.\n2) Verified headers inclusions, all good\n3) All my comments (very old ones when the patch was initially posted)\nare now addressed.\n\nSo apart from the comments I posted in [1], I have no more comments.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uAZpzustNOMBhxBctHHWbBA%3DsnTAYsLpoWZg%2BcqegmD-A%40mail.gmail.com\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 30 Aug 2024 11:53:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 10:53 AM Peter Smith <[email protected]> wrote:\n>\n> Hi Hou-San. Here are my review comments for v4-0001.\n>\n> ======\n>\n> 1. Add links in the docs\n>\n> IMO it would be good for all these confl_* descriptions (in\n> doc/src/sgml/monitoring.sgml) to include links back to where each of\n> those conflict types was defined [1].\n>\n> Indeed, when links are included to the original conflict type\n> information then I think you should remove mentioning\n> \"track_commit_timestamp\":\n> + counted only when the\n> + <link linkend=\"guc-track-commit-timestamp\"><varname>track_commit_timestamp</varname></link>\n> + option is enabled on the subscriber.\n>\n> It should be obvious that you cannot count a conflict if the conflict\n> does not happen, but I don't think we should scatter/duplicate those\n> rules in different places saying when certain conflicts can/can't\n> happen -- we should just link everywhere back to the original\n> description for those rules.\n\n+1, will make the doc better.\n\n> ~~~\n>\n> 2. Arrange all the counts into an intuitive/natural order\n>\n> There is an intuitive/natural ordering for these counts. For example,\n> the 'confl_*' count fields are in the order insert -> update ->\n> delete, which LGTM.\n>\n> Meanwhile, the 'apply_error_count' and the 'sync_error_count' are not\n> in a good order.\n>\n> IMO it makes more sense if everything is ordered as:\n> 'sync_error_count', then 'apply_error_count', then all the 'confl_*'\n> counts.\n>\n> This comment applies to lots of places, e.g.:\n> - docs (doc/src/sgml/monitoring.sgml)\n> - function pg_stat_get_subscription_stats (pg_proc.dat)\n> - view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n> - TAP test SELECTs (test/subscription/t/026_stats.pl)\n>\n> As all those places are already impacted by this patch, I think it\n> would be good if (in passing) we (if possible) also swapped the\n> sync/apply counts so everything is ordered intuitively top-to-bottom\n> or left-to-right.\n\nNot sure about this though. 
It does not seem to belong to the current patch.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 30 Aug 2024 11:54:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Friday, August 30, 2024 2:24 PM shveta malik <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 30, 2024 at 10:53 AM Peter Smith <[email protected]>\r\n> wrote:\r\n> >\r\n> > Hi Hou-San. Here are my review comments for v4-0001.\r\n\r\nThanks Shveta and Peter for giving comments !\r\n\r\n> >\r\n> > ======\r\n> >\r\n> > 1. Add links in the docs\r\n> >\r\n> > IMO it would be good for all these confl_* descriptions (in\r\n> > doc/src/sgml/monitoring.sgml) to include links back to where each of\r\n> > those conflict types was defined [1].\r\n> >\r\n> > Indeed, when links are included to the original conflict type\r\n> > information then I think you should remove mentioning\r\n> > \"track_commit_timestamp\":\r\n> > + counted only when the\r\n> > + <link\r\n> linkend=\"guc-track-commit-timestamp\"><varname>track_commit_timesta\r\n> mp</varname></link>\r\n> > + option is enabled on the subscriber.\r\n> >\r\n> > It should be obvious that you cannot count a conflict if the conflict\r\n> > does not happen, but I don't think we should scatter/duplicate those\r\n> > rules in different places saying when certain conflicts can/can't\r\n> > happen -- we should just link everywhere back to the original\r\n> > description for those rules.\r\n> \r\n> +1, will make the doc better.\r\n\r\nChanged. To add link to each conflict type, I added \"<varlistentry\r\nid=\"conflict-xx, xreflabel=xx\" to each conflict in logical-replication.sgml.\r\n\r\n> \r\n> > ~~~\r\n> >\r\n> > 2. Arrange all the counts into an intuitive/natural order\r\n> >\r\n> > There is an intuitive/natural ordering for these counts. For example,\r\n> > the 'confl_*' count fields are in the order insert -> update ->\r\n> > delete, which LGTM.\r\n> >\r\n> > Meanwhile, the 'apply_error_count' and the 'sync_error_count' are not\r\n> > in a good order.\r\n> >\r\n> > IMO it makes more sense if everything is ordered as:\r\n> > 'sync_error_count', then 'apply_error_count', then all the 'confl_*'\r\n> > counts.\r\n> >\r\n> > This comment applies to lots of places, e.g.:\r\n> > - docs (doc/src/sgml/monitoring.sgml)\r\n> > - function pg_stat_get_subscription_stats (pg_proc.dat)\r\n> > - view pg_stat_subscription_stats\r\n> > (src/backend/catalog/system_views.sql)\r\n> > - TAP test SELECTs (test/subscription/t/026_stats.pl)\r\n> >\r\n> > As all those places are already impacted by this patch, I think it\r\n> > would be good if (in passing) we (if possible) also swapped the\r\n> > sync/apply counts so everything is ordered intuitively top-to-bottom\r\n> > or left-to-right.\r\n> \r\n> Not sure about this though. 
It does not seem to belong to the current patch.\r\n\r\nI also don't think we should handle that in this patch.\r\n\r\nHere is V5 patch which addressed above and Shveta's[1] comments.\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uAZpzustNOMBhxBctHHWbBA%3DsnTAYsLpoWZg%2BcqegmD-A%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 30 Aug 2024 06:45:41 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 12:15 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n>\n> Here is V5 patch which addressed above and Shveta's[1] comments.\n>\n\nThe patch looks good to me.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 30 Aug 2024 14:05:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 6:36 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 12:15 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> >\n> > Here is V5 patch which addressed above and Shveta's[1] comments.\n> >\n>\n> The patch looks good to me.\n>\n\nPatch v5 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Sep 2024 08:44:34 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 4:24 PM shveta malik <[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 10:53 AM Peter Smith <[email protected]> wrote:\n> >\n...\n> > 2. Arrange all the counts into an intuitive/natural order\n> >\n> > There is an intuitive/natural ordering for these counts. For example,\n> > the 'confl_*' count fields are in the order insert -> update ->\n> > delete, which LGTM.\n> >\n> > Meanwhile, the 'apply_error_count' and the 'sync_error_count' are not\n> > in a good order.\n> >\n> > IMO it makes more sense if everything is ordered as:\n> > 'sync_error_count', then 'apply_error_count', then all the 'confl_*'\n> > counts.\n> >\n> > This comment applies to lots of places, e.g.:\n> > - docs (doc/src/sgml/monitoring.sgml)\n> > - function pg_stat_get_subscription_stats (pg_proc.dat)\n> > - view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n> > - TAP test SELECTs (test/subscription/t/026_stats.pl)\n> >\n> > As all those places are already impacted by this patch, I think it\n> > would be good if (in passing) we (if possible) also swapped the\n> > sync/apply counts so everything is ordered intuitively top-to-bottom\n> > or left-to-right.\n>\n> Not sure about this though. It does not seem to belong to the current patch.\n>\n\nFair enough. 
But, besides being inappropriate to include in the\ncurrent patch, do you think the suggestion to reorder them made sense?\nIf it has some merit, then I will propose it again as a separate\nthread.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Sep 2024 08:49:37 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Mon, Sep 2, 2024 at 4:20 AM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 4:24 PM shveta malik <[email protected]> wrote:\n> >\n> > On Fri, Aug 30, 2024 at 10:53 AM Peter Smith <[email protected]> wrote:\n> > >\n> ...\n> > > 2. Arrange all the counts into an intuitive/natural order\n> > >\n> > > There is an intuitive/natural ordering for these counts. For example,\n> > > the 'confl_*' count fields are in the order insert -> update ->\n> > > delete, which LGTM.\n> > >\n> > > Meanwhile, the 'apply_error_count' and the 'sync_error_count' are not\n> > > in a good order.\n> > >\n> > > IMO it makes more sense if everything is ordered as:\n> > > 'sync_error_count', then 'apply_error_count', then all the 'confl_*'\n> > > counts.\n> > >\n> > > This comment applies to lots of places, e.g.:\n> > > - docs (doc/src/sgml/monitoring.sgml)\n> > > - function pg_stat_get_subscription_stats (pg_proc.dat)\n> > > - view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n> > > - TAP test SELECTs (test/subscription/t/026_stats.pl)\n> > >\n> > > As all those places are already impacted by this patch, I think it\n> > > would be good if (in passing) we (if possible) also swapped the\n> > > sync/apply counts so everything is ordered intuitively top-to-bottom\n> > > or left-to-right.\n> >\n> > Not sure about this though. It does not seem to belong to the current patch.\n> >\n>\n> Fair enough. But, besides being inappropriate to include in the\n> current patch, do you think the suggestion to reorder them made sense?\n> If it has some merit, then I will propose it again as a separate\n> thread.\n>\n\n Yes, I think it makes sense. With respect to internal code, it might\nbe still okay as is, but when it comes to pg_stat_subscription_stats,\nI think it is better if user finds it in below order:\n subid | subname | sync_error_count | apply_error_count | confl_*\n\n rather than the existing one:\n subid | subname | apply_error_count | sync_error_count | confl_*\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 2 Sep 2024 08:58:31 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Mon, Sep 2, 2024 at 1:28 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 2, 2024 at 4:20 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Fri, Aug 30, 2024 at 4:24 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Fri, Aug 30, 2024 at 10:53 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > ...\n> > > > 2. Arrange all the counts into an intuitive/natural order\n> > > >\n> > > > There is an intuitive/natural ordering for these counts. 
For example,\n> > > > the 'confl_*' count fields are in the order insert -> update ->\n> > > > delete, which LGTM.\n> > > >\n> > > > Meanwhile, the 'apply_error_count' and the 'sync_error_count' are not\n> > > > in a good order.\n> > > >\n> > > > IMO it makes more sense if everything is ordered as:\n> > > > 'sync_error_count', then 'apply_error_count', then all the 'confl_*'\n> > > > counts.\n> > > >\n> > > > This comment applies to lots of places, e.g.:\n> > > > - docs (doc/src/sgml/monitoring.sgml)\n> > > > - function pg_stat_get_subscription_stats (pg_proc.dat)\n> > > > - view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n> > > > - TAP test SELECTs (test/subscription/t/026_stats.pl)\n> > > >\n> > > > As all those places are already impacted by this patch, I think it\n> > > > would be good if (in passing) we (if possible) also swapped the\n> > > > sync/apply counts so everything is ordered intuitively top-to-bottom\n> > > > or left-to-right.\n> > >\n> > > Not sure about this though. It does not seem to belong to the current patch.\n> > >\n> >\n> > Fair enough. But, besides being inappropriate to include in the\n> > current patch, do you think the suggestion to reorder them made sense?\n> > If it has some merit, then I will propose it again as a separate\n> > thread.\n> >\n>\n> Yes, I think it makes sense. With respect to internal code, it might\n> be still okay as is, but when it comes to pg_stat_subscription_stats,\n> I think it is better if user finds it in below order:\n> subid | subname | sync_error_count | apply_error_count | confl_*\n>\n> rather than the existing one:\n> subid | subname | apply_error_count | sync_error_count | confl_*\n>\n\nHi Shveta, Thanks. FYI - I created a new thread for this here [1].\n\n======\n[1] https://www.postgresql.org/message-id/CAHut+PvbOw90wgGF4aV1HyYtX=6pjWc+pn8_fep7L=aLXwXkqg@mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 2 Sep 2024 19:28:32 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Fri, Aug 30, 2024 at 12:15 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is V5 patch which addressed above and Shveta's[1] comments.\n>\n\nTesting the stats for all types of conflicts is not required for this\npatch especially because they increase the timings by 3-4s. We can add\ntests for one or two types of conflicts.\n\n*\n(see\n+ * PgStat_StatSubEntry::conflict_count and PgStat_StatSubEntry::conflict_count)\n\nThere is a typo in the above comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 3 Sep 2024 16:42:22 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Tuesday, September 3, 2024 7:12 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Aug 30, 2024 at 12:15 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > Here is V5 patch which addressed above and Shveta's[1] comments.\r\n> >\r\n> \r\n> Testing the stats for all types of conflicts is not required for this patch\r\n> especially because they increase the timings by 3-4s. 
We can add tests for one\r\n> or two types of conflicts.\r\n> \r\n> *\r\n> (see\r\n> + * PgStat_StatSubEntry::conflict_count and\r\n> + PgStat_StatSubEntry::conflict_count)\r\n> \r\n> There is a typo in the above comment.\r\n\r\nThanks for the comments. I have addressed the comments and adjusted the tests.\r\nIn the V6 patch, Only insert_exists and delete_missing are tested.\r\n\r\nI confirmed that it only increased the testing time by 1 second on my machine.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 3 Sep 2024 11:23:16 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Tuesday, September 3, 2024 7:23 PM Zhijie Hou (Fujitsu) <[email protected]> wrote:\r\n> \r\n> On Tuesday, September 3, 2024 7:12 PM Amit Kapila\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Fri, Aug 30, 2024 at 12:15 PM Zhijie Hou (Fujitsu)\r\n> > <[email protected]>\r\n> > wrote:\r\n> > >\r\n> > > Here is V5 patch which addressed above and Shveta's[1] comments.\r\n> > >\r\n> >\r\n> > Testing the stats for all types of conflicts is not required for this\r\n> > patch especially because they increase the timings by 3-4s. We can add\r\n> > tests for one or two types of conflicts.\r\n> >\r\n> > *\r\n> > (see\r\n> > + * PgStat_StatSubEntry::conflict_count and\r\n> > + PgStat_StatSubEntry::conflict_count)\r\n> >\r\n> > There is a typo in the above comment.\r\n> \r\n> Thanks for the comments. I have addressed the comments and adjusted the\r\n> tests.\r\n> In the V6 patch, Only insert_exists and delete_missing are tested.\r\n> \r\n> I confirmed that it only increased the testing time by 1 second on my machine.\r\n\r\nSorry, I sent the wrong patch in last email, please refer to the correct patch here.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Tue, 3 Sep 2024 11:28:59 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Tue, Sep 3, 2024 at 9:23 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Tuesday, September 3, 2024 7:12 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Testing the stats for all types of conflicts is not required for this patch\n> > especially because they increase the timings by 3-4s. We can add tests for one\n> > or two types of conflicts.\n> >\n...\n>\n> Thanks for the comments. I have addressed the comments and adjusted the tests.\n> In the V6 patch, Only insert_exists and delete_missing are tested.\n>\n> I confirmed that it only increased the testing time by 1 second on my machine.\n>\n> Best Regards,\n> Hou zj\n\nIt seems a pity to throw away perfectly good test cases just because\nthey increase how long the suite takes to run.\n\nThis seems like yet another example of where we could have made good\nuse of the 'PG_TEST_EXTRA' environment variable. I have been trying to\npropose adding \"subscription\" support for this in another thread [1].\nBy using this variable to make some tests conditional, we could have\nthe best of both worlds. 
e.g.\n- retain all tests, but\n- by default, only run a subset of those tests (to keep default test\nexecution time low).\n\nI hope that if the idea to use PG_TEST_EXTRA for \"subscription\" tests\ngets accepted then later we can revisit this, and put all the removed\nextra test cases back in again.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPsgtnr5BgcqYwD3PSf-AsUtVDE_j799AaZeAjJvE6HGtA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 4 Sep 2024 13:47:20 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" }, { "msg_contents": "On Wed, Sep 4, 2024 at 9:17 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 9:23 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > I confirmed that it only increased the testing time by 1 second on my machine.\n> >\n>\n> It seems a pity to throw away perfectly good test cases just because\n> they increase how long the suite takes to run.\n>\n\nWe can take every possible test, but I worry about the time they\nconsume without adding much value, and about the maintenance burden. I feel\nthat for core code we should pay attention to the tests as well and not try\nto jam in all the possible tests testing mostly similar stuff. Each time\nbefore committing or otherwise verifying the patch, we run make\ncheck-world, so we don't want that time to go enormously high. Having\nsaid that, I don't mean that the added functionality shouldn't be tested\nproperly, and I try my best to achieve that.\n\n> This seems like yet another example of where we could have made good\n> use of the 'PG_TEST_EXTRA' environment variable. I have been trying to\n> propose adding \"subscription\" support for this in another thread [1].\n> By using this variable to make some tests conditional, we could have\n> the best of both worlds. e.g.\n> - retain all tests, but\n> - by default, only run a subset of those tests (to keep default test\n> execution time low).\n>\n> I hope that if the idea to use PG_TEST_EXTRA for \"subscription\" tests\n> gets accepted then later we can revisit this, and put all the removed\n> extra test cases back in again.\n>\n\nI am not convinced that tests that are less useful than others or that\nincrease the run time are good to be kept under PG_TEST_EXTRA, but if\nmore people advocate such an approach then it is worth considering.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 Sep 2024 10:26:40 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Collect statistics about conflicts in logical replication" } ]
[ { "msg_contents": "Hi pgsql hacker,\n\nRecently I have been trying to understand why GUC changes will be visible\neven though they are done in the signal handler as part of\n*ProcessConfigfile* (done in some extension code). Later I have seen almost\nall postgresql processes/bgworkers use signal handler to set a\nvariable *ConfigReloadPending\n*which will later be read in main code to process guc changes but for\npostmaster *ProcessConfigfile *is being called from signal handler itself\nwhich intern has memory allocation related code (non-async safe code). Is\nit safe to do this?\n\nRegards,\nNarayana\n\nHi pgsql hacker, Recently I have been trying to understand why GUC changes will be visible even though they are done in the signal handler as part of ProcessConfigfile (done in some extension code). Later I have seen almost all postgresql processes/bgworkers use signal handler to set a variable ConfigReloadPending which will later be read in main code to process guc changes but for postmaster ProcessConfigfile is being called from signal handler itself which intern has memory allocation related code (non-async safe code). Is it safe to do this?Regards,Narayana", "msg_date": "Thu, 22 Aug 2024 17:37:13 +0530", "msg_from": "Lakshmi Narayana Velayudam <[email protected]>", "msg_from_op": true, "msg_subject": "Usage of ProcessConfigfile in SIGHUP_Handler" }, { "msg_contents": "On Thu, Aug 22, 2024 at 05:37:13PM +0530, Lakshmi Narayana Velayudam wrote:\n> Later I have seen almost\n> all postgresql processes/bgworkers use signal handler to set a\n> variable *ConfigReloadPending\n> *which will later be read in main code to process guc changes but for\n> postmaster *ProcessConfigfile *is being called from signal handler itself\n> which intern has memory allocation related code (non-async safe code). Is\n> it safe to do this?\n\nI think this is no longer true as of v16, thanks to commit 7389aad [0].\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7389aad\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:16:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Usage of ProcessConfigfile in SIGHUP_Handler" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:46 PM Nathan Bossart <[email protected]>\nwrote:\n\n>\n> > I think this is no longer true as of v16, thanks to commit 7389aad [0].\n>\n\nMy Bad Nathan, was looking at PG 11, 14 codes. Just to be sure, calling\n*ProcessConfigFile *is a bug from a signal handler is a bug, right? Since\nit uses AllocSetContextCreate & also GUC variables changes might not be\nvisible in regular flow.\n\nI saw the discussion of the commit but couldn't conclude from the\ndiscussion that it was changed due to *ProcessConfigFIle. *\n\nRegards,\nNarayana\n\nOn Thu, Aug 22, 2024 at 8:46 PM Nathan Bossart <[email protected]> wrote:\n> I think this is no longer true as of v16, thanks to commit 7389aad [0].My Bad Nathan, was looking at PG 11, 14 codes. Just to be sure, calling ProcessConfigFile is a bug from a signal handler is a bug, right? Since it uses AllocSetContextCreate & also GUC variables changes might not be visible in regular flow.I saw the discussion of the commit but couldn't conclude from the discussion that it was changed due to ProcessConfigFIle. 
Regards,Narayana", "msg_date": "Thu, 22 Aug 2024 21:30:05 +0530", "msg_from": "Lakshmi Narayana Velayudam <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Usage of ProcessConfigfile in SIGHUP_Handler" }, { "msg_contents": "Lakshmi Narayana Velayudam <[email protected]> writes:\n> My Bad Nathan, was looking at PG 11, 14 codes. Just to be sure, calling\n> *ProcessConfigFile *is a bug from a signal handler is a bug, right?\n\nNo, it was not. The previous postmaster coding blocked signals\neverywhere except immediately around the main loop's select() call,\nso there wasn't any real hazard of signal handlers interrupting\nanything of concern. We redid it for cleanliness, not because there\nwas any observable bug.\n\n(If there had been a bug there, ProcessConfigFile would have been\nthe least of our problems, because all the other postmaster signals\nwere handled in the same style.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 12:20:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Usage of ProcessConfigfile in SIGHUP_Handler" }, { "msg_contents": "On Thu, Aug 22, 2024 at 9:50 PM Tom Lane <[email protected]> wrote:\n\n>\n> > The previous postmaster coding blocked signals\n> > everywhere except immediately around the main loop's select() call,\n> > so there wasn't any real hazard of signal handlers interrupting\n> > anything of concern. We redid it for cleanliness, not because there\n> > was any observable bug.\n>\n>\nAgreed, signal handlers are very sensitive (at least as of this moment) and\nthe current approach looks very clean and safe.\n\n> No, it was not.\n> (If there had been a bug there, ProcessConfigFile would have been\n> the least of our problems, because all the other postmaster signals\n> were handled in the same style.)\n\nJust as an info for future readers, it is indeed a bug for two reasons\n1. changin GUC values in the signal handler, compiler might optimise the\nvalues so main control wouldn't have know about it later (90% sure about\nthis)\n2. *ProcessConfigFile* calls *AllocSetCreate* which in turn calls *malloc*\nwhich is not async signal safe(see man 7 signal-safety) and can cause\ndeadlock in certain situations. (Dig malloc internal code if interested)\nNot only in *SIGHUP *handler it is done in other handlers as well. Feel\nfree to correct me if there is any inaccuracy in what I said.\n\nRegards,\nNarayana\n\nOn Thu, Aug 22, 2024 at 9:50 PM Tom Lane <[email protected]> wrote:> The previous postmaster coding blocked signals> everywhere except immediately around the main loop's select() call,> so there wasn't any real hazard of signal handlers interrupting> anything of concern.  We redid it for cleanliness, not because there> was any observable bug.Agreed, signal handlers are very sensitive (at least as of this moment) and the current approach looks very clean and safe.> No, it was not.> (If there had been a bug there, ProcessConfigFile would have been> the least of our problems, because all the other postmaster signals> were handled in the same style.)Just as an info for future readers, it is indeed a bug for two reasons1. changin GUC values in the signal handler, compiler might optimise the values so main control wouldn't have know about it later (90% sure about this)2. ProcessConfigFile calls AllocSetCreate which in turn calls malloc which is not async signal safe(see man 7 signal-safety) and can cause deadlock in certain situations. 
(Dig malloc internal code if interested) Not only in SIGHUP handler it is done in other handlers as well. Feel free to correct me if there is any inaccuracy in what I said.Regards,Narayana", "msg_date": "Thu, 22 Aug 2024 22:09:44 +0530", "msg_from": "Lakshmi Narayana Velayudam <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Usage of ProcessConfigfile in SIGHUP_Handler" }, { "msg_contents": "Lakshmi Narayana Velayudam <[email protected]> writes:\n> Just as an info for future readers, it is indeed a bug for two reasons\n\nNo, it isn't. There's twenty years' worth of successful usage of\nthe old coding pattern that says you're wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 12:44:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Usage of ProcessConfigfile in SIGHUP_Handler" } ]
[ { "msg_contents": "> 0) The patch does not apply anymore, thanks to David committing a patch> yesterday. Attached is a patch rebased on top of current master.That patch is based on PG17. I have now rewritten it based on the master branch and added some comments.> 1) Wouldn't it be easier (and just as efficient) to use slots with> TTSOpsMinimalTuple, i.e. backed by a minimal tuple?Use diffent kind of slot, the ExecEvalExpr function will report an error.\n > 2) I find the naming of the new functions a bit confusing. We now have> the \"original\" functions working with slots, and then also functions> with \"Tuple\" working with tuples. Seems asymmetric.In net patch function name renamed to ExecHashTableInsertSlot and ExecHashTableInsertTuple,also ExecParallelHashTableInsertSlotCurrentBatch and ExecParallelHashTableInsertTupleCurrentBatch.\n > 3) The code in ExecHashJoinOuterGetTuple is hard to understand, it'd> very much benefit from some comments. I'm a bit unsure if the likely()> and unlikely() hints really help.In new patch added some comments.\"Likely\" and \"unlikely\" might offer only marginal help on some CPUs and might not be beneficial at all on other platforms (I think).\n > 4) Is the hj_outerTupleBuffer buffer really needed / effective? I'd bet> just using palloc() will work just as well, thanks to the AllocSet> caching etc.The hj_outerTupleBuffer avoid reform(materialize) tuple in non-TTSOpsMinimalTuple scenarios,see ExecForceStoreMinimalTuple. I added some comments in new patch.> Can you provide more information about the benchmark you did? What> hardware, what scale, PostgreSQL configuration, which of the 22 queries> are improved, etc.> > I ran TPC-H with 1GB and 10GB scales on two machines, and I see pretty> much no difference compared to master. However, it occurred to me the> patch only ever helps if we increase the number of batches during> execution, in which case we need to move tuples to the right batch.Only parallel HashJoin speed up to ~2x(all data cached in memory),not full query, include non-parallel HashJoin.non-parallel HashJoin only when batchs large then one will speed up,because this patch only optimize for read batchs tuples to memory.", "msg_date": "Thu, 22 Aug 2024 20:08:12 +0800 (CST)", "msg_from": "bucoo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize hashjoin" }, { "msg_contents": "On Thu, 22 Aug 2024 at 17:08, bucoo <[email protected]> wrote:\n\n>\n> > 0) The patch does not apply anymore, thanks to David committing a patch\n>\n> > yesterday. Attached is a patch rebased on top of current master.\n>\n> That patch is based on PG17. I have now rewritten it based on the master\n> branch and added some comments.\n>\n>\n> > 1) Wouldn't it be easier (and just as efficient) to use slots with\n>\n> > TTSOpsMinimalTuple, i.e. backed by a minimal tuple?\n>\n> Use diffent kind of slot, the ExecEvalExpr function will report an error.\n>\n>\n> > 2) I find the naming of the new functions a bit confusing. We now have\n> > the \"original\" functions working with slots, and then also functions\n> > with \"Tuple\" working with tuples. Seems asymmetric.\n>\n> In net patch function name renamed to ExecHashTableInsertSlot and\n> ExecHashTableInsertTuple,\n>\n> also ExecParallelHashTableInsertSlotCurrentBatch and\n> ExecParallelHashTableInsertTupleCurrentBatch.\n>\n>\n> > 3) The code in ExecHashJoinOuterGetTuple is hard to understand, it'd\n> > very much benefit from some comments. 
I'm a bit unsure if the likely()\n> > and unlikely() hints really help.\n>\n> In new patch added some comments.\n>\n> \"Likely\" and \"unlikely\" might offer only marginal help on some CPUs and\n> might not be beneficial at all on other platforms (I think).\n>\n>\n> > 4) Is the hj_outerTupleBuffer buffer really needed / effective? I'd bet\n> > just using palloc() will work just as well, thanks to the AllocSet\n> > caching etc.\n>\n> The hj_outerTupleBuffer avoid reform(materialize) tuple in\n> non-TTSOpsMinimalTuple scenarios,\n>\n> see ExecForceStoreMinimalTuple. I added some comments in new patch.\n>\n>\n> > Can you provide more information about the benchmark you did? What\n> > hardware, what scale, PostgreSQL configuration, which of the 22 queries\n> > are improved, etc.\n> >\n> > I ran TPC-H with 1GB and 10GB scales on two machines, and I see pretty\n> > much no difference compared to master. However, it occurred to me the\n> > patch only ever helps if we increase the number of batches during\n> > execution, in which case we need to move tuples to the right batch.\n>\n> Only parallel HashJoin speed up to ~2x(all data cached in memory),\n>\n> not full query, include non-parallel HashJoin.\n>\n> non-parallel HashJoin only when batchs large then one will speed up,\n>\n> because this patch only optimize for read batchs tuples to memory.\n>\n> Hi\n\nlikely/unlikely usage can be justified via benchmark. Parallel HashJoin\nspeed up still also can be verified via benchmark. Either benchmark script\nor benchmark result, and it is better to provide both.\n\n\n-- \nBest regards,\nKirill Reshke\n\nOn Thu, 22 Aug 2024 at 17:08, bucoo <[email protected]> wrote:> 0) The patch does not apply anymore, thanks to David committing a patch> yesterday. Attached is a patch rebased on top of current master.That patch is based on PG17. I have now rewritten it based on the master branch and added some comments.> 1) Wouldn't it be easier (and just as efficient) to use slots with> TTSOpsMinimalTuple, i.e. backed by a minimal tuple?Use diffent kind of slot, the ExecEvalExpr function will report an error.\n > 2) I find the naming of the new functions a bit confusing. We now have> the \"original\" functions working with slots, and then also functions> with \"Tuple\" working with tuples. Seems asymmetric.In net patch function name renamed to ExecHashTableInsertSlot and ExecHashTableInsertTuple,also ExecParallelHashTableInsertSlotCurrentBatch and ExecParallelHashTableInsertTupleCurrentBatch.\n > 3) The code in ExecHashJoinOuterGetTuple is hard to understand, it'd> very much benefit from some comments. I'm a bit unsure if the likely()> and unlikely() hints really help.In new patch added some comments.\"Likely\" and \"unlikely\" might offer only marginal help on some CPUs and might not be beneficial at all on other platforms (I think).\n > 4) Is the hj_outerTupleBuffer buffer really needed / effective? I'd bet> just using palloc() will work just as well, thanks to the AllocSet> caching etc.The hj_outerTupleBuffer avoid reform(materialize) tuple in non-TTSOpsMinimalTuple scenarios,see ExecForceStoreMinimalTuple. I added some comments in new patch.> Can you provide more information about the benchmark you did? What> hardware, what scale, PostgreSQL configuration, which of the 22 queries> are improved, etc.> > I ran TPC-H with 1GB and 10GB scales on two machines, and I see pretty> much no difference compared to master. 
However, it occurred to me the> patch only ever helps if we increase the number of batches during> execution, in which case we need to move tuples to the right batch.Only parallel HashJoin speed up to ~2x(all data cached in memory),not full query, include non-parallel HashJoin.non-parallel HashJoin only when batchs large then one will speed up,because this patch only optimize for read batchs tuples to memory.Hi likely/unlikely usage can be justified via benchmark. Parallel HashJoin speed up still also can be verified  via benchmark. Either benchmark script or benchmark result, and it is better to provide both. -- Best regards,Kirill Reshke", "msg_date": "Thu, 22 Aug 2024 17:14:51 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" }, { "msg_contents": "Hi,\n\nIt seems you responded by writing a new message and just copying the\nsubject, which unfortunately doesn't set the headers used for threading\n(e.g. in archives). Please just respond to the message.\n\nOr maybe your client does not set the References/In-Reply-To headers\ncorrectly. Not sure which mail client you're using.\n\n\nOn 8/22/24 14:08, bucoo wrote:\n> \n>> 0) The patch does not apply anymore, thanks to David committing a patch\n> \n>> yesterday. Attached is a patch rebased on top of current master.\n> \n> That patch is based on PG17. I have now rewritten it based on the master\n> branch and added some comments.\n> \n\nThanks. Yeah, patches should be based on \"master\".\n\n> \n>> 1) Wouldn't it be easier (and just as efficient) to use slots with\n> \n>> TTSOpsMinimalTuple, i.e. backed by a minimal tuple?\n> \n> Use diffent kind of slot, the ExecEvalExpr function will report an error.\n> \n\nHmm, I haven't tried so it's possible it wouldn't work.\n\n> \n>> 2) I find the naming of the new functions a bit confusing. We now have\n>> the \"original\" functions working with slots, and then also functions\n>> with \"Tuple\" working with tuples. Seems asymmetric.\n> \n> In net patch function name renamed to ExecHashTableInsertSlot and\n> ExecHashTableInsertTuple,\n> \n> also ExecParallelHashTableInsertSlotCurrentBatch and\n> ExecParallelHashTableInsertTupleCurrentBatch.\n> \n\nOK\n\n> \n>> 3) The code in ExecHashJoinOuterGetTuple is hard to understand, it'd\n>> very much benefit from some comments. I'm a bit unsure if the likely()\n>> and unlikely() hints really help.\n> \n> In new patch added some comments.\n> \n> \"Likely\" and \"unlikely\" might offer only marginal help on some CPUs and\n> might not be beneficial at all on other platforms (I think).\n> \n\nHaving such hints is an implicit suggestion it's beneficial. Otherwise\nwe'd just use them everywhere, but we don't - only a tiny fraction of\ncondition has those.\n\n> \n>> 4) Is the hj_outerTupleBuffer buffer really needed / effective? I'd bet\n>> just using palloc() will work just as well, thanks to the AllocSet\n>> caching etc.\n> \n> The hj_outerTupleBuffer avoid reform(materialize) tuple in\n> non-TTSOpsMinimalTuple scenarios,\n> \n> see ExecForceStoreMinimalTuple. 
I added some comments in new patch.\n> \n\nAFAIK you mean this comment:\n\n * mtup is hold in hjstate->hj_outerTupleBuffer, so we can using\n * shouldFree as false to call ExecForceStoreMinimalTuple().\n *\n * When slot is TTSOpsMinimalTuple we can avoid realloc memory for\n * new MinimalTuple(reuse StringInfo to call ExecHashJoinGetSavedTuple).\n\nBut my point was that I don't think the palloc/repalloc should be very\nexpensive, once the AllocSet warms up a bit.\n\n * More importantly, in non-TTSOpsMinimalTuple scenarios, it can avoid\n * reform(materialize) tuple(see ExecForceStoreMinimalTuple).\n\nYeah, but doesn't that conflate two things - materialization and freeing\nthe memory? Only because materialization is expensive, is that a good\nreason to abandon the memory management too?\n\n> \n>> Can you provide more information about the benchmark you did? What\n>> hardware, what scale, PostgreSQL configuration, which of the 22 queries\n>> are improved, etc.\n>>\n>> I ran TPC-H with 1GB and 10GB scales on two machines, and I see pretty\n>> much no difference compared to master. However, it occurred to me the\n>> patch only ever helps if we increase the number of batches during\n>> execution, in which case we need to move tuples to the right batch.\n> \n> Only parallel HashJoin speed up to ~2x(all data cached in memory),\n> \n> not full query, include non-parallel HashJoin.\n> \n> non-parallel HashJoin only when batchs large then one will speed up,\n> \n> because this patch only optimize for read batchs tuples to memory.\n> \n\nI'm sorry, but this does not answer *any* of the questions I asked.\n\nPlease provide enough info to reproduce the benefit - benchmark scale,\nwhich query, which parameters, etc. Show explain / explain analyze of\nthe query without / with the patch, stuff like that.\n\nI ran a number of TPC-H benchmarks with the patch and I never a benefit\nof this scale.\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Thu, 22 Aug 2024 21:26:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" }, { "msg_contents": "\n> * mtup is hold in hjstate->hj_outerTupleBuffer, so we can using\n> * shouldFree as false to call ExecForceStoreMinimalTuple().\n> *\n> * When slot is TTSOpsMinimalTuple we can avoid realloc memory for\n> * new MinimalTuple(reuse StringInfo to call ExecHashJoinGetSavedTuple).\n> \n> But my point was that I don't think the palloc/repalloc should be very expensive, once the AllocSet warms up a bit.\n\nAvoiding memory palloc/repalloc is just a side effect of avoiding reform tuple.\n\n> * More importantly, in non-TTSOpsMinimalTuple scenarios, it can avoid\n> * reform(materialize) tuple(see ExecForceStoreMinimalTuple).\n> \n> Yeah, but doesn't that conflate two things - materialization and freeing the memory? Only because materialization is expensive, is that a good reason to abandon the memory management too?\n\nCurrently, I haven't thought of a better way to avoid reform.\n\n> > \n> >> Can you provide more information about the benchmark you did? What \n> >> hardware, what scale, PostgreSQL configuration, which of the 22 \n> >> queries are improved, etc.\n> >>\n> >> I ran TPC-H with 1GB and 10GB scales on two machines, and I see \n> >> pretty much no difference compared to master. 
However, it occurred to \n> >> me the patch only ever helps if we increase the number of batches \n> >> during execution, in which case we need to move tuples to the right batch.\n> > \n> > Only parallel HashJoin speed up to ~2x(all data cached in memory),\n> > \n> > not full query, include non-parallel HashJoin.\n> > \n> > non-parallel HashJoin only when batchs large then one will speed up,\n> > \n> > because this patch only optimize for read batchs tuples to memory.\n> > \n> \n> I'm sorry, but this does not answer *any* of the questions I asked.\n> \n> Please provide enough info to reproduce the benefit - benchmark scale, which query, which > parameters, etc. Show explain / explain analyze of the query without / with the patch, stuff > like that.\n> \n> I ran a number of TPC-H benchmarks with the patch and I never a benefit of this scale.\n\nAfter further testing, it turns out that the parallel hashjoin did not improve performance. I might have compared it with a debug version at the time. I apologize for that.\n\nHowerver, the non-parallel hashjoin indeed showed about a 10% performance improvement.\nHere is the testing information:\n\nCPU: 13th Gen Intel(R) Core(TM) i7-13700\nMemory: 32GB\nSSD: UMIS REPEYJ512MKN1QWQ\nWindows version: win11 23H2 22631.4037\nWSL version: 2.2.4.0\nKernel version: 5.15.153.1-2\nOS version: rocky linux 9.4\nTPCH: SF=8\n\nSQL:\nset max_parallel_workers_per_gather = 0;\nset enable_mergejoin = off;\nexplain (verbose,analyze)\nselect count(*)\nfrom lineitem, orders\nwhere lineitem.l_orderkey = orders.o_orderkey;\n\npatch before:\nAggregate (cost=2422401.83..2422401.84 rows=1 width=8) (actual time=10591.679..10591.681 rows=1 loops=1)\n Output: count(*)\n -> Hash Join (cost=508496.00..2302429.31 rows=47989008 width=0) (actual time=1075.213..9503.727 rows=47989007 loops=1)\n Inner Unique: true\n Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)\n -> Index Only Scan using lineitem_pkey on public.lineitem (cost=0.56..1246171.69 rows=47989008 width=4) (actual time=0.023..1974.365 rows=47989007 loops=1)\n Output: lineitem.l_orderkey\n Heap Fetches: 0\n -> Hash (cost=311620.43..311620.43 rows=12000000 width=4) (actual time=1074.155..1074.156 rows=12000000 loops=1)\n Output: orders.o_orderkey\n Buckets: 262144 Batches: 128 Memory Usage: 5335kB\n -> Index Only Scan using orders_pkey on public.orders (cost=0.43..311620.43 rows=12000000 width=4) (actual time=0.014..464.346 rows=12000000 loops=1)\n Output: orders.o_orderkey\n Heap Fetches: 0\n Planning Time: 0.141 ms\n Execution Time: 10591.730 ms\n(16 rows)\n\nPatch after:\nAggregate (cost=2422401.83..2422401.84 rows=1 width=8) (actual time=9826.105..9826.106 rows=1 loops=1)\n Output: count(*)\n -> Hash Join (cost=508496.00..2302429.31 rows=47989008 width=0) (actual time=1087.588..8726.441 rows=47989007 loops=1)\n Inner Unique: true\n Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)\n -> Index Only Scan using lineitem_pkey on public.lineitem (cost=0.56..1246171.69 rows=47989008 width=4) (actual time=0.015..1989.389 rows=47989007 loops=1)\n Output: lineitem.l_orderkey\n Heap Fetches: 0\n -> Hash (cost=311620.43..311620.43 rows=12000000 width=4) (actual time=1086.357..1086.358 rows=12000000 loops=1)\n Output: orders.o_orderkey\n Buckets: 262144 Batches: 128 Memory Usage: 5335kB\n -> Index Only Scan using orders_pkey on public.orders (cost=0.43..311620.43 rows=12000000 width=4) (actual time=0.011..470.225 rows=12000000 loops=1)\n Output: orders.o_orderkey\n Heap Fetches: 0\n Planning Time: 0.065 ms\n Execution Time: 
9826.135 ms\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 19:02:22 +0800", "msg_from": "\"bucoo\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?utf-8?Q?=E7=AD=94=E5=A4=8D:_optimize_hashjoin?=" }, { "msg_contents": "On Fri, Aug 23, 2024 at 7:02 AM bucoo <[email protected]> wrote:\n> Howerver, the non-parallel hashjoin indeed showed about a 10% performance improvement.\n> -> Hash Join (cost=508496.00..2302429.31 rows=47989008 width=0) (actual time=1075.213..9503.727 rows=47989007 loops=1)\n> -> Hash Join (cost=508496.00..2302429.31 rows=47989008 width=0) (actual time=1087.588..8726.441 rows=47989007 loops=1)\n\nIt's not a good idea to test performance with EXPLAIN ANALYZE,\ngenerally speaking. And you usually need to test a few times and\naverage or something, rather than just a single test. But also, this\ndoesn't show the hash join being 10% faster. It shows the hash join\nbeing essentially the same speed (1075ms unpatched, 1087ms patched),\nand the aggregate node on top of it being faster.\n\nNow, it does seem possible to me that changing one node could cause a\nperformance improvement for the node above it, but I don't quite see\nwhy that would happen in this case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 08:17:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize hashjoin" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch to $SUBJECT.\n\nThis module provides SQL functions to inspect the contents of serialized logical\nsnapshots of a running database cluster, which I think could be useful for\ndebugging or educational purposes.\n\nIt's currently made of 2 functions, one to return the metadata:\n\npostgres=# SELECT * FROM pg_get_logical_snapshot_meta('0/40796E18');\n-[ RECORD 1 ]--------\nmagic | 1369563137\nchecksum | 1028045905\nversion | 6\n\nand one to return more information:\n\npostgres=# SELECT * FROM pg_get_logical_snapshot_info('0/40796E18');\n-[ RECORD 1 ]------------+-----------\nstate | 2\nxmin | 751\nxmax | 751\nstart_decoding_at | 0/40796AF8\ntwo_phase_at | 0/40796AF8\ninitial_xmin_horizon | 0\nbuilding_full_snapshot | f\nin_slot_creation | f\nlast_serialized_snapshot | 0/0\nnext_phase_at | 0\ncommitted_count | 0\ncommitted_xip |\ncatchange_count | 2\ncatchange_xip | {751,752}\n\nThe LSN used as argument is extracted from the snapshot file name:\n\npostgres=# select * from pg_ls_logicalsnapdir();\n name | size | modification\n-----------------+------+------------------------\n 0-40796E18.snap | 152 | 2024-08-14 16:36:32+00\n(1 row)\n\nA few remarks:\n\n1. The \"state\" field is linked to the SnapBuildState enum (snapbuild.h). I've the\nfeeling that that's fine to display it as int but could write an helper function\nto display strings instead ('SNAPBUILD_BUILDING_SNAPSHOT',...). \n\n2. The SnapBuildOnDisk and SnapBuild structs are now exposed to public. Means\nwe should now pay much more attention when changing their contents but I think\nit's worth it.\n\n3. The pg_get_logical_snapshot_info() function mainly displays the SnapBuild\ncontent extracted from the logical snapshot file.\n\n4. I think that providing SQL functions is enough and that it's not needed to\nalso create a related binary tool.\n\n5. A few PGDLLIMPORT have been added (Windows CI was failing).\n\n6. Related documentation has been added.\n\n7. A test has been added.\n\n8. I don't like the module name that much but it follows the same as for\npg_walinspect.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 22 Aug 2024 12:26:15 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Thu, Aug 22, 2024 at 5:56 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Please find attached a patch to $SUBJECT.\n>\n> This module provides SQL functions to inspect the contents of serialized logical\n> snapshots of a running database cluster, which I think could be useful for\n> debugging or educational purposes.\n>\n\n+1. I see it could be a good debugging aid.\n\n>\n> 2. The SnapBuildOnDisk and SnapBuild structs are now exposed to public. Means\n> we should now pay much more attention when changing their contents but I think\n> it's worth it.\n>\n\nIs it possible to avoid exposing these structures? 
Can we expose some\nfunction from snapbuild.c that provides the required information?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 19:05:27 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 26, 2024 at 07:05:27PM +0530, Amit Kapila wrote:\n> On Thu, Aug 22, 2024 at 5:56 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Please find attached a patch to $SUBJECT.\n> >\n> > This module provides SQL functions to inspect the contents of serialized logical\n> > snapshots of a running database cluster, which I think could be useful for\n> > debugging or educational purposes.\n> >\n> \n> +1. I see it could be a good debugging aid.\n\nThanks for the feedback!\n\n> >\n> > 2. The SnapBuildOnDisk and SnapBuild structs are now exposed to public. Means\n> > we should now pay much more attention when changing their contents but I think\n> > it's worth it.\n> >\n> \n> Is it possible to avoid exposing these structures? Can we expose some\n> function from snapbuild.c that provides the required information?\n\nYeah, that's an option if we don't want to expose those structs to public.\n\nI think we could 1/ create a function that would return a formed HeapTuple, or\n2/ we could create multiple functions (about 15) that would return the values\nwe are interested in.\n\nI think 2/ is fine as it would give more flexiblity (no need to retrieve a whole\ntuple if one is interested to only one value).\n\nWhat do you think? Did you have something else in mind?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 27 Aug 2024 19:55:35 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Wed, Aug 28, 2024 at 1:25 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Aug 26, 2024 at 07:05:27PM +0530, Amit Kapila wrote:\n> > On Thu, Aug 22, 2024 at 5:56 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > 2. The SnapBuildOnDisk and SnapBuild structs are now exposed to public. Means\n> > > we should now pay much more attention when changing their contents but I think\n> > > it's worth it.\n> > >\n> >\n> > Is it possible to avoid exposing these structures? Can we expose some\n> > function from snapbuild.c that provides the required information?\n>\n> Yeah, that's an option if we don't want to expose those structs to public.\n>\n> I think we could 1/ create a function that would return a formed HeapTuple, or\n> 2/ we could create multiple functions (about 15) that would return the values\n> we are interested in.\n>\n> I think 2/ is fine as it would give more flexiblity (no need to retrieve a whole\n> tuple if one is interested to only one value).\n>\n\nTrue, but OTOH, each time we add a new field to these structures, a\nnew function has to be exposed. I don't have a strong opinion on this\nbut seeing current use cases, it seems okay to expose a single\nfunction.\n\n> What do you think? Did you have something else in mind?\n>\n\nOn similar lines, we can also provide a function to get the slot's\non-disk data. IIRC, Bharath had previously proposed a tool to achieve\nthe same. 
It is fine if we don't want to add that as part of this\npatch but I mentioned it because by having that we can have a set of\nfunctions to view logical decoding data.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Aug 2024 14:51:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 02:51:36PM +0530, Amit Kapila wrote:\n> On Wed, Aug 28, 2024 at 1:25 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Aug 26, 2024 at 07:05:27PM +0530, Amit Kapila wrote:\n> > > On Thu, Aug 22, 2024 at 5:56 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > 2. The SnapBuildOnDisk and SnapBuild structs are now exposed to public. Means\n> > > > we should now pay much more attention when changing their contents but I think\n> > > > it's worth it.\n> > > >\n> > >\n> > > Is it possible to avoid exposing these structures? Can we expose some\n> > > function from snapbuild.c that provides the required information?\n> >\n> > Yeah, that's an option if we don't want to expose those structs to public.\n> >\n> > I think we could 1/ create a function that would return a formed HeapTuple, or\n> > 2/ we could create multiple functions (about 15) that would return the values\n> > we are interested in.\n> >\n> > I think 2/ is fine as it would give more flexiblity (no need to retrieve a whole\n> > tuple if one is interested to only one value).\n> >\n> \n> True, but OTOH, each time we add a new field to these structures, a\n> new function has to be exposed. I don't have a strong opinion on this\n> but seeing current use cases, it seems okay to expose a single\n> function.\n\nYeah that's fair. And now I'm wondering if we need an extra module. I think\nwe could \"simply\" expose 2 new functions in core, thoughts?\n\n> > What do you think? Did you have something else in mind?\n> >\n> \n> On similar lines, we can also provide a function to get the slot's\n> on-disk data.\n\nYeah, having a way to expose the data from the disk makes fully sense to me.\n \n> IIRC, Bharath had previously proposed a tool to achieve\n> the same. It is fine if we don't want to add that as part of this\n> patch but I mentioned it because by having that we can have a set of\n> functions to view logical decoding data.\n>\n\nThat's right. I think this one would be simply enough to expose one or two\nfunctions in core too (and probably would not need an extra module).\n\nRegards,\n \n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Aug 2024 10:14:04 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Thu, Aug 29, 2024 at 3:44 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Yeah that's fair. And now I'm wondering if we need an extra module. I think\n> we could \"simply\" expose 2 new functions in core, thoughts?\n>\n> > > What do you think? Did you have something else in mind?\n> > >\n> >\n> > On similar lines, we can also provide a function to get the slot's\n> > on-disk data.\n>\n> Yeah, having a way to expose the data from the disk makes fully sense to me.\n>\n> > IIRC, Bharath had previously proposed a tool to achieve\n> > the same. 
It is fine if we don't want to add that as part of this\n> > patch but I mentioned it because by having that we can have a set of\n> > functions to view logical decoding data.\n>\n> That's right. I think this one would be simply enough to expose one or two\n> functions in core too (and probably would not need an extra module).\n\n+1 for functions in core unless this extra module\npg_logicalsnapinspect works as a tool to be helpful even when the\nserver is down.\n\nFWIW, I wrote pg_replslotdata as a tool, not as an extension for\nreading on-disk replication slot data to help when the server is down\n- https://www.postgresql.org/message-id/flat/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com.\nWhen the server is running, pg_get_replication_slots() pretty much\ngives the on-disk contents.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Aug 2024 18:33:19 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 06:33:19PM +0530, Bharath Rupireddy wrote:\n> On Thu, Aug 29, 2024 at 3:44 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > That's right. I think this one would be simply enough to expose one or two\n> > functions in core too (and probably would not need an extra module).\n> \n> +1 for functions in core unless this extra module\n> pg_logicalsnapinspect works as a tool to be helpful even when the\n> server is down.\n\nThanks for the feedback!\n\nI don't see any use case where it could be useful when the server is down. So,\nI think I'll move forward with in core functions (unless someone has a different\nopinion).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Aug 2024 14:15:47 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 02:15:47PM +0000, Bertrand Drouvot wrote:\n> I don't see any use case where it could be useful when the server is down. So,\n> I think I'll move forward with in core functions (unless someone has a different\n> opinion).\n> \n\nPlease find v2 attached that creates the 2 new in core functions.\n\nNote that once those new functions are in (or maybe sooner), I'll submit an\nadditional patch to get rid of the code duplication between the new\nValidateSnapshotFile() and SnapBuildRestore().\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 30 Aug 2024 09:02:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Thu, Aug 29, 2024 at 6:33 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Aug 29, 2024 at 3:44 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Yeah that's fair. And now I'm wondering if we need an extra module. I think\n> > we could \"simply\" expose 2 new functions in core, thoughts?\n> >\n> > > > What do you think? 
Did you have something else in mind?\n> > > >\n> > >\n> > > On similar lines, we can also provide a function to get the slot's\n> > > on-disk data.\n> >\n> > Yeah, having a way to expose the data from the disk makes fully sense to me.\n> >\n> > > IIRC, Bharath had previously proposed a tool to achieve\n> > > the same. It is fine if we don't want to add that as part of this\n> > > patch but I mentioned it because by having that we can have a set of\n> > > functions to view logical decoding data.\n> >\n> > That's right. I think this one would be simply enough to expose one or two\n> > functions in core too (and probably would not need an extra module).\n>\n> +1 for functions in core unless this extra module\n> pg_logicalsnapinspect works as a tool to be helpful even when the\n> server is down.\n>\n\nWe have an example of pageinspect which provides low-level functions\nto aid debugging. The proposal for these APIs also seems to fall in\nthe same category, so why go for the core for these functions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Aug 2024 15:43:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 30, 2024 at 03:43:12PM +0530, Amit Kapila wrote:\n> On Thu, Aug 29, 2024 at 6:33 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Thu, Aug 29, 2024 at 3:44 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > Yeah that's fair. And now I'm wondering if we need an extra module. I think\n> > > we could \"simply\" expose 2 new functions in core, thoughts?\n> > >\n> > > > > What do you think? Did you have something else in mind?\n> > > > >\n> > > >\n> > > > On similar lines, we can also provide a function to get the slot's\n> > > > on-disk data.\n> > >\n> > > Yeah, having a way to expose the data from the disk makes fully sense to me.\n> > >\n> > > > IIRC, Bharath had previously proposed a tool to achieve\n> > > > the same. It is fine if we don't want to add that as part of this\n> > > > patch but I mentioned it because by having that we can have a set of\n> > > > functions to view logical decoding data.\n> > >\n> > > That's right. I think this one would be simply enough to expose one or two\n> > > functions in core too (and probably would not need an extra module).\n> >\n> > +1 for functions in core unless this extra module\n> > pg_logicalsnapinspect works as a tool to be helpful even when the\n> > server is down.\n> >\n> \n> We have an example of pageinspect which provides low-level functions\n> to aid debugging. The proposal for these APIs also seems to fall in\n> the same category,\n\nThat's right, but...\n\n> so why go for the core for these functions?\n\nas we decided not to expose the SnapBuildOnDisk and SnapBuild structs to public\nand to create/expose 2 new functions in snapbuild.c then the functions in the\nmodule would do nothing but expose the data coming from the snapbuild.c's\nfunctions (get the tuple and send it to the client). 
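As a rough sketch of the shape being described here (illustrative only, not code from the patch; the core helper name and the column count are made up), such a contrib wrapper would reduce to something like:

    /*
     * Illustrative sketch only.  Assumes a hypothetical core helper
     * SnapBuildGetOnDiskInfo() in snapbuild.c that reads the snapshot
     * file for the given LSN and fills in the output columns; the
     * wrapper itself would contain no logic of its own.
     * (Needs funcapi.h, utils/pg_lsn.h, access/htup_details.h.)
     */
    PG_FUNCTION_INFO_V1(pg_get_logical_snapshot_info);

    Datum
    pg_get_logical_snapshot_info(PG_FUNCTION_ARGS)
    {
        XLogRecPtr  lsn = PG_GETARG_LSN(0);
        TupleDesc   tupdesc;
        Datum       values[14];     /* column count is illustrative */
        bool        nulls[14] = {0};

        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        /* hypothetical snapbuild.c call doing all of the real work */
        SnapBuildGetOnDiskInfo(lsn, tupdesc, values, nulls);

        PG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));
    }
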
That sounds weird to me to\ncreate a module that would \"only\" do so, that's why I thought that in core\nfunctions taking care of everything make more sense.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Aug 2024 11:48:29 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Fri, Aug 30, 2024 at 5:18 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Aug 30, 2024 at 03:43:12PM +0530, Amit Kapila wrote:\n> > On Thu, Aug 29, 2024 at 6:33 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Aug 29, 2024 at 3:44 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > Yeah that's fair. And now I'm wondering if we need an extra module. I think\n> > > > we could \"simply\" expose 2 new functions in core, thoughts?\n> > > >\n> > > > > > What do you think? Did you have something else in mind?\n> > > > > >\n> > > > >\n> > > > > On similar lines, we can also provide a function to get the slot's\n> > > > > on-disk data.\n> > > >\n> > > > Yeah, having a way to expose the data from the disk makes fully sense to me.\n> > > >\n> > > > > IIRC, Bharath had previously proposed a tool to achieve\n> > > > > the same. It is fine if we don't want to add that as part of this\n> > > > > patch but I mentioned it because by having that we can have a set of\n> > > > > functions to view logical decoding data.\n> > > >\n> > > > That's right. I think this one would be simply enough to expose one or two\n> > > > functions in core too (and probably would not need an extra module).\n> > >\n> > > +1 for functions in core unless this extra module\n> > > pg_logicalsnapinspect works as a tool to be helpful even when the\n> > > server is down.\n> > >\n> >\n> > We have an example of pageinspect which provides low-level functions\n> > to aid debugging. The proposal for these APIs also seems to fall in\n> > the same category,\n>\n> That's right, but...\n>\n> > so why go for the core for these functions?\n>\n> as we decided not to expose the SnapBuildOnDisk and SnapBuild structs to public\n> and to create/expose 2 new functions in snapbuild.c then the functions in the\n> module would do nothing but expose the data coming from the snapbuild.c's\n> functions (get the tuple and send it to the client). That sounds weird to me to\n> create a module that would \"only\" do so, that's why I thought that in core\n> functions taking care of everything make more sense.\n>\n\nI see your point. Does anyone else have an opinion on the need for\nthese functions and whether to expose them from a contrib module or\nhave them as core functions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Sep 2024 16:24:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 09, 2024 at 04:24:09PM +0530, Amit Kapila wrote:\n> On Fri, Aug 30, 2024 at 5:18 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > as we decided not to expose the SnapBuildOnDisk and SnapBuild structs to public\n> > and to create/expose 2 new functions in snapbuild.c then the functions in the\n> > module would do nothing but expose the data coming from the snapbuild.c's\n> > functions (get the tuple and send it to the client). 
That sounds weird to me to\n> > create a module that would \"only\" do so, that's why I thought that in core\n> > functions taking care of everything make more sense.\n> >\n> \n> I see your point. Does anyone else have an opinion on the need for\n> these functions and whether to expose them from a contrib module or\n> have them as core functions?\n\nI looked at when the SNAPBUILD_VERSION has been changed:\n\nec5896aed3 (2014)\na975ff4980 (2021)\n8bdb1332eb (2021)\n7f13ac8123 (2022)\nbb19b70081 (2024)\n\nSo it's not like we are changing the SnapBuildOnDisk or SnapBuild structs that\nfrequently. Furthermore, those structs are serialized and so we have to preserve\ntheir on-disk compatibility (means we can change them only in a major release\nif we need to).\n\nSo, I think it would not be that much of an issue to expose those structs and\ncreate a new contrib module (as v1 did propose) instead of in core new functions.\n\nIf we want to insist that external modules \"should\" not rely on those structs then\nwe could put them into a new internal_snapbuild.h file (instead of snapbuild.h\nas proposed in v1).\n\nAt the end, I think that creating a contrib module and exposing those structs in\ninternal_snapbuild.h make more sense (as compared to in core functions).\n\nThoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:26:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 10, 2024 at 8:56 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Sep 09, 2024 at 04:24:09PM +0530, Amit Kapila wrote:\n> > On Fri, Aug 30, 2024 at 5:18 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > as we decided not to expose the SnapBuildOnDisk and SnapBuild structs to public\n> > > and to create/expose 2 new functions in snapbuild.c then the functions in the\n> > > module would do nothing but expose the data coming from the snapbuild.c's\n> > > functions (get the tuple and send it to the client). That sounds weird to me to\n> > > create a module that would \"only\" do so, that's why I thought that in core\n> > > functions taking care of everything make more sense.\n> > >\n> >\n> > I see your point. Does anyone else have an opinion on the need for\n> > these functions and whether to expose them from a contrib module or\n> > have them as core functions?\n>\n> I looked at when the SNAPBUILD_VERSION has been changed:\n>\n> ec5896aed3 (2014)\n> a975ff4980 (2021)\n> 8bdb1332eb (2021)\n> 7f13ac8123 (2022)\n> bb19b70081 (2024)\n>\n> So it's not like we are changing the SnapBuildOnDisk or SnapBuild structs that\n> frequently. 
Furthermore, those structs are serialized and so we have to preserve\n> their on-disk compatibility (means we can change them only in a major release\n> if we need to).\n>\n> So, I think it would not be that much of an issue to expose those structs and\n> create a new contrib module (as v1 did propose) instead of in core new functions.\n>\n> If we want to insist that external modules \"should\" not rely on those structs then\n> we could put them into a new internal_snapbuild.h file (instead of snapbuild.h\n> as proposed in v1).\n>\n\nAdding snapbuild_internal.h sounds like a good idea.\n\n> At the end, I think that creating a contrib module and exposing those structs in\n> internal_snapbuild.h make more sense (as compared to in core functions).\n>\n\nFail enough. We can keep the module name as logicalinspect so that we\ncan extend it for other logical decoding/replication-related files in\nthe future.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 Sep 2024 10:30:37 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 11, 2024 at 10:30:37AM +0530, Amit Kapila wrote:\n> On Tue, Sep 10, 2024 at 8:56 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Sep 09, 2024 at 04:24:09PM +0530, Amit Kapila wrote:\n> > > On Fri, Aug 30, 2024 at 5:18 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > > as we decided not to expose the SnapBuildOnDisk and SnapBuild structs to public\n> > > > and to create/expose 2 new functions in snapbuild.c then the functions in the\n> > > > module would do nothing but expose the data coming from the snapbuild.c's\n> > > > functions (get the tuple and send it to the client). That sounds weird to me to\n> > > > create a module that would \"only\" do so, that's why I thought that in core\n> > > > functions taking care of everything make more sense.\n> > > >\n> > >\n> > > I see your point. Does anyone else have an opinion on the need for\n> > > these functions and whether to expose them from a contrib module or\n> > > have them as core functions?\n> >\n> > I looked at when the SNAPBUILD_VERSION has been changed:\n> >\n> > ec5896aed3 (2014)\n> > a975ff4980 (2021)\n> > 8bdb1332eb (2021)\n> > 7f13ac8123 (2022)\n> > bb19b70081 (2024)\n> >\n> > So it's not like we are changing the SnapBuildOnDisk or SnapBuild structs that\n> > frequently. Furthermore, those structs are serialized and so we have to preserve\n> > their on-disk compatibility (means we can change them only in a major release\n> > if we need to).\n> >\n> > So, I think it would not be that much of an issue to expose those structs and\n> > create a new contrib module (as v1 did propose) instead of in core new functions.\n> >\n> > If we want to insist that external modules \"should\" not rely on those structs then\n> > we could put them into a new internal_snapbuild.h file (instead of snapbuild.h\n> > as proposed in v1).\n> >\n> \n> Adding snapbuild_internal.h sounds like a good idea.\n\nThanks for the feedback!\n\n> > At the end, I think that creating a contrib module and exposing those structs in\n> > internal_snapbuild.h make more sense (as compared to in core functions).\n> >\n> \n> Fail enough. We can keep the module name as logicalinspect so that we\n> can extend it for other logical decoding/replication-related files in\n> the future.\n\nYeah, good idea. 
Done that way in v3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 11 Sep 2024 10:51:38 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Wed, Sep 11, 2024 at 4:21 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n>\n> Yeah, good idea. Done that way in v3 attached.\n>\n\nThanks for the patch. +1 on the patch's idea. I have started\nreviewing/testing it. It is WIP but please find few initial comments:\n\nsrc/backend/replication/logical/snapbuild.c:\n\n1)\n+ fsync_fname(\"pg_logical/snapshots\", true);\n\nShould we use macros PG_LOGICAL_DIR and PG_LOGICAL_SNAPSHOTS_DIR in\nValidateSnapshotFile(), instead of hard coding the path\n\n\n2)\nSame as above in pg_get_logical_snapshot_meta() and\npg_get_logical_snapshot_info()\n\n+ sprintf(path, \"pg_logical/snapshots/%X-%X.snap\",\n+ LSN_FORMAT_ARGS(lsn)); LSN_FORMAT_ARGS(lsn));\n\n3)\n+#include \"replication/internal_snapbuild.h\"\n\nShall we name new file as 'snapbuild_internal.h' instead of\n'internal_snapbuild.h'. Please see other files' name under\n'./src/include/replication':\nworker_internal.h\nwalsender_private.h\n\n4)\n+static void ValidateSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk,\n+ const char *path);\n\nIs it required? We generally don't add declaration unless required by\ncompiler. Since definition is prior to usage, it is not needed?\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 16 Sep 2024 16:02:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 16, 2024 at 04:02:51PM +0530, shveta malik wrote:\n> On Wed, Sep 11, 2024 at 4:21 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> >\n> > Yeah, good idea. Done that way in v3 attached.\n> >\n> \n> Thanks for the patch. +1 on the patch's idea. I have started\n> reviewing/testing it. It is WIP but please find few initial comments:\n\nThanks for sharing your thoughts and for the review!\n\n> \n> src/backend/replication/logical/snapbuild.c:\n> \n> 1)\n> + fsync_fname(\"pg_logical/snapshots\", true);\n> \n> Should we use macros PG_LOGICAL_DIR and PG_LOGICAL_SNAPSHOTS_DIR in\n> ValidateSnapshotFile(), instead of hard coding the path\n> \n> \n> 2)\n> Same as above in pg_get_logical_snapshot_meta() and\n> pg_get_logical_snapshot_info()\n> \n> + sprintf(path, \"pg_logical/snapshots/%X-%X.snap\",\n> + LSN_FORMAT_ARGS(lsn)); LSN_FORMAT_ARGS(lsn));\n>\n\nDoh! Yeah, agree that we should use those macros. They are coming from c39afc38cf\nwhich has been introduced after v1 of this patch. I thought I took care of it once\nc39afc38cf went in, but it looks like I missed it somehow. Done in v4 attached,\nThanks!\n \n> 3)\n> +#include \"replication/internal_snapbuild.h\"\n> \n> Shall we name new file as 'snapbuild_internal.h' instead of\n> 'internal_snapbuild.h'. Please see other files' name under\n> './src/include/replication':\n> worker_internal.h\n> walsender_private.h\n\nAgree, that should be snapbuild_internal.h, done in v4.\n\n> \n> 4)\n> +static void ValidateSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk,\n> + const char *path);\n> \n> Is it required? We generally don't add declaration unless required by\n> compiler. 
Since definition is prior to usage, it is not needed?\n>\n\nI personally prefer to add them even if not required by the compiler. I did not\npay attention that \"We generally don't add declaration unless required by compiler\"\nand (after a quick check) I did not find any reference in the coding style\ndocumentation [1]. That said, I don't have a strong opinion about that and so\nremoved in v4. Worth to add a mention in the coding convention doc?\n\n\n[1]: https://www.postgresql.org/docs/current/source.html\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 16 Sep 2024 14:33:20 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Mon, Sep 16, 2024 at 8:03 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Sep 16, 2024 at 04:02:51PM +0530, shveta malik wrote:\n> > On Wed, Sep 11, 2024 at 4:21 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > >\n> > > Yeah, good idea. Done that way in v3 attached.\n> > >\n> >\n> > Thanks for the patch. +1 on the patch's idea. I have started\n> > reviewing/testing it. It is WIP but please find few initial comments:\n>\n> Thanks for sharing your thoughts and for the review!\n>\n> >\n> > src/backend/replication/logical/snapbuild.c:\n> >\n> > 1)\n> > + fsync_fname(\"pg_logical/snapshots\", true);\n> >\n> > Should we use macros PG_LOGICAL_DIR and PG_LOGICAL_SNAPSHOTS_DIR in\n> > ValidateSnapshotFile(), instead of hard coding the path\n> >\n> >\n> > 2)\n> > Same as above in pg_get_logical_snapshot_meta() and\n> > pg_get_logical_snapshot_info()\n> >\n> > + sprintf(path, \"pg_logical/snapshots/%X-%X.snap\",\n> > + LSN_FORMAT_ARGS(lsn)); LSN_FORMAT_ARGS(lsn));\n> >\n>\n> Doh! Yeah, agree that we should use those macros. They are coming from c39afc38cf\n> which has been introduced after v1 of this patch. I thought I took care of it once\n> c39afc38cf went in, but it looks like I missed it somehow. Done in v4 attached,\n> Thanks!\n>\n> > 3)\n> > +#include \"replication/internal_snapbuild.h\"\n> >\n> > Shall we name new file as 'snapbuild_internal.h' instead of\n> > 'internal_snapbuild.h'. Please see other files' name under\n> > './src/include/replication':\n> > worker_internal.h\n> > walsender_private.h\n>\n> Agree, that should be snapbuild_internal.h, done in v4.\n>\n> >\n> > 4)\n> > +static void ValidateSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk,\n> > + const char *path);\n> >\n> > Is it required? We generally don't add declaration unless required by\n> > compiler. Since definition is prior to usage, it is not needed?\n> >\n>\n> I personally prefer to add them even if not required by the compiler. I did not\n> pay attention that \"We generally don't add declaration unless required by compiler\"\n> and (after a quick check) I did not find any reference in the coding style\n> documentation [1]. That said, I don't have a strong opinion about that and so\n> removed in v4. Worth to add a mention in the coding convention doc?\n>\n\nOkay. I was somehow under the impression that this is the way in the\npostgres i.e. not add redundant declarations. Will be good to know\nwhat others think on this.\n\nThanks for addressing the comments. 
I have not started reviewing v4\nyet, but here are few more comments on v3:\n\n1)\n+#include \"port/pg_crc32c.h\"\n\nIt is not needed in pg_logicalinspect.c as it is already included in\ninternal_snapbuild.h\n\n2)\n+ values[0] = Int16GetDatum(ondisk.builder.state);\n........\n+ values[8] = LSNGetDatum(ondisk.builder.last_serialized_snapshot);\n+ values[9] = TransactionIdGetDatum(ondisk.builder.next_phase_at);\n+ values[10] = Int64GetDatum(ondisk.builder.committed.xcnt);\n\nWe can have values[i++] in all the places and later we can check :\nAssert(i == PG_GET_LOGICAL_SNAPSHOT_INFO_COLS);\nThen we need not to keep track of number even in later part of code,\nas it goes till 14.\n\n3)\nSimilar change can be done here:\n\n+ values[0] = Int32GetDatum(ondisk.magic);\n+ values[1] = Int32GetDatum(ondisk.checksum);\n+ values[2] = Int32GetDatum(ondisk.version);\n\ncheck at the end will be: Assert(i == PG_GET_LOGICAL_SNAPSHOT_META_COLS);\n\n4)\nMost of the output columns in pg_get_logical_snapshot_info() look\nself-explanatory except 'state'. Should we have meaningful 'text' here\ncorresponding to SnapBuildState? Similar to what we do for\n'invalidation_reason' in pg_replication_slots. (SlotInvalidationCauses\nfor ReplicationSlotInvalidationCause)\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 17 Sep 2024 10:18:35 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 17, 2024 at 10:18 AM shveta malik <[email protected]> wrote:\n>\n> Thanks for addressing the comments. I have not started reviewing v4\n> yet, but here are few more comments on v3:\n>\n\nI just noticed that when we pass NULL input, both the new functions\ngive 1 row as output, all cols as NULL:\n\nnewdb1=# SELECT * FROM pg_get_logical_snapshot_meta(NULL);\n magic | checksum | version\n-------+----------+---------\n | |\n\n(1 row)\n\nSimilar behavior with pg_get_logical_snapshot_info(). While the\nexisting 'pg_ls_logicalsnapdir' function gives this error, which looks\nmore meaningful:\n\nnewdb1=# select * from pg_ls_logicalsnapdir(NULL);\nERROR: function pg_ls_logicalsnapdir(unknown) does not exist\nLINE 1: select * from pg_ls_logicalsnapdir(NULL);\nHINT: No function matches the given name and argument types. You\nmight need to add explicit type casts.\n\n\nShouldn't the new functions have same behavior?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 17 Sep 2024 10:24:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Monday, September 16, 2024, shveta malik <[email protected]> wrote:\n\n> On Tue, Sep 17, 2024 at 10:18 AM shveta malik <[email protected]>\n> wrote:\n> >\n> > Thanks for addressing the comments. I have not started reviewing v4\n> > yet, but here are few more comments on v3:\n> >\n>\n> I just noticed that when we pass NULL input, both the new functions\n> give 1 row as output, all cols as NULL:\n>\n> newdb1=# SELECT * FROM pg_get_logical_snapshot_meta(NULL);\n> magic | checksum | version\n> -------+----------+---------\n> | |\n>\n> (1 row)\n>\n> Similar behavior with pg_get_logical_snapshot_info(). 
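For reference, if an explicit error (rather than a row of NULLs) were the behavior wanted here, the usual idiom in a non-strict C-language function is a check near the top, before fetching the argument -- a sketch only, not taken from the patch:

    /* sketch: explicit NULL-argument check for a non-strict C function */
    if (PG_ARGISNULL(0))
        ereport(ERROR,
                (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                 errmsg("LSN argument must not be NULL")));

    lsn = PG_GETARG_LSN(0);
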
While the\n> existing 'pg_ls_logicalsnapdir' function gives this error, which looks\n> more meaningful:\n>\n> newdb1=# select * from pg_ls_logicalsnapdir(NULL);\n> ERROR: function pg_ls_logicalsnapdir(unknown) does not exist\n> LINE 1: select * from pg_ls_logicalsnapdir(NULL);\n> HINT: No function matches the given name and argument types. You\n> might need to add explicit type casts.\n>\n>\n> Shouldn't the new functions have same behavior?\n>\n\nNo. Since the name pg_ls_logicalsnapdir has zero single-argument\nimplementations passing a null value as an argument is indeed attempt to\ninvoke a function signature that doesn’t exist.\n\nIf there is exactly one single input argument function of the given name\nthe parser is going to cast the null literal to the data type of the single\nargument and invoke the function. It will not and cannot be convinced to\nfail to find a matching function.\n\nI can see an argument that they should produce an empty set instead of a\nsingle all-null row, but the idea that they wouldn’t even be found is\ncontrary to a core design of the system.\n\nDavid J.\n\nOn Monday, September 16, 2024, shveta malik <[email protected]> wrote:On Tue, Sep 17, 2024 at 10:18 AM shveta malik <[email protected]> wrote:\n>\n> Thanks for addressing the comments. I have not started reviewing v4\n> yet, but here are few more comments on v3:\n>\n\nI just noticed that when we pass NULL input, both the new functions\ngive 1 row as output, all cols as NULL:\n\nnewdb1=# SELECT * FROM pg_get_logical_snapshot_meta(NULL);\n magic | checksum | version\n-------+----------+---------\n       |          |\n\n(1 row)\n\nSimilar behavior with pg_get_logical_snapshot_info(). While the\nexisting 'pg_ls_logicalsnapdir' function gives this error, which looks\nmore meaningful:\n\nnewdb1=# select * from pg_ls_logicalsnapdir(NULL);\nERROR:  function pg_ls_logicalsnapdir(unknown) does not exist\nLINE 1: select * from pg_ls_logicalsnapdir(NULL);\nHINT:  No function matches the given name and argument types. You\nmight need to add explicit type casts.\n\n\nShouldn't the new functions have same behavior?\nNo. Since the name pg_ls_logicalsnapdir has zero single-argument implementations passing a null value as an argument is indeed attempt to invoke a function signature that doesn’t exist.If there is exactly one single input argument function of the given name the parser is going to cast the null literal to the data type of the single argument and invoke the function.  It will not and cannot be convinced to fail to find a matching function.I can see an argument that they should produce an empty set instead of a single all-null row, but the idea that they wouldn’t even be found is contrary to a core design of the system.David J.", "msg_date": "Mon, 16 Sep 2024 22:16:16 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 16, 2024 at 10:16:16PM -0700, David G. Johnston wrote:\n> On Monday, September 16, 2024, shveta malik <[email protected]> wrote:\n> \n> > On Tue, Sep 17, 2024 at 10:18 AM shveta malik <[email protected]>\n> > wrote:\n> > >\n> > > Thanks for addressing the comments. 
I have not started reviewing v4\n> > > yet, but here are few more comments on v3:\n> > >\n> >\n> > I just noticed that when we pass NULL input, both the new functions\n> > give 1 row as output, all cols as NULL:\n> >\n> > newdb1=# SELECT * FROM pg_get_logical_snapshot_meta(NULL);\n> > magic | checksum | version\n> > -------+----------+---------\n> > | |\n> >\n> > (1 row)\n> >\n> > Similar behavior with pg_get_logical_snapshot_info(). While the\n> > existing 'pg_ls_logicalsnapdir' function gives this error, which looks\n> > more meaningful:\n> >\n> > newdb1=# select * from pg_ls_logicalsnapdir(NULL);\n> > ERROR: function pg_ls_logicalsnapdir(unknown) does not exist\n> > LINE 1: select * from pg_ls_logicalsnapdir(NULL);\n> > HINT: No function matches the given name and argument types. You\n> > might need to add explicit type casts.\n> >\n> >\n> > Shouldn't the new functions have same behavior?\n> >\n> \n> No. Since the name pg_ls_logicalsnapdir has zero single-argument\n> implementations passing a null value as an argument is indeed attempt to\n> invoke a function signature that doesn’t exist.\n\nAgree.\n\n> I can see an argument that they should produce an empty set instead of a\n> single all-null row,\n\nYeah, it's outside the scope of this patch but I've seen different behavior\nin this area.\n\nFor example:\n\npostgres=# select * from pg_ls_replslotdir(NULL);\n name | size | modification\n------+------+--------------\n(0 rows)\n\nas compared to:\n\npostgres=# select * from pg_walfile_name_offset(NULL);\n file_name | file_offset\n-----------+-------------\n |\n(1 row)\n\nI thought that it might be linked to the volatility but it is not:\n\npostgres=# select * from pg_stat_get_xact_blocks_fetched(NULL);\n pg_stat_get_xact_blocks_fetched\n---------------------------------\n\n(1 row)\n\npostgres=# select * from pg_get_multixact_members(NULL);\n xid | mode\n-----+------\n(0 rows)\n\nwhile both are volatile.\n\nI think both make sense: It's \"empty\" or we \"don't know the values of the fields\".\nI don't know if there is any reason about this \"inconsistency\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Sep 2024 07:05:52 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 17, 2024 at 10:18:35AM +0530, shveta malik wrote:\n> Thanks for addressing the comments. 
I have not started reviewing v4\n> yet, but here are few more comments on v3:\n> \n> 1)\n> +#include \"port/pg_crc32c.h\"\n> \n> It is not needed in pg_logicalinspect.c as it is already included in\n> internal_snapbuild.h\n\nYeap, forgot to remove that one when creating the new \"internal\".h file, done\nin v5 attached, thanks!\n\n> \n> 2)\n> + values[0] = Int16GetDatum(ondisk.builder.state);\n> ........\n> + values[8] = LSNGetDatum(ondisk.builder.last_serialized_snapshot);\n> + values[9] = TransactionIdGetDatum(ondisk.builder.next_phase_at);\n> + values[10] = Int64GetDatum(ondisk.builder.committed.xcnt);\n> \n> We can have values[i++] in all the places and later we can check :\n> Assert(i == PG_GET_LOGICAL_SNAPSHOT_INFO_COLS);\n> Then we need not to keep track of number even in later part of code,\n> as it goes till 14.\n\nRight, let's do it that way (as it is done in pg_walinspect for example).\n\n> 4)\n> Most of the output columns in pg_get_logical_snapshot_info() look\n> self-explanatory except 'state'. Should we have meaningful 'text' here\n> corresponding to SnapBuildState? Similar to what we do for\n> 'invalidation_reason' in pg_replication_slots. (SlotInvalidationCauses\n> for ReplicationSlotInvalidationCause)\n\nYeah we could. I was not sure about that (and that was my first remark in [1])\n, as the module is mainly for debugging purpose, I was thinking that the one\nusing it could refer to \"snapbuild.h\". Let's see what others think.\n\n[1]: https://www.postgresql.org/message-id/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 17 Sep 2024 07:14:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 17, 2024 at 10:46 AM David G. Johnston\n<[email protected]> wrote:\n>\n>\n>\n> On Monday, September 16, 2024, shveta malik <[email protected]> wrote:\n>>\n>> On Tue, Sep 17, 2024 at 10:18 AM shveta malik <[email protected]> wrote:\n>> >\n>> > Thanks for addressing the comments. I have not started reviewing v4\n>> > yet, but here are few more comments on v3:\n>> >\n>>\n>> I just noticed that when we pass NULL input, both the new functions\n>> give 1 row as output, all cols as NULL:\n>>\n>> newdb1=# SELECT * FROM pg_get_logical_snapshot_meta(NULL);\n>> magic | checksum | version\n>> -------+----------+---------\n>> | |\n>>\n>> (1 row)\n>>\n>> Similar behavior with pg_get_logical_snapshot_info(). While the\n>> existing 'pg_ls_logicalsnapdir' function gives this error, which looks\n>> more meaningful:\n>>\n>> newdb1=# select * from pg_ls_logicalsnapdir(NULL);\n>> ERROR: function pg_ls_logicalsnapdir(unknown) does not exist\n>> LINE 1: select * from pg_ls_logicalsnapdir(NULL);\n>> HINT: No function matches the given name and argument types. You\n>> might need to add explicit type casts.\n>>\n>>\n>> Shouldn't the new functions have same behavior?\n>\n>\n> No. Since the name pg_ls_logicalsnapdir has zero single-argument implementations passing a null value as an argument is indeed attempt to invoke a function signature that doesn’t exist.\n>\n> If there is exactly one single input argument function of the given name the parser is going to cast the null literal to the data type of the single argument and invoke the function. 
It will not and cannot be convinced to fail to find a matching function.\n\nOkay, understood. Thanks for explaining.\n\n> I can see an argument that they should produce an empty set instead of a single all-null row, but the idea that they wouldn’t even be found is contrary to a core design of the system.\n\nOkay, a single row can be investigated if it comes under this scope.\nBut I see why 'ERROR' is not a possibility here.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 17 Sep 2024 15:19:16 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 17, 2024 at 12:44 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Sep 17, 2024 at 10:18:35AM +0530, shveta malik wrote:\n> > Thanks for addressing the comments. I have not started reviewing v4\n> > yet, but here are few more comments on v3:\n> >\n> > 1)\n> > +#include \"port/pg_crc32c.h\"\n> >\n> > It is not needed in pg_logicalinspect.c as it is already included in\n> > internal_snapbuild.h\n>\n> Yeap, forgot to remove that one when creating the new \"internal\".h file, done\n> in v5 attached, thanks!\n>\n> >\n> > 2)\n> > + values[0] = Int16GetDatum(ondisk.builder.state);\n> > ........\n> > + values[8] = LSNGetDatum(ondisk.builder.last_serialized_snapshot);\n> > + values[9] = TransactionIdGetDatum(ondisk.builder.next_phase_at);\n> > + values[10] = Int64GetDatum(ondisk.builder.committed.xcnt);\n> >\n> > We can have values[i++] in all the places and later we can check :\n> > Assert(i == PG_GET_LOGICAL_SNAPSHOT_INFO_COLS);\n> > Then we need not to keep track of number even in later part of code,\n> > as it goes till 14.\n>\n> Right, let's do it that way (as it is done in pg_walinspect for example).\n>\n> > 4)\n> > Most of the output columns in pg_get_logical_snapshot_info() look\n> > self-explanatory except 'state'. Should we have meaningful 'text' here\n> > corresponding to SnapBuildState? Similar to what we do for\n> > 'invalidation_reason' in pg_replication_slots. (SlotInvalidationCauses\n> > for ReplicationSlotInvalidationCause)\n>\n> Yeah we could. I was not sure about that (and that was my first remark in [1])\n> , as the module is mainly for debugging purpose, I was thinking that the one\n> using it could refer to \"snapbuild.h\". Let's see what others think.\n>\n\nokay, makes sense. lets wait what others have to say.\n\nThanks for the patch. Few trivial things:\n\n1)\nMay be we shall change 'INTERNAL_SNAPBUILD_H' in snapbuild_internal.h\nto 'SNAPBUILD_INTERNAL_H'?\n\n2)\nValidateSnapshotFile()\n\nIt is not only validating, but loading the content as well. So may be\nwe can rename to ValidateAndRestoreSnapshotFile?\n\n3) sgml:\na)\n+ The pg_logicalinspect functions are called using an LSN argument\nthat can be extracted from the output name of the\npg_ls_logicalsnapdir() function.\n\nIs it possible to give link to pg_ls_logicalsnapdir function here?\n\nb)\n+ Gets logical snapshot metadata about a snapshot file that is located\nin the pg_logical/snapshots directory.\n\nlocated in server's pg_logical/snapshots directory\n (i.e. 
use server keyword, similar to how pg_ls_logicalsnapdir ,\npg_ls_logicalmapdir explains it)\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 18 Sep 2024 11:33:08 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 18, 2024 at 11:33:08AM +0530, shveta malik wrote:\n> Thanks for the patch. Few trivial things:\n> \n> 1)\n> May be we shall change 'INTERNAL_SNAPBUILD_H' in snapbuild_internal.h\n> to 'SNAPBUILD_INTERNAL_H'?\n\nIndeed, done in v6 attached, thanks!\n\n> 2)\n> ValidateSnapshotFile()\n> \n> It is not only validating, but loading the content as well. So may be\n> we can rename to ValidateAndRestoreSnapshotFile?\n\nI see what you mean, we're also populating the SnapBuildOnDisk. I think your\nproposal makes sense, done that way in v6.\n\n> 3) sgml:\n> a)\n> + The pg_logicalinspect functions are called using an LSN argument\n> that can be extracted from the output name of the\n> pg_ls_logicalsnapdir() function.\n> \n> Is it possible to give link to pg_ls_logicalsnapdir function here?\n\nYes but I'm not sure that's needed. A quick \"git grep \"<function>\" \"*.sgml\"\"\nseems to show that providing a link is not that common.\n\n> b)\n> + Gets logical snapshot metadata about a snapshot file that is located\n> in the pg_logical/snapshots directory.\n> \n> located in server's pg_logical/snapshots directory\n> (i.e. use server keyword, similar to how pg_ls_logicalsnapdir ,\n> pg_ls_logicalmapdir explains it)\n\nAgree, done that way in v6.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Sep 2024 09:01:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "HI, here are some mostly minor review comments for the patch v5-0001.\n\n======\nCommit message\n\n1.\nDo you think you should also name the new functions here?\n\n======\ncontrib/pg_logicalinspect/pg_logicalinspect.c\n\n2.\nRegarding the question about static function declarations:\n\nShveta wrote: I was somehow under the impression that this is the way\nin the postgres i.e. not add redundant declarations. Will be good to\nknow what others think on this.\n\nFWIW, my understanding is the convention is just to be consistent with\nwhatever the module currently does. If it declares static functions,\nthen declare them all (redundant or not). If it doesn't declare static\nfunctions, then don't add one. But, in the current case, since this is\na new module, I guess it is entirely up to you whatever you want to\ndo.\n\n~~~\n\n3.\n+/*\n+ * NOTE: For any code change or issue fix here, it is highly recommended to\n+ * give a thought about doing the same in SnapBuildRestore() as well.\n+ */\n+\n\nnit - I think this NOTE should be part of this module's header\ncomment. (e.g. like the tablesync.c NOTES)\n\n~~~\n\nValidateSnapshotFile:\n\n4.\n+ValidateSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk, const char *path)\n+{\n+ int fd;\n+ Size sz;\n\nnit - The 'sz' is overwritten a few times. 
I thnk declaring it at each\nscope where used would be tidier.\n\n~~~\n\n5.\n+ fsync_fname(path, false);\n+ fsync_fname(PG_LOGICAL_SNAPSHOTS_DIR, true);\n+\n+\n\nnit - remove some excessive blank lines\n\n~~~\n\n6.\n+ /* read statically sized portion of snapshot */\n+ SnapBuildRestoreContents(fd, (char *) ondisk,\nSnapBuildOnDiskConstantSize, path);\n\nShould that say \"fixed size portion\"?\n\n~~~\n\npg_get_logical_snapshot_info:\n\n7.\n+ if (ondisk.builder.committed.xcnt > 0)\n+ {\n+ Datum *arrayelems;\n+ int narrayelems;\n+\n+ arrayelems = (Datum *) palloc(ondisk.builder.committed.xcnt * sizeof(Datum));\n+ narrayelems = 0;\n+\n+ for (narrayelems = 0; narrayelems < ondisk.builder.committed.xcnt;\nnarrayelems++)\n+ arrayelems[narrayelems] = Int64GetDatum((int64)\nondisk.builder.committed.xip[narrayelems]);\n\nnit - Why the double assignment of narrayelems = 0? It is simpler to\nassign at the declaration and then remove both others.\n\n~~~\n\n8.\n+ if (ondisk.builder.catchange.xcnt > 0)\n+ {\n+ Datum *arrayelems;\n+ int narrayelems;\n+\n+ arrayelems = (Datum *) palloc(ondisk.builder.catchange.xcnt * sizeof(Datum));\n+ narrayelems = 0;\n+\n+ for (narrayelems = 0; narrayelems < ondisk.builder.catchange.xcnt;\nnarrayelems++)\n+ arrayelems[narrayelems] = Int64GetDatum((int64)\nondisk.builder.catchange.xip[narrayelems]);\n\nnit - ditto previous comment\n\n======\ndoc/src/sgml/pglogicalinspect.sgml\n\n9.\n+ <para>\n+ The <filename>pg_logicalinspect</filename> module provides SQL functions\n+ that allow you to inspect the contents of logical decoding components. It\n+ allows to inspect serialized logical snapshots of a running\n+ <productname>PostgreSQL</productname> database cluster, which is useful\n+ for debugging or educational purposes.\n+ </para>\n\nnit - /It allows to inspect/It allows the inspection of/\n\n~~~\n\n10.\n+ example:\n\nnit - /example:/For example:/ (this is in a couple of places)\n\n======\nsrc/include/replication/snapbuild_internal.h\n\n11.\n+#ifndef INTERNAL_SNAPBUILD_H\n+#define INTERNAL_SNAPBUILD_H\n\nShouldn't these be SNAPBUILD_INTERNAL_H to match the filename?\n\n~~~\n\n12.\nThe contents of the snapbuild.c that got moved into\nsnapbuild_internal.h also got shuffled around a bit.\n\ne.g. 
originally the typedef struct SnapBuildOnDisk:\n\n+/*\n+ * We store current state of struct SnapBuild on disk in the following manner:\n+ *\n+ * struct SnapBuildOnDisk;\n+ * TransactionId * committed.xcnt; (*not xcnt_space*)\n+ * TransactionId * catchange.xcnt;\n+ *\n+ */\n+typedef struct SnapBuildOnDisk\n\nwas directly beneath the comment:\n-/* -----------------------------------\n- * Snapshot serialization support\n- * -----------------------------------\n- */\n-\n\nThe macros were also defined immediately after the SnapBuildOnDisk\nfields they referred to.\n\nWasn't that original ordering better than how it is now ordered in\nsnapshot_internal.h?\n\n======\n\nPlease also see the attachment, which implements some of those nits\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 18 Sep 2024 19:52:51 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 18, 2024 at 07:52:51PM +1000, Peter Smith wrote:\n> HI, here are some mostly minor review comments for the patch v5-0001.\n> \n\nThanks for the review!\n\n> ======\n> Commit message\n> \n> 1.\n> Do you think you should also name the new functions here?\n\nNot sure about this one. It has not been done in 2258e76f90 for example.\n\n> \n> ======\n> contrib/pg_logicalinspect/pg_logicalinspect.c\n> \n> 2.\n> Regarding the question about static function declarations:\n> \n> Shveta wrote: I was somehow under the impression that this is the way\n> in the postgres i.e. not add redundant declarations. Will be good to\n> know what others think on this.\n> \n> FWIW, my understanding is the convention is just to be consistent with\n> whatever the module currently does. If it declares static functions,\n> then declare them all (redundant or not). If it doesn't declare static\n> functions, then don't add one. But, in the current case, since this is\n> a new module, I guess it is entirely up to you whatever you want to\n> do.\n>\n\nThanks for the feedback and sharing your thoughts. I don't have a strong opinion\non this (though I tend to write the declaration(s)). I see it as a Nit, so let\njust keep it as done in v6.\n \n> ~~~\n> \n> 3.\n> +/*\n> + * NOTE: For any code change or issue fix here, it is highly recommended to\n> + * give a thought about doing the same in SnapBuildRestore() as well.\n> + */\n> +\n> \n> nit - I think this NOTE should be part of this module's header\n> comment. (e.g. like the tablesync.c NOTES)\n>\n\nNot sure about this one. I took pg_walinspect.c as an example. And we may want\nto add more functionalities in the future that could have nothing to do with\nSnapBuildRestore(). I think that I prefer where it is located currently (near the\ncode that \"looks like\" SnapBuildRestore()).\n \n> ~~~\n> \n> ValidateSnapshotFile:\n> \n> 4.\n> +ValidateSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk, const char *path)\n> +{\n> + int fd;\n> + Size sz;\n> \n> nit - The 'sz' is overwritten a few times. I thnk declaring it at each\n> scope where used would be tidier.\n\nI see what you mean. I think it's a matter of taste and I generally also prefer\nto do it the way you propose. That said, it's mainly inspired from SnapBuildRestore(),\nso I think it's better to try to differ from it the less that we can.\n\n> \n> ~~~\n> \n> 5.\n> + fsync_fname(path, false);\n> + fsync_fname(PG_LOGICAL_SNAPSHOTS_DIR, true);\n> +\n> +\n> \n> nit - remove some excessive blank lines\n\nAgree. 
While it's the same as in SnapBuildRestore(), I'm ok to do this particular\nchange, done in v7 attached.\n\n> \n> ~~~\n> \n> 6.\n> + /* read statically sized portion of snapshot */\n> + SnapBuildRestoreContents(fd, (char *) ondisk,\n> SnapBuildOnDiskConstantSize, path);\n> \n> Should that say \"fixed size portion\"?\n>\n\nMaybe, but same remark as for 4. (though that's only a comment).\n \n> ~~~\n> \n> pg_get_logical_snapshot_info:\n> \n> 7.\n> + if (ondisk.builder.committed.xcnt > 0)\n> + {\n> + Datum *arrayelems;\n> + int narrayelems;\n> +\n> + arrayelems = (Datum *) palloc(ondisk.builder.committed.xcnt * sizeof(Datum));\n> + narrayelems = 0;\n> +\n> + for (narrayelems = 0; narrayelems < ondisk.builder.committed.xcnt;\n> narrayelems++)\n> + arrayelems[narrayelems] = Int64GetDatum((int64)\n> ondisk.builder.committed.xip[narrayelems]);\n> \n> nit - Why the double assignment of narrayelems = 0?\n\nProbably fat fingers when writting this part.\n\n> assign at the declaration and then remove both others.\n\nyeah, done.\n \n> ======\n> doc/src/sgml/pglogicalinspect.sgml\n> \n> 9.\n> + <para>\n> + The <filename>pg_logicalinspect</filename> module provides SQL functions\n> + that allow you to inspect the contents of logical decoding components. It\n> + allows to inspect serialized logical snapshots of a running\n> + <productname>PostgreSQL</productname> database cluster, which is useful\n> + for debugging or educational purposes.\n> + </para>\n> \n> nit - /It allows to inspect/It allows the inspection of/\n\nDone.\n\n> \n> ~~~\n> \n> 10.\n> + example:\n> \n> nit - /example:/For example:/ (this is in a couple of places)\n\nDone.\n\n> \n> ======\n> src/include/replication/snapbuild_internal.h\n> \n> 11.\n> +#ifndef INTERNAL_SNAPBUILD_H\n> +#define INTERNAL_SNAPBUILD_H\n> \n> Shouldn't these be SNAPBUILD_INTERNAL_H to match the filename?\n\nYeah, was already mentioned by Shveta up-thread and fixed in v6.\n\n> ~~~\n> \n> 12.\n> The contents of the snapbuild.c that got moved into\n> snapbuild_internal.h also got shuffled around a bit.\n> \n> e.g. originally the typedef struct SnapBuildOnDisk:\n> \n> +/*\n> + * We store current state of struct SnapBuild on disk in the following manner:\n> + *\n> + * struct SnapBuildOnDisk;\n> + * TransactionId * committed.xcnt; (*not xcnt_space*)\n> + * TransactionId * catchange.xcnt;\n> + *\n> + */\n> +typedef struct SnapBuildOnDisk\n> \n> was directly beneath the comment:\n> -/* -----------------------------------\n> - * Snapshot serialization support\n> - * -----------------------------------\n> - */\n> -\n>\n\nMoving it to the same place in v7.\n \n> The macros were also defined immediately after the SnapBuildOnDisk\n> fields they referred to.\n> \n\nMoving them to the same place in v7.\n\n> Please also see the attachment, which implements some of those nits\n> mentioned above.\n\nWhile I did not implement all the nits you mentioned, I really appreciate that\nyou added this attachment, thanks! 
That's a nice idea and I will try to do the\nsame for my reviews..\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Sep 2024 18:47:07 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Thanks for the updated patch.\n\nHere are a few more trivial comments for the patch v7-0001.\n\n======\n\n1.\nShould the extension descriptions all be identical?\n\nI noticed small variations:\n\ncontrib/pg_logicalinspect/Makefile\n+PGFILEDESC = \"pg_logicalinspect - functions to inspect logical\ndecoding components\"\n\ncontrib/pg_logicalinspect/meson.build\n+ '--FILEDESC', 'pg_logicalinspect - functions to inspect contents\nof logical snapshots',])\n\ncontrib/pg_logicalinspect/pg_logicalinspect.control\n+comment = 'functions to inspect logical decoding components'\n\n======\n.../expected/logical_inspect.out\n\n2\n+step s1_get_logical_snapshot_info: SELECT\n(pg_get_logical_snapshot_info(f.name::pg_lsn)).state,(pg_get_logical_snapshot_info(f.name::pg_lsn)).catchange_count,array_length((pg_get_logical_snapshot_info(f.name::pg_lsn)).catchange_xip,1),(pg_get_logical_snapshot_info(f.name::pg_lsn)).committed_count,array_length((pg_get_logical_snapshot_info(f.name::pg_lsn)).committed_xip,1)\nFROM (SELECT replace(replace(name,'.snap',''),'-','/') AS name FROM\npg_ls_logicalsnapdir()) AS f ORDER BY 2;\n+state|catchange_count|array_length|committed_count|array_length\n+-----+---------------+------------+---------------+------------\n+ 2| 0| | 2| 2\n+ 2| 2| 2| 0|\n+(2 rows)\n+\n\n2a.\nWould it be better to rearrange those columns so 'committed' stuff\ncomes before 'catchange' stuff, to make this table order consistent\nwith the structure/code?\n\n~\n\n2b.\nMaybe those 2 'array_length' columns could have aliases to uniquely\nidentify them?\ne.g. 'catchange_array_length' and 'committed_array_length'.\n\n======\ncontrib/pg_logicalinspect/pg_logicalinspect.c\n\n3.\n+/*\n+ * Validate the logical snapshot file.\n+ */\n+static void\n+ValidateAndRestoreSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk,\n+ const char *path)\n\nSince the name was updated then should the function comment also be\nupdated to include something like the SnapBuildRestoreContents\nfunction comment? e.g. \"Validate the logical snapshot file, and read\nthe contents of the serialized snapshot to 'ondisk'.\"\n\n~~~\n\npg_get_logical_snapshot_info:\n\n4.\nnit - Add/remove some blank lines to help visually associate the array\ncounts with their arrays.\n\n======\n.../specs/logical_inspect.spec\n\n5.\n+setup\n+{\n+ DROP TABLE IF EXISTS tbl1;\n+ CREATE TABLE tbl1 (val1 integer, val2 integer);\n+ CREATE EXTENSION pg_logicalinspect;\n+}\n+\n+teardown\n+{\n+ DROP TABLE tbl1;\n+ SELECT 'stop' FROM pg_drop_replication_slot('isolation_slot');\n+ DROP EXTENSION pg_logicalinspect;\n+}\n\nDifferent indentation for the CREATE/DROP EXTENSION?\n\n======\n\nThe attached file shows the whitespace nit (#4)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 19 Sep 2024 10:08:19 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 17, 2024 at 12:44 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Sep 17, 2024 at 10:18:35AM +0530, shveta malik wrote:\n> > Thanks for addressing the comments. 
I have not started reviewing v4\n> > yet, but here are few more comments on v3:\n> >\n>\n> > 4)\n> > Most of the output columns in pg_get_logical_snapshot_info() look\n> > self-explanatory except 'state'. Should we have meaningful 'text' here\n> > corresponding to SnapBuildState? Similar to what we do for\n> > 'invalidation_reason' in pg_replication_slots. (SlotInvalidationCauses\n> > for ReplicationSlotInvalidationCause)\n>\n> Yeah we could. I was not sure about that (and that was my first remark in [1])\n> , as the module is mainly for debugging purpose, I was thinking that the one\n> using it could refer to \"snapbuild.h\". Let's see what others think.\n>\n\nDisplaying the 'text' for the state column makes it easy to\nunderstand. So, +1 for doing it that way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Sep 2024 11:00:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Thu, Sep 19, 2024 at 12:17 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the review!\n>\n\nThanks for the patch.\n\nShould we include in the document who can execute these functions and\nthe required access permissions, similar to how it's done for\npgwalinspect, pg_ls_logicalmapdir(), and other such functions?\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:10:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Thu, Sep 19, 2024 at 10:08:19AM +1000, Peter Smith wrote:\n> Thanks for the updated patch.\n> \n> ======\n> .../expected/logical_inspect.out\n> \n> 2\n> +step s1_get_logical_snapshot_info: SELECT\n> (pg_get_logical_snapshot_info(f.name::pg_lsn)).state,(pg_get_logical_snapshot_info(f.name::pg_lsn)).catchange_count,array_length((pg_get_logical_snapshot_info(f.name::pg_lsn)).catchange_xip,1),(pg_get_logical_snapshot_info(f.name::pg_lsn)).committed_count,array_length((pg_get_logical_snapshot_info(f.name::pg_lsn)).committed_xip,1)\n> FROM (SELECT replace(replace(name,'.snap',''),'-','/') AS name FROM\n> pg_ls_logicalsnapdir()) AS f ORDER BY 2;\n> +state|catchange_count|array_length|committed_count|array_length\n> +-----+---------------+------------+---------------+------------\n> + 2| 0| | 2| 2\n> + 2| 2| 2| 0|\n> +(2 rows)\n> +\n> \n> 2a.\n> Would it be better to rearrange those columns so 'committed' stuff\n> comes before 'catchange' stuff, to make this table order consistent\n> with the structure/code?\n\nI'm not sure that's a good idea to create a \"dependency\" between the test output\nand the code. I think that could be hard to \"ensure\" in the mid-long term.\n\nPlease find attached v8, that:\n\n- takes care of your comments (except the one above)\n- takes care of Shveta comment [1]\n- displays the 'text' for the state column (as confirmed by Amit [2]): Note that\nthe enum -> text mapping is done in pg_logicalinspect.c and a comment has been\nadded in snapbuild.h (near the SnapBuildState definition). 
I thought it makes\nmore sense to do it that way instead of implementing the enum -> text mapping\nin snapbuild.h (as the mapping is only used by the module).\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uDJ65QHUZfww7n6TBZAGp-SP74P5U3fUorV%2B%3DbaaRu6Dw%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAA4eK1JgW1o9wOTwgRJ9%2BbQkYcr3iRWAQHoL9eBC%2BrmoQoHZ%3DQ%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 20 Sep 2024 06:52:17 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "My review comments for v8-0001\n\n======\ncontrib/pg_logicalinspect/pg_logicalinspect.c\n\n1.\n+/*\n+ * Lookup table for SnapBuildState.\n+ */\n+\n+#define SNAPBUILD_STATE_INCR 1\n+\n+static const char *const SnapBuildStateDesc[] = {\n+ [SNAPBUILD_START + SNAPBUILD_STATE_INCR] = \"start\",\n+ [SNAPBUILD_BUILDING_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"building\",\n+ [SNAPBUILD_FULL_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"full\",\n+ [SNAPBUILD_CONSISTENT + SNAPBUILD_STATE_INCR] = \"consistent\",\n+};\n+\n+/*\n\nnit - the SNAPBUILD_STATE_INCR made this code appear more complicated\nthan it is. Please take a look at the attachment for an alternative\nimplementation which includes an explanatory comment. YMMV. Feel free\nto ignore it.\n\n======\nsrc/include/replication/snapbuild.h\n\n2.\n+ * Please keep SnapBuildStateDesc[] (located in the pg_logicalinspect module)\n+ * updated should a change needs to be done in SnapBuildState.\n\nnit - \"...should a change needs to be done\" -- the word \"needs\" is\nincorrect here.\n\nHow about:\n\"...if a change needs to be made to SnapBuildState.\"\n\"...if a change is made to SnapBuildState.\"\n\"...if SnapBuildState is changed.\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 23 Sep 2024 12:27:27 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Fri, Sep 20, 2024 at 12:22 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n>\n> Please find attached v8, that:\n>\n\nThank You for the patch. In one of my tests, I noticed that I got\nnegative checksum:\n\npostgres=# SELECT * FROM pg_get_logical_snapshot_meta('0/3481F20');\n magic | checksum | version\n------------+------------+---------\n 1369563137 | -266346460 | 6\n\nBut pg_crc32c is uint32. 
Is it because we are getting it as\nInt32GetDatum(ondisk.checksum) in pg_get_logical_snapshot_meta()?\nInstead should it be UInt32GetDatum?\n\nSame goes for below:\nvalues[i++] = Int32GetDatum(ondisk.magic);\nvalues[i++] = Int32GetDatum(ondisk.magic);\n\nWe need to recheck the rest of the fields in the info() function as well.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:15:31 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Mon, Sep 23, 2024 at 7:57 AM Peter Smith <[email protected]> wrote:\n>\n> My review comments for v8-0001\n>\n> ======\n> contrib/pg_logicalinspect/pg_logicalinspect.c\n>\n> 1.\n> +/*\n> + * Lookup table for SnapBuildState.\n> + */\n> +\n> +#define SNAPBUILD_STATE_INCR 1\n> +\n> +static const char *const SnapBuildStateDesc[] = {\n> + [SNAPBUILD_START + SNAPBUILD_STATE_INCR] = \"start\",\n> + [SNAPBUILD_BUILDING_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"building\",\n> + [SNAPBUILD_FULL_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"full\",\n> + [SNAPBUILD_CONSISTENT + SNAPBUILD_STATE_INCR] = \"consistent\",\n> +};\n> +\n> +/*\n>\n> nit - the SNAPBUILD_STATE_INCR made this code appear more complicated\n> than it is.\n\nI agree.\n\n> Please take a look at the attachment for an alternative\n> implementation which includes an explanatory comment. YMMV. Feel free\n> to ignore it.\n>\n\n+1. I find Peter's version with comments easier to understand.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:35:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 23, 2024 at 12:27:27PM +1000, Peter Smith wrote:\n> My review comments for v8-0001\n> \n> ======\n> contrib/pg_logicalinspect/pg_logicalinspect.c\n> \n> 1.\n> +/*\n> + * Lookup table for SnapBuildState.\n> + */\n> +\n> +#define SNAPBUILD_STATE_INCR 1\n> +\n> +static const char *const SnapBuildStateDesc[] = {\n> + [SNAPBUILD_START + SNAPBUILD_STATE_INCR] = \"start\",\n> + [SNAPBUILD_BUILDING_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"building\",\n> + [SNAPBUILD_FULL_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"full\",\n> + [SNAPBUILD_CONSISTENT + SNAPBUILD_STATE_INCR] = \"consistent\",\n> +};\n> +\n> +/*\n> \n> nit - the SNAPBUILD_STATE_INCR made this code appear more complicated\n> than it is. Please take a look at the attachment for an alternative\n> implementation which includes an explanatory comment. YMMV. Feel free\n> to ignore it.\n\nThanks for the feedback!\n\nI like the commment, so added it in v9 attached. 
OTOH I think that's better\nto keep SNAPBUILD_STATE_INCR as those \"+1\" are all linked and that would be\neasy to miss the one in pg_get_logical_snapshot_info() should we change the\nincrement in the future.\n\n> ======\n> src/include/replication/snapbuild.h\n> \n> 2.\n> + * Please keep SnapBuildStateDesc[] (located in the pg_logicalinspect module)\n> + * updated should a change needs to be done in SnapBuildState.\n> \n> nit - \"...should a change needs to be done\" -- the word \"needs\" is\n> incorrect here.\n> \n> How about:\n> \"...if a change needs to be made to SnapBuildState.\"\n\nThanks, used this one in v9.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uCppUNdod4F3NaPpMCtrySdw1S0T1d8CA-2c4CX%3DShMOQ%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 24 Sep 2024 16:51:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Tue, Sep 24, 2024 at 09:15:31AM +0530, shveta malik wrote:\n> On Fri, Sep 20, 2024 at 12:22 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> >\n> > Please find attached v8, that:\n> >\n> \n> Thank You for the patch. In one of my tests, I noticed that I got\n> negative checksum:\n> \n> postgres=# SELECT * FROM pg_get_logical_snapshot_meta('0/3481F20');\n> magic | checksum | version\n> ------------+------------+---------\n> 1369563137 | -266346460 | 6\n> \n> But pg_crc32c is uint32. Is it because we are getting it as\n> Int32GetDatum(ondisk.checksum) in pg_get_logical_snapshot_meta()?\n> Instead should it be UInt32GetDatum?\n\nThanks for the testing.\n\nAs the checksum could be > 2^31 - 1, then v9 (just shared up-thread) changes it\nto an int8 in the pg_logicalinspect--1.0.sql file. 
So, to avoid CI failure on\nthe 32bit build, then v9 is using Int64GetDatum() instead of UInt32GetDatum().\n\n> Same goes for below:\n> values[i++] = Int32GetDatum(ondisk.magic);\n> values[i++] = Int32GetDatum(ondisk.magic);\n\nThe 2 others field (magic and version) are unlikely to be > 2^31 - 1, so v9 is\nmaking use of UInt32GetDatum() and keep int4 in the sql file.\n\n> We need to recheck the rest of the fields in the info() function as well.\n\nI think that the pg_get_logical_snapshot_info()'s fields are ok (I did spend some\ntime to debug CI failing on the 32bit build for some on them before submitting v1).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:53:26 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Wed, Sep 25, 2024 at 2:51 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Sep 23, 2024 at 12:27:27PM +1000, Peter Smith wrote:\n> > My review comments for v8-0001\n> >\n> > ======\n> > contrib/pg_logicalinspect/pg_logicalinspect.c\n> >\n> > 1.\n> > +/*\n> > + * Lookup table for SnapBuildState.\n> > + */\n> > +\n> > +#define SNAPBUILD_STATE_INCR 1\n> > +\n> > +static const char *const SnapBuildStateDesc[] = {\n> > + [SNAPBUILD_START + SNAPBUILD_STATE_INCR] = \"start\",\n> > + [SNAPBUILD_BUILDING_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"building\",\n> > + [SNAPBUILD_FULL_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"full\",\n> > + [SNAPBUILD_CONSISTENT + SNAPBUILD_STATE_INCR] = \"consistent\",\n> > +};\n> > +\n> > +/*\n> >\n> > nit - the SNAPBUILD_STATE_INCR made this code appear more complicated\n> > than it is. Please take a look at the attachment for an alternative\n> > implementation which includes an explanatory comment. YMMV. Feel free\n> > to ignore it.\n>\n> Thanks for the feedback!\n>\n> I like the commment, so added it in v9 attached. OTOH I think that's better\n> to keep SNAPBUILD_STATE_INCR as those \"+1\" are all linked and that would be\n> easy to miss the one in pg_get_logical_snapshot_info() should we change the\n> increment in the future.\n>\n\nI see SNAPBUILD_STATE_INCR more as an \"offset\" (to get the lowest enum\nvalue to be at lookup index [0]) than an \"increment\" (between the enum\nvalues), so I'd be naming that differently. But, maybe I am straying\ninto just personal opinion instead of giving useful feedback, so let's\nsay I have no more review comments. Patch v9 looks OK to me.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 25 Sep 2024 09:42:01 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Tue, Sep 24, 2024 at 10:23 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Sep 24, 2024 at 09:15:31AM +0530, shveta malik wrote:\n> > On Fri, Sep 20, 2024 at 12:22 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > >\n> > > Please find attached v8, that:\n> > >\n> >\n> > Thank You for the patch. In one of my tests, I noticed that I got\n> > negative checksum:\n> >\n> > postgres=# SELECT * FROM pg_get_logical_snapshot_meta('0/3481F20');\n> > magic | checksum | version\n> > ------------+------------+---------\n> > 1369563137 | -266346460 | 6\n> >\n> > But pg_crc32c is uint32. 
Is it because we are getting it as\n> > Int32GetDatum(ondisk.checksum) in pg_get_logical_snapshot_meta()?\n> > Instead should it be UInt32GetDatum?\n>\n> Thanks for the testing.\n>\n> As the checksum could be > 2^31 - 1, then v9 (just shared up-thread) changes it\n> to an int8 in the pg_logicalinspect--1.0.sql file. So, to avoid CI failure on\n> the 32bit build, then v9 is using Int64GetDatum() instead of UInt32GetDatum().\n>\n\nOkay, looks good,\n\n> > Same goes for below:\n> > values[i++] = Int32GetDatum(ondisk.magic);\n> > values[i++] = Int32GetDatum(ondisk.magic);\n>\n> The 2 others field (magic and version) are unlikely to be > 2^31 - 1, so v9 is\n> making use of UInt32GetDatum() and keep int4 in the sql file.\n>\n> > We need to recheck the rest of the fields in the info() function as well.\n>\n> I think that the pg_get_logical_snapshot_info()'s fields are ok (I did spend some\n> time to debug CI failing on the 32bit build for some on them before submitting v1).\n>\n\n+ OUT catchange_xip xid[]\n\nOne question, what is xid datatype, is it too int8? Sorry, could not\nfind the correct doc. Since we are getting uint32 in Int64, this also\nneeds to be accordingly.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 25 Sep 2024 11:23:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Is there a reason for this elaborate error handling:\n\n+\tfd = OpenTransientFile(path, O_RDONLY | PG_BINARY);\n+\n+\tif (fd < 0 && errno == ENOENT)\n+\t\tereport(ERROR,\n+\t\t\t\terrmsg(\"file \\\"%s\\\" does not exist\", path));\n+\telse if (fd < 0)\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode_for_file_access(),\n+\t\t\t\t errmsg(\"could not open file \\\"%s\\\": %m\", path)));\n\nCouldn't you just use the second branch for all errno's?\n\n\n", "msg_date": "Wed, 25 Sep 2024 16:04:43 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 25, 2024 at 04:04:43PM +0200, Peter Eisentraut wrote:\n> Is there a reason for this elaborate error handling:\n\nThanks for looking at it!\n\n> +\tfd = OpenTransientFile(path, O_RDONLY | PG_BINARY);\n> +\n> +\tif (fd < 0 && errno == ENOENT)\n> +\t\tereport(ERROR,\n> +\t\t\t\terrmsg(\"file \\\"%s\\\" does not exist\", path));\n> +\telse if (fd < 0)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t errmsg(\"could not open file \\\"%s\\\": %m\", path)));\n> \n> Couldn't you just use the second branch for all errno's?\n\nYeah, I think it comes from copying/pasting from SnapBuildRestore() too \"quickly\".\nv10 attached uses the second branch only.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 25 Sep 2024 17:29:09 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 25, 2024 at 11:23:17AM +0530, shveta malik wrote:\n> + OUT catchange_xip xid[]\n> \n> One question, what is xid datatype, is it too int8? 
Sorry, could not\n> find the correct doc.\n\nI think that we can get the answer from pg_type:\n\npostgres=# select typname,typlen from pg_type where typname = 'xid';\n typname | typlen\n---------+--------\n xid | 4\n(1 row)\n\n> Since we are getting uint32 in Int64, this also needs to be accordingly.\n\nI think the way it is currently done is fine: we're dealing with TransactionId\n(and not with FullTransactionId). So, the Int64GetDatum() output would still\nstay in the \"xid\" range. Keeping xid in the .sql makes it clear that we are\ndealing with transaction ID.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Sep 2024 17:31:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 25, 2024 at 10:29 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Sep 25, 2024 at 04:04:43PM +0200, Peter Eisentraut wrote:\n> > Is there a reason for this elaborate error handling:\n>\n> Thanks for looking at it!\n>\n> > + fd = OpenTransientFile(path, O_RDONLY | PG_BINARY);\n> > +\n> > + if (fd < 0 && errno == ENOENT)\n> > + ereport(ERROR,\n> > + errmsg(\"file \\\"%s\\\" does not exist\", path));\n> > + else if (fd < 0)\n> > + ereport(ERROR,\n> > + (errcode_for_file_access(),\n> > + errmsg(\"could not open file \\\"%s\\\": %m\", path)));\n> >\n> > Couldn't you just use the second branch for all errno's?\n>\n> Yeah, I think it comes from copying/pasting from SnapBuildRestore() too \"quickly\".\n> v10 attached uses the second branch only.\n>\n\nI've reviewed v10 patch and here are some comments:\n\n +static void\n +ValidateAndRestoreSnapshotFile(XLogRecPtr lsn, SnapBuildOnDisk *ondisk,\n + const char *path)\n\nThis function and SnapBuildRestore() have duplicate codes. Can we have\na common function that reads the snapshot data from disk to\nSnapBuildOnDisk, and have both ValidateAndRestoreSnapshotFile() and\nSnapBuildRestore() call it?\n\n---\n+CREATE FUNCTION pg_get_logical_snapshot_meta(IN in_lsn pg_lsn,\n(snip)\n+CREATE FUNCTION pg_get_logical_snapshot_info(IN in_lsn pg_lsn,\n\nIs there any reason why both functions take a pg_lsn value as an\nargument? Given that the main usage would be to use these functions\nwith pg_ls_logicalsnapdir(), I think it would be easier for users if\nthese functions take a filename as a function argument. That way, we\ncan use these functions like:\n\nselect info.* from pg_ls_logicalsnapdir(),\npg_get_logical_snapshot_info(name) as info;\n\nIf there are use cases where specifying a LSN is easier, I think we\nwould have both types.\n\n----\n+static const char *const SnapBuildStateDesc[] = {\n+ [SNAPBUILD_START + SNAPBUILD_STATE_INCR] = \"start\",\n+ [SNAPBUILD_BUILDING_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"building\",\n+ [SNAPBUILD_FULL_SNAPSHOT + SNAPBUILD_STATE_INCR] = \"full\",\n+ [SNAPBUILD_CONSISTENT + SNAPBUILD_STATE_INCR] = \"consistent\",\n+};\n\nI think that it'd be better to have a dedicated function that returns\na string representation of the given state by using a switch\nstatement. That way, we don't need SNAPBUILD_STATE_INCR and a compiler\nwarning would help realize a missing state if a new state is\nintroduced in the future. 
It needs a function call but I believe it\nwon't be a problem in this use case.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Sep 2024 17:04:18 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" }, { "msg_contents": "On Wed, Sep 25, 2024 at 11:01 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Sep 25, 2024 at 11:23:17AM +0530, shveta malik wrote:\n> > + OUT catchange_xip xid[]\n> >\n> > One question, what is xid datatype, is it too int8? Sorry, could not\n> > find the correct doc.\n>\n> I think that we can get the answer from pg_type:\n>\n> postgres=# select typname,typlen from pg_type where typname = 'xid';\n> typname | typlen\n> ---------+--------\n> xid | 4\n> (1 row)\n>\n> > Since we are getting uint32 in Int64, this also needs to be accordingly.\n>\n> I think the way it is currently done is fine: we're dealing with TransactionId\n> (and not with FullTransactionId). So, the Int64GetDatum() output would still\n> stay in the \"xid\" range. Keeping xid in the .sql makes it clear that we are\n> dealing with transaction ID.\n>\n\nOkay, got it. The 'xid' usage is fine then.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 26 Sep 2024 08:57:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add contrib/pg_logicalsnapinspect" } ]
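A minimal, self-contained sketch of the switch-based alternative suggested in the last review above. The enum values are copied from replication/snapbuild.h only so the fragment compiles on its own (SNAPBUILD_START being -1 is what the SNAPBUILD_STATE_INCR offset works around); the function name snapbuild_state_desc is an assumption, not something taken from the patch.

typedef enum SnapBuildState
{
    SNAPBUILD_START = -1,
    SNAPBUILD_BUILDING_SNAPSHOT = 0,
    SNAPBUILD_FULL_SNAPSHOT = 1,
    SNAPBUILD_CONSISTENT = 2,
} SnapBuildState;

/* Returns a human-readable name for a snapshot-build state. */
static const char *
snapbuild_state_desc(SnapBuildState state)
{
    switch (state)
    {
        case SNAPBUILD_START:
            return "start";
        case SNAPBUILD_BUILDING_SNAPSHOT:
            return "building";
        case SNAPBUILD_FULL_SNAPSHOT:
            return "full";
        case SNAPBUILD_CONSISTENT:
            return "consistent";
    }
    return "unknown";       /* unreachable; keeps the compiler quiet */
}

With no default: label, adding a new SnapBuildState member produces a -Wswitch warning at the call site, which is the compile-time reminder the lookup-table approach has to ask for via a header comment instead.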
[ { "msg_contents": "Back in commit 6991e774e we established a policy that, well,\nI'll just quote the commit message:\n\n Provide feature-test macros for libpq features added in v14.\n \n We had a request to provide a way to test at compile time for the\n availability of the new pipeline features. More generally, it\n seems like a good idea to provide a way to test via #ifdef for\n all new libpq API features. People have been using the version\n from pg_config.h for that; but that's more likely to represent the\n server version than the libpq version, in the increasingly-common\n scenario where they're different. It's safer if libpq-fe.h itself\n is the source of truth about what features it offers.\n \n Hence, establish a policy that starting in v14 we'll add a suitable\n feature-is-present macro to libpq-fe.h when we add new API there.\n (There doesn't seem to be much point in applying this policy\n retroactively, but it's not too late for v14.)\n\nlibpq has grown a bunch of new features in v17, and not one of\nthem adhered to this policy. That was complained of at [1],\nso I looked into fixing it. After diff'ing v16 and v17 libpq-fe.h,\nit looks like we need roughly this set of new feature-test macros:\n\nLIBPQ_HAS_ASYNC_CANCEL PGcancelConn typedef and associated routines\nLIBPQ_HAS_CHANGE_PASSWORD PQchangePassword\nLIBPQ_HAS_CHUNK_MODE PQsetChunkedRowsMode, PGRES_TUPLES_CHUNK\nLIBPQ_HAS_CLOSE_PREPARED PQclosePrepared, PQclosePortal, etc\nLIBPQ_HAS_SEND_PIPELINE_SYNC PQsendPipelineSync\nLIBPQ_HAS_SOCKET_POLL PQsocketPoll, PQgetCurrentTimeUSec\n\n(Feel free to bikeshed on the names, but I think we don't want 'em\nto get too much longer than these.)\n\nAlternatively we could decide that people can use configure probes\nto see if these functions are there, but I still think that there's\na good rationale for the feature-test-macro approach. It saves\npeople from re-inventing that wheel, it standardizes the way to\ncheck these things (in particular, discouraging people from abusing\nthe server version number for this), and it provides some handy\ndocumentation about what's new or not so new.\n\nIn connection with that last point, I wonder if we should include\ncommentary about when things came in. I'd originally thought of\njust inserting the above names in alphabetical order, but now I\nwonder if the patch ought to look more like\n\n */\n+/* Features added in PostgreSQL v14: */\n /* Indicates presence of PQenterPipelineMode and friends */\n #define LIBPQ_HAS_PIPELINING 1\n /* Indicates presence of PQsetTraceFlags; also new PQtrace output format */\n #define LIBPQ_HAS_TRACE_FLAGS 1\n+/* Features added in PostgreSQL v15: */\n /* Indicates that PQsslAttribute(NULL, \"library\") is useful */\n #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1\n+/* Features added in PostgreSQL v17: */\n+ ... as above ...\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAFCRh-_Fz4HDKt_y%2Bqr-Gztrh%2BvMiJ4EFxFHDLgC6AePJpWOzQ%40mail.gmail.com\n\n\n", "msg_date": "Thu, 22 Aug 2024 13:16:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Feature-test macros for new-in-v17 libpq features" }, { "msg_contents": "On Thu, Aug 22, 2024 at 10:16 AM Tom Lane <[email protected]> wrote:\n>\n> In connection with that last point, I wonder if we should include\n> commentary about when things came in. 
I'd originally thought of\n> just inserting the above names in alphabetical order, but now I\n> wonder if the patch ought to look more like\n>\n> */\n> +/* Features added in PostgreSQL v14: */\n> /* Indicates presence of PQenterPipelineMode and friends */\n> #define LIBPQ_HAS_PIPELINING 1\n> /* Indicates presence of PQsetTraceFlags; also new PQtrace output format */\n> #define LIBPQ_HAS_TRACE_FLAGS 1\n> +/* Features added in PostgreSQL v15: */\n> /* Indicates that PQsslAttribute(NULL, \"library\") is useful */\n> #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1\n> +/* Features added in PostgreSQL v17: */\n> + ... as above ...\n\n+1, I like the new headers and keeping the version order.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:22:47 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Feature-test macros for new-in-v17 libpq features" } ]
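To make the intended usage concrete, here is a hedged sketch of the client-side pattern the macros exist for. The macro name and PQsetChunkedRowsMode() come from the list in the proposal above; the wrapper function and the chunk size are assumptions. The #ifdef is evaluated against the libpq-fe.h actually being compiled against, which is exactly the compile-time signal the feature-test macros are meant to provide.

#include <stdio.h>
#include <libpq-fe.h>

/* Call immediately after PQsendQuery(), before consuming any results. */
static void
request_streaming_results(PGconn *conn)
{
#ifdef LIBPQ_HAS_CHUNK_MODE
    /* v17-or-later libpq: fetch results in chunks of 1000 rows */
    if (!PQsetChunkedRowsMode(conn, 1000))
        fprintf(stderr, "could not enable chunked rows mode\n");
#else
    /* older libpq: fall back to single-row mode, available since 9.2 */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enable single-row mode\n");
#endif
}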
[ { "msg_contents": "Like ICU, allow -1 length to mean that the input string is NUL-\nterminated for pg_strncoll(), pg_strnxfrm(), and pg_strnxfrm_prefix().\n\nThis simplifies the API and code a bit.\n\nAlong with some other refactoring in this area, we are getting close to\nthe point where the collation provider can just be a table of methods,\nwhich means we can add an extension hook to provide a different method\ntable. That still requires more work, I'm just mentioning it here for\ncontext.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 22 Aug 2024 11:00:54 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Refactor: allow pg_strncoll(), etc., to accept -1 length for\n NUL-terminated cstrings." }, { "msg_contents": "On Thu, 2024-08-22 at 11:00 -0700, Jeff Davis wrote:\n> Like ICU, allow -1 length to mean that the input string is NUL-\n> terminated for pg_strncoll(), pg_strnxfrm(), and\n> pg_strnxfrm_prefix().\n\nTo better illustrate the direction I'm going, I roughly implemented\nsome patches that implement collation using a table of methods rather\nthan lots branching based on the provider.\n\nThis more cleanly separates the API for a provider, which will enable\nus to use a hook to create a custom provider with arbitrary methods,\nthat may have nothing to do with ICU or libc. Or, we could go so far as\nto implement a \"CREATE LOCALE PROVIDER\" that would provide the methods\nusing a handler function, and \"datlocprovider\" would be an OID rather\nthan a char.\n\n From a practical perspective, I expect that extensions would use this\nto lock down the version of a particular provider rather than implement\na completely arbitrary one. But the API is good for either case, and\noffers quite a bit of code cleanup.\n\nThere are quite a few loose ends, of course:\n\n * There is still a lot of branching on the provider for DDL and\ncatalog access. I'm not sure if we will ever eliminate all of this, or\nif we would even want to.\n\n * I haven't done anything with get_collation_actual_version().\nPerhaps that should be a method, too, but it requires some extra\nthought if we want this to be useful for \"multilib\" (having multiple\nversions of a provider library at once).\n\n * I didn't add methods for formatting.c yet.\n\n * initdb -- should it offer a way to preload a library and then use\nthat for the provider?\n\n * I need to allow an arbitrary per-provider context, rather than the\ncurrent union designed for the existing providers.\n\nAgain, the patches are rough and there's a lot of code churn. I'd like\nsome feedback on whether people generally like the direction this is\ngoing. If so I will clean up the patch series into smaller, more\nreviewable chunks.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 20 Sep 2024 17:28:51 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactor: allow pg_strncoll(), etc., to accept -1 length for\n NUL-terminated cstrings." } ]
[ { "msg_contents": "Hi,\n\nI would like to propose a slight elaboration of the sort cost model.\nIn practice, we frequently see the choice of suboptimal sortings, which \nslump performance by 10-50%.\n\nThe key reason here is the optimiser's blindness to the fact that \nsorting calls a comparison operator for each pair of attributes in the \ntuple. Just for example:\n\nSET work_mem = '128MB';\nCREATE TABLE test (\n x1 int,x2 int,x3 int,x4 int,x5 int,x6 int,x7 int,x8 int,payload text);\nINSERT INTO test (x1,x2,x3,x4,x5,x6,x7,x8,payload) (\n SELECT 1,2,3,4,5,6,7,(1E5-x), 'abc'||x\n FROM generate_series(1,1E5) AS x);\nVACUUM ANALYZE test;\n\nLet's execute the following three queries:\n\nEXPLAIN (ANALYZE)\nSELECT * FROM test WHERE x1<9E4 ORDER BY x8;\nEXPLAIN (ANALYZE)\nSELECT * FROM test WHERE x1<9E4 ORDER BY x1,x2,x3,x8;\nEXPLAIN (ANALYZE)\nSELECT * FROM test WHERE x1<9E4 ORDER BY x1,x2,x3,x4,x5,x6,x7,x8;\n\nMy laptop executes these three queries in 100ms, 300ms, and 560ms, \nrespectively. At the same time, the costs of these query plans are \nsimilar. This corner case shows that sorting matters and in the corner \ncase, dependence on the number of comparisons looks close to linear.\n\nThe patch in the attachment is a simplistic attempt to differentiate \nthese sortings by the number of columns. It is not ideal because \ncomparisons depend on the duplicates in each column, but it may be the \nsubject of further work.\n\nEven such a trivial model allows us to resolve the case like the below:\nCREATE INDEX ON test (x1,x2,x3,x8);\nEXPLAIN (ANALYZE) SELECT * FROM test WHERE x1<9E4 ORDER BY x1,x2,x3,x8;\n\nThe current master will choose SeqScan for four columns as well as for a \nsingle column. With the model, the optimiser will understand that \nsorting four columns costs more than sorting a single column and switch \nto IndexScan.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 22 Aug 2024 20:46:01 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Consider the number of columns in the sort cost model" } ]
[ { "msg_contents": "Hello all,\nThis PostgreSQL version is 17beta2.\nIn SlruSelectLRUPage(), Why do we need to traverse all slots to find that a page already has a buffer assigned? Why not find it \nfrom the [bankstart,bankend]?\nBest regards\n\nHello all,This PostgreSQL version is 17beta2.In SlruSelectLRUPage(),  Why do we need to traverse all slots to find that a page already has a buffer assigned? Why not find it from the [bankstart,bankend]?Best regards", "msg_date": "Fri, 23 Aug 2024 10:06:48 +0800", "msg_from": "\"=?UTF-8?B?5bit5YayKOWunOephik=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?c2xydSBiYW5r?=" }, { "msg_contents": "On Thu, Aug 22, 2024 at 7:07 PM 席冲(宜穆) <[email protected]> wrote:\n\n> In SlruSelectLRUPage(), Why do we need to traverse all slots to find that\n> a page already has a buffer assigned? Why not find it\n> from the [bankstart,bankend]?\n>\n>\nOnly the bank is searched, both of the logic loops are bounded by:\n\nfor (int slotno = bankstart; slotno < bankend; slotno++)\n\nDavid J.\n\nOn Thu, Aug 22, 2024 at 7:07 PM 席冲(宜穆) <[email protected]> wrote:In SlruSelectLRUPage(),  Why do we need to traverse all slots to find that a page already has a buffer assigned? Why not find it from the [bankstart,bankend]?Only the bank is searched, both of the logic loops are bounded by:for (int slotno = bankstart; slotno < bankend; slotno++)David J.", "msg_date": "Thu, 22 Aug 2024 19:19:35 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slru bank" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Thu, Aug 22, 2024 at 7:07 PM 席冲(宜穆) <[email protected]> wrote:\n>> In SlruSelectLRUPage(), Why do we need to traverse all slots to find that\n>> a page already has a buffer assigned? Why not find it\n>> from the [bankstart,bankend]?\n\n> Only the bank is searched, both of the logic loops are bounded by:\n> for (int slotno = bankstart; slotno < bankend; slotno++)\n\nI think the OP has rediscovered the bug already fixed in 7b063ff26.\nThat's post-17beta2, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Aug 2024 22:27:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slru bank" }, { "msg_contents": "On Thu, Aug 22, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Thu, Aug 22, 2024 at 7:07 PM 席冲(宜穆) <[email protected]>\n> wrote:\n> >> In SlruSelectLRUPage(), Why do we need to traverse all slots to find\n> that\n> >> a page already has a buffer assigned? Why not find it\n> >> from the [bankstart,bankend]?\n>\n> > Only the bank is searched, both of the logic loops are bounded by:\n> > for (int slotno = bankstart; slotno < bankend; slotno++)\n>\n> I think the OP has rediscovered the bug already fixed in 7b063ff26.\n> That's post-17beta2, though.\n>\n>\nIndeed. I was looking at HEAD.\n\nDavid J.\n\nOn Thu, Aug 22, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Thu, Aug 22, 2024 at 7:07 PM 席冲(宜穆) <[email protected]> wrote:\n>> In SlruSelectLRUPage(),  Why do we need to traverse all slots to find that\n>> a page already has a buffer assigned? 
Why not find it\n>> from the [bankstart,bankend]?\n\n> Only the bank is searched, both of the logic loops are bounded by:\n> for (int slotno = bankstart; slotno < bankend; slotno++)\n\nI think the OP has rediscovered the bug already fixed in 7b063ff26.\nThat's post-17beta2, though.Indeed.  I was looking at HEAD.David J.", "msg_date": "Thu, 22 Aug 2024 19:33:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slru bank" } ]
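To illustrate the point being made in this thread (and the bug fixed by 7b063ff26), here is a toy version of a bank-bounded lookup. SLRU_BANK_SIZE reflects the in-tree 16-slot bank, but the page-to-bank mapping and every other name here are simplifications for illustration, not the actual SlruSelectLRUPage() code.

#include <stdint.h>

#define SLRU_BANK_SIZE  16      /* slots per bank */

/* Returns the slot already holding pageno, or -1 if it is not buffered. */
static int
find_buffered_page(const int64_t *page_numbers, int num_slots, int64_t pageno)
{
    int     nbanks = num_slots / SLRU_BANK_SIZE;
    int     bankno = (int) (pageno % nbanks);   /* simplified mapping */
    int     bankstart = bankno * SLRU_BANK_SIZE;
    int     bankend = bankstart + SLRU_BANK_SIZE;

    /*
     * Only this bank can hold the page, so there is no need to scan every
     * slot the way the 17beta2 code still did for this check.
     */
    for (int slotno = bankstart; slotno < bankend; slotno++)
    {
        if (page_numbers[slotno] == pageno)
            return slotno;
    }
    return -1;
}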
[ { "msg_contents": "Hi hackers,\n\nCurrently, all processes in PostgreSQL actually use malloc to allocate and\nfree memory. In the case of long connections where business queries are\nexecuted over extended periods, the distribution of memory can become\nextremely complex.\n\nUnder certain circumstances, a common issue in memory usage due to the\ncaching strategy of malloc may arise: even if memory is released through\nthe free function, it may not be returned to the OS in a timely manner.\nThis can lead to high system memory usage, affecting performance and the\noperation of other applications, and may even result in Out-Of-Memory (OOM)\nerrors.\n\nTo address this issue, I have developed a new function called\npg_trim_backend_heap_free_memory, based on the existing\npg_log_backend_memory_contexts function. This function triggers the\nspecified process to execute the malloc_trim operation by sending signals,\nthereby releasing as much unreturned memory to the operating system as\npossible. This not only helps to optimize memory usage but can also\nsignificantly enhance system performance under memory pressure.\n\nHere is an example of using the pg_trim_backend_heap_free_memory function\nto demonstrate its effect:\n\nCREATE OR REPLACE FUNCTION public.partition_create(schemaname character\n> varying, numberofpartition integer)\n> RETURNS integer\n> LANGUAGE plpgsql\n> AS $function$\n> declare\n> currentTableId integer;\n> currentSchemaName varchar(100);\n> currentTableName varchar(100);\n> begin\n> execute 'create schema ' || schemaname;\n> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable\n> (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9\n> name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n> currentTableId := 1;\n> loop\n> currentTableName := schemaname || '.' || schemaname || 'hashtable' ||\n> ltrim(currentTableId::varchar(10));\n> execute 'create table ' || currentTableName || ' PARTITION OF ' ||\n> schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS\n> ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n> currentTableId := currentTableId + 1;\n> if (currentTableId > numberofpartition) then exit; end if;\n> end loop;\n> return currentTableId - 1;\n> END $function$;\n>\n> select public.partition_create('test3', 5000);\n> select public.partition_create('test4', 5000);\n> select count(*) from test4.test4hashtable a, test3.test3hashtable b where\n> a.p1=b.p1;\n\nYou are now about to see the memory size of the process executing the query.\n\n> postgres 68673 1.2 0.0 610456 124768 ? Ss 08:25 0:01\n> postgres: postgres postgres [local] idle\n> Size: 89600 kB\n> KernelPageSize: 4 kB\n> MMUPageSize: 4 kB\n> Rss: 51332 kB\n> Pss: 51332 kB\n\n02b65000-082e5000 rw-p 00000000 00:00 0\n> [heap]\n>\n\n\nAfter use pg_trim_backend_heap_free_memory, you will see:\n\n> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of PID\n> 68673\n> pg_trim_backend_heap_free_memory\n> ----------------------------------\n> t\n> (1 row)\n> 02b65000-082e5000 rw-p 00000000 00:00 0\n> [heap]\n> Size: 89600 kB\n> KernelPageSize: 4 kB\n> MMUPageSize: 4 kB\n> Rss: 4888 kB\n> Pss: 4888 kB\n\npostgres 68673 1.2 0.0 610456 75244 ? 
Ss 08:26 0:01\n> postgres: postgres postgres [local] idle\n>\n\nLooking forward to your feedback,\n\nRegards,\n\n--\nShawn Wang\n\n\nNow", "msg_date": "Fri, 23 Aug 2024 16:53:58 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Trim the heap free memory" }, { "msg_contents": "On Fri, 23 Aug 2024 at 10:54, shawn wang <[email protected]> wrote:\n\n> Hi hackers,\n>\n> Currently, all processes in PostgreSQL actually use malloc to allocate\n> and free memory. In the case of long connections where business queries are\n> executed over extended periods, the distribution of memory can become\n> extremely complex.\n>\n> Under certain circumstances, a common issue in memory usage due to the\n> caching strategy of malloc may arise: even if memory is released through\n> the free function, it may not be returned to the OS in a timely manner.\n> This can lead to high system memory usage, affecting performance and the\n> operation of other applications, and may even result in Out-Of-Memory (OOM)\n> errors.\n>\n> To address this issue, I have developed a new function called\n> pg_trim_backend_heap_free_memory, based on the existing\n> pg_log_backend_memory_contexts function. This function triggers the\n> specified process to execute the malloc_trim operation by sending\n> signals, thereby releasing as much unreturned memory to the operating\n> system as possible. This not only helps to optimize memory usage but can\n> also significantly enhance system performance under memory pressure.\n>\n> Here is an example of using the pg_trim_backend_heap_free_memory function\n> to demonstrate its effect:\n>\n> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character\n>> varying, numberofpartition integer)\n>> RETURNS integer\n>> LANGUAGE plpgsql\n>> AS $function$\n>> declare\n>> currentTableId integer;\n>> currentSchemaName varchar(100);\n>> currentTableName varchar(100);\n>> begin\n>> execute 'create schema ' || schemaname;\n>> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable\n>> (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9\n>> name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n>> currentTableId := 1;\n>> loop\n>> currentTableName := schemaname || '.' || schemaname || 'hashtable' ||\n>> ltrim(currentTableId::varchar(10));\n>> execute 'create table ' || currentTableName || ' PARTITION OF ' ||\n>> schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS\n>> ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n>> currentTableId := currentTableId + 1;\n>> if (currentTableId > numberofpartition) then exit; end if;\n>> end loop;\n>> return currentTableId - 1;\n>> END $function$;\n>>\n>> select public.partition_create('test3', 5000);\n>> select public.partition_create('test4', 5000);\n>> select count(*) from test4.test4hashtable a, test3.test3hashtable b where\n>> a.p1=b.p1;\n>\n> You are now about to see the memory size of the process executing the\n> query.\n>\n>> postgres 68673 1.2 0.0 610456 124768 ? 
Ss 08:25 0:01\n>> postgres: postgres postgres [local] idle\n>> Size: 89600 kB\n>> KernelPageSize: 4 kB\n>> MMUPageSize: 4 kB\n>> Rss: 51332 kB\n>> Pss: 51332 kB\n>\n> 02b65000-082e5000 rw-p 00000000 00:00 0\n>> [heap]\n>>\n>\n>\n> After use pg_trim_backend_heap_free_memory, you will see:\n>\n>> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n>> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of\n>> PID 68673\n>> pg_trim_backend_heap_free_memory\n>> ----------------------------------\n>> t\n>> (1 row)\n>> 02b65000-082e5000 rw-p 00000000 00:00 0\n>> [heap]\n>> Size: 89600 kB\n>> KernelPageSize: 4 kB\n>> MMUPageSize: 4 kB\n>> Rss: 4888 kB\n>> Pss: 4888 kB\n>\n> postgres 68673 1.2 0.0 610456 75244 ? Ss 08:26 0:01\n>> postgres: postgres postgres [local] idle\n>>\n>\n> Looking forward to your feedback,\n>\n> Regards,\n>\n> --\n> Shawn Wang\n>\n>\n> Now\n>\nLiked the idea. Unfortunately, at the moment it is giving compilation error\n--\n\nmake[4]: *** No rule to make target `memtrim.o', needed by `objfiles.txt'.\nStop.\n-- \nRegards,\nRafia Sabih\n\nOn Fri, 23 Aug 2024 at 10:54, shawn wang <[email protected]> wrote:Hi hackers,Currently, all processes in PostgreSQL actually use malloc to allocate and free memory. In the case of long connections where business queries are executed over extended periods, the distribution of memory can become extremely complex.Under certain circumstances, a common issue in memory usage due to the caching strategy of malloc may arise: even if memory is released through the free function, it may not be returned to the OS in a timely manner. This can lead to high system memory usage, affecting performance and the operation of other applications, and may even result in Out-Of-Memory (OOM) errors.To address this issue, I have developed a new function called pg_trim_backend_heap_free_memory, based on the existing pg_log_backend_memory_contexts function. This function triggers the specified process to execute the malloc_trim operation by sending signals, thereby releasing as much unreturned memory to the operating system as possible. This not only helps to optimize memory usage but can also significantly enhance system performance under memory pressure.Here is an example of using the pg_trim_backend_heap_free_memory function to demonstrate its effect:CREATE OR REPLACE FUNCTION public.partition_create(schemaname character varying, numberofpartition integer)RETURNS integerLANGUAGE plpgsqlAS $function$declarecurrentTableId integer;currentSchemaName varchar(100);currentTableName varchar(100);beginexecute 'create schema ' || schemaname;execute 'create table ' || schemaname || '.' || schemaname || 'hashtable (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9 name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';currentTableId := 1;loopcurrentTableName := schemaname || '.' || schemaname || 'hashtable' || ltrim(currentTableId::varchar(10));execute 'create table ' || currentTableName || ' PARTITION OF ' || schemaname || '.' 
|| schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';currentTableId := currentTableId + 1;if (currentTableId > numberofpartition) then exit; end if;end loop; return currentTableId - 1;END $function$;select public.partition_create('test3', 5000);select public.partition_create('test4', 5000);select count(*) from test4.test4hashtable a, test3.test3hashtable b where a.p1=b.p1;You are now about to see the memory size of the process executing the query.postgres   68673  1.2  0.0 610456 124768 ?        Ss   08:25   0:01 postgres: postgres postgres [local] idleSize:              89600 kBKernelPageSize:        4 kBMMUPageSize:           4 kBRss:               51332 kBPss:               51332 kB02b65000-082e5000 rw-p 00000000 00:00 0                                  [heap] After use pg_trim_backend_heap_free_memory, you will see:postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());2024-08-23 08:27:53.958 UTC [68673] LOG:  trimming heap free memory of PID 68673 pg_trim_backend_heap_free_memory---------------------------------- t(1 row) 02b65000-082e5000 rw-p 00000000 00:00 0                                  [heap]Size:              89600 kBKernelPageSize:        4 kBMMUPageSize:           4 kBRss:                4888 kBPss:                4888 kBpostgres   68673  1.2  0.0 610456 75244 ?        Ss   08:26   0:01 postgres: postgres postgres [local] idleLooking forward to your feedback,Regards,--Shawn WangNow\nLiked the idea. Unfortunately, at the moment it is giving compilation error -- \n\n\n\n\n\nmake[4]: *** No rule to make target `memtrim.o', needed by `objfiles.txt'.  Stop.-- Regards,Rafia Sabih", "msg_date": "Fri, 23 Aug 2024 12:30:12 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Hi Shawn,\n\n\nOn Fri, Aug 23, 2024 at 2:24 PM shawn wang <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> Currently, all processes in PostgreSQL actually use malloc to allocate and free memory. In the case of long connections where business queries are executed over extended periods, the distribution of memory can become extremely complex.\n>\n> Under certain circumstances, a common issue in memory usage due to the caching strategy of malloc may arise: even if memory is released through the free function, it may not be returned to the OS in a timely manner. This can lead to high system memory usage, affecting performance and the operation of other applications, and may even result in Out-Of-Memory (OOM) errors.\n>\n> To address this issue, I have developed a new function called pg_trim_backend_heap_free_memory, based on the existing pg_log_backend_memory_contexts function. This function triggers the specified process to execute the malloc_trim operation by sending signals, thereby releasing as much unreturned memory to the operating system as possible. 
This not only helps to optimize memory usage but can also significantly enhance system performance under memory pressure.\n>\n> Here is an example of using the pg_trim_backend_heap_free_memory function to demonstrate its effect:\n>>\n>> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character varying, numberofpartition integer)\n>> RETURNS integer\n>> LANGUAGE plpgsql\n>> AS $function$\n>> declare\n>> currentTableId integer;\n>> currentSchemaName varchar(100);\n>> currentTableName varchar(100);\n>> begin\n>> execute 'create schema ' || schemaname;\n>> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9 name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n>> currentTableId := 1;\n>> loop\n>> currentTableName := schemaname || '.' || schemaname || 'hashtable' || ltrim(currentTableId::varchar(10));\n>> execute 'create table ' || currentTableName || ' PARTITION OF ' || schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n>> currentTableId := currentTableId + 1;\n>> if (currentTableId > numberofpartition) then exit; end if;\n>> end loop;\n>> return currentTableId - 1;\n>> END $function$;\n>>\n>> select public.partition_create('test3', 5000);\n>> select public.partition_create('test4', 5000);\n>> select count(*) from test4.test4hashtable a, test3.test3hashtable b where a.p1=b.p1;\n>\n> You are now about to see the memory size of the process executing the query.\n>>\n>> postgres 68673 1.2 0.0 610456 124768 ? Ss 08:25 0:01 postgres: postgres postgres [local] idle\n>> Size: 89600 kB\n>> KernelPageSize: 4 kB\n>> MMUPageSize: 4 kB\n>> Rss: 51332 kB\n>> Pss: 51332 kB\n>>\n>> 02b65000-082e5000 rw-p 00000000 00:00 0 [heap]\n>\n>\n>\n> After use pg_trim_backend_heap_free_memory, you will see:\n>>\n>> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n>> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of PID 68673\n>> pg_trim_backend_heap_free_memory\n>> ----------------------------------\n>> t\n>> (1 row)\n>> 02b65000-082e5000 rw-p 00000000 00:00 0 [heap]\n>> Size: 89600 kB\n>> KernelPageSize: 4 kB\n>> MMUPageSize: 4 kB\n>> Rss: 4888 kB\n>> Pss: 4888 kB\n>>\n>> postgres 68673 1.2 0.0 610456 75244 ? Ss 08:26 0:01 postgres: postgres postgres [local] idle\n>\n>\n> Looking forward to your feedback,\nLooks useful.\n\nHow much time does malloc_trim() take to finish? Does it affect the\ncurrent database activity in that backend? It may be good to see\neffect of this function by firing the function on random backends\nwhile the query is running through pgbench.\n\nIn the patch I don't see definitions of\nProcessTrimHeapFreeMemoryInterrupt() and\nHandleTrimHeapFreeMemoryInterrupt(). Am I missing something?\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 23 Aug 2024 17:32:10 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Thank you Rafia. Here is a v2 patch.\n\nRafia Sabih <[email protected]> 于2024年8月23日周五 18:30写道:\n\n>\n>\n> On Fri, 23 Aug 2024 at 10:54, shawn wang <[email protected]> wrote:\n>\n>> Hi hackers,\n>>\n>> Currently, all processes in PostgreSQL actually use malloc to allocate\n>> and free memory. 
In the case of long connections where business queries are\n>> executed over extended periods, the distribution of memory can become\n>> extremely complex.\n>>\n>> Under certain circumstances, a common issue in memory usage due to the\n>> caching strategy of malloc may arise: even if memory is released through\n>> the free function, it may not be returned to the OS in a timely manner.\n>> This can lead to high system memory usage, affecting performance and the\n>> operation of other applications, and may even result in Out-Of-Memory (OOM)\n>> errors.\n>>\n>> To address this issue, I have developed a new function called\n>> pg_trim_backend_heap_free_memory, based on the existing\n>> pg_log_backend_memory_contexts function. This function triggers the\n>> specified process to execute the malloc_trim operation by sending\n>> signals, thereby releasing as much unreturned memory to the operating\n>> system as possible. This not only helps to optimize memory usage but can\n>> also significantly enhance system performance under memory pressure.\n>>\n>> Here is an example of using the pg_trim_backend_heap_free_memory\n>> function to demonstrate its effect:\n>>\n>> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character\n>>> varying, numberofpartition integer)\n>>> RETURNS integer\n>>> LANGUAGE plpgsql\n>>> AS $function$\n>>> declare\n>>> currentTableId integer;\n>>> currentSchemaName varchar(100);\n>>> currentTableName varchar(100);\n>>> begin\n>>> execute 'create schema ' || schemaname;\n>>> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable\n>>> (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9\n>>> name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n>>> currentTableId := 1;\n>>> loop\n>>> currentTableName := schemaname || '.' || schemaname || 'hashtable' ||\n>>> ltrim(currentTableId::varchar(10));\n>>> execute 'create table ' || currentTableName || ' PARTITION OF ' ||\n>>> schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS\n>>> ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n>>> currentTableId := currentTableId + 1;\n>>> if (currentTableId > numberofpartition) then exit; end if;\n>>> end loop;\n>>> return currentTableId - 1;\n>>> END $function$;\n>>>\n>>> select public.partition_create('test3', 5000);\n>>> select public.partition_create('test4', 5000);\n>>> select count(*) from test4.test4hashtable a, test3.test3hashtable b\n>>> where a.p1=b.p1;\n>>\n>> You are now about to see the memory size of the process executing the\n>> query.\n>>\n>>> postgres 68673 1.2 0.0 610456 124768 ? Ss 08:25 0:01\n>>> postgres: postgres postgres [local] idle\n>>> Size: 89600 kB\n>>> KernelPageSize: 4 kB\n>>> MMUPageSize: 4 kB\n>>> Rss: 51332 kB\n>>> Pss: 51332 kB\n>>\n>> 02b65000-082e5000 rw-p 00000000 00:00 0\n>>> [heap]\n>>>\n>>\n>>\n>> After use pg_trim_backend_heap_free_memory, you will see:\n>>\n>>> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n>>> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of\n>>> PID 68673\n>>> pg_trim_backend_heap_free_memory\n>>> ----------------------------------\n>>> t\n>>> (1 row)\n>>> 02b65000-082e5000 rw-p 00000000 00:00 0\n>>> [heap]\n>>> Size: 89600 kB\n>>> KernelPageSize: 4 kB\n>>> MMUPageSize: 4 kB\n>>> Rss: 4888 kB\n>>> Pss: 4888 kB\n>>\n>> postgres 68673 1.2 0.0 610456 75244 ? 
Ss 08:26 0:01\n>>> postgres: postgres postgres [local] idle\n>>>\n>>\n>> Looking forward to your feedback,\n>>\n>> Regards,\n>>\n>> --\n>> Shawn Wang\n>>\n>>\n>> Now\n>>\n> Liked the idea. Unfortunately, at the moment it is giving compilation\n> error --\n>\n> make[4]: *** No rule to make target `memtrim.o', needed by\n> `objfiles.txt'. Stop.\n> --\n> Regards,\n> Rafia Sabih\n>", "msg_date": "Sat, 24 Aug 2024 10:26:41 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Hi Ashutosh, thank you for your response.\nFirstly, the purpose of caching memory in malloc is for performance, so\nwhen we execute malloc_trim(), it will affect the efficiency of memory\nusage in the subsequent operation. Secondly, the function of malloc_trim()\nis to lock and traverse the bins, then execute madvise on the memory that\ncan be released. When there is a lot of memory in the bins, the traversal\ntime will also increase. I once placed malloc_trim() to execute at the end\nof each query, which resulted in a 20% performance drop. Therefore, I use\nit as such a function. The new v2 patch has included the omitted code.\n\nAshutosh Bapat <[email protected]> 于2024年8月23日周五 20:02写道:\n\n> Hi Shawn,\n>\n>\n> On Fri, Aug 23, 2024 at 2:24 PM shawn wang <[email protected]>\n> wrote:\n> >\n> > Hi hackers,\n> >\n> > Currently, all processes in PostgreSQL actually use malloc to allocate\n> and free memory. In the case of long connections where business queries are\n> executed over extended periods, the distribution of memory can become\n> extremely complex.\n> >\n> > Under certain circumstances, a common issue in memory usage due to the\n> caching strategy of malloc may arise: even if memory is released through\n> the free function, it may not be returned to the OS in a timely manner.\n> This can lead to high system memory usage, affecting performance and the\n> operation of other applications, and may even result in Out-Of-Memory (OOM)\n> errors.\n> >\n> > To address this issue, I have developed a new function called\n> pg_trim_backend_heap_free_memory, based on the existing\n> pg_log_backend_memory_contexts function. This function triggers the\n> specified process to execute the malloc_trim operation by sending signals,\n> thereby releasing as much unreturned memory to the operating system as\n> possible. This not only helps to optimize memory usage but can also\n> significantly enhance system performance under memory pressure.\n> >\n> > Here is an example of using the pg_trim_backend_heap_free_memory\n> function to demonstrate its effect:\n> >>\n> >> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character\n> varying, numberofpartition integer)\n> >> RETURNS integer\n> >> LANGUAGE plpgsql\n> >> AS $function$\n> >> declare\n> >> currentTableId integer;\n> >> currentSchemaName varchar(100);\n> >> currentTableName varchar(100);\n> >> begin\n> >> execute 'create schema ' || schemaname;\n> >> execute 'create table ' || schemaname || '.' || schemaname ||\n> 'hashtable (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8\n> text, p9 name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY\n> HASH(p1);';\n> >> currentTableId := 1;\n> >> loop\n> >> currentTableName := schemaname || '.' || schemaname || 'hashtable' ||\n> ltrim(currentTableId::varchar(10));\n> >> execute 'create table ' || currentTableName || ' PARTITION OF ' ||\n> schemaname || '.' 
|| schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS\n> ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n> >> currentTableId := currentTableId + 1;\n> >> if (currentTableId > numberofpartition) then exit; end if;\n> >> end loop;\n> >> return currentTableId - 1;\n> >> END $function$;\n> >>\n> >> select public.partition_create('test3', 5000);\n> >> select public.partition_create('test4', 5000);\n> >> select count(*) from test4.test4hashtable a, test3.test3hashtable b\n> where a.p1=b.p1;\n> >\n> > You are now about to see the memory size of the process executing the\n> query.\n> >>\n> >> postgres 68673 1.2 0.0 610456 124768 ? Ss 08:25 0:01\n> postgres: postgres postgres [local] idle\n> >> Size: 89600 kB\n> >> KernelPageSize: 4 kB\n> >> MMUPageSize: 4 kB\n> >> Rss: 51332 kB\n> >> Pss: 51332 kB\n> >>\n> >> 02b65000-082e5000 rw-p 00000000 00:00 0\n> [heap]\n> >\n> >\n> >\n> > After use pg_trim_backend_heap_free_memory, you will see:\n> >>\n> >> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n> >> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of\n> PID 68673\n> >> pg_trim_backend_heap_free_memory\n> >> ----------------------------------\n> >> t\n> >> (1 row)\n> >> 02b65000-082e5000 rw-p 00000000 00:00 0\n> [heap]\n> >> Size: 89600 kB\n> >> KernelPageSize: 4 kB\n> >> MMUPageSize: 4 kB\n> >> Rss: 4888 kB\n> >> Pss: 4888 kB\n> >>\n> >> postgres 68673 1.2 0.0 610456 75244 ? Ss 08:26 0:01\n> postgres: postgres postgres [local] idle\n> >\n> >\n> > Looking forward to your feedback,\n> Looks useful.\n>\n> How much time does malloc_trim() take to finish? Does it affect the\n> current database activity in that backend? It may be good to see\n> effect of this function by firing the function on random backends\n> while the query is running through pgbench.\n>\n> In the patch I don't see definitions of\n> ProcessTrimHeapFreeMemoryInterrupt() and\n> HandleTrimHeapFreeMemoryInterrupt(). Am I missing something?\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nHi Ashutosh, thank you for your response.Firstly, the purpose of caching memory in malloc is for performance, so when we execute malloc_trim(), it will affect the efficiency of memory usage in the subsequent operation. Secondly, the function of malloc_trim() is to lock and traverse the bins, then execute madvise on the memory that can be released. When there is a lot of memory in the bins, the traversal time will also increase. I once placed malloc_trim() to execute at the end of each query, which resulted in a 20% performance drop. Therefore, I use it as such a function. The new v2 patch has included the omitted code.Ashutosh Bapat <[email protected]> 于2024年8月23日周五 20:02写道:Hi Shawn,\n\n\nOn Fri, Aug 23, 2024 at 2:24 PM shawn wang <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> Currently, all processes in PostgreSQL actually use malloc to allocate and free memory. In the case of long connections where business queries are executed over extended periods, the distribution of memory can become extremely complex.\n>\n> Under certain circumstances, a common issue in memory usage due to the caching strategy of malloc may arise: even if memory is released through the free function, it may not be returned to the OS in a timely manner. 
This can lead to high system memory usage, affecting performance and the operation of other applications, and may even result in Out-Of-Memory (OOM) errors.\n>\n> To address this issue, I have developed a new function called pg_trim_backend_heap_free_memory, based on the existing pg_log_backend_memory_contexts function. This function triggers the specified process to execute the malloc_trim operation by sending signals, thereby releasing as much unreturned memory to the operating system as possible. This not only helps to optimize memory usage but can also significantly enhance system performance under memory pressure.\n>\n> Here is an example of using the pg_trim_backend_heap_free_memory function to demonstrate its effect:\n>>\n>> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character varying, numberofpartition integer)\n>> RETURNS integer\n>> LANGUAGE plpgsql\n>> AS $function$\n>> declare\n>> currentTableId integer;\n>> currentSchemaName varchar(100);\n>> currentTableName varchar(100);\n>> begin\n>> execute 'create schema ' || schemaname;\n>> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9 name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n>> currentTableId := 1;\n>> loop\n>> currentTableName := schemaname || '.' || schemaname || 'hashtable' || ltrim(currentTableId::varchar(10));\n>> execute 'create table ' || currentTableName || ' PARTITION OF ' || schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n>> currentTableId := currentTableId + 1;\n>> if (currentTableId > numberofpartition) then exit; end if;\n>> end loop;\n>> return currentTableId - 1;\n>> END $function$;\n>>\n>> select public.partition_create('test3', 5000);\n>> select public.partition_create('test4', 5000);\n>> select count(*) from test4.test4hashtable a, test3.test3hashtable b where a.p1=b.p1;\n>\n> You are now about to see the memory size of the process executing the query.\n>>\n>> postgres   68673  1.2  0.0 610456 124768 ?        Ss   08:25   0:01 postgres: postgres postgres [local] idle\n>> Size:              89600 kB\n>> KernelPageSize:        4 kB\n>> MMUPageSize:           4 kB\n>> Rss:               51332 kB\n>> Pss:               51332 kB\n>>\n>> 02b65000-082e5000 rw-p 00000000 00:00 0                                  [heap]\n>\n>\n>\n> After use pg_trim_backend_heap_free_memory, you will see:\n>>\n>> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n>> 2024-08-23 08:27:53.958 UTC [68673] LOG:  trimming heap free memory of PID 68673\n>>  pg_trim_backend_heap_free_memory\n>> ----------------------------------\n>>  t\n>> (1 row)\n>> 02b65000-082e5000 rw-p 00000000 00:00 0                                  [heap]\n>> Size:              89600 kB\n>> KernelPageSize:        4 kB\n>> MMUPageSize:           4 kB\n>> Rss:                4888 kB\n>> Pss:                4888 kB\n>>\n>> postgres   68673  1.2  0.0 610456 75244 ?        Ss   08:26   0:01 postgres: postgres postgres [local] idle\n>\n>\n> Looking forward to your feedback,\nLooks useful.\n\nHow much time does malloc_trim() take to finish? Does it affect the\ncurrent database activity in that backend? 
It may be good to see\neffect of this function by firing the function on random backends\nwhile the query is running through pgbench.\n\nIn the patch I don't see definitions of\nProcessTrimHeapFreeMemoryInterrupt() and\nHandleTrimHeapFreeMemoryInterrupt(). Am I missing something?\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Sat, 24 Aug 2024 10:42:04 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Hi Shawn,\nIt will be good to document usage of this function. Please add\ndocument changes in your patch. We need to document the impact of this\nfunction so that users can judiciously decide whether or not to use\nthis function and under what conditions. Also they would know what to\nexpect when they use this function.\n\nRunning it after a query finishes is one thing but that can't be\nguaranteed because of the asynchronous nature of signal handlers.\nmalloc_trim() may be called while a query is being executed. We need\nto assess that impact as well.\n\nCan you please share some numbers - TPS, latency etc. with and without\nthis function invoked during a benchmark run?\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Aug 24, 2024 at 8:12 AM shawn wang <[email protected]> wrote:\n>\n> Hi Ashutosh, thank you for your response.\n> Firstly, the purpose of caching memory in malloc is for performance, so when we execute malloc_trim(), it will affect the efficiency of memory usage in the subsequent operation. Secondly, the function of malloc_trim() is to lock and traverse the bins, then execute madvise on the memory that can be released. When there is a lot of memory in the bins, the traversal time will also increase. I once placed malloc_trim() to execute at the end of each query, which resulted in a 20% performance drop. Therefore, I use it as such a function. The new v2 patch has included the omitted code.\n>\n> Ashutosh Bapat <[email protected]> 于2024年8月23日周五 20:02写道:\n>>\n>> Hi Shawn,\n>>\n>>\n>> On Fri, Aug 23, 2024 at 2:24 PM shawn wang <[email protected]> wrote:\n>> >\n>> > Hi hackers,\n>> >\n>> > Currently, all processes in PostgreSQL actually use malloc to allocate and free memory. In the case of long connections where business queries are executed over extended periods, the distribution of memory can become extremely complex.\n>> >\n>> > Under certain circumstances, a common issue in memory usage due to the caching strategy of malloc may arise: even if memory is released through the free function, it may not be returned to the OS in a timely manner. This can lead to high system memory usage, affecting performance and the operation of other applications, and may even result in Out-Of-Memory (OOM) errors.\n>> >\n>> > To address this issue, I have developed a new function called pg_trim_backend_heap_free_memory, based on the existing pg_log_backend_memory_contexts function. This function triggers the specified process to execute the malloc_trim operation by sending signals, thereby releasing as much unreturned memory to the operating system as possible. 
This not only helps to optimize memory usage but can also significantly enhance system performance under memory pressure.\n>> >\n>> > Here is an example of using the pg_trim_backend_heap_free_memory function to demonstrate its effect:\n>> >>\n>> >> CREATE OR REPLACE FUNCTION public.partition_create(schemaname character varying, numberofpartition integer)\n>> >> RETURNS integer\n>> >> LANGUAGE plpgsql\n>> >> AS $function$\n>> >> declare\n>> >> currentTableId integer;\n>> >> currentSchemaName varchar(100);\n>> >> currentTableName varchar(100);\n>> >> begin\n>> >> execute 'create schema ' || schemaname;\n>> >> execute 'create table ' || schemaname || '.' || schemaname || 'hashtable (p1 text, p2 text, p3 text, p4 int, p5 int, p6 int, p7 int, p8 text, p9 name, p10 varchar, p11 text, p12 text, p13 text) PARTITION BY HASH(p1);';\n>> >> currentTableId := 1;\n>> >> loop\n>> >> currentTableName := schemaname || '.' || schemaname || 'hashtable' || ltrim(currentTableId::varchar(10));\n>> >> execute 'create table ' || currentTableName || ' PARTITION OF ' || schemaname || '.' || schemaname || 'hashtable' || ' FOR VALUES WITH(MODULUS ' || numberofpartition || ', REMAINDER ' || currentTableId - 1 || ')';\n>> >> currentTableId := currentTableId + 1;\n>> >> if (currentTableId > numberofpartition) then exit; end if;\n>> >> end loop;\n>> >> return currentTableId - 1;\n>> >> END $function$;\n>> >>\n>> >> select public.partition_create('test3', 5000);\n>> >> select public.partition_create('test4', 5000);\n>> >> select count(*) from test4.test4hashtable a, test3.test3hashtable b where a.p1=b.p1;\n>> >\n>> > You are now about to see the memory size of the process executing the query.\n>> >>\n>> >> postgres 68673 1.2 0.0 610456 124768 ? Ss 08:25 0:01 postgres: postgres postgres [local] idle\n>> >> Size: 89600 kB\n>> >> KernelPageSize: 4 kB\n>> >> MMUPageSize: 4 kB\n>> >> Rss: 51332 kB\n>> >> Pss: 51332 kB\n>> >>\n>> >> 02b65000-082e5000 rw-p 00000000 00:00 0 [heap]\n>> >\n>> >\n>> >\n>> > After use pg_trim_backend_heap_free_memory, you will see:\n>> >>\n>> >> postgres=# select pg_trim_backend_heap_free_memory(pg_backend_pid());\n>> >> 2024-08-23 08:27:53.958 UTC [68673] LOG: trimming heap free memory of PID 68673\n>> >> pg_trim_backend_heap_free_memory\n>> >> ----------------------------------\n>> >> t\n>> >> (1 row)\n>> >> 02b65000-082e5000 rw-p 00000000 00:00 0 [heap]\n>> >> Size: 89600 kB\n>> >> KernelPageSize: 4 kB\n>> >> MMUPageSize: 4 kB\n>> >> Rss: 4888 kB\n>> >> Pss: 4888 kB\n>> >>\n>> >> postgres 68673 1.2 0.0 610456 75244 ? Ss 08:26 0:01 postgres: postgres postgres [local] idle\n>> >\n>> >\n>> > Looking forward to your feedback,\n>> Looks useful.\n>>\n>> How much time does malloc_trim() take to finish? Does it affect the\n>> current database activity in that backend? It may be good to see\n>> effect of this function by firing the function on random backends\n>> while the query is running through pgbench.\n>>\n>> In the patch I don't see definitions of\n>> ProcessTrimHeapFreeMemoryInterrupt() and\n>> HandleTrimHeapFreeMemoryInterrupt(). Am I missing something?\n>>\n>> --\n>> Best Wishes,\n>> Ashutosh Bapat\n\n\n", "msg_date": "Mon, 26 Aug 2024 16:34:49 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Hi Ashutosh,\n\nAshutosh Bapat <[email protected]> 于2024年8月26日周一 19:05写道:\n\n> Hi Shawn,\n> It will be good to document usage of this function. Please add\n> document changes in your patch. 
We need to document the impact of this\n> function so that users can judiciously decide whether or not to use\n> this function and under what conditions. Also they would know what to\n> expect when they use this function.\n\n\nI have already incorporated the usage of this function into the new patch.\n\nCurrently, there is no memory information that can be extremely accurate to\nreflect whether a trim operation should be performed. Here are two\nconditions\nthat can be used as references:\n1. Check the difference between the process's memory usage (for example,\nthe top command, due to the relationship with shared memory, it is necessary\nto subtract SHR from RES) and the statistics of the memory context. If the\ndifference is very large, this function should be used to release memory;\n2. Execute malloc_stats(). If the system bytes are greater than the\nin-use bytes, this indicates that this function can be used to release\nmemory.\n\n>\n>\nRunning it after a query finishes is one thing but that can't be\n> guaranteed because of the asynchronous nature of signal handlers.\n> malloc_trim() may be called while a query is being executed. We need\n> to assess that impact as well.\n>\n> Can you please share some numbers - TPS, latency etc. with and without\n> this function invoked during a benchmark run?\n>\n\nI have placed malloc_trim() at the end of the exec_simple_query function,\nso that malloc_trim() is executed once for each SQL statement executed. I\nused pgbench to reproduce the performance impact,\nand the results are as follows.\n*Database preparation:*\n\n> create database testc;\n> create user t1;\n> alter database testc owner to t1;\n> ./pgbench testc -U t1 -i -s 100\n> ./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n\n*Without Trim*:\n\n> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: select only>\n> scaling factor: 100\n> query mode: simple\n> number of clients: 100\n> number of threads: 100\n> maximum number of tries: 1\n> duration: 600 s\n> number of transactions actually processed: 551984376\n> number of failed transactions: 0 (0.000%)\n> latency average = 0.109 ms\n> initial connection time = 23.569 ms\n> tps = 920001.842189 (without initial connection time)\n\n*With Trim :*\n\n> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: select only>\n> scaling factor: 100\n> query mode: simple\n> number of clients: 100\n> number of threads: 100\n> maximum number of tries: 1\n> duration: 600 s\n> number of transactions actually processed: 470690787\n> number of failed transactions: 0 (0.000%)\n> latency average = 0.127 ms\n> initial connection time = 23.632 ms\n> tps = 784511.901558 (without initial connection time)", "msg_date": "Wed, 28 Aug 2024 18:54:36 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Unfortunately, I still see a compiling issue with this patch,\n\nmemtrim.c:15:10: fatal error: 'malloc.h' file not found\n#include <malloc.h>\n ^~~~~~~~~~\n1 error generated.\n\nOn Wed, 28 Aug 2024 at 12:54, shawn wang <[email protected]> wrote:\n\n> Hi Ashutosh,\n>\n> Ashutosh Bapat <[email protected]> 于2024年8月26日周一 19:05写道:\n>\n>> Hi Shawn,\n>> It will be good to document usage of this function. Please add\n>> document changes in your patch. 
We need to document the impact of this\n>> function so that users can judiciously decide whether or not to use\n>> this function and under what conditions. Also they would know what to\n>> expect when they use this function.\n>\n>\n> I have already incorporated the usage of this function into the new patch.\n>\n>\n> Currently, there is no memory information that can be extremely accurate to\n> reflect whether a trim operation should be performed. Here are two\n> conditions\n> that can be used as references:\n> 1. Check the difference between the process's memory usage (for example,\n> the top command, due to the relationship with shared memory, it is\n> necessary\n> to subtract SHR from RES) and the statistics of the memory context. If the\n> difference is very large, this function should be used to release memory;\n> 2. Execute malloc_stats(). If the system bytes are greater than the\n> in-use bytes, this indicates that this function can be used to release\n> memory.\n>\n>>\n>>\n> Running it after a query finishes is one thing but that can't be\n>> guaranteed because of the asynchronous nature of signal handlers.\n>> malloc_trim() may be called while a query is being executed. We need\n>> to assess that impact as well.\n>>\n>> Can you please share some numbers - TPS, latency etc. with and without\n>> this function invoked during a benchmark run?\n>>\n>\n> I have placed malloc_trim() at the end of the exec_simple_query function,\n> so that malloc_trim() is executed once for each SQL statement executed. I\n> used pgbench to reproduce the performance impact,\n> and the results are as follows.\n> *Database preparation:*\n>\n>> create database testc;\n>> create user t1;\n>> alter database testc owner to t1;\n>> ./pgbench testc -U t1 -i -s 100\n>> ./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>\n> *Without Trim*:\n>\n>> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>> pgbench (18devel)\n>> starting vacuum...end.\n>> transaction type: <builtin: select only>\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 100\n>> number of threads: 100\n>> maximum number of tries: 1\n>> duration: 600 s\n>> number of transactions actually processed: 551984376\n>> number of failed transactions: 0 (0.000%)\n>> latency average = 0.109 ms\n>> initial connection time = 23.569 ms\n>> tps = 920001.842189 (without initial connection time)\n>\n> *With Trim :*\n>\n>> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>> pgbench (18devel)\n>> starting vacuum...end.\n>> transaction type: <builtin: select only>\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 100\n>> number of threads: 100\n>> maximum number of tries: 1\n>> duration: 600 s\n>> number of transactions actually processed: 470690787\n>> number of failed transactions: 0 (0.000%)\n>> latency average = 0.127 ms\n>> initial connection time = 23.632 ms\n>> tps = 784511.901558 (without initial connection time)\n>\n>\n\n-- \nRegards,\nRafia Sabih\n\nUnfortunately, I still see a compiling issue with this patch,memtrim.c:15:10: fatal error: 'malloc.h' file not found#include <malloc.h>         ^~~~~~~~~~1 error generated.On Wed, 28 Aug 2024 at 12:54, shawn wang <[email protected]> wrote:Hi Ashutosh,Ashutosh Bapat <[email protected]> 于2024年8月26日周一 19:05写道:Hi Shawn,\nIt will be good to document usage of this function. Please add\ndocument changes in your patch. We need to document the impact of this\nfunction so that users can judiciously decide whether or not to use\nthis function and under what conditions. 
Also they would know what to\nexpect when they use this function. I have already incorporated the usage of this function into the new patch. Currently, there is no memory information that can be extremely accurate toreflect whether a\ntrim operation should be performed. Here are two conditionsthat can be used as references:1. Check the difference between the process's memory usage (for example,the top command,\n due to the relationship with shared memory, it is necessaryto subtract SHR from RES) and the\n statistics of the memory context. If thedifference is very large, this function should be used to\n release memory;2. Execute malloc_stats(). If the system bytes are greater than thein-use bytes, this indicates\n that this function can be used to release memory. \nRunning it after a query finishes is one thing but that can't be\nguaranteed because of the asynchronous nature of signal handlers.\nmalloc_trim() may be called while a query is being executed. We need\nto assess that impact as well.\n\nCan you please share some numbers - TPS, latency etc. with and without\nthis function invoked during a benchmark run?I have placed malloc_trim() at the end of the exec_simple_query function,so that malloc_trim()\nis executed once for each SQL statement executed. Iused pgbench to reproduce the performance impact,and the results are as follows.Database preparation:create database testc;create user t1;alter database testc owner to t1;./pgbench testc -U t1 -i -s 100./pgbench testc -U t1 -S -c 100 -j 100 -T 600Without Trim:$./pgbench testc -U t1 -S -c 100 -j 100 -T 600pgbench (18devel)starting vacuum...end.transaction type: <builtin: select only>scaling factor: 100query mode: simplenumber of clients: 100number of threads: 100maximum number of tries: 1duration: 600 snumber of transactions actually processed: 551984376number of failed transactions: 0 (0.000%)latency average = 0.109 msinitial connection time = 23.569 mstps = 920001.842189 (without initial connection time)With Trim :$./pgbench testc -U t1 -S -c 100 -j 100 -T 600pgbench (18devel)starting vacuum...end.transaction type: <builtin: select only>scaling factor: 100query mode: simplenumber of clients: 100number of threads: 100maximum number of tries: 1duration: 600 snumber of transactions actually processed: 470690787number of failed transactions: 0 (0.000%)latency average = 0.127 msinitial connection time = 23.632 mstps = 784511.901558 (without initial connection time)\n-- Regards,Rafia Sabih", "msg_date": "Wed, 11 Sep 2024 12:24:49 +0200", "msg_from": "Rafia Sabih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Hi Rafia,\n\nI have made the necessary adjustment by replacing the inclusion of malloc.h\nwith stdlib.h in the relevant codebase. 
This change should address the\nprevious concerns regarding memory allocation functions.\n\nCould you please perform another round of testing to ensure that everything\nis functioning as expected with this modification?\n\nThank you for your assistance.\n\nBest regards, Shawn\n\n\nRafia Sabih <[email protected]> 于2024年9月11日周三 18:25写道:\n\n> Unfortunately, I still see a compiling issue with this patch,\n>\n> memtrim.c:15:10: fatal error: 'malloc.h' file not found\n> #include <malloc.h>\n> ^~~~~~~~~~\n> 1 error generated.\n>\n> On Wed, 28 Aug 2024 at 12:54, shawn wang <[email protected]> wrote:\n>\n>> Hi Ashutosh,\n>>\n>> Ashutosh Bapat <[email protected]> 于2024年8月26日周一 19:05写道:\n>>\n>>> Hi Shawn,\n>>> It will be good to document usage of this function. Please add\n>>> document changes in your patch. We need to document the impact of this\n>>> function so that users can judiciously decide whether or not to use\n>>> this function and under what conditions. Also they would know what to\n>>> expect when they use this function.\n>>\n>>\n>> I have already incorporated the usage of this function into the new patch.\n>>\n>>\n>> Currently, there is no memory information that can be extremely accurate\n>> to\n>> reflect whether a trim operation should be performed. Here are two\n>> conditions\n>> that can be used as references:\n>> 1. Check the difference between the process's memory usage (for example,\n>> the top command, due to the relationship with shared memory, it is\n>> necessary\n>> to subtract SHR from RES) and the statistics of the memory context. If the\n>> difference is very large, this function should be used to release memory;\n>> 2. Execute malloc_stats(). If the system bytes are greater than the\n>> in-use bytes, this indicates that this function can be used to release\n>> memory.\n>>\n>>>\n>>>\n>> Running it after a query finishes is one thing but that can't be\n>>> guaranteed because of the asynchronous nature of signal handlers.\n>>> malloc_trim() may be called while a query is being executed. We need\n>>> to assess that impact as well.\n>>>\n>>> Can you please share some numbers - TPS, latency etc. with and without\n>>> this function invoked during a benchmark run?\n>>>\n>>\n>> I have placed malloc_trim() at the end of the exec_simple_query function,\n>> so that malloc_trim() is executed once for each SQL statement executed. 
I\n>> used pgbench to reproduce the performance impact,\n>> and the results are as follows.\n>> *Database preparation:*\n>>\n>>> create database testc;\n>>> create user t1;\n>>> alter database testc owner to t1;\n>>> ./pgbench testc -U t1 -i -s 100\n>>> ./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>>\n>> *Without Trim*:\n>>\n>>> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>>> pgbench (18devel)\n>>> starting vacuum...end.\n>>> transaction type: <builtin: select only>\n>>> scaling factor: 100\n>>> query mode: simple\n>>> number of clients: 100\n>>> number of threads: 100\n>>> maximum number of tries: 1\n>>> duration: 600 s\n>>> number of transactions actually processed: 551984376\n>>> number of failed transactions: 0 (0.000%)\n>>> latency average = 0.109 ms\n>>> initial connection time = 23.569 ms\n>>> tps = 920001.842189 (without initial connection time)\n>>\n>> *With Trim :*\n>>\n>>> $./pgbench testc -U t1 -S -c 100 -j 100 -T 600\n>>> pgbench (18devel)\n>>> starting vacuum...end.\n>>> transaction type: <builtin: select only>\n>>> scaling factor: 100\n>>> query mode: simple\n>>> number of clients: 100\n>>> number of threads: 100\n>>> maximum number of tries: 1\n>>> duration: 600 s\n>>> number of transactions actually processed: 470690787\n>>> number of failed transactions: 0 (0.000%)\n>>> latency average = 0.127 ms\n>>> initial connection time = 23.632 ms\n>>> tps = 784511.901558 (without initial connection time)\n>>\n>>\n>\n> --\n> Regards,\n> Rafia Sabih\n>", "msg_date": "Thu, 12 Sep 2024 10:40:24 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:40, shawn wang <[email protected]> wrote:\n> Could you please perform another round of testing to ensure that everything is functioning as expected with this modification?\n\nOne way to get a few machines with various build systems testing this\nis to register the patch on the commitfest app in [1]. You can then\nsee if the patch is passing the continuous integration tests in [2].\nOne day soon the features of [2] should be combined with [1].\n\nDavid\n\n[1] https://commitfest.postgresql.org/50/\n[2] http://cfbot.cputube.org/\n\n\n", "msg_date": "Thu, 12 Sep 2024 20:42:24 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Thank you for your valuable suggestion.\n\nI have successfully registered my patch for the commitfest. However, upon\nintegration, I encountered several errors during the testing phase. I am\ncurrently investigating the root causes of these issues and will work on\nproviding the necessary fixes. If you have any further insights or\nrecommendations, I would greatly appreciate your guidance.\n\nThank you once again for your support.\n\nBest regards, Shawn\n\nDavid Rowley <[email protected]> 于2024年9月12日周四 16:42写道:\n\n> On Thu, 12 Sept 2024 at 14:40, shawn wang <[email protected]> wrote:\n> > Could you please perform another round of testing to ensure that\n> everything is functioning as expected with this modification?\n>\n> One way to get a few machines with various build systems testing this\n> is to register the patch on the commitfest app in [1]. 
You can then\n> see if the patch is passing the continuous integration tests in [2].\n> One day soon the features of [2] should be combined with [1].\n>\n> David\n>\n> [1] https://commitfest.postgresql.org/50/\n> [2] http://cfbot.cputube.org/\n>", "msg_date": "Mon, 16 Sep 2024 01:48:44 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "shawn wang <[email protected]> writes:\n> I have successfully registered my patch for the commitfest. However, upon\n> integration, I encountered several errors during the testing phase. I am\n> currently investigating the root causes of these issues and will work on\n> providing the necessary fixes.\n\nI should think the root cause is pretty obvious: malloc_trim() is\na glibc-ism.\n\nI'm fairly doubtful that this is something we should spend time on.\nIt can never work on any non-glibc platform. Even granting that\na Linux-only feature could be worth having, I'm really doubtful\nthat our memory allocation patterns are such that malloc_trim()\ncould be expected to free a useful amount of memory mid-query.\nThe single test case you showed suggested that maybe we could\nusefully prod glibc to free memory at query completion, but we\ndon't need all this interrupt infrastructure to do that. I think\nwe could likely get 95% of the benefit with about a five-line\npatch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 14:22:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "I wrote:\n> The single test case you showed suggested that maybe we could\n> usefully prod glibc to free memory at query completion, but we\n> don't need all this interrupt infrastructure to do that. I think\n> we could likely get 95% of the benefit with about a five-line\n> patch.\n\nTo try to quantify that a little, I wrote a very quick-n-dirty\npatch to apply malloc_trim during finish_xact_command and log\nthe effects. (I am not asserting this is the best place to\ncall malloc_trim; it's just one plausible possibility.) Patch\nattached, as well as statistics collected from a run of the\ncore regression tests followed by\n\ngrep malloc_trim postmaster.log | sed 's/.*LOG:/LOG:/' | sort -k4n | uniq -c >trim_savings.txt\n\nWe can see that out of about 43K test queries, 32K saved nothing\nwhatever, and in only four was more than a couple of meg saved.\nThat's pretty discouraging IMO. It might be useful to look closer\nat the behavior of those top four though. 
I see them as\n\n2024-09-15 14:58:06.146 EDT [960138] LOG: malloc_trim saved 7228 kB\n2024-09-15 14:58:06.146 EDT [960138] STATEMENT: ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);\n\n2024-09-15 14:58:09.861 EDT [960949] LOG: malloc_trim saved 12488 kB\n2024-09-15 14:58:09.861 EDT [960949] STATEMENT: with recursive search_graph(f, t, label, is_cycle, path) as (\n\t\tselect *, false, array[row(g.f, g.t)] from graph g\n\t\tunion distinct\n\t\tselect g.*, row(g.f, g.t) = any(path), path || row(g.f, g.t)\n\t\tfrom graph g, search_graph sg\n\t\twhere g.f = sg.t and not is_cycle\n\t)\n\tselect * from search_graph;\n\n2024-09-15 14:58:09.866 EDT [960949] LOG: malloc_trim saved 12488 kB\n2024-09-15 14:58:09.866 EDT [960949] STATEMENT: with recursive search_graph(f, t, label) as (\n\t\tselect * from graph g\n\t\tunion distinct\n\t\tselect g.*\n\t\tfrom graph g, search_graph sg\n\t\twhere g.f = sg.t\n\t) cycle f, t set is_cycle to 'Y' default 'N' using path\n\tselect * from search_graph;\n\n2024-09-15 14:58:09.853 EDT [960949] LOG: malloc_trim saved 12616 kB\n2024-09-15 14:58:09.853 EDT [960949] STATEMENT: with recursive search_graph(f, t, label) as (\n\t\tselect * from graph0 g\n\t\tunion distinct\n\t\tselect g.*\n\t\tfrom graph0 g, search_graph sg\n\t\twhere g.f = sg.t\n\t) search breadth first by f, t set seq\n\tselect * from search_graph order by seq;\n\nI don't understand why WITH RECURSIVE queries might be more prone\nto leave non-garbage-collected memory behind than other queries,\nbut maybe that is worth looking into.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 15 Sep 2024 15:16:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trim the heap free memory" }, { "msg_contents": "Thank you very much for your response and suggestions.\n\nAs you mentioned, the patch here is actually designed for glibc's ptmalloc2\nandis not applicable to other platforms. I will consider supporting it only\non the Linux platform in the future. In the memory management strategy of\nptmalloc2, there is a certain amount of non-garbage-collected memory, which\nis closely related to the order and method of memory allocation and\nrelease. To reduce the performance overhead caused by frequent allocation\nand release of small blocks of memory, ptmalloc2 intentionally retains this\npart of the memory. The malloc_trim function locks, traverses memory\nblocks, and uses madvise to release this part of the memory, but this\nprocess may also have a negative impact on performance. In the process of\nexploring solutions, I also considered a variety of strategies, including\nscheduling malloc_trim to be executed at regular intervals or triggering\nmalloc_trim after a specific number of free operations. However, we found\nthat these methods are not optimal solutions.\n\n> We can see that out of about 43K test queries, 32K saved nothing\n> whatever, and in only four was more than a couple of meg saved.\n> That's pretty discouraging IMO. It might be useful to look closer\n> at the behavior of those top four though. I see them as\n\n\nI have previously encountered situations where the non-garbage-collected\nmemory of wal_sender was approximately hundreds of megabytes or even\nexceeded 1GB, but I was unable to reproduce this situation using simple\nSQL. 
Therefore, I introduced an asynchronous processing function, hoping to\nmanage memory more efficiently without affecting performance.\n\n\nIn addition, I have considered the following optimization strategies:\n\n 1.\n\n Adjust the configuration of ptmalloc2 through the mallopt function to\n use mmap rather than sbrk for memory allocation. This can immediately\n return the memory to the operating system when it is released, but it may\n affect performance due to the higher overhead of mmap.\n 2.\n\n Use other memory allocators such as jemalloc or tcmalloc, and adjust\n relevant parameters to reduce the generation of non-garbage-collected\n memory. However, these allocators are designed for multi-threaded and may\n lead to increased memory usage per process.\n 3.\n\n Build a set of memory context (memory context) allocation functions\n based on mmap, delegating the responsibility of memory management entirely\n to the database level. Although this solution can effectively control\n memory allocation, it requires a large-scale engineering implementation.\n\nI look forward to further discussing these solutions with you and exploring\nthe best memory management practices together.\n\nBest regards, Shawn\n\nTom Lane <[email protected]> 于2024年9月16日周一 03:16写道:\n\n> I wrote:\n> > The single test case you showed suggested that maybe we could\n> > usefully prod glibc to free memory at query completion, but we\n> > don't need all this interrupt infrastructure to do that. I think\n> > we could likely get 95% of the benefit with about a five-line\n> > patch.\n>\n> To try to quantify that a little, I wrote a very quick-n-dirty\n> patch to apply malloc_trim during finish_xact_command and log\n> the effects. (I am not asserting this is the best place to\n> call malloc_trim; it's just one plausible possibility.) Patch\n> attached, as well as statistics collected from a run of the\n> core regression tests followed by\n>\n> grep malloc_trim postmaster.log | sed 's/.*LOG:/LOG:/' | sort -k4n | uniq\n> -c >trim_savings.txt\n>\n> We can see that out of about 43K test queries, 32K saved nothing\n> whatever, and in only four was more than a couple of meg saved.\n> That's pretty discouraging IMO. It might be useful to look closer\n> at the behavior of those top four though. 
I see them as\n>\n> 2024-09-15 14:58:06.146 EDT [960138] LOG: malloc_trim saved 7228 kB\n> 2024-09-15 14:58:06.146 EDT [960138] STATEMENT: ALTER TABLE\n> delete_test_table ADD PRIMARY KEY (a,b,c,d);\n>\n> 2024-09-15 14:58:09.861 EDT [960949] LOG: malloc_trim saved 12488 kB\n> 2024-09-15 14:58:09.861 EDT [960949] STATEMENT: with recursive\n> search_graph(f, t, label, is_cycle, path) as (\n> select *, false, array[row(g.f, g.t)] from graph g\n> union distinct\n> select g.*, row(g.f, g.t) = any(path), path || row(g.f,\n> g.t)\n> from graph g, search_graph sg\n> where g.f = sg.t and not is_cycle\n> )\n> select * from search_graph;\n>\n> 2024-09-15 14:58:09.866 EDT [960949] LOG: malloc_trim saved 12488 kB\n> 2024-09-15 14:58:09.866 EDT [960949] STATEMENT: with recursive\n> search_graph(f, t, label) as (\n> select * from graph g\n> union distinct\n> select g.*\n> from graph g, search_graph sg\n> where g.f = sg.t\n> ) cycle f, t set is_cycle to 'Y' default 'N' using path\n> select * from search_graph;\n>\n> 2024-09-15 14:58:09.853 EDT [960949] LOG: malloc_trim saved 12616 kB\n> 2024-09-15 14:58:09.853 EDT [960949] STATEMENT: with recursive\n> search_graph(f, t, label) as (\n> select * from graph0 g\n> union distinct\n> select g.*\n> from graph0 g, search_graph sg\n> where g.f = sg.t\n> ) search breadth first by f, t set seq\n> select * from search_graph order by seq;\n>\n> I don't understand why WITH RECURSIVE queries might be more prone\n> to leave non-garbage-collected memory behind than other queries,\n> but maybe that is worth looking into.\n>\n> regards, tom lane\n>\n>\n\nThank you very much for your response and suggestions.As you mentioned, the patch here is actually designed for glibc's ptmalloc2 andis not\napplicable to other platforms. I will consider supporting it only on the Linux platform\nin the future.\nIn the memory management strategy of ptmalloc2, there is a certain amount of\nnon-garbage-collected memory, which is closely related to the order and method\nof memory allocation and release. To reduce the performance overhead caused by\nfrequent allocation and release of small blocks of memory, ptmalloc2 intentionally\nretains this part of the memory. The malloc_trim function locks, traverses memory\nblocks, and uses madvise to release this part of the memory, but this process may\nalso have a negative impact on performance.\nIn the process of exploring solutions, I also considered a variety of strategies,\nincluding scheduling malloc_trim to be executed at regular intervals or triggering\nmalloc_trim after a specific number of free operations. However, we found that\nthese methods are not optimal solutions. We can see that out of about 43K test queries, 32K saved nothingwhatever, and in only four was more than a couple of meg saved.That's pretty discouraging IMO.  It might be useful to look closerat the behavior of those top four though.  I see them asI have previously encountered situations where the non-garbage-collected memory\nof wal_sender was approximately hundreds of megabytes or even exceeded 1GB,\nbut I was unable to reproduce this situation using simple SQL.\nTherefore, I introduced an asynchronous processing function, hoping to manage\nmemory more efficiently without affecting performance. In addition, I have considered the following optimization strategies:Adjust the configuration of ptmalloc2 through the mallopt function to use mmap\nrather than sbrk for memory allocation. 
This can immediately return the memory\nto the operating system when it is released, but it may affect performance due to\nthe higher overhead of mmap.Use other memory allocators such as jemalloc or tcmalloc, and adjust relevant\nparameters to reduce the generation of non-garbage-collected memory.\nHowever, these allocators are designed for multi-threaded\nand may lead to increased memory usage per process.Build a set of memory context (memory context) allocation functions based\non mmap, delegating the responsibility of memory management entirely to\nthe database level. Although this solution can effectively control memory\nallocation, it requires a large-scale engineering implementation.I look forward to further discussing these solutions with you and exploring the best\nmemory management practices together.Best regards, ShawnTom Lane <[email protected]> 于2024年9月16日周一 03:16写道:I wrote:\n> The single test case you showed suggested that maybe we could\n> usefully prod glibc to free memory at query completion, but we\n> don't need all this interrupt infrastructure to do that.  I think\n> we could likely get 95% of the benefit with about a five-line\n> patch.\n\nTo try to quantify that a little, I wrote a very quick-n-dirty\npatch to apply malloc_trim during finish_xact_command and log\nthe effects.  (I am not asserting this is the best place to\ncall malloc_trim; it's just one plausible possibility.)  Patch\nattached, as well as statistics collected from a run of the\ncore regression tests followed by\n\ngrep malloc_trim postmaster.log | sed 's/.*LOG:/LOG:/' | sort -k4n | uniq -c >trim_savings.txt\n\nWe can see that out of about 43K test queries, 32K saved nothing\nwhatever, and in only four was more than a couple of meg saved.\nThat's pretty discouraging IMO.  It might be useful to look closer\nat the behavior of those top four though.  
I see them as\n\n2024-09-15 14:58:06.146 EDT [960138] LOG:  malloc_trim saved 7228 kB\n2024-09-15 14:58:06.146 EDT [960138] STATEMENT:  ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);\n\n2024-09-15 14:58:09.861 EDT [960949] LOG:  malloc_trim saved 12488 kB\n2024-09-15 14:58:09.861 EDT [960949] STATEMENT:  with recursive search_graph(f, t, label, is_cycle, path) as (\n                select *, false, array[row(g.f, g.t)] from graph g\n                union distinct\n                select g.*, row(g.f, g.t) = any(path), path || row(g.f, g.t)\n                from graph g, search_graph sg\n                where g.f = sg.t and not is_cycle\n        )\n        select * from search_graph;\n\n2024-09-15 14:58:09.866 EDT [960949] LOG:  malloc_trim saved 12488 kB\n2024-09-15 14:58:09.866 EDT [960949] STATEMENT:  with recursive search_graph(f, t, label) as (\n                select * from graph g\n                union distinct\n                select g.*\n                from graph g, search_graph sg\n                where g.f = sg.t\n        ) cycle f, t set is_cycle to 'Y' default 'N' using path\n        select * from search_graph;\n\n2024-09-15 14:58:09.853 EDT [960949] LOG:  malloc_trim saved 12616 kB\n2024-09-15 14:58:09.853 EDT [960949] STATEMENT:  with recursive search_graph(f, t, label) as (\n                select * from graph0 g\n                union distinct\n                select g.*\n                from graph0 g, search_graph sg\n                where g.f = sg.t\n        ) search breadth first by f, t set seq\n        select * from search_graph order by seq;\n\nI don't understand why WITH RECURSIVE queries might be more prone\nto leave non-garbage-collected memory behind than other queries,\nbut maybe that is worth looking into.\n\n                        regards, tom lane", "msg_date": "Wed, 18 Sep 2024 10:56:08 +0800", "msg_from": "shawn wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trim the heap free memory" } ]
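To make the "five-line patch" idea from the trim-heap-free-memory thread above concrete, here is a minimal sketch of what prodding glibc once per transaction could look like. This is an illustration only, not Tom Lane's actual quick-n-dirty patch (which also measured and logged the savings); the placement inside finish_xact_command() and the __GLIBC__ guard are assumptions, and malloc_trim() is available only with glibc, as noted in the discussion.

#ifdef __GLIBC__
#include <malloc.h>             /* for malloc_trim() */
#endif

static void
finish_xact_command(void)
{
    /* ... existing end-of-transaction cleanup runs first ... */

#ifdef __GLIBC__
    /*
     * Ask glibc's allocator to hand free heap pages back to the kernel.
     * The argument is the slack (in bytes) to keep at the top of the
     * heap; 0 trims as much as possible.  malloc_trim() returns 1 if any
     * memory was released, which is ignored here.
     */
    (void) malloc_trim(0);
#endif
}

Doing this once per transaction, rather than per query or via signals, is one way to limit the traversal cost of malloc_trim() that the thread measured as a pgbench TPS regression.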
[ { "msg_contents": "Hi hackers. I found a strange point when I was reading the heapam\nhandlers of table access method, where heapam_tuple_insert and several\nhandlers explicitly assign t_tableOid of the tuple to be operated,\nwhile later the t_tableOid is assigned again by like heap_prepare_insert or\nso. Is it redundant? Better to do it together with other tuple\ninitialization\ncode?\n\nRegards,\nJingtang", "msg_date": "Fri, 23 Aug 2024 19:50:22 +0800", "msg_from": "Jingtang Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Redundant assignment of table OID for a HeapTuple?" }, { "msg_contents": "On 23/08/2024 14:50, Jingtang Zhang wrote:\n> Hi hackers. I found a strange point when I was reading the heapam\n> handlers of table access method, where heapam_tuple_insert and several\n> handlers explicitly assign t_tableOid of the tuple to be operated,\n> while later the t_tableOid is assigned again by like heap_prepare_insert or\n> so. Is it redundant? Better to do it together with other tuple \n> initialization\n> code?\n\nI wonder if we could get rid of t_tableOid altogether. It's only used in \na few places, maybe those places could get the informatiotion from \nsomewhere else. Notably, there's also a t_tableOid field in TupleTableSlot.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 15:04:30 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant assignment of table OID for a HeapTuple?" } ]
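The double assignment discussed in the table-OID thread above looks roughly like the following. This is paraphrased for illustration, not verbatim PostgreSQL source, and the argument lists and surrounding logic are abbreviated: the heapam table-AM handler fills in t_tableOid before calling into heapam, and heap_prepare_insert() then stores the same relation OID into the field again.

/* In the table-AM handler (paraphrased): */
static void
heapam_tuple_insert(Relation relation, TupleTableSlot *slot,
                    CommandId cid, int options, BulkInsertState bistate)
{
    bool        shouldFree = true;
    HeapTuple   tuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);

    slot->tts_tableOid = RelationGetRelid(relation);
    tuple->t_tableOid = slot->tts_tableOid;         /* first assignment */

    heap_insert(relation, tuple, cid, options, bistate);
    /* ... copy the resulting ItemPointer back into the slot, etc. ... */
}

/* Later, inside heapam itself (paraphrased): */
static HeapTuple
heap_prepare_insert(Relation relation, HeapTuple tup, TransactionId xid,
                    CommandId cid, int options)
{
    /* ... header initialization ... */
    tup->t_tableOid = RelationGetRelid(relation);   /* assigned again here */
    /* ... */
}

Whether the first assignment can simply be dropped, or the field removed altogether as Heikki suggests, depends on the callers that still read t_tableOid (and the matching field in TupleTableSlot).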
[ { "msg_contents": "So, for our production use, we want to achieve $subj. The problem is\nnot new [1], as well as solutions to it[2]. So, what is the problem?\n\nOn one hand, we cannot resolve this issue with some primitive log\nchunk truncation mechanism inside core (the way it is done in\ngreenplum db for example[3]), because this will result in invalid json\nlogged in postgresql log file. Also, we maybe don't want to truncate\nHINT or DETAIL (which should not be long). We only want to truncate\nthe query or its parameters list in case of extended proto. So, any\npatch like this is probably unacceptable.\n\nOn the other hand, it is also unclear how to truncate long lines\ninside emit_log_hook. I came up with this [4] module. This approach is\nbad, because we modify core-owned data, so we break other\nemit_log_hook hooks, which is a show stopper for this extension use.\nThe one detail we also should address is debug_query_string. We have\nno API currently to tell the kernel to log debug_query_string\npartially. One can set ErrorData hide_stmt field to true; but this way\nwe erase query string from logs, instead of logging to more than $N\nchars (or multibyte sequence) from it.\n\nSo, I want to hear suggestions from the community, which way to\nachieve $subj. The approach I have in mind right now is setting\nsomething like field_max_length inside emit_log_hook. Then, use this\nnew setting in core in the send_message_to_server_log function.\n\n[1] https://www.postgresql.org/message-id/CAM-gcbQqriv%3Dnr%3DYFFmp5ytgW7HbiftLBANFB9C0GwHMGDC0LA%40mail.gmail.com\n[2] https://stackoverflow.com/questions/45992209/how-to-truncate-log-statement-in-postgres-9-6\n[3] https://github.com/greenplum-db/gpdb-archive/blob/283eea57100c690cb05b672b14eef7d0382e4e16/src/backend/utils/error/elog.c#L3668-L3671\n[4] https://github.com/pg-sharding/pg_log_trunc/blob/master/pg_log_trunc.c#L43\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Fri, 23 Aug 2024 17:36:48 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid logging enormously large messages" } ]
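One way to picture the field_max_length idea floated in the message above — truncating oversized fields inside send_message_to_server_log rather than in an emit_log_hook — is a small clipping helper like the sketch below. The GUC-style variable, the helper name, and its use of pg_mbcliplen() to avoid splitting a multibyte sequence are assumptions for illustration; nothing like this exists in core today.

#include "postgres.h"
#include "mb/pg_wchar.h"        /* pg_mbcliplen() */

/* hypothetical setting: 0 disables truncation */
int         log_field_max_length = 0;

/*
 * Return "value" unchanged if it fits, otherwise copy at most max_len
 * bytes into buf, cutting on a character boundary, and append a marker
 * noting how much was dropped.
 */
static const char *
clip_log_field(const char *value, int max_len, char *buf, size_t bufsize)
{
    int         len = strlen(value);
    int         cliplen;

    if (max_len <= 0 || len <= max_len)
        return value;

    cliplen = pg_mbcliplen(value, len, max_len);
    snprintf(buf, bufsize, "%.*s... (%d bytes truncated)",
             cliplen, value, len - cliplen);
    return buf;
}

In this sketch, send_message_to_server_log() (and whatever logs debug_query_string) would run the message, DETAIL, and query fields through the helper before serializing them, which keeps csvlog and jsonlog output well-formed because the truncation happens before the field is emitted — unlike clipping core-owned ErrorData strings from an emit_log_hook, which the message above already rules out.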
[ { "msg_contents": "Currently numeric.c has 2 separate functions that implement numeric\ndivision, div_var() and div_var_fast(). Since div_var_fast() is only\napproximate, it is only used in the transcendental functions, where a\nslightly inaccurate result is OK. In all other cases, div_var() is\nused.\n\nAside from the pain of having to maintain 2 separate functions,\ndiv_var() is very slow for large inputs. In fact, it's well over an\norder of magnitude slower than mul_var() for similar sized inputs. For\nexample:\n\ncreate temp table foo(x numeric, y numeric, xy numeric);\ninsert into foo select x, y, x*y from\n (select random(1e49999, 1e50000-1), random(1e49999, 1e50000-1)\n from generate_series(1,100)) g(x,y);\n\nselect count(*) from foo where x*y != xy; -- 840 ms\n\nselect count(*) from foo where xy/x != y; -- 36364 ms\n\nselect count(*) from foo where xy/y != x; -- 36515 ms\n\nThe attached patch attempts to resolve those issues by replacing\ndiv_var() and div_var_fast() with a single function intended to be\nfaster than both the originals.\n\nThe new version of div_var() has an extra \"exact\" boolean parameter,\nso that it can operate in the faster, approximate mode when used by\nthe transcendental functions, but regardless of the value of this\nparameter, it adopts the algorithm used by div_var_fast(), using\nfloat-point arithmetic to approximate each quotient digit. The\ndifference is that, if an exact result is requested, it uses a larger\nworking dividend, so that it can subtract off complete multiples of\nthe divisor for each quotient digit (while still delaying the\npropagation of carries). This then gives the remainder, which can be\nused to check the quotient and correct it, if it's inaccurate.\n\nIn addition, like mul_var(), div_var() now does the computation in\nbase NBASE^2, using 64-bit integers for the working dividend array,\nwhich halves the number of iterations in the outer loop and reduces\nthe frequency of carry-propagation passes.\n\nIn the testing I've done so far, this is up to around 20 times faster\nthan the old version of div_var() when \"exact\" is true (it computes\neach of the queries above in around 1.6 seconds), and it's up to\naround 2-3 times faster than div_var_fast() when \"exact\" is false.\n\nIn addition, I found that it's significantly faster than\ndiv_var_int64() for 3 and 4 digit divisors, so I've removed that as\nwell.\n\nOverall, this reduces numeric.c by around 300 lines, and means that\nwe'll only have one function to maintain.\n\nPatch 0001 is the patch itself. 0002 is just for testing purposes --\nit exposes both old functions and the new one as SQL-callable\nfunctions with all their arguments, so they can be compared.\n\nI'm also attaching the performance test script I used, and the output\nit produced.\n\nSomething else these test results reveal is the answer to the\nquestion: under what circumstances is the approximate computation\nfaster than the exact one? 
The answer seems to be: whenever var2 has\nmore than around 12 NBASE digits, regardless of the size of var1.\n\nThinking about that, it makes sense based on the following reasoning:\nThe exact computation subtracts off a complete multiple of the divisor\nfor each result digit, so the overall cost is roughly proportional to\n\n res_ndigitpairs * var2ndigitpairs\n\nThe approximate calculation computes an additional DIV_GUARD_DIGITS\nresult digits (4 NBASE digits, or 2 NBASE^2 digits), so there's an\nadditional cost of around 2 * var2ndigitpairs, but it ignores digits\nin the working dividend beyond the guard digits, which means that, for\nlater result digits, it is able to truncate the subtraction step. The\nsize of the truncation increases from 1 to var2ndigitpairs-1 as \"qi\"\nincreases, so the overall net saving is roughly\n\n 1 + 2 + 3 + ... + (var2ndigitpairs-1) - 2 * var2ndigitpairs\n\nThe first part of that is just an arithmetic series, so this equals\n\n var2ndigitpairs * (var2ndigitpairs - 1) / 2 - 2 * var2ndigitpairs\n\n = var2ndigitpairs * (var2ndigitpairs - 5) / 2\n\nThat suggests there will be an overall saving once var2ndigitpairs is\nlarger than 5, which is roughly consistent with observations. It's\nunderstandable that it's a little larger in practice, due to other\noverheads in the outer loop.\n\nSo I'm inclined to add something like\n\n if (var2ndigitpairs <= 6)\n exact = true;\n\nnear the start, to try to get the best performance in all cases.\n\nI also tested how common it was that a correction needed to be applied\nto the quotient. Using random inputs, the quotient was one too small\nin the final place roughly 1 in every 2000 - 100000 times (varying\ndepending on the size of the inputs), and it was never off by more\nthan one, or too large. A simple example where the estimated quotient\nis one too small is this:\n\nselect div(5399133729, 5399133729);\n\nThis initially rounds the wrong way, due to loss of precision in the\nfloating-point numbers.\n\nIt is also possible to trigger the other case (an estimated quotient\ndigit that is one too large) with carefully crafted inputs of this\nform:\n\n-- 53705623790171816464 = 7328412092 * 7328412092\nselect div(53705623790171816464 - 1, 7328412092) = 7328412092 - 1;\n\nAgain, I wasn't able to cause the intermediate result to be off by\nmore than one using these kinds of inputs.\n\nI'm inclined to add those examples to the regression tests, just so\nthat we have coverage of those 2 branches of the correction code.\n\nRegards,\nDean", "msg_date": "Fri, 23 Aug 2024 14:49:22 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Optimising numeric division" }, { "msg_contents": "On Fri, Aug 23, 2024, at 15:49, Dean Rasheed wrote:\n> Currently numeric.c has 2 separate functions that implement numeric\n> division, div_var() and div_var_fast(). Since div_var_fast() is only\n> approximate, it is only used in the transcendental functions, where a\n> slightly inaccurate result is OK. 
In all other cases, div_var() is\n> used.\n...\n> The attached patch attempts to resolve those issues by replacing\n> div_var() and div_var_fast() with a single function intended to be\n> faster than both the originals.\n...\n> In addition, like mul_var(), div_var() now does the computation in\n> base NBASE^2, using 64-bit integers for the working dividend array,\n> which halves the number of iterations in the outer loop and reduces\n> the frequency of carry-propagation passes.\n...\n> In the testing I've done so far, this is up to around 20 times faster\n> than the old version of div_var() when \"exact\" is true (it computes\n> each of the queries above in around 1.6 seconds), and it's up to\n> around 2-3 times faster than div_var_fast() when \"exact\" is false.\n>\n> In addition, I found that it's significantly faster than\n> div_var_int64() for 3 and 4 digit divisors, so I've removed that as\n> well.\n...\n> Overall, this reduces numeric.c by around 300 lines, and means that\n> we'll only have one function to maintain.\n\nImpressive simplifications and optimizations.\nVery happy to see reuse of the NBASE^2 trick.\n\n> Patch 0001 is the patch itself. 0002 is just for testing purposes --\n> it exposes both old functions and the new one as SQL-callable\n> functions with all their arguments, so they can be compared.\n>\n> I'm also attaching the performance test script I used, and the output\n> it produced.\n\nI've had an initial look at the code and it looks straight-forward,\nthanks to most of the complicated parts of the changed code is just\nchange of NBASE to NBASE_SQR.\n\nI think the comments are of high quality.\n\nI've run perf_test.sql on my three machines, without any errors.\nOutput files attached.\n\nRegards,\nJoel", "msg_date": "Fri, 23 Aug 2024 21:21:23 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Fri, Aug 23, 2024, at 21:21, Joel Jacobson wrote:\n> On Fri, Aug 23, 2024, at 15:49, Dean Rasheed wrote:\n>> The attached patch attempts to resolve those issues by replacing\n>> div_var() and div_var_fast() with a single function intended to be\n>> faster than both the originals.\n...\n> I've had an initial look at the code and it looks straight-forward,\n> thanks to most of the complicated parts of the changed code is just\n> change of NBASE to NBASE_SQR.\n\nThis part seems to be entirely new:\n\n```\nif (remainder[0] < 0)\n{\n\t/* remainder < 0; quotient is too large */\n\t...\n}\nelse\n{\n\t/* remainder >= 0 */\n\t...\n}\n```\n\nMaybe add some additional comments here? Or maybe it's fine already as is,\nnot sure what I think. 
Nothing specific here that is extra complicated,\nbut there are four nested branches, so quite a lot in total to keep track of.\n\nRegards,\nJoel\n\n\n", "msg_date": "Fri, 23 Aug 2024 23:03:56 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Fri, Aug 23, 2024, at 21:21, Joel Jacobson wrote:\n> Attachments:\n> * perf_test-M3 Max.out\n> * perf_test-Intel Core i9-14900K.out\n> * perf_test-AMD Ryzen 9 7950X3D.out\n\nHere are some additional benchmarks from pg-catbench:\n\nAMD Ryzen 9 7950X3D:\n\nselect x var1ndigits,y var2ndigits,a_avg,b_avg,pooled_stddev,abs_diff,rel_diff,sigmas from catbench.vreport where summary like 'Optimise numeric division using base-NBASE^2 arithmetic%' and function_name = 'numeric_div' order by 1,2;\n var1ndigits | var2ndigits | a_avg | b_avg | pooled_stddev | abs_diff | rel_diff | sigmas\n-------------+-------------+--------+--------+---------------+----------+----------+--------\n 1 | 1 | 43 ns | 39 ns | 11 ns | -3.3 ns | -8 | 0\n 1 | 2 | 47 ns | 43 ns | 11 ns | -3.2 ns | -7 | 0\n 1 | 3 | 55 ns | 89 ns | 18 ns | 33 ns | 60 | 2\n 1 | 4 | 76 ns | 93 ns | 20 ns | 16 ns | 22 | 1\n 1 | 8 | 190 ns | 98 ns | 36 ns | -94 ns | -49 | 3\n 1 | 16 | 280 ns | 120 ns | 52 ns | -160 ns | -57 | 3\n 1 | 32 | 490 ns | 140 ns | 98 ns | -350 ns | -72 | 4\n 1 | 64 | 780 ns | 190 ns | 110 ns | -590 ns | -75 | 5\n 1 | 128 | 1.4 µs | 330 ns | 85 ns | -1.1 µs | -77 | 13\n 1 | 256 | 590 ns | 210 ns | 120 ns | -380 ns | -65 | 3\n 1 | 512 | 1.6 µs | 430 ns | 460 ns | -1.2 µs | -74 | 3\n 1 | 1024 | 2.8 µs | 820 ns | 1 µs | -2 µs | -71 | 2\n 1 | 2048 | 6.6 µs | 1.4 µs | 1.9 µs | -5.2 µs | -78 | 3\n 1 | 4096 | 11 µs | 2.8 µs | 2.5 µs | -8.5 µs | -76 | 3\n 1 | 8192 | 25 µs | 5.6 µs | 7.4 µs | -20 µs | -78 | 3\n 1 | 16384 | 53 µs | 15 µs | 15 µs | -37 µs | -71 | 3\n 2 | 2 | 49 ns | 49 ns | 12 ns | 2e-10 s | 0 | 0\n 2 | 3 | 54 ns | 93 ns | 16 ns | 39 ns | 73 | 2\n 2 | 4 | 86 ns | 97 ns | 19 ns | 10 ns | 12 | 1\n 2 | 8 | 200 ns | 89 ns | 36 ns | -110 ns | -56 | 3\n 2 | 16 | 340 ns | 120 ns | 63 ns | -210 ns | -63 | 3\n 2 | 32 | 460 ns | 130 ns | 84 ns | -320 ns | -71 | 4\n 2 | 64 | 770 ns | 210 ns | 68 ns | -560 ns | -73 | 8\n 2 | 128 | 1.5 µs | 330 ns | 290 ns | -1.2 µs | -78 | 4\n 2 | 256 | 1.2 µs | 380 ns | 160 ns | -870 ns | -70 | 5\n 2 | 512 | 1.9 µs | 520 ns | 510 ns | -1.4 µs | -73 | 3\n 2 | 1024 | 3.9 µs | 1 µs | 1.2 µs | -2.9 µs | -74 | 2\n 2 | 2048 | 7.8 µs | 2.1 µs | 2.5 µs | -5.7 µs | -74 | 2\n 2 | 4096 | 17 µs | 5.3 µs | 2 µs | -12 µs | -69 | 6\n 2 | 8192 | 30 µs | 7.7 µs | 8.5 µs | -22 µs | -74 | 3\n 2 | 16384 | 35 µs | 7.3 µs | 7.2 µs | -27 µs | -79 | 4\n 3 | 3 | 52 ns | 88 ns | 16 ns | 36 ns | 69 | 2\n 3 | 4 | 78 ns | 97 ns | 24 ns | 19 ns | 24 | 1\n 3 | 8 | 210 ns | 94 ns | 38 ns | -120 ns | -56 | 3\n 3 | 16 | 300 ns | 130 ns | 53 ns | -170 ns | -57 | 3\n 3 | 32 | 510 ns | 140 ns | 95 ns | -370 ns | -72 | 4\n 3 | 64 | 800 ns | 230 ns | 100 ns | -570 ns | -72 | 6\n 3 | 128 | 1.4 µs | 290 ns | 210 ns | -1.1 µs | -79 | 5\n 3 | 256 | 900 ns | 270 ns | 310 ns | -630 ns | -70 | 2\n 3 | 512 | 1.4 µs | 470 ns | 550 ns | -940 ns | -67 | 2\n 3 | 1024 | 2.2 µs | 510 ns | 460 ns | -1.7 µs | -77 | 4\n 3 | 2048 | 5.5 µs | 1.6 µs | 2.1 µs | -4 µs | -72 | 2\n 3 | 4096 | 13 µs | 3.8 µs | 3.7 µs | -9.3 µs | -71 | 3\n 3 | 8192 | 29 µs | 7.6 µs | 8.3 µs | -22 µs | -74 | 3\n 3 | 16384 | 43 µs | 11 µs | 17 µs | -32 µs | -74 | 2\n 4 | 4 | 85 ns | 92 ns | 20 ns | 6.6 ns | 8 | 0\n 4 | 8 | 200 ns | 89 ns 
| 37 ns | -110 ns | -56 | 3\n 4 | 16 | 320 ns | 120 ns | 61 ns | -200 ns | -61 | 3\n 4 | 32 | 470 ns | 140 ns | 88 ns | -320 ns | -69 | 4\n 4 | 64 | 800 ns | 220 ns | 120 ns | -580 ns | -72 | 5\n 4 | 128 | 1.5 µs | 330 ns | 250 ns | -1.2 µs | -78 | 5\n 4 | 256 | 890 ns | 340 ns | 310 ns | -550 ns | -61 | 2\n 4 | 512 | 1.2 µs | 460 ns | 330 ns | -730 ns | -62 | 2\n 4 | 1024 | 2.5 µs | 790 ns | 860 ns | -1.7 µs | -69 | 2\n 4 | 2048 | 3.5 µs | 950 ns | 400 ns | -2.6 µs | -73 | 7\n 4 | 4096 | 13 µs | 4 µs | 3.8 µs | -9 µs | -69 | 2\n 4 | 8192 | 30 µs | 7.7 µs | 8.2 µs | -22 µs | -74 | 3\n 4 | 16384 | 61 µs | 20 µs | 7.1 µs | -42 µs | -68 | 6\n 8 | 8 | 200 ns | 94 ns | 38 ns | -110 ns | -53 | 3\n 8 | 16 | 330 ns | 110 ns | 63 ns | -210 ns | -66 | 3\n 8 | 32 | 480 ns | 140 ns | 86 ns | -340 ns | -71 | 4\n 8 | 64 | 800 ns | 210 ns | 49 ns | -590 ns | -74 | 12\n 8 | 128 | 1.6 µs | 320 ns | 190 ns | -1.2 µs | -79 | 7\n 8 | 256 | 1.6 µs | 460 ns | 290 ns | -1.1 µs | -71 | 4\n 8 | 512 | 1.9 µs | 570 ns | 550 ns | -1.3 µs | -70 | 2\n 8 | 1024 | 4 µs | 1 µs | 1.1 µs | -3 µs | -75 | 3\n 8 | 2048 | 6.6 µs | 1.4 µs | 2.4 µs | -5.2 µs | -78 | 2\n 8 | 4096 | 19 µs | 5.2 µs | 870 ns | -14 µs | -73 | 16\n 8 | 8192 | 22 µs | 5.8 µs | 8.2 µs | -16 µs | -73 | 2\n 8 | 16384 | 50 µs | 11 µs | 14 µs | -39 µs | -78 | 3\n 16 | 16 | 310 ns | 130 ns | 57 ns | -180 ns | -59 | 3\n 16 | 32 | 460 ns | 160 ns | 63 ns | -310 ns | -66 | 5\n 16 | 64 | 820 ns | 210 ns | 130 ns | -610 ns | -74 | 5\n 16 | 128 | 1.4 µs | 310 ns | 180 ns | -1.1 µs | -78 | 6\n 16 | 256 | 2.8 µs | 510 ns | 150 ns | -2.3 µs | -82 | 15\n 16 | 512 | 1.2 µs | 320 ns | 340 ns | -840 ns | -72 | 2\n 16 | 1024 | 4.6 µs | 1.2 µs | 980 ns | -3.4 µs | -73 | 3\n 16 | 2048 | 7.7 µs | 1.9 µs | 2.4 µs | -5.8 µs | -75 | 2\n 16 | 4096 | 15 µs | 4.1 µs | 4.3 µs | -11 µs | -72 | 3\n 16 | 8192 | 14 µs | 3.6 µs | 480 ns | -10 µs | -74 | 21\n 16 | 16384 | 28 µs | 6.9 µs | 430 ns | -21 µs | -75 | 48\n 32 | 32 | 570 ns | 150 ns | 120 ns | -420 ns | -74 | 4\n 32 | 64 | 860 ns | 220 ns | 120 ns | -640 ns | -74 | 5\n 32 | 128 | 1.4 µs | 350 ns | 160 ns | -1.1 µs | -76 | 7\n 32 | 256 | 2.9 µs | 530 ns | 420 ns | -2.4 µs | -82 | 6\n 32 | 512 | 2.3 µs | 750 ns | 410 ns | -1.6 µs | -67 | 4\n 32 | 1024 | 3.7 µs | 1 µs | 1 µs | -2.7 µs | -73 | 3\n 32 | 2048 | 5.4 µs | 1.6 µs | 2.2 µs | -3.8 µs | -70 | 2\n 32 | 4096 | 11 µs | 1.8 µs | 1.7 µs | -9.2 µs | -83 | 5\n 32 | 8192 | 25 µs | 6.1 µs | 7.5 µs | -19 µs | -76 | 3\n 32 | 16384 | 59 µs | 15 µs | 17 µs | -44 µs | -74 | 3\n 64 | 64 | 830 ns | 230 ns | 150 ns | -610 ns | -73 | 4\n 64 | 128 | 1.4 µs | 330 ns | 100 ns | -1.1 µs | -77 | 11\n 64 | 256 | 2.9 µs | 520 ns | 170 ns | -2.3 µs | -82 | 14\n 64 | 512 | 1.7 µs | 540 ns | 530 ns | -1.2 µs | -69 | 2\n 64 | 1024 | 2.7 µs | 770 ns | 580 ns | -2 µs | -72 | 3\n 64 | 2048 | 4.3 µs | 1 µs | 950 ns | -3.3 µs | -77 | 3\n 64 | 4096 | 17 µs | 3.8 µs | 2.5 µs | -13 µs | -77 | 5\n 64 | 8192 | 38 µs | 9.9 µs | 1.3 µs | -28 µs | -74 | 21\n 64 | 16384 | 66 µs | 15 µs | 10 µs | -51 µs | -77 | 5\n 128 | 128 | 1.6 µs | 380 ns | 180 ns | -1.2 µs | -76 | 7\n 128 | 256 | 2.6 µs | 530 ns | 100 ns | -2.1 µs | -80 | 20\n 128 | 512 | 1.2 µs | 380 ns | 300 ns | -840 ns | -69 | 3\n 128 | 1024 | 5.3 µs | 1.3 µs | 1 µs | -4 µs | -75 | 4\n 128 | 2048 | 5.5 µs | 980 ns | 960 ns | -4.5 µs | -82 | 5\n 128 | 4096 | 13 µs | 2.8 µs | 3.9 µs | -10 µs | -78 | 3\n 128 | 8192 | 18 µs | 5.9 µs | 5.2 µs | -12 µs | -68 | 2\n 128 | 16384 | 59 µs | 15 µs | 9.3 µs | -44 µs | -75 | 5\n 256 | 256 | 3.3 µs | 590 ns | 540 ns | -2.7 
µs | -82 | 5\n 256 | 512 | 1.2 µs | 410 ns | 340 ns | -830 ns | -67 | 2\n 256 | 1024 | 2.9 µs | 810 ns | 1.1 µs | -2.1 µs | -72 | 2\n 256 | 2048 | 9.1 µs | 2.4 µs | 1.7 µs | -6.7 µs | -74 | 4\n 256 | 4096 | 13 µs | 3.8 µs | 3.7 µs | -9.4 µs | -71 | 2\n 256 | 8192 | 30 µs | 8 µs | 4.7 µs | -22 µs | -73 | 5\n 256 | 16384 | 43 µs | 11 µs | 17 µs | -32 µs | -74 | 2\n 512 | 512 | 6.6 µs | 1 µs | 790 ns | -5.6 µs | -84 | 7\n 512 | 1024 | 4.4 µs | 980 ns | 1.4 µs | -3.4 µs | -78 | 2\n 512 | 2048 | 9.6 µs | 2 µs | 2.2 µs | -7.6 µs | -79 | 4\n 512 | 4096 | 9.3 µs | 1.9 µs | 2.1 µs | -7.4 µs | -79 | 3\n 512 | 8192 | 27 µs | 7.9 µs | 7.8 µs | -19 µs | -70 | 2\n 512 | 16384 | 60 µs | 15 µs | 17 µs | -44 µs | -74 | 3\n 1024 | 1024 | 12 µs | 2 µs | 960 ns | -9.9 µs | -83 | 10\n 1024 | 2048 | 4.7 µs | 1.5 µs | 1.2 µs | -3.2 µs | -67 | 3\n 1024 | 4096 | 11 µs | 3.1 µs | 4.6 µs | -8.2 µs | -73 | 2\n 1024 | 8192 | 22 µs | 5.8 µs | 8.8 µs | -17 µs | -74 | 2\n 1024 | 16384 | 35 µs | 7.4 µs | 7.5 µs | -28 µs | -79 | 4\n 2048 | 2048 | 24 µs | 3.8 µs | 1.5 µs | -20 µs | -84 | 13\n 2048 | 4096 | 17 µs | 4.1 µs | 5 µs | -13 µs | -75 | 3\n 2048 | 8192 | 27 µs | 7.9 µs | 8 µs | -19 µs | -71 | 2\n 2048 | 16384 | 45 µs | 12 µs | 18 µs | -33 µs | -74 | 2\n 4096 | 4096 | 51 µs | 8.3 µs | 1.7 µs | -42 µs | -84 | 24\n 4096 | 8192 | 28 µs | 7.5 µs | 8.6 µs | -21 µs | -73 | 2\n 4096 | 16384 | 28 µs | 8.4 µs | 1.1 µs | -19 µs | -70 | 17\n 8192 | 8192 | 80 µs | 14 µs | 1.3 µs | -66 µs | -82 | 49\n 8192 | 16384 | 66 µs | 16 µs | 20 µs | -50 µs | -76 | 3\n 16384 | 16384 | 200 µs | 30 µs | 2.4 µs | -170 µs | -85 | 71\n(136 rows)\n\nSince microbenchmark results are not normally distributed,\nthe sigmas and stddev columns unfortunately don't say much at all,\nthey are just an attempt to give some form of indication of variance.\nAny ideas on better indicators appreciated.\n\nHere are the same report for Intel Core i9-14900K:\n\nselect x var1ndigits,y var2ndigits,a_avg,b_avg,pooled_stddev,abs_diff,rel_diff,sigmas from catbench.vreport where summary like 'Optimise numeric division using base-NBASE^2 arithmetic%' and function_name = 'numeric_div' order by 1,2;\n var1ndigits | var2ndigits | a_avg | b_avg | pooled_stddev | abs_diff | rel_diff | sigmas\n-------------+-------------+--------+--------+---------------+------------+----------+--------\n 1 | 1 | 72 ns | 72 ns | 6.8 ns | -1.9e-10 s | 0 | 0\n 1 | 2 | 76 ns | 77 ns | 8.5 ns | 1.1 ns | 1 | 0\n 1 | 3 | 87 ns | 120 ns | 10 ns | 38 ns | 43 | 4\n 1 | 4 | 98 ns | 120 ns | 12 ns | 27 ns | 27 | 2\n 1 | 8 | 340 ns | 130 ns | 19 ns | -200 ns | -60 | 11\n 1 | 16 | 500 ns | 160 ns | 34 ns | -350 ns | -69 | 10\n 1 | 32 | 850 ns | 200 ns | 81 ns | -660 ns | -77 | 8\n 1 | 64 | 1.6 µs | 300 ns | 130 ns | -1.3 µs | -82 | 11\n 1 | 128 | 3.2 µs | 520 ns | 180 ns | -2.7 µs | -84 | 15\n 1 | 256 | 2.2 µs | 570 ns | 360 ns | -1.6 µs | -74 | 5\n 1 | 512 | 4.4 µs | 1.1 µs | 1.2 µs | -3.3 µs | -75 | 3\n 1 | 1024 | 4.9 µs | 1 µs | 1 µs | -3.9 µs | -79 | 4\n 1 | 2048 | 15 µs | 4.2 µs | 4.1 µs | -11 µs | -72 | 3\n 1 | 4096 | 42 µs | 11 µs | 3.3 µs | -31 µs | -75 | 10\n 1 | 8192 | 86 µs | 22 µs | 4.5 µs | -65 µs | -75 | 15\n 1 | 16384 | 65 µs | 17 µs | 4 µs | -47 µs | -73 | 12\n 2 | 2 | 78 ns | 83 ns | 8.2 ns | 5.1 ns | 7 | 1\n 2 | 3 | 89 ns | 120 ns | 12 ns | 29 ns | 33 | 3\n 2 | 4 | 97 ns | 130 ns | 13 ns | 29 ns | 30 | 2\n 2 | 8 | 310 ns | 130 ns | 30 ns | -180 ns | -58 | 6\n 2 | 16 | 520 ns | 160 ns | 34 ns | -360 ns | -69 | 11\n 2 | 32 | 980 ns | 210 ns | 14 ns | -780 ns | -79 | 57\n 2 | 64 | 1.6 µs | 
300 ns | 150 ns | -1.3 µs | -81 | 9\n 2 | 128 | 3.1 µs | 530 ns | 270 ns | -2.6 µs | -83 | 10\n 2 | 256 | 2.8 µs | 710 ns | 150 ns | -2.1 µs | -74 | 14\n 2 | 512 | 4.2 µs | 1.1 µs | 720 ns | -3.1 µs | -73 | 4\n 2 | 1024 | 6.7 µs | 2.2 µs | 1.5 µs | -4.5 µs | -67 | 3\n 2 | 2048 | 7.7 µs | 2 µs | 690 ns | -5.7 µs | -74 | 8\n 2 | 4096 | 20 µs | 4 µs | 4 µs | -16 µs | -81 | 4\n 2 | 8192 | 65 µs | 13 µs | 12 µs | -52 µs | -80 | 4\n 2 | 16384 | 100 µs | 26 µs | 39 µs | -78 µs | -75 | 2\n 3 | 3 | 87 ns | 130 ns | 10 ns | 39 ns | 45 | 4\n 3 | 4 | 100 ns | 130 ns | 13 ns | 27 ns | 27 | 2\n 3 | 8 | 330 ns | 120 ns | 21 ns | -210 ns | -63 | 10\n 3 | 16 | 470 ns | 160 ns | 38 ns | -310 ns | -66 | 8\n 3 | 32 | 910 ns | 190 ns | 72 ns | -720 ns | -79 | 10\n 3 | 64 | 1.7 µs | 300 ns | 130 ns | -1.4 µs | -83 | 11\n 3 | 128 | 3.2 µs | 510 ns | 250 ns | -2.7 µs | -84 | 11\n 3 | 256 | 1.9 µs | 570 ns | 530 ns | -1.4 µs | -71 | 3\n 3 | 512 | 3.3 µs | 770 ns | 1.2 µs | -2.5 µs | -77 | 2\n 3 | 1024 | 7.6 µs | 2 µs | 2.2 µs | -5.6 µs | -73 | 3\n 3 | 2048 | 12 µs | 3 µs | 4.8 µs | -9.4 µs | -76 | 2\n 3 | 4096 | 30 µs | 8.4 µs | 8.7 µs | -21 µs | -72 | 2\n 3 | 8192 | 68 µs | 17 µs | 19 µs | -51 µs | -75 | 3\n 3 | 16384 | 100 µs | 26 µs | 39 µs | -76 µs | -75 | 2\n 4 | 4 | 100 ns | 130 ns | 13 ns | 28 ns | 28 | 2\n 4 | 8 | 300 ns | 130 ns | 33 ns | -170 ns | -56 | 5\n 4 | 16 | 510 ns | 160 ns | 35 ns | -350 ns | -69 | 10\n 4 | 32 | 910 ns | 210 ns | 77 ns | -700 ns | -77 | 9\n 4 | 64 | 1.7 µs | 310 ns | 97 ns | -1.4 µs | -82 | 14\n 4 | 128 | 3.1 µs | 490 ns | 270 ns | -2.6 µs | -84 | 10\n 4 | 256 | 1.9 µs | 440 ns | 530 ns | -1.4 µs | -77 | 3\n 4 | 512 | 2.8 µs | 800 ns | 730 ns | -2 µs | -71 | 3\n 4 | 1024 | 9.4 µs | 2.1 µs | 1.6 µs | -7.3 µs | -77 | 5\n 4 | 2048 | 13 µs | 3 µs | 4.9 µs | -9.7 µs | -76 | 2\n 4 | 4096 | 34 µs | 8.4 µs | 9.9 µs | -26 µs | -75 | 3\n 4 | 8192 | 61 µs | 17 µs | 17 µs | -44 µs | -73 | 3\n 4 | 16384 | 120 µs | 26 µs | 34 µs | -89 µs | -77 | 3\n 8 | 8 | 310 ns | 120 ns | 33 ns | -180 ns | -59 | 6\n 8 | 16 | 540 ns | 170 ns | 30 ns | -360 ns | -68 | 12\n 8 | 32 | 930 ns | 200 ns | 75 ns | -730 ns | -79 | 10\n 8 | 64 | 1.7 µs | 310 ns | 120 ns | -1.4 µs | -82 | 12\n 8 | 128 | 3.3 µs | 510 ns | 270 ns | -2.7 µs | -84 | 10\n 8 | 256 | 4.7 µs | 880 ns | 330 ns | -3.8 µs | -81 | 11\n 8 | 512 | 3.7 µs | 800 ns | 1 µs | -2.9 µs | -79 | 3\n 8 | 1024 | 6.3 µs | 1.5 µs | 2.5 µs | -4.8 µs | -76 | 2\n 8 | 2048 | 14 µs | 3 µs | 4.2 µs | -11 µs | -79 | 3\n 8 | 4096 | 29 µs | 6.2 µs | 8.6 µs | -23 µs | -79 | 3\n 8 | 8192 | 32 µs | 8.3 µs | 2 µs | -24 µs | -74 | 12\n 8 | 16384 | 120 µs | 25 µs | 34 µs | -94 µs | -79 | 3\n 16 | 16 | 490 ns | 160 ns | 55 ns | -330 ns | -67 | 6\n 16 | 32 | 1 µs | 200 ns | 38 ns | -810 ns | -80 | 21\n 16 | 64 | 1.6 µs | 310 ns | 130 ns | -1.3 µs | -81 | 10\n 16 | 128 | 3.2 µs | 530 ns | 250 ns | -2.7 µs | -83 | 11\n 16 | 256 | 6.4 µs | 950 ns | 440 ns | -5.5 µs | -85 | 13\n 16 | 512 | 3.1 µs | 780 ns | 1.2 µs | -2.3 µs | -74 | 2\n 16 | 1024 | 5.2 µs | 1.5 µs | 1.4 µs | -3.8 µs | -72 | 3\n 16 | 2048 | 22 µs | 5.2 µs | 1.1 µs | -16 µs | -76 | 15\n 16 | 4096 | 29 µs | 8.4 µs | 8.1 µs | -21 µs | -71 | 3\n 16 | 8192 | 76 µs | 17 µs | 12 µs | -59 µs | -78 | 5\n 16 | 16384 | 120 µs | 26 µs | 12 µs | -90 µs | -77 | 8\n 32 | 32 | 970 ns | 220 ns | 98 ns | -740 ns | -77 | 8\n 32 | 64 | 1.7 µs | 310 ns | 140 ns | -1.3 µs | -81 | 10\n 32 | 128 | 3.3 µs | 520 ns | 280 ns | -2.8 µs | -84 | 10\n 32 | 256 | 6.9 µs | 980 ns | 250 ns | -5.9 µs | -86 | 23\n 32 | 512 | 3.7 µs | 790 ns | 1.1 µs | 
-2.9 µs | -79 | 3\n 32 | 1024 | 6.2 µs | 1.6 µs | 2.5 µs | -4.7 µs | -75 | 2\n 32 | 2048 | 22 µs | 5.3 µs | 840 ns | -17 µs | -76 | 20\n 32 | 4096 | 20 µs | 4 µs | 4.3 µs | -16 µs | -80 | 4\n 32 | 8192 | 58 µs | 13 µs | 17 µs | -45 µs | -78 | 3\n 32 | 16384 | 140 µs | 34 µs | 19 µs | -100 µs | -75 | 5\n 64 | 64 | 2 µs | 340 ns | 74 ns | -1.6 µs | -83 | 22\n 64 | 128 | 3.6 µs | 550 ns | 190 ns | -3.1 µs | -85 | 17\n 64 | 256 | 6.1 µs | 930 ns | 600 ns | -5.2 µs | -85 | 9\n 64 | 512 | 3.3 µs | 790 ns | 1.3 µs | -2.5 µs | -76 | 2\n 64 | 1024 | 9.5 µs | 2 µs | 1.5 µs | -7.4 µs | -79 | 5\n 64 | 2048 | 13 µs | 3.1 µs | 4.9 µs | -9.4 µs | -75 | 2\n 64 | 4096 | 39 µs | 11 µs | 4.3 µs | -29 µs | -73 | 7\n 64 | 8192 | 50 µs | 12 µs | 19 µs | -38 µs | -75 | 2\n 64 | 16384 | 64 µs | 17 µs | 4.5 µs | -47 µs | -73 | 10\n 128 | 128 | 3 µs | 540 ns | 210 ns | -2.5 µs | -82 | 12\n 128 | 256 | 6.9 µs | 1 µs | 390 ns | -5.9 µs | -85 | 15\n 128 | 512 | 2.7 µs | 810 ns | 730 ns | -1.9 µs | -70 | 3\n 128 | 1024 | 5.1 µs | 1.6 µs | 1.5 µs | -3.5 µs | -68 | 2\n 128 | 2048 | 20 µs | 5.2 µs | 2.4 µs | -15 µs | -74 | 6\n 128 | 4096 | 26 µs | 6.1 µs | 9.8 µs | -19 µs | -76 | 2\n 128 | 8192 | 60 µs | 17 µs | 17 µs | -44 µs | -72 | 3\n 128 | 16384 | 100 µs | 26 µs | 39 µs | -77 µs | -74 | 2\n 256 | 256 | 5.9 µs | 1 µs | 370 ns | -4.9 µs | -83 | 13\n 256 | 512 | 4.2 µs | 850 ns | 1.2 µs | -3.4 µs | -80 | 3\n 256 | 1024 | 12 µs | 2.6 µs | 180 ns | -9.3 µs | -78 | 51\n 256 | 2048 | 20 µs | 5 µs | 2.4 µs | -15 µs | -75 | 6\n 256 | 4096 | 30 µs | 6.3 µs | 8.8 µs | -24 µs | -79 | 3\n 256 | 8192 | 69 µs | 17 µs | 19 µs | -52 µs | -75 | 3\n 256 | 16384 | 150 µs | 35 µs | 23 µs | -120 µs | -78 | 5\n 512 | 512 | 14 µs | 2.1 µs | 1.1 µs | -12 µs | -85 | 11\n 512 | 1024 | 9.7 µs | 1.8 µs | 1.4 µs | -7.9 µs | -82 | 6\n 512 | 2048 | 10 µs | 2.1 µs | 2.4 µs | -8.3 µs | -79 | 3\n 512 | 4096 | 20 µs | 4.2 µs | 4.5 µs | -16 µs | -79 | 4\n 512 | 8192 | 88 µs | 21 µs | 4.6 µs | -67 µs | -76 | 15\n 512 | 16384 | 140 µs | 34 µs | 38 µs | -100 µs | -76 | 3\n 1024 | 1024 | 25 µs | 4 µs | 2.7 µs | -21 µs | -84 | 8\n 1024 | 2048 | 16 µs | 4.2 µs | 5.2 µs | -12 µs | -74 | 2\n 1024 | 4096 | 40 µs | 8.1 µs | 6.7 µs | -32 µs | -80 | 5\n 1024 | 8192 | 80 µs | 17 µs | 12 µs | -63 µs | -79 | 5\n 1024 | 16384 | 120 µs | 35 µs | 35 µs | -89 µs | -72 | 3\n 2048 | 2048 | 55 µs | 8 µs | 5.2 µs | -46 µs | -85 | 9\n 2048 | 4096 | 28 µs | 6.2 µs | 6 µs | -22 µs | -78 | 4\n 2048 | 8192 | 53 µs | 13 µs | 21 µs | -40 µs | -76 | 2\n 2048 | 16384 | 100 µs | 26 µs | 40 µs | -76 µs | -74 | 2\n 4096 | 4096 | 110 µs | 16 µs | 8.8 µs | -95 µs | -86 | 11\n 4096 | 8192 | 43 µs | 13 µs | 12 µs | -30 µs | -69 | 2\n 4096 | 16384 | 150 µs | 36 µs | 42 µs | -110 µs | -76 | 3\n 8192 | 8192 | 210 µs | 32 µs | 22 µs | -180 µs | -85 | 8\n 8192 | 16384 | 160 µs | 34 µs | 47 µs | -120 µs | -79 | 3\n 16384 | 16384 | 370 µs | 61 µs | 23 µs | -310 µs | -84 | 14\n(136 rows)\n\nOut of these, some appears to be slower, but not sure if actually so,\nmight just be noise, since quite few sigmas, and like said above,\nthe sigmas isn't very scientific since the data can't be assumed\nto be normally distributed:\n\nAMD Ryzen 9 7950X3D:\n\nselect x var1ndigits,y var2ndigits,a_avg,b_avg,pooled_stddev,abs_diff,rel_diff,sigmas from catbench.vreport where summary like 'Optimise numeric division using base-NBASE^2 arithmetic%' and function_name = 'numeric_div' and rel_diff > 0 order by 1,2;\n var1ndigits | var2ndigits | a_avg | b_avg | pooled_stddev | abs_diff | rel_diff | 
sigmas\n-------------+-------------+-------+-------+---------------+----------+----------+--------\n 1 | 3 | 55 ns | 89 ns | 18 ns | 33 ns | 60 | 2\n 1 | 4 | 76 ns | 93 ns | 20 ns | 16 ns | 22 | 1\n 2 | 3 | 54 ns | 93 ns | 16 ns | 39 ns | 73 | 2\n 2 | 4 | 86 ns | 97 ns | 19 ns | 10 ns | 12 | 1\n 3 | 3 | 52 ns | 88 ns | 16 ns | 36 ns | 69 | 2\n 3 | 4 | 78 ns | 97 ns | 24 ns | 19 ns | 24 | 1\n 4 | 4 | 85 ns | 92 ns | 20 ns | 6.6 ns | 8 | 0\n(7 rows)\n\n\nIntel Core i9-14900K:\n\nselect x var1ndigits,y var2ndigits,a_avg,b_avg,pooled_stddev,abs_diff,rel_diff,sigmas from catbench.vreport where summary like 'Optimise numeric division using base-NBASE^2 arithmetic%' and function_name = 'numeric_div' and rel_diff > 0 order by 1,2;\n var1ndigits | var2ndigits | a_avg | b_avg | pooled_stddev | abs_diff | rel_diff | sigmas\n-------------+-------------+--------+--------+---------------+----------+----------+--------\n 1 | 2 | 76 ns | 77 ns | 8.5 ns | 1.1 ns | 1 | 0\n 1 | 3 | 87 ns | 120 ns | 10 ns | 38 ns | 43 | 4\n 1 | 4 | 98 ns | 120 ns | 12 ns | 27 ns | 27 | 2\n 2 | 2 | 78 ns | 83 ns | 8.2 ns | 5.1 ns | 7 | 1\n 2 | 3 | 89 ns | 120 ns | 12 ns | 29 ns | 33 | 3\n 2 | 4 | 97 ns | 130 ns | 13 ns | 29 ns | 30 | 2\n 3 | 3 | 87 ns | 130 ns | 10 ns | 39 ns | 45 | 4\n 3 | 4 | 100 ns | 130 ns | 13 ns | 27 ns | 27 | 2\n 4 | 4 | 100 ns | 130 ns | 13 ns | 28 ns | 28 | 2\n(9 rows)\n\nQuite similar (var1ndigits,var2ndigits) pairs that seems slower between\nthe two CPUs, so maybe it actually is a slowdown.\n\nThese benchmark results were obtained by comparing\nnumeric_div() between HEAD (ff59d5d) and with\nv1-0001-Optimise-numeric-division-using-base-NBASE-2-arit.patch\napplied.\n\nSince statistical tools that rely on normal distributions can't be used,\nlet's look at the individual measurements for (var1ndigits=3, var2ndigits=3)\nsince that seems to be the biggest slowdown on both CPUs,\nand see if our level of surprise is affected.\n\nThis is how many microseconds 512 iterations took\nwhen comparing HEAD vs v1-0001:\n\nAMD Ryzen 9 7950X3D:\n{21,31,31,21,21,21,32,34,21,21,22,35,36,35,39,21,35,21,21,30,21,21,21,20,31,36,22,22,37,33} -- HEAD (ff59d5d)\n{49,60,32,56,48,55,48,48,32,32,48,48,32,63,48,49,49,56,48,49,33,47,32,47,33,55,55,56,33,32} -- v1-0001\n\nIntel Core i9-14900K:\n{45,36,46,45,49,45,46,49,46,46,46,45,49,49,45,33,46,46,36,49,45,49,46,45,46,49,45,49,46,45} -- HEAD (ff59d5d)\n{70,63,70,70,70,51,63,45,69,69,63,70,70,69,70,62,69,70,63,70,69,69,63,69,64,70,69,63,64,51} -- v1-0001\n\nn=30 (3 random vars * 10 measurements)\n\n(The reason why Intel is slower than AMD is because the Intel is running at a fixed CPU frequency.)\n\nRegards,\nJoel\n\n\n", "msg_date": "Sat, 24 Aug 2024 00:00:30 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Sat, Aug 24, 2024, at 00:00, Joel Jacobson wrote:\n> Since statistical tools that rely on normal distributions can't be used,\n> let's look at the individual measurements for (var1ndigits=3, var2ndigits=3)\n> since that seems to be the biggest slowdown on both CPUs,\n> and see if our level of surprise is affected.\n\nHere is a more traditional benchmark,\nwhich seems to also indicate (var1ndigits=3, var2ndigits=3) is a bit slower:\n\nSELECT setseed(0);\nCREATE TABLE t AS\nSELECT\n random(111111111111::numeric,999999999999::numeric) AS var1,\n random(111111111111::numeric,999999999999::numeric) AS var2\nFROM generate_series(1,1e7);\nEXPLAIN ANALYZE SELECT SUM(var1/var2) 
FROM t;\n\n/*\n * Intel Core i9-14900K\n */\n\n-- HEAD (ff59d5d)\nExecution Time: 575.141 ms\nExecution Time: 572.179 ms\nExecution Time: 571.394 ms\n\n-- v1-0001-Optimise-numeric-division-using-base-NBASE-2-arit.patch\nExecution Time: 672.983 ms\nExecution Time: 603.031 ms\nExecution Time: 620.736 ms\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\n-- HEAD (ff59d5d)\nExecution Time: 561.349 ms\nExecution Time: 516.365 ms\nExecution Time: 510.782 ms\n\n-- v1-0001-Optimise-numeric-division-using-base-NBASE-2-arit.patch\nExecution Time: 659.049 ms\nExecution Time: 607.035 ms\nExecution Time: 600.026 ms\n\nRegards,\nJoel\n\n\n", "msg_date": "Sat, 24 Aug 2024 01:35:21 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Sat, Aug 24, 2024, at 01:35, Joel Jacobson wrote:\n> On Sat, Aug 24, 2024, at 00:00, Joel Jacobson wrote:\n>> Since statistical tools that rely on normal distributions can't be used,\n>> let's look at the individual measurements for (var1ndigits=3, var2ndigits=3)\n>> since that seems to be the biggest slowdown on both CPUs,\n>> and see if our level of surprise is affected.\n>\n> Here is a more traditional benchmark,\n> which seems to also indicate (var1ndigits=3, var2ndigits=3) is a bit slower:\n\nI tested just adding back div_var_int64, and it seems to help.\n\n-- Intel Core i9-14900K:\n\nselect summary, x var1ndigits,y var2ndigits,a_avg,b_avg,pooled_stddev,abs_diff,rel_diff,sigmas from catbench.vreport where function_name = 'numeric_div' and summary like 'Add back div_var_int64%' and sigmas > 1 order by x,y;\n summary | var1ndigits | var2ndigits | a_avg | b_avg | pooled_stddev | abs_diff | rel_diff | sigmas\n--------------------------+-------------+-------------+--------+--------+---------------+----------+----------+--------\n Add back div_var_int64. | 1 | 3 | 120 ns | 85 ns | 11 ns | -40 ns | -32 | 4\n Add back div_var_int64. | 1 | 4 | 120 ns | 97 ns | 11 ns | -28 ns | -23 | 3\n Add back div_var_int64. | 2 | 3 | 120 ns | 89 ns | 11 ns | -29 ns | -25 | 3\n Add back div_var_int64. | 2 | 4 | 130 ns | 94 ns | 14 ns | -32 ns | -25 | 2\n Add back div_var_int64. | 3 | 3 | 130 ns | 85 ns | 11 ns | -41 ns | -32 | 4\n Add back div_var_int64. | 3 | 4 | 130 ns | 99 ns | 13 ns | -29 ns | -23 | 2\n Add back div_var_int64. | 4 | 4 | 130 ns | 100 ns | 12 ns | -28 ns | -22 | 2\n(7 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Sat, 24 Aug 2024 09:26:02 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Sat, 24 Aug 2024 at 08:26, Joel Jacobson <[email protected]> wrote:\n>\n> On Sat, Aug 24, 2024, at 01:35, Joel Jacobson wrote:\n> > On Sat, Aug 24, 2024, at 00:00, Joel Jacobson wrote:\n> >> Since statistical tools that rely on normal distributions can't be used,\n> >> let's look at the individual measurements for (var1ndigits=3, var2ndigits=3)\n> >> since that seems to be the biggest slowdown on both CPUs,\n> >> and see if our level of surprise is affected.\n> >\n> > Here is a more traditional benchmark,\n> > which seems to also indicate (var1ndigits=3, var2ndigits=3) is a bit slower:\n>\n> I tested just adding back div_var_int64, and it seems to help.\n>\n\nThanks for testing.\n\nThere does appear to be quite a lot of variability between platforms\nover whether or not div_var_int64() is a win for 3 and 4 digit\ndivisors. 
Since this patch is primarily about improving div_var()'s\nlong division algorithm, it's probably best for it to not touch that,\nso I've put div_var_int64() back in for now. We could possibly\ninvestigate whether it can be improved separately.\n\nLooking at your other test results, they seem to confirm my previous\nobservation that exact mode is faster than approximate mode for\nvar2ndigits <= 12 or so, so I've added code to do that.\n\nI also expanded on the comments for the quotient-correction code a bit.\n\nRegards,\nDean", "msg_date": "Sat, 24 Aug 2024 13:10:19 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimising numeric division" }, { "msg_contents": "On Sat, Aug 24, 2024, at 14:10, Dean Rasheed wrote:\n> On Sat, 24 Aug 2024 at 08:26, Joel Jacobson <[email protected]> wrote:\n>>\n>> On Sat, Aug 24, 2024, at 01:35, Joel Jacobson wrote:\n>> > On Sat, Aug 24, 2024, at 00:00, Joel Jacobson wrote:\n>> >> Since statistical tools that rely on normal distributions can't be used,\n>> >> let's look at the individual measurements for (var1ndigits=3, var2ndigits=3)\n>> >> since that seems to be the biggest slowdown on both CPUs,\n>> >> and see if our level of surprise is affected.\n>> >\n>> > Here is a more traditional benchmark,\n>> > which seems to also indicate (var1ndigits=3, var2ndigits=3) is a bit slower:\n>>\n>> I tested just adding back div_var_int64, and it seems to help.\n>>\n>\n> Thanks for testing.\n>\n> There does appear to be quite a lot of variability between platforms\n> over whether or not div_var_int64() is a win for 3 and 4 digit\n> divisors. Since this patch is primarily about improving div_var()'s\n> long division algorithm, it's probably best for it to not touch that,\n> so I've put div_var_int64() back in for now. We could possibly\n> investigate whether it can be improved separately.\n>\n> Looking at your other test results, they seem to confirm my previous\n> observation that exact mode is faster than approximate mode for\n> var2ndigits <= 12 or so, so I've added code to do that.\n>\n> I also expanded on the comments for the quotient-correction code a bit.\n\nNice. LGTM.\nI've successfully tested the new patch again on both Intel and AMD.\n\nI've marked it as Ready for Committer.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 25 Aug 2024 11:32:38 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimising numeric division" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe date for PostgreSQL 17 Release Candidate 1 (RC1) is September 5, \r\n2024. Please ensure all open items[1] are completed and committed before \r\nAugust 31, 2024 12:00 UTC to allow enough time for them to clear the \r\nbuildfarm.\r\n\r\nThe current target date for the PostgreSQL 17 GA release is September \r\n26, 2024. While this date could change if the release team decides the \r\ncandidate release is not ready, please plan for this date to be the GA \r\nrelease.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items", "msg_date": "Fri, 23 Aug 2024 10:14:23 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 17 RC1 & GA dates" }, { "msg_contents": "Hi,\n\nI would like to know if the PostgreSQL 17 GAis happening this week or is\nthere a change in dates?\n\nOn Fri, Aug 23, 2024 at 7:44 PM Jonathan S. Katz <[email protected]>\nwrote:\n\n> Hi,\n>\n> The date for PostgreSQL 17 Release Candidate 1 (RC1) is September 5,\n> 2024. Please ensure all open items[1] are completed and committed before\n> August 31, 2024 12:00 UTC to allow enough time for them to clear the\n> buildfarm.\n>\n> The current target date for the PostgreSQL 17 GA release is September\n> 26, 2024. While this date could change if the release team decides the\n> candidate release is not ready, please plan for this date to be the GA\n> release.\n>\n> Thanks,\n>\n> Jonathan\n>\n> [1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n>\n\n\n-- \nSandeep Thakkar\n\nHi,I would like to know if the PostgreSQL 17 GAis  happening this week or is there a change in dates? On Fri, Aug 23, 2024 at 7:44 PM Jonathan S. Katz <[email protected]> wrote:Hi,\n\nThe date for PostgreSQL 17 Release Candidate 1 (RC1) is September 5, \n2024. Please ensure all open items[1] are completed and committed before \nAugust 31, 2024 12:00 UTC to allow enough time for them to clear the \nbuildfarm.\n\nThe current target date for the PostgreSQL 17 GA release is September \n26, 2024. While this date could change if the release team decides the \ncandidate release is not ready, please plan for this date to be the GA \nrelease.\n\nThanks,\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n-- Sandeep Thakkar", "msg_date": "Mon, 23 Sep 2024 12:57:35 +0530", "msg_from": "Sandeep Thakkar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 RC1 & GA dates" }, { "msg_contents": "On Mon, 23 Sept 2024 at 19:28, Sandeep Thakkar\n<[email protected]> wrote:\n> I would like to know if the PostgreSQL 17 GAis happening this week or is there a change in dates?\n\nNothing so far has come up to cause the dates to change.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 21:38:34 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 RC1 & GA dates" } ]