[ { "msg_contents": "Hi,\n\nWhile following along with Tristan and Heikki's thread about signals\nin psql, it occurred to me that the documentation atop pqsignal() is\nnot very good:\n\n * we don't explain what problem it originally solved\n * we don't explain why it's still needed today\n * we don't explain what else it does for us today\n * we describe the backend implementation for Windows incorrectly (mea culpa)\n * we vaguely mention one issue with Windows frontend code, but I\nthink the point made is misleading, and we don't convey the scale of\nthe differences\n\nHere is my attempt to improve it.", "msg_date": "Fri, 24 Nov 2023 11:33:29 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Improving the comments in pqsignal()" }, { "msg_contents": "On 24/11/2023 00:33, Thomas Munro wrote:\n> Hi,\n> \n> While following along with Tristan and Heikki's thread about signals\n> in psql, it occurred to me that the documentation atop pqsignal() is\n> not very good:\n> \n> * we don't explain what problem it originally solved\n> * we don't explain why it's still needed today\n> * we don't explain what else it does for us today\n> * we describe the backend implementation for Windows incorrectly (mea culpa)\n> * we vaguely mention one issue with Windows frontend code, but I\n> think the point made is misleading, and we don't convey the scale of\n> the differences\n> \n> Here is my attempt to improve it.\n\nThanks!\n\n> This is program 10.12 from Advanced Programming in the UNIX\n> Environment, with minor changes.\nIn the copy I found online (3rd edition), it's \"Figure 10.18\", not \n\"program 10.12\".\n\nOther than that, looks good.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 24 Nov 2023 09:54:56 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving the comments in pqsignal()" }, { "msg_contents": "On Fri, Nov 24, 2023 at 8:55 PM Heikki Linnakangas <[email protected]> wrote:\n> On 24/11/2023 00:33, Thomas Munro wrote:\n> > This is program 10.12 from Advanced Programming in the UNIX\n> > Environment, with minor changes.\n> In the copy I found online (3rd edition), it's \"Figure 10.18\", not\n> \"program 10.12\".\n>\n> Other than that, looks good.\n\nThanks. I removed that number (it's easy enough to find), replaced\n\"underdocumented\" with \"unspecified\" (a word from the later edition of\nStevens) and added a line break to break up that final paragraph, and\npushed. Time to upgrade my treeware copy of that book...\n\nOne thing I worried about while writing that text: why is it OK that\nwin32_port.h redefines SIG_DFL etc, if they might be exposed to the\nsystem signal()? But it seems we picked the same numerical values. A\nlittle weird, but not going to break anything.\n\n\n", "msg_date": "Sat, 25 Nov 2023 10:11:54 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving the comments in pqsignal()" } ]
[ { "msg_contents": "While working on the patch in [1], I noticed that ever since\n00b41463c, it's now suboptimal to do the following:\n\nswitch (bms_membership(relids))\n{\n case BMS_EMPTY_SET:\n /* handle empty set */\n break;\n case BMS_SINGLETON:\n /* call bms_singleton_member() and handle singleton set */\n break;\n case BMS_MULTIPLE:\n /* handle multi-member set */\n break;\n}\n\nThe following is cheaper as we don't need to call bms_membership() and\nbms_singleton_member() for singleton sets. It also saves function call\noverhead for empty sets.\n\nif (relids == NULL)\n /* handle empty set */\nelse\n{\n int relid;\n\n if (bms_get_singleton(relids, &relid))\n /* handle singleton set */\n else\n /* handle multi-member set */\n}\n\nIn the attached, I've adjusted the code to use the latter of the two\nabove methods in 3 places. In examine_variable() this reduces the\ncomplexity of the logic quite a bit and saves calling bms_is_member()\nin addition to bms_singleton_member().\n\nI'm trying to reduce the footprint of what's being worked on in [1]\nand I highlighted this as something that'll help with that.\n\nAny objections to me pushing the attached?\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvqHCNKJi9CrQZG-reQDXTfRWnT5rhzNtDQhnrBzAAusfA@mail.gmail.com", "msg_date": "Fri, 24 Nov 2023 17:06:25 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Don't use bms_membership in places where it's not needed" }, { "msg_contents": "On Fri, Nov 24, 2023 at 12:06 PM David Rowley <[email protected]> wrote:\n\n> In the attached, I've adjusted the code to use the latter of the two\n> above methods in 3 places. In examine_variable() this reduces the\n> complexity of the logic quite a bit and saves calling bms_is_member()\n> in addition to bms_singleton_member().\n\n\n+1 to the idea.\n\nI think you have a typo in distribute_restrictinfo_to_rels. We should\nremove the call of bms_singleton_member and use relid instead.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -2644,7 +2644,7 @@ distribute_restrictinfo_to_rels(PlannerInfo *root,\n * There is only one relation participating in the clause, so it\n * is a restriction clause for that relation.\n */\n- rel = find_base_rel(root, bms_singleton_member(relids));\n+ rel = find_base_rel(root, relid);\n\nThanks\nRichard\n\nOn Fri, Nov 24, 2023 at 12:06 PM David Rowley <[email protected]> wrote:\nIn the attached, I've adjusted the code to use the latter of the two\nabove methods in 3 places.  In examine_variable() this reduces the\ncomplexity of the logic quite a bit and saves calling bms_is_member()\nin addition to bms_singleton_member().+1 to the idea.I think you have a typo in distribute_restrictinfo_to_rels.  We shouldremove the call of bms_singleton_member and use relid instead.--- a/src/backend/optimizer/plan/initsplan.c+++ b/src/backend/optimizer/plan/initsplan.c@@ -2644,7 +2644,7 @@ distribute_restrictinfo_to_rels(PlannerInfo *root,             * There is only one relation participating in the clause, so it             * is a restriction clause for that relation.             
*/-           rel = find_base_rel(root, bms_singleton_member(relids));+           rel = find_base_rel(root, relid);ThanksRichard", "msg_date": "Fri, 24 Nov 2023 14:53:50 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't use bms_membership in places where it's not needed" }, { "msg_contents": "On Fri, 24 Nov 2023 at 19:54, Richard Guo <[email protected]> wrote:\n> +1 to the idea.\n>\n> I think you have a typo in distribute_restrictinfo_to_rels. We should\n> remove the call of bms_singleton_member and use relid instead.\n\nThanks for reviewing. I've now pushed this.\n\nDavid\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:43:00 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't use bms_membership in places where it's not needed" }, { "msg_contents": "Hi,\n\nOn 2023-11-24 17:06:25 +1300, David Rowley wrote:\n> While working on the patch in [1], I noticed that ever since\n> 00b41463c, it's now suboptimal to do the following:\n> \n> switch (bms_membership(relids))\n> {\n> case BMS_EMPTY_SET:\n> /* handle empty set */\n> break;\n> case BMS_SINGLETON:\n> /* call bms_singleton_member() and handle singleton set */\n> break;\n> case BMS_MULTIPLE:\n> /* handle multi-member set */\n> break;\n> }\n> \n> The following is cheaper as we don't need to call bms_membership() and\n> bms_singleton_member() for singleton sets. It also saves function call\n> overhead for empty sets.\n> \n> if (relids == NULL)\n> /* handle empty set */\n> else\n> {\n> int relid;\n> \n> if (bms_get_singleton(relids, &relid))\n> /* handle singleton set */\n> else\n> /* handle multi-member set */\n> }\n\nHm, does this ever matter from a performance POV? The current code does look\nsimpler to read to me. If the overhead is relevant, I'd instead just move the\ncode into a static inline?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:21:34 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't use bms_membership in places where it's not needed" }, { "msg_contents": "On Tue, 28 Nov 2023 at 11:21, Andres Freund <[email protected]> wrote:\n> Hm, does this ever matter from a performance POV? The current code does look\n> simpler to read to me. If the overhead is relevant, I'd instead just move the\n> code into a static inline?\n\nI didn't particularly find the code in examine_variable() easy to\nread. I think what's there now is quite a bit better than what was\nthere.\n\nbms_get_singleton_member() was added in d25367ec4 for this purpose, so\nit seems kinda weird not to use it.\n\nDavid\n\n\n", "msg_date": "Tue, 28 Nov 2023 12:16:21 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't use bms_membership in places where it's not needed" } ]
[ { "msg_contents": "Hi,\n\nI propose a patch that ensures `pg_convert` doesn't allocate and copy data\nwhen no conversion is done. It is an unnecessary overhead, especially when\nsuch conversions are done frequently and for large values.\n\nI've tried measuring the performance impact, and the patched version has a\nsmall but non-zero gain.\n\nThe patch builds against `master` and `make check` succeeds.\n\nHappy to hear any feedback!\n\n-- \nY.", "msg_date": "Fri, 24 Nov 2023 06:05:29 -0800", "msg_from": "Yurii Rashkovskii <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] pg_convert improvement" }, { "msg_contents": "Hi,\n\nOn 11/24/23 3:05 PM, Yurii Rashkovskii wrote:\n> Hi,\n> \n> I propose a patch that ensures `pg_convert` doesn't allocate and copy data when no conversion is done. It is an unnecessary overhead, especially when such conversions are done frequently and for large values.\n> \n\n+1 for the patch, I think the less is done the better.\n\n> \n> Happy to hear any feedback!\n> \n\nThe patch is pretty straightforward, I just have one remark:\n\n+ /* if no actual conversion happened, return the original string */\n+ /* (we are checking pointers to strings instead of encodings because\n+ `pg_do_encoding_conversion` above covers more cases than just\n+ encoding equality) */\n\nI think this could be done in one single comment and follow the preferred style\nfor multi-line comment, see [1].\n\n[1]: https://www.postgresql.org/docs/current/source-format.html\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 24 Nov 2023 15:26:00 +0100", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "Hi Bertrand,\n\nOn Fri, Nov 24, 2023 at 6:26 AM Drouvot, Bertrand <\[email protected]> wrote:\n\n>\n> The patch is pretty straightforward, I just have one remark:\n>\n> + /* if no actual conversion happened, return the original string */\n> + /* (we are checking pointers to strings instead of encodings\n> because\n> + `pg_do_encoding_conversion` above covers more cases than just\n> + encoding equality) */\n>\n> I think this could be done in one single comment and follow the preferred\n> style\n> for multi-line comment, see [1].\n>\n\nThank you for your feedback. I've attached a revised patch.\n\n-- \nY.", "msg_date": "Fri, 24 Nov 2023 06:32:55 -0800", "msg_from": "Yurii Rashkovskii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "Hi,\n\nOn 11/24/23 3:32 PM, Yurii Rashkovskii wrote:\n> Hi Bertrand,\n> \n> On Fri, Nov 24, 2023 at 6:26 AM Drouvot, Bertrand <[email protected] <mailto:[email protected]>> wrote:\n> \n> \n> The patch is pretty straightforward, I just have one remark:\n> \n> +       /* if no actual conversion happened, return the original string */\n> +       /* (we are checking pointers to strings instead of encodings because\n> +          `pg_do_encoding_conversion` above covers more cases than just\n> +          encoding equality) */\n> \n> I think this could be done in one single comment and follow the preferred style\n> for multi-line comment, see [1].\n> \n> \n> Thank you for your feedback. 
I've attached a revised patch.\n\nDid some minor changes in the attached:\n\n- Started the multi-line comment with an upper case and finished\nit with a \".\" and re-worded a bit.\n- Ran pgindent\n\nWhat do you think about the attached?\n\nAlso, might be good to create a CF entry [1] so that the patch proposal does not get lost\nand gets visibility.\n\n[1]: https://commitfest.postgresql.org/46/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 27 Nov 2023 08:11:06 +0100", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "Hi Bertrand,\n\n\n> Did some minor changes in the attached:\n>\n> - Started the multi-line comment with an upper case and finished\n> it with a \".\" and re-worded a bit.\n> - Ran pgindent\n>\n> What do you think about the attached?\n>\n\nIt looks great!\n\n\n>\n> Also, might be good to create a CF entry [1] so that the patch proposal\n> does not get lost\n> and gets visibility.\n>\n\nJust submitted it to SF. Thank you for the review!\n\n-- \nY.\n\n Hi Bertrand,\nDid some minor changes in the attached:\n\n- Started the multi-line comment with an upper case and finished\nit with a \".\" and re-worded a bit.\n- Ran pgindent\n\nWhat do you think about the attached?It looks great! \n\nAlso, might be good to create a CF entry [1] so that the patch proposal does not get lost\nand gets visibility.Just submitted it to SF. Thank you for the review! -- Y.", "msg_date": "Mon, 27 Nov 2023 17:16:17 -0800", "msg_from": "Yurii Rashkovskii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "Hi,\n\nOn 11/28/23 2:16 AM, Yurii Rashkovskii wrote:\n>  Hi Bertrand,\n> \n> \n> Did some minor changes in the attached:\n> \n> - Started the multi-line comment with an upper case and finished\n> it with a \".\" and re-worded a bit.\n> - Ran pgindent\n> \n> What do you think about the attached?\n> \n> \n> It looks great!\n> \n> \n> Also, might be good to create a CF entry [1] so that the patch proposal does not get lost\n> and gets visibility.\n> \n> \n> Just submitted it to SF. Thank you for the review!\n> \n\nThanks! Just marked it as \"Ready for Committer\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 07:45:28 +0100", "msg_from": "\"Drouvot, Bertrand\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "On Mon, Nov 27, 2023 at 08:11:06AM +0100, Drouvot, Bertrand wrote:\n> +\t\tPG_RETURN_BYTEA_P(string);\n\nI looked around to see whether there was some sort of project policy about\nreturning arguments without copying them, but the only strict rule I see is\nto avoid scribbling on argument data without first copying it. However, I\ndo see functions that return unmodified arguments both with and without\ncopying. For example, unaccent_dict() is careful to copy the argument\nbefore returning it:\n\n\tPG_RETURN_TEXT_P(PG_GETARG_TEXT_P_COPY(strArg));\n\nBut replace_text() is not:\n\n\t/* Return unmodified source string if empty source or pattern */\n\tif (src_text_len < 1 || from_sub_text_len < 1)\n\t{\n\t\tPG_RETURN_TEXT_P(src_text);\n\t}\n\nI don't have any specific concerns about doing this, though. 
Otherwise,\nthe patch looks pretty good to me, so I will plan on committing it shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Dec 2023 15:50:39 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_convert improvement" }, { "msg_contents": "Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Dec 2023 11:58:11 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_convert improvement" } ]
[ { "msg_contents": "Hello,\n\nHere is a patch to improve rowcount estimates for\n`UNNEST(some_array_column)`. Today we hard code this to 10, but we\nhave statistics about array size, so it's easy to use them.\n\nI've seen plans where this would make a difference. If the array has\nonly 1 or 2 elements, then overestimating the rowcount by 10 leads to\nunnecessary seqscans downstream. I can see how an underestimate would\ncause issues too.\n\nThis patch builds on a391ff3c3d41 which allowed set-returning\nfunctions like UNNEST to include a support function to estimate their\nresult count. (There is a nice writeup at\nhttps://www.cybertec-postgresql.com/en/optimizer-support-functions/)\nBut that patch only changes UNNEST if it has a Const or ArrayExpr\nargument.\n\nThe statistic I'm using is the last value in the DECHIST array, which\nis the average number of distinct elements in the array. Using the\nplain (non-distinct) element count would be more accurate, but we\ndon't have that, and using distinct elements is still better than a\nhardcoded 10.\n\nThe real change is in estimate_array_length, which has several callers\nbesides array_unnest_support, but I think this change should give more\naccurate estimates for all of them.\n\nThere is a comment that estimate_array_length must agree with\nscalararraysel. I don't think this commit introduces any\ndiscrepancies. The most relevant case there is `scalar = ANY/ALL\n(array)`, which also consults DECHIST (and/or MCELEM).\n\nI wasn't sure where to put a test. I finally settled on arrays.sql\nsince (1) that has other UNNEST tests (2) array_unnest_support is in\nutil/adt/arrayfuncs.c (3) I couldn't find a place devoted to\nrowcount/selectivity estimates. I'm happy to move it if someone has a\nbetter idea!\n\nBased on 712dc2338b23.\n\nYours,\n\n--\nPaul ~{:-)\[email protected]", "msg_date": "Sat, 25 Nov 2023 09:19:45 -0800", "msg_from": "Paul A Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "On Sat, 2023-11-25 at 09:19 -0800, Paul A Jungwirth wrote:\n> Here is a patch to improve rowcount estimates for\n> `UNNEST(some_array_column)`. Today we hard code this to 10, but we\n> have statistics about array size, so it's easy to use them.\n> \n> I've seen plans where this would make a difference. If the array has\n> only 1 or 2 elements, then overestimating the rowcount by 10 leads to\n> unnecessary seqscans downstream. I can see how an underestimate would\n> cause issues too.\n\nThe idea sounds good to me.\nI didn't test or scrutinize the code, but I noticed that you use\nEXPLAIN in the regression tests. I think that makes the tests vulnerable\nto changes in the parameters or in the block size.\nPerhaps you can write a function that runs EXPLAIN and extracts just the\nrow count. That should be stable enough.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sun, 26 Nov 2023 21:11:59 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Sat, 2023-11-25 at 09:19 -0800, Paul A Jungwirth wrote:\n>> Here is a patch to improve rowcount estimates for\n>> `UNNEST(some_array_column)`. 
Today we hard code this to 10, but we\n>> have statistics about array size, so it's easy to use them.\n\n> The idea sounds good to me.\n\nI didn't read the patch either yet, but it seems like a reasonable idea.\n\n> I didn't test or scrutinize the code, but I noticed that you use\n> EXPLAIN in the regression tests. I think that makes the tests vulnerable\n> to changes in the parameters or in the block size.\n\nYes, this regression test is entirely unacceptable; the numbers will\nnot be stable enough. Even aside from the different-settings issue,\nyou can't rely on ANALYZE deriving exactly the same stats every time.\nUsually what we try to do is devise a query where the plan shape\nchanges because of the better estimate. That typically will provide\nsome insulation against small changes in the numerical estimates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Nov 2023 15:22:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "Hi.\nSince both array_op_test, arrest both are not dropped at the end of\nsrc/test/regress/sql/arrays.sql.\nI found using table array_op_test test more convincing.\n\nselect\n reltuples * 10 as original,\n reltuples * (select\nfloor(elem_count_histogram[array_length(elem_count_histogram,1)])\n from pg_stats\n where tablename = 'array_op_test' and attname = 'i')\nas with_patch\n ,(select (elem_count_histogram[array_length(elem_count_histogram,1)])\n from pg_stats\n where tablename = 'array_op_test' and attname = 'i')\nas elem_count_histogram_last_element\nfrom pg_class where relname = 'array_op_test';\n original | with_patch | elem_count_histogram_last_element\n----------+------------+-----------------------------------\n 1030 | 412 | 4.7843137\n(1 row)\n\nwithout patch:\nexplain select unnest(i) from array_op_test;\n QUERY PLAN\n----------------------------------------------------------------------\n ProjectSet (cost=0.00..9.95 rows=1030 width=4)\n -> Seq Scan on array_op_test (cost=0.00..4.03 rows=103 width=40)\n(2 rows)\n\nwith patch:\n explain select unnest(i) from array_op_test;\n QUERY PLAN\n----------------------------------------------------------------------\n ProjectSet (cost=0.00..6.86 rows=412 width=4)\n -> Seq Scan on array_op_test (cost=0.00..4.03 rows=103 width=40)\n(2 rows)\n--------\nbecause, in the estimate_array_length function, `nelem =\nsslot.numbers[sslot.nnumbers - 1];` will round 4.7843137 to 4.\nso with patch estimated row 412 = 103 *4. 
without patch estimated rows\n= 103 * 10.\n\n\n", "msg_date": "Mon, 27 Nov 2023 15:05:57 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "On Mon, Nov 27, 2023 at 3:05 PM jian he <[email protected]> wrote:\n>\n> Hi.\n> Since both array_op_test, arrest both are not dropped at the end of\n> src/test/regress/sql/arrays.sql.\n> I found using table array_op_test test more convincing.\n>\n> select\n> reltuples * 10 as original,\n> reltuples * (select\n> floor(elem_count_histogram[array_length(elem_count_histogram,1)])\n> from pg_stats\n> where tablename = 'array_op_test' and attname = 'i')\n> as with_patch\n> ,(select (elem_count_histogram[array_length(elem_count_histogram,1)])\n> from pg_stats\n> where tablename = 'array_op_test' and attname = 'i')\n> as elem_count_histogram_last_element\n> from pg_class where relname = 'array_op_test';\n> original | with_patch | elem_count_histogram_last_element\n> ----------+------------+-----------------------------------\n> 1030 | 412 | 4.7843137\n> (1 row)\n>\n> without patch:\n> explain select unnest(i) from array_op_test;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ProjectSet (cost=0.00..9.95 rows=1030 width=4)\n> -> Seq Scan on array_op_test (cost=0.00..4.03 rows=103 width=40)\n> (2 rows)\n>\n> with patch:\n> explain select unnest(i) from array_op_test;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ProjectSet (cost=0.00..6.86 rows=412 width=4)\n> -> Seq Scan on array_op_test (cost=0.00..4.03 rows=103 width=40)\n> (2 rows)\n> --------\n\nHi.\nI did a minor change. change estimate_array_length return type to\ndouble, cost_tidscan function inside `int ntuples` to `double\nntuples`.\n\n `clamp_row_est(get_function_rows(root, expr->funcid, clause));` will\nround 4.7843137 to 5.\nso with your patch and my refactor, the rows will be 103 * 5 = 515.\n\n explain select unnest(i) from array_op_test;\n QUERY PLAN\n----------------------------------------------------------------------\n ProjectSet (cost=0.00..7.38 rows=515 width=4)\n -> Seq Scan on array_op_test (cost=0.00..4.03 rows=103 width=40)\n(2 rows)", "msg_date": "Thu, 30 Nov 2023 12:35:14 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "Hello,\n\nOn 11/26/23 12:22, Tom Lane wrote:\n > Yes, this regression test is entirely unacceptable; the numbers will\n > not be stable enough. Even aside from the different-settings issue,\n > you can't rely on ANALYZE deriving exactly the same stats every time.\n > Usually what we try to do is devise a query where the plan shape\n > changes because of the better estimate.\n\nHere is a patch with an improved test. With the old \"10\" estimate we get a Merge Join, but now that \nthe planner can see there are only ~4 elements per array, we get a Nested Loop.\n\nIt was actually hard to get a new plan, since all our regress tables' arrays have around 5 elements \naverage, which isn't so far from 10. Adding a table with 1- or 2- element arrays would be more \ndramatic. So I resorted to tuning the query with `WHERE seqno <= 50`. Hopefully that's not cheating \ntoo much.\n\nI thought about also adding a test where the old code *underestimates*, but then I'd have to add a \nnew table with big arrays. 
If it's worth it let me know.\n\nOn 11/26/23 23:05, jian he wrote:\n > I found using table array_op_test test more convincing.\n\nTrue, arrtest is pretty small. The new test uses array_op_test instead.\n\nOn 11/29/23 20:35, jian he wrote:\n > I did a minor change. change estimate_array_length return type to double\n\nI'm not sure I want to change estimate_array_length from returning ints to returning doubles, since \nit's called in many places. I can see how that might give plans that are more accurate yet, so maybe \nit's worth it? It feels like it ought to be a separate patch to me, but if others want me to include \nit here please let me know.\n\nI did add clamp_row_est since it's essentially free and maybe gives some safety.\n\nRebased onto d43bd090a8.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Wed, 6 Dec 2023 22:32:07 -0800", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" }, { "msg_contents": "Paul Jungwirth <[email protected]> writes:\n> Here is a patch with an improved test. With the old \"10\" estimate we get a Merge Join, but now that \n> the planner can see there are only ~4 elements per array, we get a Nested Loop.\n\nPushed with minor editorialization. I ended up not using the test\ncase, because I was afraid it wouldn't be all that stable, and\ncode coverage shows that we are exercising the added code path\neven without a bespoke test case.\n\n> On 11/29/23 20:35, jian he wrote:\n>>> I did a minor change. change estimate_array_length return type to double\n\n> I'm not sure I want to change estimate_array_length from returning\n> ints to returning doubles, since it's called in many places.\n\nBut your patch forces every one of those places to be touched anyway,\nas a consequence of adding the \"root\" argument. I looked at the\ncallers and saw that every single one of them (in core anyway) ends up\nusing the result in a \"double\" rowcount calculation, so we're really\nnot buying anything by converting to integer and back again. There's\nalso a question of whether the number from DECHIST could be big enough\nto overflow an int. Perhaps not given the current calculation method,\nbut since it's a float4 there's at least a theoretical risk. Hence,\nI adopted Jian's suggestion.\n\nOne other point is that examine_variable is capable of succeeding\non things that aren't Vars, so I thought the restriction to Vars\nwas inappropriate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jan 2024 18:45:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve rowcount estimate for UNNEST(column)" } ]
[ { "msg_contents": "In the past few days we've had two buildfarm failures[1][2] in the\nstats regression test that look like\n\n@@ -1582,7 +1582,7 @@\n SELECT :io_stats_post_reset < :io_stats_pre_reset;\n ?column? \n ----------\n- t\n+ f\n (1 row)\n \n -- test BRIN index doesn't block HOT update\n\nI'm a bit mystified by this. This test was introduced in Andres'\ncommit 10a082bf7 of 2023-02-11, and it seems to have been stable\nsince then. I trawled the buildfarm logs going back three months\nand found no similar failures. So why's it failing now? The\nmost plausible theory seems to be that Michael's recent commits\nadding pg_stat_reset_xxx features destabilized the test somehow ...\nbut I sure don't see how/why.\n\nFailure [1] was on my own animal longfin, so I tried to reproduce\nit on that animal's host, but no luck so far.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2023-11-21%2001%3A55%3A00\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus&dt=2023-11-25%2016%3A20%3A04\n\n\n", "msg_date": "Sat, 25 Nov 2023 13:08:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "New instability in stats regression test" }, { "msg_contents": "I wrote:\n> I'm a bit mystified by this. This test was introduced in Andres'\n> commit 10a082bf7 of 2023-02-11, and it seems to have been stable\n> since then. I trawled the buildfarm logs going back three months\n> and found no similar failures. So why's it failing now? The\n> most plausible theory seems to be that Michael's recent commits\n> adding pg_stat_reset_xxx features destabilized the test somehow ...\n> but I sure don't see how/why.\n\nAfter a bit more looking around, I have part of a theory.\nCommit 23c8c0c8f of 2023-11-12 added this, a little ways before\nthe problematic test:\n\n-- Test that reset_shared with no argument resets all the stats types\n-- supported (providing NULL as argument has the same effect).\nSELECT pg_stat_reset_shared();\n\nThe test that is failing is of course\n\n-- Test IO stats reset\nSELECT pg_stat_have_stats('io', 0, 0);\nSELECT sum(evictions) + sum(reuses) + sum(extends) + sum(fsyncs) + sum(reads) + sum(writes) + sum(writebacks) + sum(hits) AS io_stats_pre_reset\n FROM pg_stat_io \\gset\nSELECT pg_stat_reset_shared('io');\nSELECT sum(evictions) + sum(reuses) + sum(extends) + sum(fsyncs) + sum(reads) + sum(writes) + sum(writebacks) + sum(hits) AS io_stats_post_reset\n FROM pg_stat_io \\gset\nSELECT :io_stats_post_reset < :io_stats_pre_reset;\n\nSo the observed failure could be explained if, between the\n\"pg_stat_reset_shared('io')\" call and the subsequent scan of\npg_stat_io, concurrent sessions had done more I/O operations\nthan happened since that new pg_stat_reset_shared() call.\nPreviously, the \"pre_reset\" counts would be large enough to\nmake that a pretty ridiculous theory, but after 23c8c0c8f maybe\nit's not.\n\nTo test this idea, I made the test print out the actual values\nof the counts, like this:\n\n@@ -1585,10 +1585,10 @@\n \n SELECT sum(evictions) + sum(reuses) + sum(extends) + sum(fsyncs) + sum(reads) + sum(writes) + sum(writebacks) + sum(hits) AS io_stats_post_reset\n FROM pg_stat_io \\gset\n-SELECT :io_stats_post_reset < :io_stats_pre_reset;\n- ?column? \n-----------\n- t\n+SELECT :io_stats_post_reset, :io_stats_pre_reset;\n+ ?column? | ?column? 
\n+----------+----------\n+ 10452 | 190087\n (1 row)\n \nOf course, this makes it fail every time, but the idea is to get\na sense of the magnitude of the counts; and what I'm seeing is\nthat the \"pre reset\" counts are typically 10x more than the\n\"post reset\" ones, even after 23c8c0c8f. If I remove the\nsuspicious pg_stat_reset_shared() call, there's about 3 orders\nof magnitude difference; but still you'd think a 10x safety\nmargin would be enough. So this theory doesn't seem to quite\nwork as-is. Perhaps there's some additional contributing factor\nI didn't think to control.\n\nNonetheless, it seems like a really bad idea that this test\nof I/O stats reset happens after the newly-added test. It\nis clearly now dependent on timing and the amount of concurrent\nactivity whether it will pass or not. We should probably\nre-order the tests to do the old test first; or else abandon\nthis test methodology and just test I/O reset the same way\nwe test the other cases (checking only for timestamp advance).\nOr maybe we don't really need the pg_stat_reset_shared() test?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Nov 2023 14:34:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "On Sat, Nov 25, 2023 at 02:34:40PM -0500, Tom Lane wrote:\n> -- Test that reset_shared with no argument resets all the stats types\n> -- supported (providing NULL as argument has the same effect).\n> SELECT pg_stat_reset_shared();\n\nRight, this has switched pg_stat_reset_shared() from doing nothing to\ndo a full reset. Removing this reset switches the results of\nio_stats_pre_reset from a 7-digit number to a 4-digit number, thanks\nto all the previous I/O activity generated by all the tests.\n\n> Nonetheless, it seems like a really bad idea that this test\n> of I/O stats reset happens after the newly-added test. It\n> is clearly now dependent on timing and the amount of concurrent\n> activity whether it will pass or not. We should probably\n> re-order the tests to do the old test first; or else abandon\n> this test methodology and just test I/O reset the same way\n> we test the other cases (checking only for timestamp advance).\n> Or maybe we don't really need the pg_stat_reset_shared() test?\n\nI was ready to argue that we'd better keep this test and keep it close\nto the end of stats.sql while documenting why things are kept in this\norder, but two resets done on the same shared stats type would still\nbe prone to race conditions without all the previous activity done in\nthe tests (like pg_stat_wal).\n\nWith all that in mind and because we have checks for the individual\ntargets with pg_stat_reset_shared(), I would agree to just remove it\nentirely. Say as of the attached?\n--\nMichael", "msg_date": "Mon, 27 Nov 2023 11:56:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> With all that in mind and because we have checks for the individual\n> targets with pg_stat_reset_shared(), I would agree to just remove it\n> entirely. Say as of the attached?\n\nI'm good with that answer --- I doubt that this test sequence is\nproving anything that's worth the cycles it takes. 
If it'd catch\noversights like failing to add new stats types to the \"reset all\"\ncode path, then I'd be for keeping it; but I don't see how the\ntest could notice that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Nov 2023 22:34:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "On Mon, Nov 27, 2023 at 8:26 AM Michael Paquier <[email protected]> wrote:\n>\n> I was ready to argue that we'd better keep this test and keep it close\n> to the end of stats.sql while documenting why things are kept in this\n> order,\n\nIt's easy for someone to come and add pg_stat_reset_shared() before\nthe end without noticing the comment as the test failure is sporadic\nin nature.\n\n> but two resets done on the same shared stats type would still\n> be prone to race conditions without all the previous activity done in\n> the tests (like pg_stat_wal).\n\nCan running stats.sql in non-parallel mode help stabilize the tests as-is?\n\n> With all that in mind and because we have checks for the individual\n> targets with pg_stat_reset_shared(), I would agree to just remove it\n> entirely. Say as of the attached?\n\nI tend to agree with this approach, the code is anyways covered. I\nthink the attached patch also needs to remove setting\narchiver_reset_ts (and friends) after pg_stat_reset_shared('archiver')\n(and friends), something like [1].\n\nCan we also remove pg_stat_reset_slru() with no argument test to keep\nthings consistent?\n\n-- Test that multiple SLRUs are reset when no specific SLRU provided\nto reset function\nSELECT pg_stat_reset_slru();\nSELECT stats_reset > :'slru_commit_ts_reset_ts'::timestamptz FROM\npg_stat_slru WHERE name = 'CommitTs';\nSELECT stats_reset > :'slru_notify_reset_ts'::timestamptz FROM\npg_stat_slru WHERE name = 'Notify';\n\n[1]\ndiff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql\nindex d867fb406f..e3b4ca96e8 100644\n--- a/src/test/regress/sql/stats.sql\n+++ b/src/test/regress/sql/stats.sql\n@@ -462,37 +462,31 @@ SELECT stats_reset >\n:'slru_notify_reset_ts'::timestamptz FROM pg_stat_slru WHER\n SELECT stats_reset AS archiver_reset_ts FROM pg_stat_archiver \\gset\n SELECT pg_stat_reset_shared('archiver');\n SELECT stats_reset > :'archiver_reset_ts'::timestamptz FROM pg_stat_archiver;\n-SELECT stats_reset AS archiver_reset_ts FROM pg_stat_archiver \\gset\n\n -- Test that reset_shared with bgwriter specified as the stats type works\n SELECT stats_reset AS bgwriter_reset_ts FROM pg_stat_bgwriter \\gset\n SELECT pg_stat_reset_shared('bgwriter');\n SELECT stats_reset > :'bgwriter_reset_ts'::timestamptz FROM pg_stat_bgwriter;\n-SELECT stats_reset AS bgwriter_reset_ts FROM pg_stat_bgwriter \\gset\n\n -- Test that reset_shared with checkpointer specified as the stats type works\n SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n SELECT pg_stat_reset_shared('checkpointer');\n SELECT stats_reset > :'checkpointer_reset_ts'::timestamptz FROM\npg_stat_checkpointer;\n-SELECT stats_reset AS checkpointer_reset_ts FROM pg_stat_checkpointer \\gset\n\n -- Test that reset_shared with recovery_prefetch specified as the\nstats type works\n SELECT stats_reset AS recovery_prefetch_reset_ts FROM\npg_stat_recovery_prefetch \\gset\n SELECT pg_stat_reset_shared('recovery_prefetch');\n SELECT stats_reset > :'recovery_prefetch_reset_ts'::timestamptz FROM\npg_stat_recovery_prefetch;\n-SELECT stats_reset AS recovery_prefetch_reset_ts 
FROM\npg_stat_recovery_prefetch \\gset\n\n -- Test that reset_shared with slru specified as the stats type works\n SELECT max(stats_reset) AS slru_reset_ts FROM pg_stat_slru \\gset\n SELECT pg_stat_reset_shared('slru');\n SELECT max(stats_reset) > :'slru_reset_ts'::timestamptz FROM pg_stat_slru;\n-SELECT max(stats_reset) AS slru_reset_ts FROM pg_stat_slru \\gset\n\n -- Test that reset_shared with wal specified as the stats type works\n SELECT stats_reset AS wal_reset_ts FROM pg_stat_wal \\gset\n SELECT pg_stat_reset_shared('wal');\n SELECT stats_reset > :'wal_reset_ts'::timestamptz FROM pg_stat_wal;\n-SELECT stats_reset AS wal_reset_ts FROM pg_stat_wal \\gset\n\n -- Test error case for reset_shared with unknown stats type\n SELECT pg_stat_reset_shared('unknown');\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 15:49:01 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "Hi,\n\nOn 2023-11-27 15:49:01 +0530, Bharath Rupireddy wrote:\n> On Mon, Nov 27, 2023 at 8:26 AM Michael Paquier <[email protected]> wrote:\n> > but two resets done on the same shared stats type would still\n> > be prone to race conditions without all the previous activity done in\n> > the tests (like pg_stat_wal).\n> \n> Can running stats.sql in non-parallel mode help stabilize the tests as-is?\n\nI think that'd be a cure *way* worse than the disease. Having concurrent stats\nactivity isn't exactly uncommon. And because of checkpoints, autovacuum etc,\nyou'd still have rare situations of concurrency.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Nov 2023 09:52:59 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "Hi,\n\nOn 2023-11-27 11:56:19 +0900, Michael Paquier wrote:\n> I was ready to argue that we'd better keep this test and keep it close\n> to the end of stats.sql while documenting why things are kept in this\n> order, but two resets done on the same shared stats type would still\n> be prone to race conditions without all the previous activity done in\n> the tests (like pg_stat_wal).\n\nI am probably under-caffeinated: What precisely is the potential race? Just\nthat the timestamps on some system might not be granular enough?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Nov 2023 09:56:17 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I am probably under-caffeinated: What precisely is the potential race? Just\n> that the timestamps on some system might not be granular enough?\n\nThe problem as I see it is that this test:\n\nSELECT :io_stats_post_reset < :io_stats_pre_reset;\n\nrequires an assumption that less I/O has happened since the commanded\nreset action than happened before it (extending back to the previous\nreset, or cluster start). Since concurrent processes might be doing\nI/O, this has a race condition. If we are slow enough about obtaining\n:io_stats_post_reset, the test *will* fail eventually. 
But the shorter\nthe distance back to the previous reset, the bigger the odds of\nobservable trouble; thus Michael's concern that adding more reset\ntests in future would increase the risk of failure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:01:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "On Mon, Nov 27, 2023 at 02:01:51PM -0500, Tom Lane wrote:\n> The problem as I see it is that this test:\n> \n> SELECT :io_stats_post_reset < :io_stats_pre_reset;\n> \n> requires an assumption that less I/O has happened since the commanded\n> reset action than happened before it (extending back to the previous\n> reset, or cluster start). Since concurrent processes might be doing\n> I/O, this has a race condition. If we are slow enough about obtaining\n> :io_stats_post_reset, the test *will* fail eventually. But the shorter\n> the distance back to the previous reset, the bigger the odds of\n> observable trouble; thus Michael's concern that adding more reset\n> tests in future would increase the risk of failure.\n\nThe new reset added just before checking the contents of pg_stat_io\nreduces :io_stats_pre_reset from 7M to 50k. That's a threshold easy\nto reach if you have a checkpoint or an autovacuum running in\nparallel. I have not checked the buildfarm logs in details, but I'd\nput a coin on a checkpoint triggered by time if the issue happened on\na slow machine.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 07:41:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" }, { "msg_contents": "On Sun, Nov 26, 2023 at 10:34:59PM -0500, Tom Lane wrote:\n> I'm good with that answer --- I doubt that this test sequence is\n> proving anything that's worth the cycles it takes. If it'd catch\n> oversights like failing to add new stats types to the \"reset all\"\n> code path, then I'd be for keeping it; but I don't see how the\n> test could notice that.\n\nFor now I've applied a patch that removes the whole sequence. I'll\nkeep an eye on the buildfarm for a few days in case there are more\nfailures.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 13:22:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New instability in stats regression test" } ]
[ { "msg_contents": "Hello!\n\nI'm trying to emit a JSON aggregation of JSON rows to a file using COPY TO,\nbut I'm running into problems with COPY TO double quoting the output.\nHere is a minimal example that demonstrates the problem I'm having:\n\ncreate table public.tbl_json_test (id int, t_test text);\n\n-- insert text that includes double quotes\ninsert into public.tbl_json_test (id, t_test) values (1, 'here''s a \"string\"');\n\n-- select a JSON aggregation of JSON rows\nselect json_agg(row_to_json(t)) from (select * from public.tbl_json_test) t;\n-- this yields the correct result in proper JSON format:\n-- [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\ncopy (select json_agg(row_to_json(t)) from (select * from\npublic.tbl_json_test) t) to '/tmp/tbl_json_test.json';\n-- once the JSON results are copied to file, the JSON is broken due to\ndouble quoting:\n-- [{\"id\":1,\"t_test\":\"here's a \\\\\"string\\\\\"\"}]\n-- this fails to be parsed using jq on the command line:\n-- cat /tmp/tbl_json_test.json | jq .\n-- jq: parse error: Invalid numeric literal at line 1, column 40\n\n\nWe populate a text field in a table with text containing at least one\ndouble-quote (\"). We then select from that table, formating the result as\na JSON aggregation of JSON rows. At this point the JSON syntax is\ncorrect, with the double quotes being properly quoted. The problem is that\nonce we use COPY TO to emit the results to a file, the output gets quoted\nagain with a second escape character (\\), breaking the JSON and causing a\nsyntax error (as we can see above using the `jq` command line tool).\n\nI have tried to get COPY TO to copy the results to file \"as-is\" by setting\nthe escape and the quote characters to the empty string (''), but they only\napply to the CSV format.\n\nIs there a way to emit JSON results to file from within postgres?\nEffectively, nn \"as-is\" option to COPY TO would work well for this JSON use\ncase.\n\nAny assistance would be appreciated.\n\nThanks,\nDavin\n\nHello!I'm trying to emit a JSON aggregation of JSON rows to a file using COPY TO, but I'm running into problems with COPY TO double quoting the output.   Here is a minimal example that demonstrates the problem I'm having:create table public.tbl_json_test (id int, t_test text);-- insert text that includes double quotesinsert into public.tbl_json_test (id, t_test) values (1, 'here''s a \"string\"');-- select a JSON aggregation of JSON rowsselect json_agg(row_to_json(t)) from (select * from public.tbl_json_test) t;-- this yields the correct result in proper JSON format:-- [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]copy (select json_agg(row_to_json(t)) from (select * from public.tbl_json_test) t) to '/tmp/tbl_json_test.json';-- once the JSON results are copied to file, the JSON is broken due to double quoting:-- [{\"id\":1,\"t_test\":\"here's a \\\\\"string\\\\\"\"}]-- this fails to be parsed using jq on the command line:-- cat /tmp/tbl_json_test.json | jq .-- jq: parse error: Invalid numeric literal at line 1, column 40We populate a text field in a table with text containing at least one double-quote (\").  We then select from that table, formating the result as a JSON aggregation of JSON rows.  At this point the JSON syntax is correct, with the double quotes being properly quoted.  
The problem is that once we use COPY TO to emit the results to a file, the output gets quoted again with a second escape character (\\), breaking the JSON and causing a syntax error (as we can see above using the `jq` command line tool).I have tried to get COPY TO to copy the results to file \"as-is\" by setting the escape and the quote characters to the empty string (''), but they only apply to the CSV format.Is there a way to emit JSON results to file from within postgres?  Effectively, nn \"as-is\" option to COPY TO would work well for this JSON use case.Any assistance would be appreciated.Thanks,Davin", "msg_date": "Sat, 25 Nov 2023 14:21:37 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": true, "msg_subject": "Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, Nov 25, 2023 at 12:22 PM Davin Shearer <[email protected]>\nwrote:\n\n>\n> Is there a way to emit JSON results to file from within postgres?\n>\n\nUse psql to directly output query results to a file instead of using COPY\nto output structured output in a format you don't want.\n\nDavid J.\n\nOn Sat, Nov 25, 2023 at 12:22 PM Davin Shearer <[email protected]> wrote:Is there a way to emit JSON results to file from within postgres?Use psql to directly output query results to a file instead of using COPY to output structured output in a format you don't want.David J.", "msg_date": "Sat, 25 Nov 2023 13:02:46 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 11/25/23 11:21, Davin Shearer wrote:\n> Hello!\n> \n> I'm trying to emit a JSON aggregation of JSON rows to a file using COPY \n> TO, but I'm running into problems with COPY TO double quoting the \n> output.   Here is a minimal example that demonstrates the problem I'm \n> having:\n> \n\n> I have tried to get COPY TO to copy the results to file \"as-is\" by \n> setting the escape and the quote characters to the empty string (''), \n> but they only apply to the CSV format.\n> \n> Is there a way to emit JSON results to file from within postgres? \n> Effectively, nn \"as-is\" option to COPY TO would work well for this JSON \n> use case.\n> \n\nNot using COPY.\n\nSee David Johnson's post for one way using the client psql.\n\nOtherwise you will need to use any of the many ETL programs out there \nthat are designed for this sort of thing.\n\n> Any assistance would be appreciated.\n> \n> Thanks,\n> Davin\n\n-- \nAdrian Klaver\[email protected]\n\n\n\n", "msg_date": "Sat, 25 Nov 2023 13:00:12 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver <[email protected]>\nwrote:\n\n> On 11/25/23 11:21, Davin Shearer wrote:\n> > Hello!\n> >\n> > I'm trying to emit a JSON aggregation of JSON rows to a file using COPY\n> > TO, but I'm running into problems with COPY TO double quoting the\n> > output. 
Here is a minimal example that demonstrates the problem I'm\n> > having:\n> >\n>\n> > I have tried to get COPY TO to copy the results to file \"as-is\" by\n> > setting the escape and the quote characters to the empty string (''),\n> > but they only apply to the CSV format.\n> >\n> > Is there a way to emit JSON results to file from within postgres?\n> > Effectively, nn \"as-is\" option to COPY TO would work well for this JSON\n> > use case.\n> >\n>\n> Not using COPY.\n>\n> See David Johnson's post for one way using the client psql.\n>\n> Otherwise you will need to use any of the many ETL programs out there\n> that are designed for this sort of thing.\n>\n\nGuys, I don't get answers like that. The JSON spec is clear:\n\n>\n\nOn Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver <[email protected]> wrote:On 11/25/23 11:21, Davin Shearer wrote:\n> Hello!\n> \n> I'm trying to emit a JSON aggregation of JSON rows to a file using COPY \n> TO, but I'm running into problems with COPY TO double quoting the \n> output.   Here is a minimal example that demonstrates the problem I'm \n> having:\n> \n\n> I have tried to get COPY TO to copy the results to file \"as-is\" by \n> setting the escape and the quote characters to the empty string (''), \n> but they only apply to the CSV format.\n> \n> Is there a way to emit JSON results to file from within postgres?  \n> Effectively, nn \"as-is\" option to COPY TO would work well for this JSON \n> use case.\n> \n\nNot using COPY.\n\nSee David Johnson's post for one way using the client psql.\n\nOtherwise you will need to use any of the many ETL programs out there \nthat are designed for this sort of thing.Guys, I don't get answers like that. The JSON spec is clear:>", "msg_date": "Mon, 27 Nov 2023 10:33:00 +0100", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne <[email protected]>\nwrote:\n\n> On Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver <[email protected]>\n> wrote:\n>\n>> On 11/25/23 11:21, Davin Shearer wrote:\n>> > Hello!\n>> >\n>> > I'm trying to emit a JSON aggregation of JSON rows to a file using COPY\n>> > TO, but I'm running into problems with COPY TO double quoting the\n>> > output. Here is a minimal example that demonstrates the problem I'm\n>> > having:\n>> >\n>>\n>> > I have tried to get COPY TO to copy the results to file \"as-is\" by\n>> > setting the escape and the quote characters to the empty string (''),\n>> > but they only apply to the CSV format.\n>> >\n>> > Is there a way to emit JSON results to file from within postgres?\n>> > Effectively, nn \"as-is\" option to COPY TO would work well for this JSON\n>> > use case.\n>> >\n>>\n>> Not using COPY.\n>>\n>> See David Johnson's post for one way using the client psql.\n>>\n>> Otherwise you will need to use any of the many ETL programs out there\n>> that are designed for this sort of thing.\n>>\n>\n> Guys, I don't get answers like that. The JSON spec is clear:\n>\n\nOops, sorry, user error. --DD\n\nPS: The JSON spec is a bit ambiguous. First it says\n\n> Any codepoint except \" or \\ or control characters\n\nAnd then is clearly shows \\\" as a valid sequence...\nSounds like JQ is too restrictive?\n\nOr that's the double-escape that's the culprit?\ni.e. 
\\\\ is in the final text, so that's just a backslash,\nand then the double-quote is no longer escaped.\n\nI've recently noticed json_agg(row_to_json(t))\nis equivalent to json_agg(t)\n\nMaybe use that instead? Does that make a difference?\n\nI haven't noticed wrong escaping of double-quotes yet,\nbut then I'm using the binary mode of queries. Perhaps that matters.\n\nOn second thought, I guess that's COPY in its text modes doing the escaping?\nInteresting. The text-based modes of COPY are configurable. There's even a\nJSON mode.\nBy miracle, would the JSON output mode recognize JSON[B] values, and avoid\nthe escaping?\n\nOn Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne <[email protected]> wrote:On Sat, Nov 25, 2023 at 10:00 PM Adrian Klaver <[email protected]> wrote:On 11/25/23 11:21, Davin Shearer wrote:\n> Hello!\n> \n> I'm trying to emit a JSON aggregation of JSON rows to a file using COPY \n> TO, but I'm running into problems with COPY TO double quoting the \n> output.   Here is a minimal example that demonstrates the problem I'm \n> having:\n> \n\n> I have tried to get COPY TO to copy the results to file \"as-is\" by \n> setting the escape and the quote characters to the empty string (''), \n> but they only apply to the CSV format.\n> \n> Is there a way to emit JSON results to file from within postgres?  \n> Effectively, nn \"as-is\" option to COPY TO would work well for this JSON \n> use case.\n> \n\nNot using COPY.\n\nSee David Johnson's post for one way using the client psql.\n\nOtherwise you will need to use any of the many ETL programs out there \nthat are designed for this sort of thing.Guys, I don't get answers like that. The JSON spec is clear:Oops, sorry, user error. --DDPS: The JSON spec is a bit ambiguous. First it says> Any codepoint except \" or \\ or control charactersAnd then is clearly shows \\\" as a valid sequence...Sounds like JQ is too restrictive?Or that's the double-escape that's the culprit?i.e. \\\\ is in the final text, so that's just a backslash,and then the double-quote is no longer escaped.I've recently noticed json_agg(row_to_json(t))is equivalent to json_agg(t)Maybe use that instead? Does that make a difference?I haven't noticed wrong escaping of double-quotes yet,but then I'm using the binary mode of queries. Perhaps that matters.On second thought, I guess that's COPY in its text modes doing the escaping?Interesting. The text-based modes of COPY are configurable. There's even a JSON mode.By miracle, would the JSON output mode recognize JSON[B] values, and avoid the escaping?", "msg_date": "Mon, 27 Nov 2023 10:44:55 +0100", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Monday, November 27, 2023, Dominique Devienne <[email protected]>\nwrote:\n\n> There's even a JSON mode.\n> By miracle, would the JSON output mode recognize JSON[B] values, and avoid\n> the escaping?\n>\n\nI agree there should be a copy option for “not formatted” so if you dump a\nsingle column result in that format you get the raw unescaped contents of\nthe column. 
As soon as you ask for a format your json is now embedded so it\nis a value within another format and any structural aspects of the wrapper\npresent in the json text representation need to be escaped.\n\nDavid J.\n\nOn Monday, November 27, 2023, Dominique Devienne <[email protected]> wrote:There's even a JSON mode.By miracle, would the JSON output mode recognize JSON[B] values, and avoid the escaping?I agree there should be a copy option for “not formatted” so if you dump a single column result in that format you get the raw unescaped contents of the column. As soon as you ask for a format your json is now embedded so it is a value within another format and any structural aspects of the wrapper present in the json text representation need to be escaped.David J.", "msg_date": "Mon, 27 Nov 2023 06:27:26 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hi\n\npo 27. 11. 2023 v 14:27 odesílatel David G. Johnston <\[email protected]> napsal:\n\n> On Monday, November 27, 2023, Dominique Devienne <[email protected]>\n> wrote:\n>\n>> There's even a JSON mode.\n>> By miracle, would the JSON output mode recognize JSON[B] values, and\n>> avoid the escaping?\n>>\n>\n> I agree there should be a copy option for “not formatted” so if you dump a\n> single column result in that format you get the raw unescaped contents of\n> the column. As soon as you ask for a format your json is now embedded so it\n> is a value within another format and any structural aspects of the wrapper\n> present in the json text representation need to be escaped.\n>\n\nIs it better to use the LO API for this purpose? It is native for not\nformatted data.\n\nRegards\n\nPavel\n\n\n> David J.\n>\n\nHipo 27. 11. 2023 v 14:27 odesílatel David G. Johnston <[email protected]> napsal:On Monday, November 27, 2023, Dominique Devienne <[email protected]> wrote:There's even a JSON mode.By miracle, would the JSON output mode recognize JSON[B] values, and avoid the escaping?I agree there should be a copy option for “not formatted” so if you dump a single column result in that format you get the raw unescaped contents of the column. As soon as you ask for a format your json is now embedded so it is a value within another format and any structural aspects of the wrapper present in the json text representation need to be escaped.Is it better to use the LO API for this purpose?  It is native for not formatted data. RegardsPavelDavid J.", "msg_date": "Mon, 27 Nov 2023 14:56:30 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Monday, November 27, 2023, Pavel Stehule <[email protected]> wrote:\n\n> Hi\n>\n> po 27. 11. 2023 v 14:27 odesílatel David G. Johnston <\n> [email protected]> napsal:\n>\n>> On Monday, November 27, 2023, Dominique Devienne <[email protected]>\n>> wrote:\n>>\n>>> There's even a JSON mode.\n>>> By miracle, would the JSON output mode recognize JSON[B] values, and\n>>> avoid the escaping?\n>>>\n>>\n>> I agree there should be a copy option for “not formatted” so if you dump\n>> a single column result in that format you get the raw unescaped contents of\n>> the column. 
As soon as you ask for a format your json is now embedded so it\n>> is a value within another format and any structural aspects of the wrapper\n>> present in the json text representation need to be escaped.\n>>\n>\n> Is it better to use the LO API for this purpose? It is native for not\n> formatted data.\n>\n\nUsing LO is, IMO, never the answer. But if you are using a driver API\nanyway just handle the normal select query result.\n\nDavid J.\n\nOn Monday, November 27, 2023, Pavel Stehule <[email protected]> wrote:Hipo 27. 11. 2023 v 14:27 odesílatel David G. Johnston <[email protected]> napsal:On Monday, November 27, 2023, Dominique Devienne <[email protected]> wrote:There's even a JSON mode.By miracle, would the JSON output mode recognize JSON[B] values, and avoid the escaping?I agree there should be a copy option for “not formatted” so if you dump a single column result in that format you get the raw unescaped contents of the column. As soon as you ask for a format your json is now embedded so it is a value within another format and any structural aspects of the wrapper present in the json text representation need to be escaped.Is it better to use the LO API for this purpose?  It is native for not formatted data. Using LO is, IMO, never the answer.  But if you are using a driver API anyway just handle the normal select query result.David J.", "msg_date": "Mon, 27 Nov 2023 07:43:08 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> I agree there should be a copy option for “not formatted” so if you dump a\n> single column result in that format you get the raw unescaped contents of\n> the column.\n\nI'm not sure I even buy that. JSON data in particular is typically\nmulti-line, so how will you know where the row boundaries are?\nThat is, is a newline a row separator or part of the data?\n\nYou can debate the intelligence of any particular quoting/escaping\nscheme, but imagining that you can get away without having one at\nall will just create its own problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 09:56:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Mon, Nov 27, 2023 at 3:56 PM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > I agree there should be a copy option for “not formatted” so if you dump\n> a\n> > single column result in that format you get the raw unescaped contents of\n> > the column.\n>\n> I'm not sure I even buy that. JSON data in particular is typically\n> multi-line, so how will you know where the row boundaries are?\n> That is, is a newline a row separator or part of the data?\n>\n> You can debate the intelligence of any particular quoting/escaping\n> scheme, but imagining that you can get away without having one at\n> all will just create its own problems.\n>\n\nWhat I was suggesting is not about a \"not formatted\" option.\nBut rather than JSON values (i.e. 
typed `json` or `jsonb`) in a\nJSON-formatted COPY operator, the JSON values should not be\nserialized to text that is simply output as a JSON-text-value by COPY,\nbut \"inlined\" as a \"real\" JSON value without the JSON document output by\nCOPY.\n\nThis is a special case, where the inner and outer \"values\" (for lack of a\nbetter terminology)\nare *both* JSON documents, and given that JSON is hierarchical, the inner\nJSON value can\neither by 1) serializing to text first, which must thus be escaped using\nthe JSON escaping rules,\n2) NOT serialized, but \"inline\" or \"spliced-in\" the outer COPY JSON\ndocument.\n\nI guess COPY in JSON mode supports only #1 now? While #2 makes more sense\nto me.\nBut both options are valid. Is that clearer?\n\nBTW, JSON is not multi-line, except for insignificant whitespace.\nSo even COPY in JSON mode is not supposed to be line based I guess?\nUnless COPY in JSON mode is more like NDJSON (https://ndjson.org/)? --DD\n\nOn Mon, Nov 27, 2023 at 3:56 PM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> I agree there should be a copy option for “not formatted” so if you dump a\n> single column result in that format you get the raw unescaped contents of\n> the column.\n\nI'm not sure I even buy that.  JSON data in particular is typically\nmulti-line, so how will you know where the row boundaries are?\nThat is, is a newline a row separator or part of the data?\n\nYou can debate the intelligence of any particular quoting/escaping\nscheme, but imagining that you can get away without having one at\nall will just create its own problems.What I was suggesting is not about a \"not formatted\" option.But rather than JSON values (i.e. typed `json` or `jsonb`) in aJSON-formatted COPY operator, the JSON values should not beserialized to text that is simply output as a JSON-text-value by COPY,but \"inlined\" as a \"real\" JSON value without the JSON document output by COPY.This is a special case, where the inner and outer \"values\" (for lack of a better terminology)are *both* JSON documents, and given that JSON is hierarchical, the inner JSON value caneither by 1) serializing to text first, which must thus be escaped using the JSON escaping rules,2) NOT serialized, but \"inline\" or \"spliced-in\" the outer COPY JSON document.I guess COPY in JSON mode supports only #1 now? While #2 makes more sense to me.But both options are valid. Is that clearer?BTW, JSON is not multi-line, except for insignificant whitespace.So even COPY in JSON mode is not supposed to be line based I guess?Unless COPY in JSON mode is more like NDJSON (https://ndjson.org/)? --DD", "msg_date": "Mon, 27 Nov 2023 16:26:43 +0100", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 11/27/23 01:44, Dominique Devienne wrote:\n> On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n\n> On second thought, I guess that's COPY in its text modes doing the escaping?\n> Interesting. The text-based modes of COPY are configurable. There's even \n> a JSON mode.\n\nWhere are you seeing the JSON mode for COPY? 
AFAIK there is only text \nand CSV formats.\n\n> By miracle, would the JSON output mode recognize JSON[B] values, and \n> avoid the escaping?\n> \n> \n\n-- \nAdrian Klaver\[email protected]\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 08:04:05 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Mon, Nov 27, 2023 at 5:04 PM Adrian Klaver <[email protected]>\nwrote:\n\n> On 11/27/23 01:44, Dominique Devienne wrote:\n> > On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne <[email protected]\n> > <mailto:[email protected]>> wrote:\n> > On second thought, I guess that's COPY in its text modes doing the\n> escaping?\n> > Interesting. The text-based modes of COPY are configurable. There's even\n> > a JSON mode.\n>\n> Where are you seeing the JSON mode for COPY? AFAIK there is only text\n> and CSV formats.\n>\n\nIndeed. Somehow I thought there was...\nI've used the TEXT and BINARY modes, and remembered a wishful thinking JSON\nmode!\nOK then, if there was, then what I wrote would apply :). --DD\n\nOn Mon, Nov 27, 2023 at 5:04 PM Adrian Klaver <[email protected]> wrote:On 11/27/23 01:44, Dominique Devienne wrote:\n> On Mon, Nov 27, 2023 at 10:33 AM Dominique Devienne <[email protected] \n> <mailto:[email protected]>> wrote:> On second thought, I guess that's COPY in its text modes doing the escaping?\n> Interesting. The text-based modes of COPY are configurable. There's even \n> a JSON mode.\n\nWhere are you seeing the JSON mode for COPY? AFAIK there is only text \nand CSV formats.Indeed. Somehow I thought there was...I've used the TEXT and BINARY modes, and remembered a wishful thinking JSON mode!OK then, if there was, then what I wrote would apply :). --DD", "msg_date": "Mon, 27 Nov 2023 17:14:46 +0100", "msg_from": "Dominique Devienne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "This would be a very special case for COPY. It applies only to a single \ncolumn of JSON values. The original problem can be solved with psql \n--tuples-only as David wrote earlier.\n\n\n$ psql -tc 'select json_agg(row_to_json(t))\n from (select * from public.tbl_json_test) t;'\n\n [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n\n\nSpecial-casing any encoding/escaping scheme leads to bugs and harder \nparsing.\n\nJust my 2c.\n\n--\nFilip Sedlák\n\n\n", "msg_date": "Tue, 28 Nov 2023 08:36:32 +0100", "msg_from": "=?UTF-8?Q?Filip_Sedl=C3=A1k?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Thanks for the responses everyone.\n\nI worked around the issue using the `psql -tc` method as Filip described.\n\nI think it would be great to support writing JSON using COPY TO at\nsome point so I can emit JSON to files using a PostgreSQL function directly.\n\n-Davin\n\nOn Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected]> wrote:\n\n> This would be a very special case for COPY. It applies only to a single\n> column of JSON values. 
The original problem can be solved with psql\n> --tuples-only as David wrote earlier.\n>\n>\n> $ psql -tc 'select json_agg(row_to_json(t))\n> from (select * from public.tbl_json_test) t;'\n>\n> [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n>\n>\n> Special-casing any encoding/escaping scheme leads to bugs and harder\n> parsing.\n>\n> Just my 2c.\n>\n> --\n> Filip Sedlák\n>\n\nThanks for the responses everyone.I worked around the issue using the `psql -tc` method as Filip described.I think it would be great to support writing JSON using COPY TO at some point so I can emit JSON to files using a PostgreSQL function directly.-DavinOn Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected]> wrote:This would be a very special case for COPY. It applies only to a single \ncolumn of JSON values. The original problem can be solved with psql \n--tuples-only as David wrote earlier.\n\n\n$ psql -tc 'select json_agg(row_to_json(t))\n              from (select * from public.tbl_json_test) t;'\n\n  [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n\n\nSpecial-casing any encoding/escaping scheme leads to bugs and harder \nparsing.\n\nJust my 2c.\n\n--\nFilip Sedlák", "msg_date": "Wed, 29 Nov 2023 10:32:54 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 11/29/23 10:32, Davin Shearer wrote:\n> Thanks for the responses everyone.\n> \n> I worked around the issue using the `psql -tc` method as Filip described.\n> \n> I think it would be great to support writing JSON using COPY TO at \n> some point so I can emit JSON to files using a PostgreSQL function directly.\n> \n> -Davin\n> \n> On Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> This would be a very special case for COPY. It applies only to a single\n> column of JSON values. The original problem can be solved with psql\n> --tuples-only as David wrote earlier.\n> \n> \n> $ psql -tc 'select json_agg(row_to_json(t))\n>               from (select * from public.tbl_json_test) t;'\n> \n>   [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n> \n> \n> Special-casing any encoding/escaping scheme leads to bugs and harder\n> parsing.\n\n(moved to hackers)\n\nI did a quick PoC patch (attached) -- if there interest and no hard \nobjections I would like to get it up to speed for the January commitfest.\n\nCurrently the patch lacks documentation and regression test support.\n\nQuestions:\n----------\n1. Is supporting JSON array format sufficient, or does it need to \nsupport some other options? How flexible does the support scheme need to be?\n\n2. This only supports COPY TO and we would undoubtedly want to support \nCOPY FROM for JSON as well, but is that required from the start?\n\nThanks for any feedback.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 1 Dec 2023 14:28:55 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Fri, Dec 01, 2023 at 02:28:55PM -0500, Joe Conway wrote:\n> I did a quick PoC patch (attached) -- if there interest and no hard\n> objections I would like to get it up to speed for the January commitfest.\n\nCool. 
I would expect there to be interest, given all the other JSON\nsupport that has been added thus far.\n\nI noticed that, with the PoC patch, \"json\" is the only format that must be\nquoted. Without quotes, I see a syntax error. I'm assuming there's a\nconflict with another json-related rule somewhere in gram.y, but I haven't\ntracked down exactly which one is causing it.\n\n> 1. Is supporting JSON array format sufficient, or does it need to support\n> some other options? How flexible does the support scheme need to be?\n\nI don't presently have a strong opinion on this one. My instinct would be\nstart with something simple, though. I don't think we offer any special\noptions for log_destination...\n\n> 2. This only supports COPY TO and we would undoubtedly want to support COPY\n> FROM for JSON as well, but is that required from the start?\n\nI would vote for including COPY FROM support from the start.\n\n> ! \tif (!cstate->opts.json_mode)\n\nI think it's unfortunate that this further complicates the branching in\nCopyOneRowTo(), but after some quick glances at the code, I'm not sure it's\nworth refactoring a bunch of stuff to make this nicer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Dec 2023 17:09:58 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "I'm really glad to see this taken up as a possible new feature and will\ndefinitely use it if it gets released. I'm impressed with how clean,\nunderstandable, and approachable the postgres codebase is in general and\nhow easy it is to read and understand this patch.\n\nI reviewed the patch (though I didn't build and test the code) and have a\nconcern with adding the '[' at the beginning and ']' at the end of the json\noutput. Those are already added by `json_agg` (\nhttps://www.postgresql.org/docs/current/functions-aggregate.html) as you\ncan see in my initial email. Adding them in the COPY TO may be redundant\n(e.g., [[{\"key\":\"value\"...}....]]).\n\nI think COPY TO makes good sense to support, though COPY FROM maybe not so\nmuch as JSON isn't necessarily flat and rectangular like CSV.\n\nFor my use-case, I'm emitting JSON files to Apache NiFi for processing, and\nNiFi has superior handling of JSON (via JOLT parsers) versus CSV where\nparsing is generally done with regex. I want to be able to emit JSON using\na postgres function and thus COPY TO.\n\nDefinitely +1 for COPY TO.\n\nI don't think COPY FROM will work out well unless the JSON is required to\nbe flat and rectangular. I would vote -1 to leave it out due to the\nnecessary restrictions making it not generally useful.\n\nHope it helps,\nDavin\n\nOn Fri, Dec 1, 2023 at 6:10 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Fri, Dec 01, 2023 at 02:28:55PM -0500, Joe Conway wrote:\n> > I did a quick PoC patch (attached) -- if there interest and no hard\n> > objections I would like to get it up to speed for the January commitfest.\n>\n> Cool. I would expect there to be interest, given all the other JSON\n> support that has been added thus far.\n>\n> I noticed that, with the PoC patch, \"json\" is the only format that must be\n> quoted. Without quotes, I see a syntax error. I'm assuming there's a\n> conflict with another json-related rule somewhere in gram.y, but I haven't\n> tracked down exactly which one is causing it.\n>\n> > 1. Is supporting JSON array format sufficient, or does it need to support\n> > some other options? 
How flexible does the support scheme need to be?\n>\n> I don't presently have a strong opinion on this one. My instinct would be\n> start with something simple, though. I don't think we offer any special\n> options for log_destination...\n>\n> > 2. This only supports COPY TO and we would undoubtedly want to support\n> COPY\n> > FROM for JSON as well, but is that required from the start?\n>\n> I would vote for including COPY FROM support from the start.\n>\n> > ! if (!cstate->opts.json_mode)\n>\n> I think it's unfortunate that this further complicates the branching in\n> CopyOneRowTo(), but after some quick glances at the code, I'm not sure it's\n> worth refactoring a bunch of stuff to make this nicer.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n\nI'm really glad to see this taken up as a possible new feature and will definitely use it if it gets released.  I'm impressed with how clean, understandable, and approachable the postgres codebase is in general and how easy it is to read and understand this patch.I reviewed the patch (though I didn't build and test the code) and have a concern with adding the '[' at the beginning and ']' at the end of the json output.  Those are already added by `json_agg` (https://www.postgresql.org/docs/current/functions-aggregate.html) as you can see in my initial email.  Adding them in the COPY TO may be redundant (e.g., [[{\"key\":\"value\"...}....]]).I think COPY TO makes good sense to support, though COPY FROM maybe not so much as JSON isn't necessarily flat and rectangular like CSV.For my use-case, I'm emitting JSON files to Apache NiFi for processing, and NiFi has superior handling of JSON (via JOLT parsers) versus CSV where parsing is generally done with regex.  I want to be able to emit JSON using a postgres function and thus COPY TO.Definitely +1 for COPY TO.I don't think COPY FROM will work out well unless the JSON is required to be flat and rectangular.  I would vote -1 to leave it out due to the necessary restrictions making it not generally useful.Hope it helps,DavinOn Fri, Dec 1, 2023 at 6:10 PM Nathan Bossart <[email protected]> wrote:On Fri, Dec 01, 2023 at 02:28:55PM -0500, Joe Conway wrote:\n> I did a quick PoC patch (attached) -- if there interest and no hard\n> objections I would like to get it up to speed for the January commitfest.\n\nCool.  I would expect there to be interest, given all the other JSON\nsupport that has been added thus far.\n\nI noticed that, with the PoC patch, \"json\" is the only format that must be\nquoted.  Without quotes, I see a syntax error.  I'm assuming there's a\nconflict with another json-related rule somewhere in gram.y, but I haven't\ntracked down exactly which one is causing it.\n\n> 1. Is supporting JSON array format sufficient, or does it need to support\n> some other options? How flexible does the support scheme need to be?\n\nI don't presently have a strong opinion on this one.  My instinct would be\nstart with something simple, though.  I don't think we offer any special\noptions for log_destination...\n\n> 2. This only supports COPY TO and we would undoubtedly want to support COPY\n> FROM for JSON as well, but is that required from the start?\n\nI would vote for including COPY FROM support from the start.\n\n> !     
if (!cstate->opts.json_mode)\n\nI think it's unfortunate that this further complicates the branching in\nCopyOneRowTo(), but after some quick glances at the code, I'm not sure it's\nworth refactoring a bunch of stuff to make this nicer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 1 Dec 2023 22:00:29 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/1/23 22:00, Davin Shearer wrote:\n> I'm really glad to see this taken up as a possible new feature and will \n> definitely use it if it gets released.  I'm impressed with how clean, \n> understandable, and approachable the postgres codebase is in general and \n> how easy it is to read and understand this patch.\n> \n> I reviewed the patch (though I didn't build and test the code) and have \n> a concern with adding the '[' at the beginning and ']' at the end of the \n> json output.  Those are already added by `json_agg` \n> (https://www.postgresql.org/docs/current/functions-aggregate.html \n> <https://www.postgresql.org/docs/current/functions-aggregate.html>) as \n> you can see in my initial email.  Adding them in the COPY TO may be \n> redundant (e.g., [[{\"key\":\"value\"...}....]]).\n\nWith this patch in place you don't use json_agg() at all. See the \nexample output (this is real output with the patch applied):\n\n(oops -- I meant to send this with the same email as the patch)\n\n8<-------------------------------------------------\ncreate table foo(id int8, f1 text, f2 timestamptz);\ninsert into foo\n select g.i,\n 'line: ' || g.i::text,\n clock_timestamp()\n from generate_series(1,4) as g(i);\n\ncopy foo to stdout (format 'json');\n[\n {\"id\":1,\"f1\":\"line: 1\",\"f2\":\"2023-12-01T12:58:16.776863-05:00\"}\n,{\"id\":2,\"f1\":\"line: 2\",\"f2\":\"2023-12-01T12:58:16.777084-05:00\"}\n,{\"id\":3,\"f1\":\"line: 3\",\"f2\":\"2023-12-01T12:58:16.777096-05:00\"}\n,{\"id\":4,\"f1\":\"line: 4\",\"f2\":\"2023-12-01T12:58:16.777103-05:00\"}\n]\n8<-------------------------------------------------\n\n\n> I think COPY TO makes good sense to support, though COPY FROM maybe not \n> so much as JSON isn't necessarily flat and rectangular like CSV.\n\nYeah -- definitely not as straight forward but possibly we just support \nthe array-of-jsonobj-rows as input as well?\n\n> For my use-case, I'm emitting JSON files to Apache NiFi for processing, \n> and NiFi has superior handling of JSON (via JOLT parsers) versus CSV \n> where parsing is generally done with regex.  I want to be able to emit \n> JSON using a postgres function and thus COPY TO.\n> \n> Definitely +1 for COPY TO.\n> \n> I don't think COPY FROM will work out well unless the JSON is required \n> to be flat and rectangular.  I would vote -1 to leave it out due to the \n> necessary restrictions making it not generally useful.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 1 Dec 2023 22:10:54 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/1/23 18:09, Nathan Bossart wrote:\n> On Fri, Dec 01, 2023 at 02:28:55PM -0500, Joe Conway wrote:\n>> I did a quick PoC patch (attached) -- if there interest and no hard\n>> objections I would like to get it up to speed for the January commitfest.\n> \n> Cool. 
I would expect there to be interest, given all the other JSON\n> support that has been added thus far.\n\nThanks for the review\n\n> I noticed that, with the PoC patch, \"json\" is the only format that must be\n> quoted. Without quotes, I see a syntax error. I'm assuming there's a\n> conflict with another json-related rule somewhere in gram.y, but I haven't\n> tracked down exactly which one is causing it.\n\nIt seems to be because 'json' is also a type name ($$ = \nSystemTypeName(\"json\")).\n\nWhat do you think about using 'json_array' instead? It is more specific \nand accurate, and avoids the need to quote.\n\ntest=# copy foo to stdout (format json_array);\n[\n {\"id\":1,\"f1\":\"line: 1\",\"f2\":\"2023-12-01T12:58:16.776863-05:00\"}\n,{\"id\":2,\"f1\":\"line: 2\",\"f2\":\"2023-12-01T12:58:16.777084-05:00\"}\n,{\"id\":3,\"f1\":\"line: 3\",\"f2\":\"2023-12-01T12:58:16.777096-05:00\"}\n,{\"id\":4,\"f1\":\"line: 4\",\"f2\":\"2023-12-01T12:58:16.777103-05:00\"}\n]\n\n>> 1. Is supporting JSON array format sufficient, or does it need to support\n>> some other options? How flexible does the support scheme need to be?\n> \n> I don't presently have a strong opinion on this one. My instinct would be\n> start with something simple, though. I don't think we offer any special\n> options for log_destination...\n\nWFM\n\n>> 2. This only supports COPY TO and we would undoubtedly want to support COPY\n>> FROM for JSON as well, but is that required from the start?\n> \n> I would vote for including COPY FROM support from the start.\n\nCheck. My thought is to only accept the same format we emit -- i.e. only \ntake a json array.\n\n>> ! \tif (!cstate->opts.json_mode)\n> \n> I think it's unfortunate that this further complicates the branching in\n> CopyOneRowTo(), but after some quick glances at the code, I'm not sure it's\n> worth refactoring a bunch of stuff to make this nicer.\n\nYeah that was my conclusion.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 2 Dec 2023 09:31:46 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n>> I noticed that, with the PoC patch, \"json\" is the only format that must be\n>> quoted. Without quotes, I see a syntax error. I'm assuming there's a\n>> conflict with another json-related rule somewhere in gram.y, but I haven't\n>> tracked down exactly which one is causing it.\n\nWhile I've not looked too closely, I suspect this might be due to the\nFORMAT_LA hack in base_yylex:\n\n /* Replace FORMAT by FORMAT_LA if it's followed by JSON */\n switch (next_token)\n {\n case JSON:\n cur_token = FORMAT_LA;\n break;\n }\n\nSo if you are writing a production that might need to match\nFORMAT followed by JSON, you need to match FORMAT_LA too.\n\n(I spent a little bit of time last week trying to get rid of\nFORMAT_LA, thinking that it didn't look necessary. Did not\nsucceed yet.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Dec 2023 10:11:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Fri, Dec 1, 2023 at 11:32 AM Joe Conway <[email protected]> wrote:\n> 1. Is supporting JSON array format sufficient, or does it need to\n> support some other options? 
How flexible does the support scheme need to be?\n\n\"JSON Lines\" is a semi-standard format [1] that's basically just\nnewline-separated JSON values. (In fact, this is what\nlog_destination=jsonlog gives you for Postgres logs, no?) It might be\nworthwhile to support that, too.\n\n[1]: https://jsonlines.org/\n\n\n", "msg_date": "Sat, 2 Dec 2023 10:50:12 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, Dec 02, 2023 at 10:11:20AM -0500, Tom Lane wrote:\n> So if you are writing a production that might need to match\n> FORMAT followed by JSON, you need to match FORMAT_LA too.\n\nThanks for the pointer. That does seem to be the culprit.\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex d631ac89a9..048494dd07 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -3490,6 +3490,10 @@ copy_generic_opt_elem:\n {\n $$ = makeDefElem($1, $2, @1);\n }\n+ | FORMAT_LA copy_generic_opt_arg\n+ {\n+ $$ = makeDefElem(\"format\", $2, @1);\n+ }\n ;\n \n copy_generic_opt_arg:\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 2 Dec 2023 15:53:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/2/23 16:53, Nathan Bossart wrote:\n> On Sat, Dec 02, 2023 at 10:11:20AM -0500, Tom Lane wrote:\n>> So if you are writing a production that might need to match\n>> FORMAT followed by JSON, you need to match FORMAT_LA too.\n> \n> Thanks for the pointer. That does seem to be the culprit.\n> \n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index d631ac89a9..048494dd07 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -3490,6 +3490,10 @@ copy_generic_opt_elem:\n> {\n> $$ = makeDefElem($1, $2, @1);\n> }\n> + | FORMAT_LA copy_generic_opt_arg\n> + {\n> + $$ = makeDefElem(\"format\", $2, @1);\n> + }\n> ;\n> \n> copy_generic_opt_arg:\n\n\nYep -- I concluded the same. Thanks Tom!\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 2 Dec 2023 17:37:48 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/2/23 13:50, Maciek Sakrejda wrote:\n> On Fri, Dec 1, 2023 at 11:32 AM Joe Conway <[email protected]> wrote:\n>> 1. Is supporting JSON array format sufficient, or does it need to\n>> support some other options? How flexible does the support scheme need to be?\n> \n> \"JSON Lines\" is a semi-standard format [1] that's basically just\n> newline-separated JSON values. (In fact, this is what\n> log_destination=jsonlog gives you for Postgres logs, no?) It might be\n> worthwhile to support that, too.\n> \n> [1]: https://jsonlines.org/\n\n\nYes, I have seen examples of that associated with other databases (MSSQL \nand Duckdb at least) as well. 
It probably makes sense to support that \nformat too.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 2 Dec 2023 17:43:52 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-02 Sa 17:43, Joe Conway wrote:\n> On 12/2/23 13:50, Maciek Sakrejda wrote:\n>> On Fri, Dec 1, 2023 at 11:32 AM Joe Conway <[email protected]> wrote:\n>>> 1. Is supporting JSON array format sufficient, or does it need to\n>>> support some other options? How flexible does the support scheme \n>>> need to be?\n>>\n>> \"JSON Lines\" is a semi-standard format [1] that's basically just\n>> newline-separated JSON values. (In fact, this is what\n>> log_destination=jsonlog gives you for Postgres logs, no?) It might be\n>> worthwhile to support that, too.\n>>\n>> [1]: https://jsonlines.org/\n>\n>\n> Yes, I have seen examples of that associated with other databases \n> (MSSQL and Duckdb at least) as well. It probably makes sense to \n> support that format too.\n\n\nYou can do that today, e.g.\n\n\ncopy (select to_json(q) from table_or_query q) to stdout\n\n\nYou can also do it as a single document as proposed here, like this:\n\n\ncopy (select json_agg(q) from table_or_query q) to stdout\n\n\nThe only downside to that is that it has to construct the aggregate, \nwhich could be ugly for large datasets, and that's why I'm not opposed \nto this patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 08:46:28 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/2/23 17:37, Joe Conway wrote:\n> On 12/2/23 16:53, Nathan Bossart wrote:\n>> On Sat, Dec 02, 2023 at 10:11:20AM -0500, Tom Lane wrote:\n>>> So if you are writing a production that might need to match\n>>> FORMAT followed by JSON, you need to match FORMAT_LA too.\n>> \n>> Thanks for the pointer. That does seem to be the culprit.\n>> \n>> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n>> index d631ac89a9..048494dd07 100644\n>> --- a/src/backend/parser/gram.y\n>> +++ b/src/backend/parser/gram.y\n>> @@ -3490,6 +3490,10 @@ copy_generic_opt_elem:\n>> {\n>> $$ = makeDefElem($1, $2, @1);\n>> }\n>> + | FORMAT_LA copy_generic_opt_arg\n>> + {\n>> + $$ = makeDefElem(\"format\", $2, @1);\n>> + }\n>> ;\n>> \n>> copy_generic_opt_arg:\n> \n> \n> Yep -- I concluded the same. 
Thanks Tom!\n\nThe attached implements the above repair, as well as adding support for \narray decoration (or not) and/or comma row delimiters when not an array.\n\nThis covers the three variations of json import/export formats that I \nhave found after light searching (SQL Server and DuckDB).\n\nStill lacks and documentation, tests, and COPY FROM support, but here is \nwhat it looks like in a nutshell:\n\n8<-----------------------------------------------\ncreate table foo(id int8, f1 text, f2 timestamptz);\ninsert into foo\n select g.i,\n 'line: ' || g.i::text,\n clock_timestamp()\n from generate_series(1,4) as g(i);\n\ncopy foo to stdout (format json);\n{\"id\":1,\"f1\":\"line: 1\",\"f2\":\"2023-12-01T12:58:16.776863-05:00\"}\n{\"id\":2,\"f1\":\"line: 2\",\"f2\":\"2023-12-01T12:58:16.777084-05:00\"}\n{\"id\":3,\"f1\":\"line: 3\",\"f2\":\"2023-12-01T12:58:16.777096-05:00\"}\n{\"id\":4,\"f1\":\"line: 4\",\"f2\":\"2023-12-01T12:58:16.777103-05:00\"}\n\ncopy foo to stdout (format json, force_array);\n[\n {\"id\":1,\"f1\":\"line: 1\",\"f2\":\"2023-12-01T12:58:16.776863-05:00\"}\n,{\"id\":2,\"f1\":\"line: 2\",\"f2\":\"2023-12-01T12:58:16.777084-05:00\"}\n,{\"id\":3,\"f1\":\"line: 3\",\"f2\":\"2023-12-01T12:58:16.777096-05:00\"}\n,{\"id\":4,\"f1\":\"line: 4\",\"f2\":\"2023-12-01T12:58:16.777103-05:00\"}\n]\n\ncopy foo to stdout (format json, force_row_delimiter);\n {\"id\":1,\"f1\":\"line: 1\",\"f2\":\"2023-12-01T12:58:16.776863-05:00\"}\n,{\"id\":2,\"f1\":\"line: 2\",\"f2\":\"2023-12-01T12:58:16.777084-05:00\"}\n,{\"id\":3,\"f1\":\"line: 3\",\"f2\":\"2023-12-01T12:58:16.777096-05:00\"}\n,{\"id\":4,\"f1\":\"line: 4\",\"f2\":\"2023-12-01T12:58:16.777103-05:00\"}\n\ncopy foo to stdout (force_array);\nERROR: COPY FORCE_ARRAY requires JSON mode\n\ncopy foo to stdout (force_row_delimiter);\nERROR: COPY FORCE_ROW_DELIMITER requires JSON mode\n8<-----------------------------------------------\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 3 Dec 2023 09:53:49 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-01 Fr 14:28, Joe Conway wrote:\n> On 11/29/23 10:32, Davin Shearer wrote:\n>> Thanks for the responses everyone.\n>>\n>> I worked around the issue using the `psql -tc` method as Filip \n>> described.\n>>\n>> I think it would be great to support writing JSON using COPY TO at \n>> some point so I can emit JSON to files using a PostgreSQL function \n>> directly.\n>>\n>> -Davin\n>>\n>> On Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>>     This would be a very special case for COPY. It applies only to a \n>> single\n>>     column of JSON values. The original problem can be solved with psql\n>>     --tuples-only as David wrote earlier.\n>>\n>>\n>>     $ psql -tc 'select json_agg(row_to_json(t))\n>>                    from (select * from public.tbl_json_test) t;'\n>>\n>>        [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n>>\n>>\n>>     Special-casing any encoding/escaping scheme leads to bugs and harder\n>>     parsing.\n>\n> (moved to hackers)\n>\n> I did a quick PoC patch (attached) -- if there interest and no hard \n> objections I would like to get it up to speed for the January commitfest.\n>\n> Currently the patch lacks documentation and regression test support.\n>\n> Questions:\n> ----------\n> 1. 
Is supporting JSON array format sufficient, or does it need to \n> support some other options? How flexible does the support scheme need \n> to be?\n>\n> 2. This only supports COPY TO and we would undoubtedly want to support \n> COPY FROM for JSON as well, but is that required from the start?\n>\n> Thanks for any feedback.\n\n\nI  realize this is just a POC, but I'd prefer to see composite_to_json() \nnot exposed. You could use the already public datum_to_json() instead, \npassing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third \narguments.\n\nI think JSON array format is sufficient.\n\nI can see both sides of the COPY FROM argument, but I think insisting on \nthat makes this less doable for release 17. On balance I would stick to \nCOPY TO for now.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 10:10:38 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Please be sure to include single and double quotes in the test values since\nthat was the original problem (double quoting in COPY TO breaking the JSON\nsyntax).\n\nOn Sun, Dec 3, 2023, 10:11 Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2023-12-01 Fr 14:28, Joe Conway wrote:\n> > On 11/29/23 10:32, Davin Shearer wrote:\n> >> Thanks for the responses everyone.\n> >>\n> >> I worked around the issue using the `psql -tc` method as Filip\n> >> described.\n> >>\n> >> I think it would be great to support writing JSON using COPY TO at\n> >> some point so I can emit JSON to files using a PostgreSQL function\n> >> directly.\n> >>\n> >> -Davin\n> >>\n> >> On Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected]\n> >> <mailto:[email protected]>> wrote:\n> >>\n> >> This would be a very special case for COPY. It applies only to a\n> >> single\n> >> column of JSON values. The original problem can be solved with psql\n> >> --tuples-only as David wrote earlier.\n> >>\n> >>\n> >> $ psql -tc 'select json_agg(row_to_json(t))\n> >> from (select * from public.tbl_json_test) t;'\n> >>\n> >> [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n> >>\n> >>\n> >> Special-casing any encoding/escaping scheme leads to bugs and harder\n> >> parsing.\n> >\n> > (moved to hackers)\n> >\n> > I did a quick PoC patch (attached) -- if there interest and no hard\n> > objections I would like to get it up to speed for the January commitfest.\n> >\n> > Currently the patch lacks documentation and regression test support.\n> >\n> > Questions:\n> > ----------\n> > 1. Is supporting JSON array format sufficient, or does it need to\n> > support some other options? How flexible does the support scheme need\n> > to be?\n> >\n> > 2. This only supports COPY TO and we would undoubtedly want to support\n> > COPY FROM for JSON as well, but is that required from the start?\n> >\n> > Thanks for any feedback.\n>\n>\n> I realize this is just a POC, but I'd prefer to see composite_to_json()\n> not exposed. You could use the already public datum_to_json() instead,\n> passing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third\n> arguments.\n>\n> I think JSON array format is sufficient.\n>\n> I can see both sides of the COPY FROM argument, but I think insisting on\n> that makes this less doable for release 17. 
On balance I would stick to\n> COPY TO for now.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nPlease be sure to include single and double quotes in the test values since that was the original problem (double quoting in COPY TO breaking the JSON syntax).On Sun, Dec 3, 2023, 10:11 Andrew Dunstan <[email protected]> wrote:\nOn 2023-12-01 Fr 14:28, Joe Conway wrote:\n> On 11/29/23 10:32, Davin Shearer wrote:\n>> Thanks for the responses everyone.\n>>\n>> I worked around the issue using the `psql -tc` method as Filip \n>> described.\n>>\n>> I think it would be great to support writing JSON using COPY TO at \n>> some point so I can emit JSON to files using a PostgreSQL function \n>> directly.\n>>\n>> -Davin\n>>\n>> On Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>>     This would be a very special case for COPY. It applies only to a \n>> single\n>>     column of JSON values. The original problem can be solved with psql\n>>     --tuples-only as David wrote earlier.\n>>\n>>\n>>     $ psql -tc 'select json_agg(row_to_json(t))\n>>                    from (select * from public.tbl_json_test) t;'\n>>\n>>        [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n>>\n>>\n>>     Special-casing any encoding/escaping scheme leads to bugs and harder\n>>     parsing.\n>\n> (moved to hackers)\n>\n> I did a quick PoC patch (attached) -- if there interest and no hard \n> objections I would like to get it up to speed for the January commitfest.\n>\n> Currently the patch lacks documentation and regression test support.\n>\n> Questions:\n> ----------\n> 1. Is supporting JSON array format sufficient, or does it need to \n> support some other options? How flexible does the support scheme need \n> to be?\n>\n> 2. This only supports COPY TO and we would undoubtedly want to support \n> COPY FROM for JSON as well, but is that required from the start?\n>\n> Thanks for any feedback.\n\n\nI  realize this is just a POC, but I'd prefer to see composite_to_json() \nnot exposed. You could use the already public datum_to_json() instead, \npassing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third \narguments.\n\nI think JSON array format is sufficient.\n\nI can see both sides of the COPY FROM argument, but I think insisting on \nthat makes this less doable for release 17. 
On balance I would stick to \nCOPY TO for now.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 3 Dec 2023 10:31:58 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/3/23 10:31, Davin Shearer wrote:\n> Please be sure to include single and double quotes in the test values \n> since that was the original problem (double quoting in COPY TO breaking \n> the JSON syntax).\n\ntest=# copy (select * from foo limit 4) to stdout (format json);\n{\"id\":2456092,\"f1\":\"line with ' in it: \n2456092\",\"f2\":\"2023-12-03T10:44:40.9712-05:00\"}\n{\"id\":2456093,\"f1\":\"line with \\\\\" in it: \n2456093\",\"f2\":\"2023-12-03T10:44:40.971221-05:00\"}\n{\"id\":2456094,\"f1\":\"line with ' in it: \n2456094\",\"f2\":\"2023-12-03T10:44:40.971225-05:00\"}\n{\"id\":2456095,\"f1\":\"line with \\\\\" in it: \n2456095\",\"f2\":\"2023-12-03T10:44:40.971228-05:00\"}\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 10:51:12 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/3/23 10:10, Andrew Dunstan wrote:\n> \n> On 2023-12-01 Fr 14:28, Joe Conway wrote:\n>> On 11/29/23 10:32, Davin Shearer wrote:\n>>> Thanks for the responses everyone.\n>>>\n>>> I worked around the issue using the `psql -tc` method as Filip \n>>> described.\n>>>\n>>> I think it would be great to support writing JSON using COPY TO at \n>>> some point so I can emit JSON to files using a PostgreSQL function \n>>> directly.\n>>>\n>>> -Davin\n>>>\n>>> On Tue, Nov 28, 2023 at 2:36 AM Filip Sedlák <[email protected] \n>>> <mailto:[email protected]>> wrote:\n>>>\n>>>     This would be a very special case for COPY. It applies only to a \n>>> single\n>>>     column of JSON values. The original problem can be solved with psql\n>>>     --tuples-only as David wrote earlier.\n>>>\n>>>\n>>>     $ psql -tc 'select json_agg(row_to_json(t))\n>>>                    from (select * from public.tbl_json_test) t;'\n>>>\n>>>        [{\"id\":1,\"t_test\":\"here's a \\\"string\\\"\"}]\n>>>\n>>>\n>>>     Special-casing any encoding/escaping scheme leads to bugs and harder\n>>>     parsing.\n>>\n>> (moved to hackers)\n>>\n>> I did a quick PoC patch (attached) -- if there interest and no hard \n>> objections I would like to get it up to speed for the January commitfest.\n>>\n>> Currently the patch lacks documentation and regression test support.\n>>\n>> Questions:\n>> ----------\n>> 1. Is supporting JSON array format sufficient, or does it need to \n>> support some other options? How flexible does the support scheme need \n>> to be?\n>>\n>> 2. This only supports COPY TO and we would undoubtedly want to support \n>> COPY FROM for JSON as well, but is that required from the start?\n>>\n>> Thanks for any feedback.\n> \n> I  realize this is just a POC, but I'd prefer to see composite_to_json()\n> not exposed. 
You could use the already public datum_to_json() instead,\n> passing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third\n> arguments.\n\nOk, thanks, will do\n\n> I think JSON array format is sufficient.\n\nThe other formats make sense from a completeness standpoint (versus \nother databases) and the latest patch already includes them, so I still \nlean toward supporting all three formats.\n\n> I can see both sides of the COPY FROM argument, but I think insisting on\n> that makes this less doable for release 17. On balance I would stick to\n> COPY TO for now.\n\nWFM.\n\n From your earlier post, regarding constructing the aggregate -- not \nextensive testing but one data point:\n8<--------------------------\ntest=# copy foo to '/tmp/buf' (format json, force_array);\nCOPY 10000000\nTime: 36353.153 ms (00:36.353)\ntest=# copy (select json_agg(foo) from foo) to '/tmp/buf';\nCOPY 1\nTime: 46835.238 ms (00:46.835)\n8<--------------------------\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 11:03:14 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/3/23 11:03, Joe Conway wrote:\n> From your earlier post, regarding constructing the aggregate -- not\n> extensive testing but one data point:\n> 8<--------------------------\n> test=# copy foo to '/tmp/buf' (format json, force_array);\n> COPY 10000000\n> Time: 36353.153 ms (00:36.353)\n> test=# copy (select json_agg(foo) from foo) to '/tmp/buf';\n> COPY 1\n> Time: 46835.238 ms (00:46.835)\n> 8<--------------------------\n\nAlso if the table is large enough, the aggregate method is not even \nfeasible whereas the COPY TO method works:\n8<--------------------------\ntest=# select count(*) from foo;\n count\n----------\n 20000000\n(1 row)\n\ntest=# copy (select json_agg(foo) from foo) to '/tmp/buf';\nERROR: out of memory\nDETAIL: Cannot enlarge string buffer containing 1073741822 bytes by 1 \nmore bytes.\n\ntest=# copy foo to '/tmp/buf' (format json, force_array);\nCOPY 20000000\n8<--------------------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 12:11:34 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/3/23 11:03, Joe Conway wrote:\n> On 12/3/23 10:10, Andrew Dunstan wrote:\n>> I  realize this is just a POC, but I'd prefer to see composite_to_json()\n>> not exposed. 
You could use the already public datum_to_json() instead,\n>> passing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third\n>> arguments.\n> \n> Ok, thanks, will do\n\nJust FYI, this change does loose some performance in my not massively \nscientific A/B/A test:\n\n8<---------------------------\n-- with datum_to_json()\ntest=# \\timing\nTiming is on.\ntest=# copy foo to '/tmp/buf' (format json, force_array);\nCOPY 10000000\nTime: 37196.898 ms (00:37.197)\nTime: 37408.161 ms (00:37.408)\nTime: 38393.309 ms (00:38.393)\nTime: 36855.438 ms (00:36.855)\nTime: 37806.280 ms (00:37.806)\n\nAvg = 37532\n\n-- original patch\ntest=# \\timing\nTiming is on.\ntest=# copy foo to '/tmp/buf' (format json, force_array);\nCOPY 10000000\nTime: 37426.207 ms (00:37.426)\nTime: 36068.187 ms (00:36.068)\nTime: 38285.252 ms (00:38.285)\nTime: 36971.042 ms (00:36.971)\nTime: 35690.822 ms (00:35.691)\n\nAvg = 36888\n\n-- with datum_to_json()\ntest=# \\timing\nTiming is on.\ntest=# copy foo to '/tmp/buf' (format json, force_array);\nCOPY 10000000\nTime: 39083.467 ms (00:39.083)\nTime: 37249.326 ms (00:37.249)\nTime: 38529.721 ms (00:38.530)\nTime: 38704.920 ms (00:38.705)\nTime: 39001.326 ms (00:39.001)\n\nAvg = 38513\n8<---------------------------\n\nThat is somewhere in the 3% range.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 14:24:53 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-03 Su 12:11, Joe Conway wrote:\n> On 12/3/23 11:03, Joe Conway wrote:\n>>   From your earlier post, regarding constructing the aggregate -- not\n>> extensive testing but one data point:\n>> 8<--------------------------\n>> test=# copy foo to '/tmp/buf' (format json, force_array);\n>> COPY 10000000\n>> Time: 36353.153 ms (00:36.353)\n>> test=# copy (select json_agg(foo) from foo) to '/tmp/buf';\n>> COPY 1\n>> Time: 46835.238 ms (00:46.835)\n>> 8<--------------------------\n>\n> Also if the table is large enough, the aggregate method is not even \n> feasible whereas the COPY TO method works:\n> 8<--------------------------\n> test=# select count(*) from foo;\n>   count\n> ----------\n>  20000000\n> (1 row)\n>\n> test=# copy (select json_agg(foo) from foo) to '/tmp/buf';\n> ERROR:  out of memory\n> DETAIL:  Cannot enlarge string buffer containing 1073741822 bytes by 1 \n> more bytes.\n>\n> test=# copy foo to '/tmp/buf' (format json, force_array);\n> COPY 20000000\n> 8<--------------------------\n\n\nNone of this is surprising. As I mentioned, limitations with json_agg() \nare why I support the idea of this patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 14:47:42 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-03 Su 14:24, Joe Conway wrote:\n> On 12/3/23 11:03, Joe Conway wrote:\n>> On 12/3/23 10:10, Andrew Dunstan wrote:\n>>> I  realize this is just a POC, but I'd prefer to see \n>>> composite_to_json()\n>>> not exposed. 
You could use the already public datum_to_json() instead,\n>>> passing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third\n>>> arguments.\n>>\n>> Ok, thanks, will do\n>\n> Just FYI, this change does loose some performance in my not massively \n> scientific A/B/A test:\n>\n> 8<---------------------------\n> -- with datum_to_json()\n> test=# \\timing\n> Timing is on.\n> test=# copy foo to '/tmp/buf' (format json, force_array);\n> COPY 10000000\n> Time: 37196.898 ms (00:37.197)\n> Time: 37408.161 ms (00:37.408)\n> Time: 38393.309 ms (00:38.393)\n> Time: 36855.438 ms (00:36.855)\n> Time: 37806.280 ms (00:37.806)\n>\n> Avg = 37532\n>\n> -- original patch\n> test=# \\timing\n> Timing is on.\n> test=# copy foo to '/tmp/buf' (format json, force_array);\n> COPY 10000000\n> Time: 37426.207 ms (00:37.426)\n> Time: 36068.187 ms (00:36.068)\n> Time: 38285.252 ms (00:38.285)\n> Time: 36971.042 ms (00:36.971)\n> Time: 35690.822 ms (00:35.691)\n>\n> Avg = 36888\n>\n> -- with datum_to_json()\n> test=# \\timing\n> Timing is on.\n> test=# copy foo to '/tmp/buf' (format json, force_array);\n> COPY 10000000\n> Time: 39083.467 ms (00:39.083)\n> Time: 37249.326 ms (00:37.249)\n> Time: 38529.721 ms (00:38.530)\n> Time: 38704.920 ms (00:38.705)\n> Time: 39001.326 ms (00:39.001)\n>\n> Avg = 38513\n> 8<---------------------------\n>\n> That is somewhere in the 3% range.\n\n\nI assume it's because datum_to_json() constructs a text value from which \nyou then need to extract the cstring, whereas composite_to_json(), just \ngives you back the stringinfo. I guess that's a good enough reason to go \nwith exposing composite_to_json().\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 14:52:29 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/3/23 14:52, Andrew Dunstan wrote:\n> \n> On 2023-12-03 Su 14:24, Joe Conway wrote:\n>> On 12/3/23 11:03, Joe Conway wrote:\n>>> On 12/3/23 10:10, Andrew Dunstan wrote:\n>>>> I  realize this is just a POC, but I'd prefer to see \n>>>> composite_to_json()\n>>>> not exposed. You could use the already public datum_to_json() instead,\n>>>> passing JSONTYPE_COMPOSITE and F_RECORD_OUT as the second and third\n>>>> arguments.\n>>>\n>>> Ok, thanks, will do\n>>\n>> Just FYI, this change does loose some performance in my not massively \n>> scientific A/B/A test:\n>>\n>> 8<---------------------------\n<snip>\n>> 8<---------------------------\n>>\n>> That is somewhere in the 3% range.\n> \n> I assume it's because datum_to_json() constructs a text value from which\n> you then need to extract the cstring, whereas composite_to_json(), just\n> gives you back the stringinfo. I guess that's a good enough reason to go\n> with exposing composite_to_json().\n\nYeah, that was why I went that route in the first place. If you are good \nwith it I will go back to that. The code is a bit simpler too.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 15:09:36 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\" being quoted as \\\\\" breaks the JSON. It needs to be \\\". 
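To spell out why: in {"f1":"line with \\" in it: ..."} the \\ is read as a
literal backslash, so the very next " closes the string early and the rest
of the line is a syntax error. The parser needs to see \" at that spot.
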
This has been my\nwhole problem with COPY TO for JSON.\n\nPlease validate that the output is in proper format with correct quoting\nfor special characters. I use `jq` on the command line to validate and\nformat the output.\n\nOn Sun, Dec 3, 2023, 10:51 Joe Conway <[email protected]> wrote:\n\n> On 12/3/23 10:31, Davin Shearer wrote:\n> > Please be sure to include single and double quotes in the test values\n> > since that was the original problem (double quoting in COPY TO breaking\n> > the JSON syntax).\n>\n> test=# copy (select * from foo limit 4) to stdout (format json);\n> {\"id\":2456092,\"f1\":\"line with ' in it:\n> 2456092\",\"f2\":\"2023-12-03T10:44:40.9712-05:00\"}\n> {\"id\":2456093,\"f1\":\"line with \\\\\" in it:\n> 2456093\",\"f2\":\"2023-12-03T10:44:40.971221-05:00\"}\n> {\"id\":2456094,\"f1\":\"line with ' in it:\n> 2456094\",\"f2\":\"2023-12-03T10:44:40.971225-05:00\"}\n> {\"id\":2456095,\"f1\":\"line with \\\\\" in it:\n> 2456095\",\"f2\":\"2023-12-03T10:44:40.971228-05:00\"}\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\n\" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has been my whole problem with COPY TO for JSON.Please validate that the output is in proper format with correct quoting for special characters. I use `jq` on the command line to validate and format the output. On Sun, Dec 3, 2023, 10:51 Joe Conway <[email protected]> wrote:On 12/3/23 10:31, Davin Shearer wrote:\n> Please be sure to include single and double quotes in the test values \n> since that was the original problem (double quoting in COPY TO breaking \n> the JSON syntax).\n\ntest=# copy (select * from foo limit 4) to stdout (format json);\n{\"id\":2456092,\"f1\":\"line with ' in it: \n2456092\",\"f2\":\"2023-12-03T10:44:40.9712-05:00\"}\n{\"id\":2456093,\"f1\":\"line with \\\\\" in it: \n2456093\",\"f2\":\"2023-12-03T10:44:40.971221-05:00\"}\n{\"id\":2456094,\"f1\":\"line with ' in it: \n2456094\",\"f2\":\"2023-12-03T10:44:40.971225-05:00\"}\n{\"id\":2456095,\"f1\":\"line with \\\\\" in it: \n2456095\",\"f2\":\"2023-12-03T10:44:40.971228-05:00\"}\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 3 Dec 2023 17:38:28 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "(please don't top quote on the Postgres lists)\n\nOn 12/3/23 17:38, Davin Shearer wrote:\n> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has been \n> my whole problem with COPY TO for JSON.\n> \n> Please validate that the output is in proper format with correct quoting \n> for special characters. I use `jq` on the command line to validate and \n> format the output.\n\nI just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \nstatement. 
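Roughly speaking, that means each row should come out the same as what the existing machinery already emits -- something like this equivalence, using the same test table as above:\n\n8<---------------------------\n-- per-row output of the new COPY format...\nCOPY (SELECT * FROM foo LIMIT 1) TO STDOUT (FORMAT JSON);\n-- ...should match the existing function, row for row:\nSELECT row_to_json(foo) FROM foo LIMIT 1;\n8<---------------------------\n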
If the output is wrong (just for for this use case?), that \nwould be a missing feature (or possibly a bug?).\n\nDavin -- how did you work around the issue with the way the built in \nfunctions output JSON?\n\nAndrew -- comments/thoughts?\n\nJoe\n\n\n> On Sun, Dec 3, 2023, 10:51 Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 12/3/23 10:31, Davin Shearer wrote:\n> > Please be sure to include single and double quotes in the test\n> values\n> > since that was the original problem (double quoting in COPY TO\n> breaking\n> > the JSON syntax).\n> \n> test=# copy (select * from foo limit 4) to stdout (format json);\n> {\"id\":2456092,\"f1\":\"line with ' in it:\n> 2456092\",\"f2\":\"2023-12-03T10:44:40.9712-05:00\"}\n> {\"id\":2456093,\"f1\":\"line with \\\\\" in it:\n> 2456093\",\"f2\":\"2023-12-03T10:44:40.971221-05:00\"}\n> {\"id\":2456094,\"f1\":\"line with ' in it:\n> 2456094\",\"f2\":\"2023-12-03T10:44:40.971225-05:00\"}\n> {\"id\":2456095,\"f1\":\"line with \\\\\" in it:\n> 2456095\",\"f2\":\"2023-12-03T10:44:40.971228-05:00\"}\n> \n> -- \n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com <https://aws.amazon.com>\n> \n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 3 Dec 2023 20:14:31 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "I worked around it by using select json_agg(t)... and redirecting it to\nfile via psql on the command line. COPY TO was working until we ran into\nbroken JSON and discovered the double quoting issue due to some values\ncontaining \" in them.\n\nI worked around it by using select json_agg(t)... and redirecting it to file via psql on the command line. COPY TO was working until we ran into broken JSON and discovered the double quoting issue due to some values containing \" in them.", "msg_date": "Sun, 3 Dec 2023 20:27:49 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-03 Su 20:14, Joe Conway wrote:\n> (please don't top quote on the Postgres lists)\n>\n> On 12/3/23 17:38, Davin Shearer wrote:\n>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has \n>> been my whole problem with COPY TO for JSON.\n>>\n>> Please validate that the output is in proper format with correct \n>> quoting for special characters. I use `jq` on the command line to \n>> validate and format the output.\n>\n> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \n> statement. 
If the output is wrong (just for for this use case?), that \n> would be a missing feature (or possibly a bug?).\n>\n> Davin -- how did you work around the issue with the way the built in \n> functions output JSON?\n>\n> Andrew -- comments/thoughts?\n>\n>\n\nI meant to mention this when I was making comments yesterday.\n\nThe patch should not be using CopyAttributeOutText - it will try to \nescape characters such as \\, which produces the effect complained of \nhere, or else we need to change its setup so we have a way to inhibit \nthat escaping.\n\n\ncheers\n\n\nandrew\n\n\n\n>\n>\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 4 Dec 2023 07:41:20 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/4/23 07:41, Andrew Dunstan wrote:\n> \n> On 2023-12-03 Su 20:14, Joe Conway wrote:\n>> (please don't top quote on the Postgres lists)\n>>\n>> On 12/3/23 17:38, Davin Shearer wrote:\n>>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has \n>>> been my whole problem with COPY TO for JSON.\n>>>\n>>> Please validate that the output is in proper format with correct \n>>> quoting for special characters. I use `jq` on the command line to \n>>> validate and format the output.\n>>\n>> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \n>> statement. If the output is wrong (just for for this use case?), that \n>> would be a missing feature (or possibly a bug?).\n>>\n>> Davin -- how did you work around the issue with the way the built in \n>> functions output JSON?\n>>\n>> Andrew -- comments/thoughts?\n> \n> I meant to mention this when I was making comments yesterday.\n> \n> The patch should not be using CopyAttributeOutText - it will try to\n> escape characters such as \\, which produces the effect complained of\n> here, or else we need to change its setup so we have a way to inhibit\n> that escaping.\n\n\nInteresting.\n\nI am surprised this has never been raised as a problem with COPY TO before.\n\nShould the JSON output, as produced by composite_to_json(), be sent \nas-is with no escaping at all? If yes, is JSON somehow unique in this \nregard?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 4 Dec 2023 08:37:23 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-04 Mo 08:37, Joe Conway wrote:\n> On 12/4/23 07:41, Andrew Dunstan wrote:\n>>\n>> On 2023-12-03 Su 20:14, Joe Conway wrote:\n>>> (please don't top quote on the Postgres lists)\n>>>\n>>> On 12/3/23 17:38, Davin Shearer wrote:\n>>>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has \n>>>> been my whole problem with COPY TO for JSON.\n>>>>\n>>>> Please validate that the output is in proper format with correct \n>>>> quoting for special characters. I use `jq` on the command line to \n>>>> validate and format the output.\n>>>\n>>> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \n>>> statement. 
If the output is wrong (just for for this use case?), \n>>> that would be a missing feature (or possibly a bug?).\n>>>\n>>> Davin -- how did you work around the issue with the way the built in \n>>> functions output JSON?\n>>>\n>>> Andrew -- comments/thoughts?\n>>\n>> I meant to mention this when I was making comments yesterday.\n>>\n>> The patch should not be using CopyAttributeOutText - it will try to\n>> escape characters such as \\, which produces the effect complained of\n>> here, or else we need to change its setup so we have a way to inhibit\n>> that escaping.\n>\n>\n> Interesting.\n>\n> I am surprised this has never been raised as a problem with COPY TO \n> before.\n>\n> Should the JSON output, as produced by composite_to_json(), be sent \n> as-is with no escaping at all? If yes, is JSON somehow unique in this \n> regard?\n\n\nText mode output is in such a form that it can be read back in using \ntext mode input. There's nothing special about JSON in this respect - \nany text field will be escaped too. But output suitable for text mode \ninput is not what you're trying to produce here; you're trying to \nproduce valid JSON.\n\nSo, yes, the result of composite_to_json, which is already suitably \nescaped, should not be further escaped in this case.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 4 Dec 2023 09:25:04 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/4/23 09:25, Andrew Dunstan wrote:\n> \n> On 2023-12-04 Mo 08:37, Joe Conway wrote:\n>> On 12/4/23 07:41, Andrew Dunstan wrote:\n>>>\n>>> On 2023-12-03 Su 20:14, Joe Conway wrote:\n>>>> (please don't top quote on the Postgres lists)\n>>>>\n>>>> On 12/3/23 17:38, Davin Shearer wrote:\n>>>>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has \n>>>>> been my whole problem with COPY TO for JSON.\n>>>>>\n>>>>> Please validate that the output is in proper format with correct \n>>>>> quoting for special characters. I use `jq` on the command line to \n>>>>> validate and format the output.\n>>>>\n>>>> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \n>>>> statement. If the output is wrong (just for for this use case?), \n>>>> that would be a missing feature (or possibly a bug?).\n>>>>\n>>>> Davin -- how did you work around the issue with the way the built in \n>>>> functions output JSON?\n>>>>\n>>>> Andrew -- comments/thoughts?\n>>>\n>>> I meant to mention this when I was making comments yesterday.\n>>>\n>>> The patch should not be using CopyAttributeOutText - it will try to\n>>> escape characters such as \\, which produces the effect complained of\n>>> here, or else we need to change its setup so we have a way to inhibit\n>>> that escaping.\n>>\n>>\n>> Interesting.\n>>\n>> I am surprised this has never been raised as a problem with COPY TO \n>> before.\n>>\n>> Should the JSON output, as produced by composite_to_json(), be sent \n>> as-is with no escaping at all? If yes, is JSON somehow unique in this \n>> regard?\n> \n> \n> Text mode output is in such a form that it can be read back in using\n> text mode input. There's nothing special about JSON in this respect -\n> any text field will be escaped too. 
But output suitable for text mode\n> input is not what you're trying to produce here; you're trying to\n> produce valid JSON.\n> \n> So, yes, the result of composite_to_json, which is already suitably\n> escaped, should not be further escaped in this case.\n\nGotcha.\n\nThis patch version uses CopySendData() instead and includes \ndocumentation changes. Still lacks regression tests.\n\nHopefully this looks better. Any other particular strings I ought to \ntest with?\n\n8<------------------\ntest=# copy (select * from foo limit 4) to stdout (format json, \nforce_array true);\n[\n {\"id\":1,\"f1\":\"line with \\\" in it: \n1\",\"f2\":\"2023-12-03T12:26:41.596053-05:00\"}\n,{\"id\":2,\"f1\":\"line with ' in it: \n2\",\"f2\":\"2023-12-03T12:26:41.596173-05:00\"}\n,{\"id\":3,\"f1\":\"line with \\\" in it: \n3\",\"f2\":\"2023-12-03T12:26:41.596179-05:00\"}\n,{\"id\":4,\"f1\":\"line with ' in it: \n4\",\"f2\":\"2023-12-03T12:26:41.596182-05:00\"}\n]\n8<------------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 10:45:58 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Looking great!\n\nFor testing, in addition to the quotes, include DOS and Unix EOL, \\ and /,\nByte Order Markers, and mulitbyte characters like UTF-8.\n\nEssentially anything considered textural is fair game to be a value.\n\nOn Mon, Dec 4, 2023, 10:46 Joe Conway <[email protected]> wrote:\n\n> On 12/4/23 09:25, Andrew Dunstan wrote:\n> >\n> > On 2023-12-04 Mo 08:37, Joe Conway wrote:\n> >> On 12/4/23 07:41, Andrew Dunstan wrote:\n> >>>\n> >>> On 2023-12-03 Su 20:14, Joe Conway wrote:\n> >>>> (please don't top quote on the Postgres lists)\n> >>>>\n> >>>> On 12/3/23 17:38, Davin Shearer wrote:\n> >>>>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\". This has\n> >>>>> been my whole problem with COPY TO for JSON.\n> >>>>>\n> >>>>> Please validate that the output is in proper format with correct\n> >>>>> quoting for special characters. I use `jq` on the command line to\n> >>>>> validate and format the output.\n> >>>>\n> >>>> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\"\n> >>>> statement. If the output is wrong (just for for this use case?),\n> >>>> that would be a missing feature (or possibly a bug?).\n> >>>>\n> >>>> Davin -- how did you work around the issue with the way the built in\n> >>>> functions output JSON?\n> >>>>\n> >>>> Andrew -- comments/thoughts?\n> >>>\n> >>> I meant to mention this when I was making comments yesterday.\n> >>>\n> >>> The patch should not be using CopyAttributeOutText - it will try to\n> >>> escape characters such as \\, which produces the effect complained of\n> >>> here, or else we need to change its setup so we have a way to inhibit\n> >>> that escaping.\n> >>\n> >>\n> >> Interesting.\n> >>\n> >> I am surprised this has never been raised as a problem with COPY TO\n> >> before.\n> >>\n> >> Should the JSON output, as produced by composite_to_json(), be sent\n> >> as-is with no escaping at all? If yes, is JSON somehow unique in this\n> >> regard?\n> >\n> >\n> > Text mode output is in such a form that it can be read back in using\n> > text mode input. There's nothing special about JSON in this respect -\n> > any text field will be escaped too. 
But output suitable for text mode\n> > input is not what you're trying to produce here; you're trying to\n> > produce valid JSON.\n> >\n> > So, yes, the result of composite_to_json, which is already suitably\n> > escaped, should not be further escaped in this case.\n>\n> Gotcha.\n>\n> This patch version uses CopySendData() instead and includes\n> documentation changes. Still lacks regression tests.\n>\n> Hopefully this looks better. Any other particular strings I ought to\n> test with?\n>\n> 8<------------------\n> test=# copy (select * from foo limit 4) to stdout (format json,\n> force_array true);\n> [\n> {\"id\":1,\"f1\":\"line with \\\" in it:\n> 1\",\"f2\":\"2023-12-03T12:26:41.596053-05:00\"}\n> ,{\"id\":2,\"f1\":\"line with ' in it:\n> 2\",\"f2\":\"2023-12-03T12:26:41.596173-05:00\"}\n> ,{\"id\":3,\"f1\":\"line with \\\" in it:\n> 3\",\"f2\":\"2023-12-03T12:26:41.596179-05:00\"}\n> ,{\"id\":4,\"f1\":\"line with ' in it:\n> 4\",\"f2\":\"2023-12-03T12:26:41.596182-05:00\"}\n> ]\n> 8<------------------\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nLooking great!For testing, in addition to the quotes, include DOS and Unix EOL, \\ and /, Byte Order Markers, and mulitbyte characters like UTF-8.Essentially anything considered textural is fair game to be a value. On Mon, Dec 4, 2023, 10:46 Joe Conway <[email protected]> wrote:On 12/4/23 09:25, Andrew Dunstan wrote:\n> \n> On 2023-12-04 Mo 08:37, Joe Conway wrote:\n>> On 12/4/23 07:41, Andrew Dunstan wrote:\n>>>\n>>> On 2023-12-03 Su 20:14, Joe Conway wrote:\n>>>> (please don't top quote on the Postgres lists)\n>>>>\n>>>> On 12/3/23 17:38, Davin Shearer wrote:\n>>>>> \" being quoted as \\\\\" breaks the JSON. It needs to be \\\".  This has \n>>>>> been my whole problem with COPY TO for JSON.\n>>>>>\n>>>>> Please validate that the output is in proper format with correct \n>>>>> quoting for special characters. I use `jq` on the command line to \n>>>>> validate and format the output.\n>>>>\n>>>> I just hooked existing \"row-to-json machinery\" up to the \"COPY TO\" \n>>>> statement. If the output is wrong (just for for this use case?), \n>>>> that would be a missing feature (or possibly a bug?).\n>>>>\n>>>> Davin -- how did you work around the issue with the way the built in \n>>>> functions output JSON?\n>>>>\n>>>> Andrew -- comments/thoughts?\n>>>\n>>> I meant to mention this when I was making comments yesterday.\n>>>\n>>> The patch should not be using CopyAttributeOutText - it will try to\n>>> escape characters such as \\, which produces the effect complained of\n>>> here, or else we need to change its setup so we have a way to inhibit\n>>> that escaping.\n>>\n>>\n>> Interesting.\n>>\n>> I am surprised this has never been raised as a problem with COPY TO \n>> before.\n>>\n>> Should the JSON output, as produced by composite_to_json(), be sent \n>> as-is with no escaping at all? If yes, is JSON somehow unique in this \n>> regard?\n> \n> \n> Text mode output is in such a form that it can be read back in using\n> text mode input. There's nothing special about JSON in this respect -\n> any text field will be escaped too. 
But output suitable for text mode\n> input is not what you're trying to produce here; you're trying to\n> produce valid JSON.\n> \n> So, yes, the result of composite_to_json, which is already suitably\n> escaped, should not be further escaped in this case.\n\nGotcha.\n\nThis patch version uses CopySendData() instead and includes \ndocumentation changes. Still lacks regression tests.\n\nHopefully this looks better. Any other particular strings I ought to \ntest with?\n\n8<------------------\ntest=# copy (select * from foo limit 4) to stdout (format json, \nforce_array true);\n[\n  {\"id\":1,\"f1\":\"line with \\\" in it: \n1\",\"f2\":\"2023-12-03T12:26:41.596053-05:00\"}\n,{\"id\":2,\"f1\":\"line with ' in it: \n2\",\"f2\":\"2023-12-03T12:26:41.596173-05:00\"}\n,{\"id\":3,\"f1\":\"line with \\\" in it: \n3\",\"f2\":\"2023-12-03T12:26:41.596179-05:00\"}\n,{\"id\":4,\"f1\":\"line with ' in it: \n4\",\"f2\":\"2023-12-03T12:26:41.596182-05:00\"}\n]\n8<------------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 13:37:06 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 2023-12-04 Mo 13:37, Davin Shearer wrote:\n> Looking great!\n>\n> For testing, in addition to the quotes, include DOS and Unix EOL, \\ \n> and /, Byte Order Markers, and mulitbyte characters like UTF-8.\n>\n> Essentially anything considered textural is fair game to be a value.\n\n\nJoe already asked you to avoid top-posting on PostgreSQL lists. See \n<http://idallen.com/topposting.html> \n<http://idallen.com/topposting.html>> for an explanation.\n\nWe don't process BOMs elsewhere, and probably should not here either. \nThey are in fact neither required nor recommended for use with UTF8 \ndata, AIUI. See a recent discussion on this list on that topic: \n<https://www.postgresql.org/message-id/flat/81ca2b25-6b3a-499a-9a09-2dd21253c2cb%40unitrunker.net>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-12-04 Mo 13:37, Davin Shearer\n wrote:\n\n\n\nLooking great!\n \n\nFor testing, in addition to the quotes, include\n DOS and Unix EOL, \\ and /, Byte Order Markers, and mulitbyte\n characters like UTF-8.\n\n\nEssentially anything considered textural is fair\n game to be a value. \n\n\n\n\n\nJoe already asked you to avoid top-posting on PostgreSQL lists.\n See <http://idallen.com/topposting.html>\n for an explanation.\n\nWe don't process BOMs elsewhere, and probably should not here\n either. They are in fact neither required nor recommended for use\n with UTF8 data, AIUI. See a recent discussion on this list on that\n topic:\n<https://www.postgresql.org/message-id/flat/81ca2b25-6b3a-499a-9a09-2dd21253c2cb%40unitrunker.net>\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 4 Dec 2023 15:06:31 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Sorry about the top posting / top quoting... the link you sent me gives me\na 404. I'm not exactly sure what top quoting / posting means and Googling\nthose terms wasn't helpful for me, but I've removed the quoting that my\nmail client is automatically \"helpfully\" adding to my emails. 
I mean no\noffense.\n\nOkay, digging in more...\n\nIf the value contains text that has BOMs [footnote 1] in it, it must be\npreserved (the database doesn't need to interpret them or do anything\nspecial with them - just store it and fetch it). There are however a few\ncharacters that need to be escaped (per\nhttps://www.w3docs.com/snippets/java/how-should-i-escape-strings-in-json.html)\nso that the JSON format isn't broken. They are:\n\n\n 1. \" (double quote)\n 2. \\ (backslash)\n 3. / (forward slash)\n 4. \\b (backspace)\n 5. \\f (form feed)\n 6. \\n (new line)\n 7. \\r (carriage return)\n 8. \\t (horizontal tab)\n\nThese characters should be represented in the test cases to see how the\nescaping behaves and to ensure that the escaping is done properly per JSON\nrequirements. Forward slash comes as a bit of a surprise to me, but `jq`\nhandles it either way:\n\n➜ echo '{\"key\": \"this / is a forward slash\"}' | jq .\n{\n \"key\": \"this / is a forward slash\"\n}\n➜ echo '{\"key\": \"this \\/ is a forward slash\"}' | jq .\n{\n \"key\": \"this / is a forward slash\"\n}\n\nHope it helps, and thank you!\n\n1. I don't disagree that BOMs shouldn't be used for UTF-8, but I'm also\nprocessing UTF-16{BE,LE} and UTF-32{BE,LE} (as well as other textural\nformats that are neither ASCII or Unicode). I don't have the luxury of\nchanging the data that is given.\n\nSorry about the top posting / top quoting... the link you sent me gives me a 404.  I'm not exactly sure what top quoting / posting means and Googling those terms wasn't helpful for me, but I've removed the quoting that my mail client is automatically \"helpfully\" adding to my emails.  I mean no offense.Okay, digging in more...If the value contains text that has BOMs [footnote 1] in it, it must be preserved (the database doesn't need to interpret them or do anything special with them - just store it and fetch it).  There are however a few characters that need to be escaped (per https://www.w3docs.com/snippets/java/how-should-i-escape-strings-in-json.html) so that the JSON format isn't broken.  They are:\" (double quote)\\ (backslash)/ (forward slash)\\b (backspace)\\f (form feed)\\n (new line)\\r (carriage return)\\t (horizontal tab)These characters should be represented in the test cases to see how the escaping behaves and to ensure that the escaping is done properly per JSON requirements.  Forward slash comes as a bit of a surprise to me, but `jq` handles it either way:➜ echo '{\"key\": \"this / is a forward slash\"}' | jq .{  \"key\": \"this / is a forward slash\"}➜ echo '{\"key\": \"this \\/ is a forward slash\"}' | jq .{  \"key\": \"this / is a forward slash\"}Hope it helps, and thank you!1. I don't disagree that BOMs shouldn't be used for UTF-8, but I'm also processing UTF-16{BE,LE} and UTF-32{BE,LE} (as well as other textural formats that are neither ASCII or Unicode).  I don't have the luxury of changing the data that is given.", "msg_date": "Mon, 4 Dec 2023 17:55:00 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/4/23 17:55, Davin Shearer wrote:\n> Sorry about the top posting / top quoting... the link you sent me gives \n> me a 404.  I'm not exactly sure what top quoting / posting means and \n> Googling those terms wasn't helpful for me, but I've removed the quoting \n> that my mail client is automatically \"helpfully\" adding to my emails.  I \n> mean no offense.\n\nNo offense taken. 
But it is worthwhile to conform to the very long \nestablished norms of the mailing lists on which you participate. See:\n\n https://en.wikipedia.org/wiki/Posting_style\n\nI would describe the Postgres list style (based on that link) as\n\n \"inline replying, in which the different parts of the reply follow\n the relevant parts of the original post...[with]...trimming of the\n original text\"\n\n> There are however a few characters that need to be escaped\n\n> 1. |\"|(double quote)\n> 2. |\\|(backslash)\n> 3. |/|(forward slash)\n> 4. |\\b|(backspace)\n> 5. |\\f|(form feed)\n> 6. |\\n|(new line)\n> 7. |\\r|(carriage return)\n> 8. |\\t|(horizontal tab)\n> \n> These characters should be represented in the test cases to see how the \n> escaping behaves and to ensure that the escaping is done properly per \n> JSON requirements.\n\nI can look at adding these as test cases. The latest version of the \npatch (attached) includes some of that already. For reference, the tests \nso far include this:\n\n8<-------------------------------\ntest=# select * from copytest;\n style | test | filler\n---------+----------+--------\n DOS | abc\\r +| 1\n | def |\n Unix | abc +| 2\n | def |\n Mac | abc\\rdef | 3\n esc\\ape | a\\r\\\\r\\ +| 4\n | \\nb |\n(4 rows)\n\ntest=# copy copytest to stdout (format json);\n{\"style\":\"DOS\",\"test\":\"abc\\r\\ndef\",\"filler\":1}\n{\"style\":\"Unix\",\"test\":\"abc\\ndef\",\"filler\":2}\n{\"style\":\"Mac\",\"test\":\"abc\\rdef\",\"filler\":3}\n{\"style\":\"esc\\\\ape\",\"test\":\"a\\\\r\\\\\\r\\\\\\n\\\\nb\",\"filler\":4}\n8<-------------------------------\n\nAt this point \"COPY TO\" should be sending exactly the unaltered output \nof the postgres JSON processing functions.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 21:54:50 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-04 Mo 17:55, Davin Shearer wrote:\n> Sorry about the top posting / top quoting... the link you sent me \n> gives me a 404.  I'm not exactly sure what top quoting / posting means \n> and Googling those terms wasn't helpful for me, but I've removed the \n> quoting that my mail client is automatically \"helpfully\" adding to my \n> emails.  I mean no offense.\n\n\nHmm. Luckily the Wayback Machine has a copy: \n<http://web.archive.org/web/20230608210806/idallen.com/topposting.html>\n\nMaybe I'll put a copy in the developer wiki.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 09:56:03 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/4/23 21:54, Joe Conway wrote:\n> On 12/4/23 17:55, Davin Shearer wrote:\n>> There are however a few characters that need to be escaped\n> \n>> 1. |\"|(double quote)\n>> 2. |\\|(backslash)\n>> 3. |/|(forward slash)\n>> 4. |\\b|(backspace)\n>> 5. |\\f|(form feed)\n>> 6. |\\n|(new line)\n>> 7. |\\r|(carriage return)\n>> 8. 
|\\t|(horizontal tab)\n>> \n>> These characters should be represented in the test cases to see how the \n>> escaping behaves and to ensure that the escaping is done properly per \n>> JSON requirements.\n> \n> I can look at adding these as test cases.\nSo I did a quick check:\n8<--------------------------\nwith t(f1) as\n(\n values\n (E'aaa\\\"bbb'::text),\n (E'aaa\\\\bbb'::text),\n (E'aaa\\/bbb'::text),\n (E'aaa\\bbbb'::text),\n (E'aaa\\fbbb'::text),\n (E'aaa\\nbbb'::text),\n (E'aaa\\rbbb'::text),\n (E'aaa\\tbbb'::text)\n)\nselect\n length(t.f1),\n t.f1,\n row_to_json(t)\nfrom t;\n length | f1 | row_to_json\n--------+-------------+-------------------\n 7 | aaa\"bbb | {\"f1\":\"aaa\\\"bbb\"}\n 7 | aaa\\bbb | {\"f1\":\"aaa\\\\bbb\"}\n 7 | aaa/bbb | {\"f1\":\"aaa/bbb\"}\n 7 | aaa\\x08bbb | {\"f1\":\"aaa\\bbbb\"}\n 7 | aaa\\x0Cbbb | {\"f1\":\"aaa\\fbbb\"}\n 7 | aaa +| {\"f1\":\"aaa\\nbbb\"}\n | bbb |\n 7 | aaa\\rbbb | {\"f1\":\"aaa\\rbbb\"}\n 7 | aaa bbb | {\"f1\":\"aaa\\tbbb\"}\n(8 rows)\n\n8<--------------------------\n\nThis is all independent of my patch for COPY TO. If I am reading that \ncorrectly, everything matches Davin's table *except* the forward slash \n(\"/\"). I defer to the experts on the thread to debate that...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 11:54:42 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Thanks for the wayback machine link Andrew. I read it, understood it, and\nwill comply.\n\nJoe, those test cases look great and the outputs are the same as `jq`.\n\nAs for forward slashes being escaped, I found this:\nhttps://stackoverflow.com/questions/1580647/json-why-are-forward-slashes-escaped\n.\n\nForward slash escaping is optional, so not escaping them in Postgres is\nokay. The important thing is that the software _reading_ JSON interprets\nboth '\\/' and '/' as '/'.\n\nThanks for the wayback machine link Andrew.  I read it, understood it, and will comply.Joe, those test cases look great and the outputs are the same as `jq`.As for forward slashes being escaped, I found this: https://stackoverflow.com/questions/1580647/json-why-are-forward-slashes-escaped.Forward slash escaping is optional, so not escaping them in Postgres is okay.   The important thing is that the software _reading_ JSON interprets both '\\/' and '/' as '/'.", "msg_date": "Tue, 5 Dec 2023 12:43:31 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/5/23 12:43, Davin Shearer wrote:\n> Joe, those test cases look great and the outputs are the same as `jq`.\n\n<link to info regarding escaping of forward slashes>\n\n> Forward slash escaping is optional, so not escaping them in Postgres is \n> okay. The important thing is that the software _reading_ JSON \n> interprets both '\\/' and '/' as '/'.\n\nThanks for the review and info. 
I modified the existing regression test \nthus:\n\n8<--------------------------\ncreate temp table copyjsontest (\n id bigserial,\n f1 text,\n f2 timestamptz);\n\ninsert into copyjsontest\n select g.i,\n CASE WHEN g.i % 2 = 0 THEN\n 'line with '' in it: ' || g.i::text\n ELSE\n 'line with \" in it: ' || g.i::text\n END,\n 'Mon Feb 10 17:32:01 1997 PST'\n from generate_series(1,5) as g(i);\n\ninsert into copyjsontest (f1) values\n(E'aaa\\\"bbb'::text),\n(E'aaa\\\\bbb'::text),\n(E'aaa\\/bbb'::text),\n(E'aaa\\bbbb'::text),\n(E'aaa\\fbbb'::text),\n(E'aaa\\nbbb'::text),\n(E'aaa\\rbbb'::text),\n(E'aaa\\tbbb'::text);\ncopy copyjsontest to stdout json;\n{\"id\":1,\"f1\":\"line with \\\" in it: 1\",\"f2\":\"1997-02-10T20:32:01-05:00\"}\n{\"id\":2,\"f1\":\"line with ' in it: 2\",\"f2\":\"1997-02-10T20:32:01-05:00\"}\n{\"id\":3,\"f1\":\"line with \\\" in it: 3\",\"f2\":\"1997-02-10T20:32:01-05:00\"}\n{\"id\":4,\"f1\":\"line with ' in it: 4\",\"f2\":\"1997-02-10T20:32:01-05:00\"}\n{\"id\":5,\"f1\":\"line with \\\" in it: 5\",\"f2\":\"1997-02-10T20:32:01-05:00\"}\n{\"id\":1,\"f1\":\"aaa\\\"bbb\",\"f2\":null}\n{\"id\":2,\"f1\":\"aaa\\\\bbb\",\"f2\":null}\n{\"id\":3,\"f1\":\"aaa/bbb\",\"f2\":null}\n{\"id\":4,\"f1\":\"aaa\\bbbb\",\"f2\":null}\n{\"id\":5,\"f1\":\"aaa\\fbbb\",\"f2\":null}\n{\"id\":6,\"f1\":\"aaa\\nbbb\",\"f2\":null}\n{\"id\":7,\"f1\":\"aaa\\rbbb\",\"f2\":null}\n{\"id\":8,\"f1\":\"aaa\\tbbb\",\"f2\":null}\n8<--------------------------\n\nI think the code, documentation, and tests are in pretty good shape at \nthis point. Latest version attached.\n\nAny other comments or complaints out there?\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 5 Dec 2023 13:51:22 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hi Joe,\n\nIn reviewing the 005 patch, I think that when used with FORCE ARRAY, we\nshould also _imply_ FORCE ROW DELIMITER. I can't envision a use case where\nsomeone would want to use FORCE ARRAY without also using FORCE ROW\nDELIMITER. I can, however, envision a use case where someone would want\nFORCE ROW DELIMITER without FORCE ARRAY, like maybe including into a larger\narray. I definitely appreciate these options and the flexibility that they\nafford from a user perspective.\n\nIn the test output, will you also show the different variations with FORCE\nARRAY and FORCE ROW DELIMITER => {(false, false), (true, false), (false,\ntrue), (true, true)}? Technically you've already shown me the (false,\nfalse) case as those are the defaults.\n\nThanks!\n\nHi Joe,In reviewing the 005 patch, I think that when used with FORCE ARRAY, we should also _imply_ FORCE ROW DELIMITER.  I can't envision a use case where someone would want to use FORCE ARRAY without also using FORCE ROW DELIMITER.  I can, however, envision a use case where someone would want FORCE ROW DELIMITER without FORCE ARRAY, like maybe including into a larger array.  I definitely appreciate these options and the flexibility that they afford from a user perspective.In the test output, will you also show the different variations with FORCE ARRAY and FORCE ROW DELIMITER => {(false, false), (true, false), (false, true), (true, true)}?  
Technically you've already shown me the (false, false) case as those are the defaults.Thanks!", "msg_date": "Tue, 5 Dec 2023 14:50:23 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-05 Tu 14:50, Davin Shearer wrote:\n> Hi Joe,\n>\n> In reviewing the 005 patch, I think that when used with FORCE ARRAY, \n> we should also _imply_ FORCE ROW DELIMITER.  I can't envision a use \n> case where someone would want to use FORCE ARRAY without also using \n> FORCE ROW DELIMITER.  I can, however, envision a use case where \n> someone would want FORCE ROW DELIMITER without FORCE ARRAY, like maybe \n> including into a larger array.  I definitely appreciate these options \n> and the flexibility that they afford from a user perspective.\n>\n> In the test output, will you also show the different variations with \n> FORCE ARRAY and FORCE ROW DELIMITER => {(false, false), (true, false), \n> (false, true), (true, true)}?  Technically you've already shown me the \n> (false, false) case as those are the defaults.\n>\n>\n\nI don't understand the point of FORCE_ROW_DELIMITER at all. There is \nonly one legal delimiter of array items in JSON, and that's a comma. \nThere's no alternative and it's not optional. So in the array case you \nMUST have commas and in any other case (e.g. LINES) I can't see why you \nwould have them.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 15:55:53 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/5/23 15:55, Andrew Dunstan wrote:\n> \n> On 2023-12-05 Tu 14:50, Davin Shearer wrote:\n>> Hi Joe,\n>>\n>> In reviewing the 005 patch, I think that when used with FORCE ARRAY, \n>> we should also _imply_ FORCE ROW DELIMITER.  I can't envision a use \n>> case where someone would want to use FORCE ARRAY without also using \n>> FORCE ROW DELIMITER.  I can, however, envision a use case where \n>> someone would want FORCE ROW DELIMITER without FORCE ARRAY, like maybe \n>> including into a larger array.  I definitely appreciate these options \n>> and the flexibility that they afford from a user perspective.\n>>\n>> In the test output, will you also show the different variations with \n>> FORCE ARRAY and FORCE ROW DELIMITER => {(false, false), (true, false), \n>> (false, true), (true, true)}?  Technically you've already shown me the \n>> (false, false) case as those are the defaults.\n>>\n>>\n> \n> I don't understand the point of FORCE_ROW_DELIMITER at all. There is\n> only one legal delimiter of array items in JSON, and that's a comma.\n> There's no alternative and it's not optional. So in the array case you\n> MUST have commas and in any other case (e.g. LINES) I can't see why you\n> would have them.\n\nThe current patch already *does* imply row delimiters in the array case. \nIt says so here:\n8<---------------------------\n+ <varlistentry>\n+ <term><literal>FORCE_ARRAY</literal></term>\n+ <listitem>\n+ <para>\n+ Force output of array decorations at the beginning and end of \noutput.\n+ This option implies the <literal>FORCE_ROW_DELIMITER</literal>\n+ option. 
It is allowed only in <command>COPY TO</command>, and only\n+ when using <literal>JSON</literal> format.\n+ The default is <literal>false</literal>.\n+ </para>\n8<---------------------------\n\nand it does so here:\n8<---------------------------\n+ if (opts_out->force_array)\n+ opts_out->force_row_delimiter = true;\n8<---------------------------\n\nand it shows that here:\n8<---------------------------\n+ copy copytest to stdout (format json, force_array);\n+ [\n+ {\"style\":\"DOS\",\"test\":\"abc\\r\\ndef\",\"filler\":1}\n+ ,{\"style\":\"Unix\",\"test\":\"abc\\ndef\",\"filler\":2}\n+ ,{\"style\":\"Mac\",\"test\":\"abc\\rdef\",\"filler\":3}\n+ ,{\"style\":\"esc\\\\ape\",\"test\":\"a\\\\r\\\\\\r\\\\\\n\\\\nb\",\"filler\":4}\n+ ]\n8<---------------------------\n\nIt also does not allow explicitly setting row delimiters false while \nforce_array is true here:\n8<---------------------------\n\n+ \t\tif (opts_out->force_array &&\n+ \t\t\tforce_row_delimiter_specified &&\n+ \t\t\t!opts_out->force_row_delimiter)\n+ \t\t\tereport(ERROR,\n+ \t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ \t\t\t\t\t errmsg(\"cannot specify FORCE_ROW_DELIMITER false with \nFORCE_ARRAY true\")));\n8<---------------------------\n\nAm I understanding something incorrectly?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:02:06 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/5/23 16:02, Joe Conway wrote:\n> On 12/5/23 15:55, Andrew Dunstan wrote:\n>> and in any other case (e.g. LINES) I can't see why you\n>> would have them.\n\nOh I didn't address this -- I saw examples in the interwebs of MSSQL \nserver I think [1] which had the non-array with commas import and export \nstyle. It was not that tough to support and the code as written already \ndoes it, so why not?\n\n[1] \nhttps://learn.microsoft.com/en-us/sql/relational-databases/json/remove-square-brackets-from-json-without-array-wrapper-option?view=sql-server-ver16#example-multiple-row-result\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:09:13 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-05 Tu 16:02, Joe Conway wrote:\n> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>\n>> On 2023-12-05 Tu 14:50, Davin Shearer wrote:\n>>> Hi Joe,\n>>>\n>>> In reviewing the 005 patch, I think that when used with FORCE ARRAY, \n>>> we should also _imply_ FORCE ROW DELIMITER.  I can't envision a use \n>>> case where someone would want to use FORCE ARRAY without also using \n>>> FORCE ROW DELIMITER.  I can, however, envision a use case where \n>>> someone would want FORCE ROW DELIMITER without FORCE ARRAY, like \n>>> maybe including into a larger array.  I definitely appreciate these \n>>> options and the flexibility that they afford from a user perspective.\n>>>\n>>> In the test output, will you also show the different variations with \n>>> FORCE ARRAY and FORCE ROW DELIMITER => {(false, false), (true, \n>>> false), (false, true), (true, true)}? Technically you've already \n>>> shown me the (false, false) case as those are the defaults.\n>>>\n>>>\n>>\n>> I don't understand the point of FORCE_ROW_DELIMITER at all. 
There is\n>> only one legal delimiter of array items in JSON, and that's a comma.\n>> There's no alternative and it's not optional. So in the array case you\n>> MUST have commas and in any other case (e.g. LINES) I can't see why you\n>> would have them.\n>\n> The current patch already *does* imply row delimiters in the array \n> case. It says so here:\n> 8<---------------------------\n> +    <varlistentry>\n> + <term><literal>FORCE_ARRAY</literal></term>\n> +     <listitem>\n> +      <para>\n> +       Force output of array decorations at the beginning and end of \n> output.\n> +       This option implies the <literal>FORCE_ROW_DELIMITER</literal>\n> +       option. It is allowed only in <command>COPY TO</command>, and \n> only\n> +       when using <literal>JSON</literal> format.\n> +       The default is <literal>false</literal>.\n> +      </para>\n> 8<---------------------------\n>\n> and it does so here:\n> 8<---------------------------\n> +         if (opts_out->force_array)\n> +             opts_out->force_row_delimiter = true;\n> 8<---------------------------\n>\n> and it shows that here:\n> 8<---------------------------\n> + copy copytest to stdout (format json, force_array);\n> + [\n> +  {\"style\":\"DOS\",\"test\":\"abc\\r\\ndef\",\"filler\":1}\n> + ,{\"style\":\"Unix\",\"test\":\"abc\\ndef\",\"filler\":2}\n> + ,{\"style\":\"Mac\",\"test\":\"abc\\rdef\",\"filler\":3}\n> + ,{\"style\":\"esc\\\\ape\",\"test\":\"a\\\\r\\\\\\r\\\\\\n\\\\nb\",\"filler\":4}\n> + ]\n> 8<---------------------------\n>\n> It also does not allow explicitly setting row delimiters false while \n> force_array is true here:\n> 8<---------------------------\n>\n> +         if (opts_out->force_array &&\n> +             force_row_delimiter_specified &&\n> +             !opts_out->force_row_delimiter)\n> +             ereport(ERROR,\n> +                     (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +                      errmsg(\"cannot specify FORCE_ROW_DELIMITER \n> false with FORCE_ARRAY true\")));\n> 8<---------------------------\n>\n> Am I understanding something incorrectly?\n\n\nBut what's the point of having it if you're not using FORCE_ARRAY?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:12:00 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/5/23 16:12, Andrew Dunstan wrote:\n> \n> On 2023-12-05 Tu 16:02, Joe Conway wrote:\n>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>\n>>> On 2023-12-05 Tu 14:50, Davin Shearer wrote:\n>>>> Hi Joe,\n>>>>\n>>>> In reviewing the 005 patch, I think that when used with FORCE ARRAY, \n>>>> we should also _imply_ FORCE ROW DELIMITER.  I can't envision a use \n>>>> case where someone would want to use FORCE ARRAY without also using \n>>>> FORCE ROW DELIMITER.  I can, however, envision a use case where \n>>>> someone would want FORCE ROW DELIMITER without FORCE ARRAY, like \n>>>> maybe including into a larger array.  I definitely appreciate these \n>>>> options and the flexibility that they afford from a user perspective.\n>>>>\n>>>> In the test output, will you also show the different variations with \n>>>> FORCE ARRAY and FORCE ROW DELIMITER => {(false, false), (true, \n>>>> false), (false, true), (true, true)}? Technically you've already \n>>>> shown me the (false, false) case as those are the defaults.\n>>>>\n>>>>\n>>>\n>>> I don't understand the point of FORCE_ROW_DELIMITER at all. 
There is\n>>> only one legal delimiter of array items in JSON, and that's a comma.\n>>> There's no alternative and it's not optional. So in the array case you\n>>> MUST have commas and in any other case (e.g. LINES) I can't see why you\n>>> would have them.\n>>\n>> The current patch already *does* imply row delimiters in the array \n>> case. It says so here:\n>> 8<---------------------------\n>> +    <varlistentry>\n>> + <term><literal>FORCE_ARRAY</literal></term>\n>> +     <listitem>\n>> +      <para>\n>> +       Force output of array decorations at the beginning and end of \n>> output.\n>> +       This option implies the <literal>FORCE_ROW_DELIMITER</literal>\n>> +       option. It is allowed only in <command>COPY TO</command>, and \n>> only\n>> +       when using <literal>JSON</literal> format.\n>> +       The default is <literal>false</literal>.\n>> +      </para>\n>> 8<---------------------------\n>>\n>> and it does so here:\n>> 8<---------------------------\n>> +         if (opts_out->force_array)\n>> +             opts_out->force_row_delimiter = true;\n>> 8<---------------------------\n>>\n>> and it shows that here:\n>> 8<---------------------------\n>> + copy copytest to stdout (format json, force_array);\n>> + [\n>> +  {\"style\":\"DOS\",\"test\":\"abc\\r\\ndef\",\"filler\":1}\n>> + ,{\"style\":\"Unix\",\"test\":\"abc\\ndef\",\"filler\":2}\n>> + ,{\"style\":\"Mac\",\"test\":\"abc\\rdef\",\"filler\":3}\n>> + ,{\"style\":\"esc\\\\ape\",\"test\":\"a\\\\r\\\\\\r\\\\\\n\\\\nb\",\"filler\":4}\n>> + ]\n>> 8<---------------------------\n>>\n>> It also does not allow explicitly setting row delimiters false while \n>> force_array is true here:\n>> 8<---------------------------\n>>\n>> +         if (opts_out->force_array &&\n>> +             force_row_delimiter_specified &&\n>> +             !opts_out->force_row_delimiter)\n>> +             ereport(ERROR,\n>> +                     (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> +                      errmsg(\"cannot specify FORCE_ROW_DELIMITER \n>> false with FORCE_ARRAY true\")));\n>> 8<---------------------------\n>>\n>> Am I understanding something incorrectly?\n> \n> \n> But what's the point of having it if you're not using FORCE_ARRAY?\n\n\nSee the follow up email -- other databases support it so why not? It \nseems to be a thing...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:15:29 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-05 Tu 16:09, Joe Conway wrote:\n> On 12/5/23 16:02, Joe Conway wrote:\n>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>> and in any other case (e.g. LINES) I can't see why you\n>>> would have them.\n>\n> Oh I didn't address this -- I saw examples in the interwebs of MSSQL \n> server I think [1] which had the non-array with commas import and \n> export style. It was not that tough to support and the code as written \n> already does it, so why not?\n>\n> [1] \n> https://learn.microsoft.com/en-us/sql/relational-databases/json/remove-square-brackets-from-json-without-array-wrapper-option?view=sql-server-ver16#example-multiple-row-result\n>\n>\n\nThat seems quite absurd, TBH. I know we've catered for some absurdity in \nthe CSV code (much of it down to me), so maybe we need to be liberal in \nwhat we accept here too. 
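For reference, if I'm reading the current options right, the shape in question would look something like this (commas between rows, but no enclosing brackets):\n\n8<---------------------------\nCOPY foo TO STDOUT (FORMAT JSON, FORCE_ROW_DELIMITER);\n-- {\"id\":1,\"f1\":\"...\"}\n-- ,{\"id\":2,\"f1\":\"...\"}\n8<---------------------------\n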
IMNSHO, we should produce either a single JSON \ndocument (the ARRAY case) or a series of JSON documents, one per row \n(the LINES case).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:20:11 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/5/23 16:20, Andrew Dunstan wrote:\n> On 2023-12-05 Tu 16:09, Joe Conway wrote:\n>> On 12/5/23 16:02, Joe Conway wrote:\n>>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>> and in any other case (e.g. LINES) I can't see why you\n>>>> would have them.\n>>\n>> Oh I didn't address this -- I saw examples in the interwebs of MSSQL \n>> server I think [1] which had the non-array with commas import and \n>> export style. It was not that tough to support and the code as written \n>> already does it, so why not?\n> \n> That seems quite absurd, TBH. I know we've catered for some absurdity in\n> the CSV code (much of it down to me), so maybe we need to be liberal in\n> what we accept here too. IMNSHO, we should produce either a single JSON\n> document (the ARRAY case) or a series of JSON documents, one per row\n> (the LINES case).\n\n\nSo your preference would be to not allow the non-array-with-commas case \nbut if/when we implement COPY FROM we would accept that format? As in \nPostel'a law (\"be conservative in what you do, be liberal in what you \naccept from others\")?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 16:46:50 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "> Am I understanding something incorrectly?\n\nNo, you've got it. You already covered the concerns there.\n\n> That seems quite absurd, TBH. I know we've catered for some absurdity in\n> the CSV code (much of it down to me), so maybe we need to be liberal in\n> what we accept here too. IMNSHO, we should produce either a single JSON\n> document (the ARRAY case) or a series of JSON documents, one per row\n> (the LINES case).\n\nFor what it's worth, I agree with Andrew on this. I also agree with COPY\nFROM allowing for potentially bogus commas at the end of non-arrays for\ninterop with other products, but to not do that in COPY TO (unless there is\nsome real compelling case to do so). Emitting bogus JSON (non-array with\ncommas) feels wrong and would be nice to not perpetuate that, if possible.\n\nThanks again for doing this. If I can be of any help, let me know.\nIf\\When this makes it into the production product, I'll be using this\nfeature for sure.\n\n-Davin\n\n> Am I understanding something incorrectly?No, you've got it.  You already covered the concerns there.> That seems quite absurd, TBH. I know we've catered for some absurdity in> the CSV code (much of it down to me), so maybe we need to be liberal in> what we accept here too. IMNSHO, we should produce either a single JSON> document (the ARRAY case) or a series of JSON documents, one per row> (the LINES case).For what it's worth, I agree with Andrew on this.  I also agree with COPY FROM allowing for potentially bogus commas at the end of non-arrays for interop with other products, but to not do that in COPY TO (unless there is some real compelling case to do so).  
Emitting bogus JSON (non-array with commas) feels wrong and would be nice to not perpetuate that, if possible.Thanks again for doing this.  If I can be of any help, let me know.  If\\When this makes it into the production product, I'll be using this feature for sure.-Davin", "msg_date": "Tue, 5 Dec 2023 18:45:24 -0500", "msg_from": "Davin Shearer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-05 Tu 16:46, Joe Conway wrote:\n> On 12/5/23 16:20, Andrew Dunstan wrote:\n>> On 2023-12-05 Tu 16:09, Joe Conway wrote:\n>>> On 12/5/23 16:02, Joe Conway wrote:\n>>>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>>> and in any other case (e.g. LINES) I can't see why you\n>>>>> would have them.\n>>>\n>>> Oh I didn't address this -- I saw examples in the interwebs of MSSQL \n>>> server I think [1] which had the non-array with commas import and \n>>> export style. It was not that tough to support and the code as \n>>> written already does it, so why not?\n>>\n>> That seems quite absurd, TBH. I know we've catered for some absurdity in\n>> the CSV code (much of it down to me), so maybe we need to be liberal in\n>> what we accept here too. IMNSHO, we should produce either a single JSON\n>> document (the ARRAY case) or a series of JSON documents, one per row\n>> (the LINES case).\n>\n>\n> So your preference would be to not allow the non-array-with-commas \n> case but if/when we implement COPY FROM we would accept that format? \n> As in Postel'a law (\"be conservative in what you do, be liberal in \n> what you accept from others\")?\n\n\nYes, I think so.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 07:36:36 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 07:36, Andrew Dunstan wrote:\n> \n> On 2023-12-05 Tu 16:46, Joe Conway wrote:\n>> On 12/5/23 16:20, Andrew Dunstan wrote:\n>>> On 2023-12-05 Tu 16:09, Joe Conway wrote:\n>>>> On 12/5/23 16:02, Joe Conway wrote:\n>>>>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>>>> and in any other case (e.g. LINES) I can't see why you\n>>>>>> would have them.\n>>>>\n>>>> Oh I didn't address this -- I saw examples in the interwebs of MSSQL \n>>>> server I think [1] which had the non-array with commas import and \n>>>> export style. It was not that tough to support and the code as \n>>>> written already does it, so why not?\n>>>\n>>> That seems quite absurd, TBH. I know we've catered for some absurdity in\n>>> the CSV code (much of it down to me), so maybe we need to be liberal in\n>>> what we accept here too. IMNSHO, we should produce either a single JSON\n>>> document (the ARRAY case) or a series of JSON documents, one per row\n>>> (the LINES case).\n>>\n>> So your preference would be to not allow the non-array-with-commas \n>> case but if/when we implement COPY FROM we would accept that format? \n>> As in Postel'a law (\"be conservative in what you do, be liberal in \n>> what you accept from others\")?\n> \n> \n> Yes, I think so.\n\nAwesome. The attached does it that way. 
I also ran pgindent.\n\nI believe this is ready to commit unless there are further comments or \nobjections.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Dec 2023 08:49:13 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-06 We 08:49, Joe Conway wrote:\n> On 12/6/23 07:36, Andrew Dunstan wrote:\n>>\n>> On 2023-12-05 Tu 16:46, Joe Conway wrote:\n>>> On 12/5/23 16:20, Andrew Dunstan wrote:\n>>>> On 2023-12-05 Tu 16:09, Joe Conway wrote:\n>>>>> On 12/5/23 16:02, Joe Conway wrote:\n>>>>>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>>>>> and in any other case (e.g. LINES) I can't see why you\n>>>>>>> would have them.\n>>>>>\n>>>>> Oh I didn't address this -- I saw examples in the interwebs of \n>>>>> MSSQL server I think [1] which had the non-array with commas \n>>>>> import and export style. It was not that tough to support and the \n>>>>> code as written already does it, so why not?\n>>>>\n>>>> That seems quite absurd, TBH. I know we've catered for some \n>>>> absurdity in\n>>>> the CSV code (much of it down to me), so maybe we need to be \n>>>> liberal in\n>>>> what we accept here too. IMNSHO, we should produce either a single \n>>>> JSON\n>>>> document (the ARRAY case) or a series of JSON documents, one per row\n>>>> (the LINES case).\n>>>\n>>> So your preference would be to not allow the non-array-with-commas \n>>> case but if/when we implement COPY FROM we would accept that format? \n>>> As in Postel'a law (\"be conservative in what you do, be liberal in \n>>> what you accept from others\")?\n>>\n>>\n>> Yes, I think so.\n>\n> Awesome. The attached does it that way. I also ran pgindent.\n>\n> I believe this is ready to commit unless there are further comments or \n> objections.\n\n\nSorry to bikeshed a little more, I'm a bit late looking at this.\n\nI suspect that most users will actually want the table as a single JSON \ndocument, so it should probably be the default. In any case FORCE_ARRAY \nas an option has a slightly wrong feel to it. 
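To be concrete, the two shapes under discussion would look roughly like this (sketch only, using the option spellings from the current patch):\n\n8<---------------------------\nCOPY foo TO STDOUT (FORMAT JSON, FORCE_ARRAY);  -- one JSON array document\n-- [\n--  {\"id\":1,\"f1\":\"...\"}\n-- ,{\"id\":2,\"f1\":\"...\"}\n-- ]\nCOPY foo TO STDOUT (FORMAT JSON);  -- one JSON document per row\n-- {\"id\":1,\"f1\":\"...\"}\n-- {\"id\":2,\"f1\":\"...\"}\n8<---------------------------\n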
I'm having trouble coming \nup with a good name for the reverse of that, off the top of my head.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 10:32:41 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I believe this is ready to commit unless there are further comments or \n> objections.\n\nI thought we were still mostly at proof-of-concept stage?\n\nIn particular, has anyone done any performance testing?\nI'm concerned about that because composite_to_json() has\nzero capability to cache any metadata across calls, meaning\nthere is going to be a large amount of duplicated work\nper row.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 10:44:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 10:32, Andrew Dunstan wrote:\n> \n> On 2023-12-06 We 08:49, Joe Conway wrote:\n>> On 12/6/23 07:36, Andrew Dunstan wrote:\n>>>\n>>> On 2023-12-05 Tu 16:46, Joe Conway wrote:\n>>>> On 12/5/23 16:20, Andrew Dunstan wrote:\n>>>>> On 2023-12-05 Tu 16:09, Joe Conway wrote:\n>>>>>> On 12/5/23 16:02, Joe Conway wrote:\n>>>>>>> On 12/5/23 15:55, Andrew Dunstan wrote:\n>>>>>>>> and in any other case (e.g. LINES) I can't see why you\n>>>>>>>> would have them.\n>>>>>>\n>>>>>> Oh I didn't address this -- I saw examples in the interwebs of \n>>>>>> MSSQL server I think [1] which had the non-array with commas \n>>>>>> import and export style. It was not that tough to support and the \n>>>>>> code as written already does it, so why not?\n>>>>>\n>>>>> That seems quite absurd, TBH. I know we've catered for some \n>>>>> absurdity in\n>>>>> the CSV code (much of it down to me), so maybe we need to be \n>>>>> liberal in\n>>>>> what we accept here too. IMNSHO, we should produce either a single \n>>>>> JSON\n>>>>> document (the ARRAY case) or a series of JSON documents, one per row\n>>>>> (the LINES case).\n>>>>\n>>>> So your preference would be to not allow the non-array-with-commas \n>>>> case but if/when we implement COPY FROM we would accept that format? \n>>>> As in Postel'a law (\"be conservative in what you do, be liberal in \n>>>> what you accept from others\")?\n>>>\n>>>\n>>> Yes, I think so.\n>>\n>> Awesome. The attached does it that way. I also ran pgindent.\n>>\n>> I believe this is ready to commit unless there are further comments or \n>> objections.\n> \n> Sorry to bikeshed a little more, I'm a bit late looking at this.\n> \n> I suspect that most users will actually want the table as a single JSON\n> document, so it should probably be the default. 
In any case FORCE_ARRAY\n> as an option has a slightly wrong feel to it.\n\nSure, I can make that happen, although I figured that for the \nmany-rows-scenario the single array size might be an issue for whatever \nyou are importing into.\n\n> I'm having trouble coming up with a good name for the reverse of\n> that, off the top of my head.\n\nWill think about it and propose something with the next patch revision.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:15:39 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 10:44, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> I believe this is ready to commit unless there are further comments or \n>> objections.\n> \n> I thought we were still mostly at proof-of-concept stage?\n\nThe concept is narrowly scoped enough that I think we are homing in on \nthe final patch.\n\n> In particular, has anyone done any performance testing?\n> I'm concerned about that because composite_to_json() has\n> zero capability to cache any metadata across calls, meaning\n> there is going to be a large amount of duplicated work\n> per row.\n\nI will devise some kind of test and report back. I suppose something \nwith many rows and many narrow columns comparing time to COPY \ntext/csv/json modes would do the trick?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:19:19 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-06 We 10:44, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> I believe this is ready to commit unless there are further comments or\n>> objections.\n> I thought we were still mostly at proof-of-concept stage?\n>\n> In particular, has anyone done any performance testing?\n> I'm concerned about that because composite_to_json() has\n> zero capability to cache any metadata across calls, meaning\n> there is going to be a large amount of duplicated work\n> per row.\n>\n> \t\t\t\n\n\nYeah, that's hard to deal with, too, as it can be called recursively.\n\nOTOH I'd rather have a version of this that worked slowly than none at all.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:19:54 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> On 12/6/23 10:44, Tom Lane wrote:\n>> In particular, has anyone done any performance testing?\n\n> I will devise some kind of test and report back. I suppose something \n> with many rows and many narrow columns comparing time to COPY \n> text/csv/json modes would do the trick?\n\nYeah. If it's at least in the same ballpark as the existing text/csv\nformats then I'm okay with it. 
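Something as simple as the following would probably be enough to tell (a purely hypothetical setup -- the row count, column shapes, and file names are arbitrary, not anything the patch prescribes):\n\n8<-------------------------------\nCREATE TABLE copyperf AS\n  SELECT x AS c1, x AS c2, x AS c3, x AS c4, x AS c5\n  FROM generate_series(1, 10000000) AS x;\n\n-- compare wall-clock time of each format on the same data\nCOPY copyperf TO '/tmp/copyperf.txt' WITH (FORMAT text);\nCOPY copyperf TO '/tmp/copyperf.csv' WITH (FORMAT csv);\nCOPY copyperf TO '/tmp/copyperf.json' WITH (FORMAT json);\n8<-------------------------------\n\n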
I'm worried that it might be 10x worse,\nin which case I think we'd need to do something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 11:26:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Big +1 to this overall feature.\n\nThis is something I've wanted for a long time as well. While it's possible\nto use a COPY with text output for a trivial case, the double escaping\nfalls apart quickly for arbitrary data. It's really only usable when you\nknow exactly what you are querying and know it will not be a problem.\n\nRegarding the defaults for the output, I think JSON lines (rather than a\nJSON array of objects) would be preferred. It's more natural to combine\nthem and generate that type of data on the fly rather than forcing\naggregation into a single object.\n\nCouple more features / use cases come to mind as well. Even if they're not\npart of a first round of this feature I think it'd be helpful to document\nthem now as it might give some ideas for what does make that first cut:\n\n1. Outputting a top level JSON object without the additional column keys.\nIIUC, the top level keys are always the column names. A common use case\nwould be a single json/jsonb column that is already formatted exactly as\nthe user would like for output. Rather than enveloping it in an object with\na dedicated key, it would be nice to be able to output it directly. This\nwould allow non-object results to be outputted as well (e.g., lines of JSON\narrays, numbers, or strings). Due to how JSON is structured, I think this\nwould play nice with the JSON lines v.s. array concept.\n\nCOPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM\ngenerate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON,\nSOME_OPTION_TO_NOT_ENVELOPE)\n{\"foo\":1}\n{\"foo\":2}\n{\"foo\":3}\n\n2. An option to ignore null fields so they are excluded from the output.\nThis would not be a default but would allow shrinking the total size of the\noutput data in many situations. This would be recursive to allow nested\nobjects to be shrunk down (not just the top level). This might be\nworthwhile as a standalone JSON function though handling it during output\nwould be more efficient as it'd only be read once.\n\nCOPY (SELECT json_build_object('foo', CASE WHEN x > 1 THEN x END) FROM\ngenerate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON,\nSOME_OPTION_TO_NOT_ENVELOPE, JSON_SKIP_NULLS)\n{}\n{\"foo\":2}\n{\"foo\":3}\n\n3. Reverse of #2 when copying data in to allow defaulting missing fields to\nNULL.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nBig +1 to this overall feature.This is something I've wanted for a long time as well. While it's possible to use a COPY with text output for a trivial case, the double escaping falls apart quickly for arbitrary data. It's really only usable when you know exactly what you are querying and know it will not be a problem.Regarding the defaults for the output, I think JSON lines (rather than a JSON array of objects) would be preferred. It's more natural to combine them and generate that type of data on the fly rather than forcing aggregation into a single object.Couple more features / use cases come to mind as well. Even if they're not part of a first round of this feature I think it'd be helpful to document them now as it might give some ideas for what does make that first cut:1. 
Outputting a top level JSON object without the additional column keys. IIUC, the top level keys are always the column names. A common use case would be a single json/jsonb column that is already formatted exactly as the user would like for output. Rather than enveloping it in an object with a dedicated key, it would be nice to be able to output it directly. This would allow non-object results to be outputted as well (e.g., lines of JSON arrays, numbers, or strings). Due to how JSON is structured, I think this would play nice with the JSON lines v.s. array concept.COPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON, SOME_OPTION_TO_NOT_ENVELOPE){\"foo\":1}{\"foo\":2}{\"foo\":3}2. An option to ignore null fields so they are excluded from the output. This would not be a default but would allow shrinking the total size of the output data in many situations. This would be recursive to allow nested objects to be shrunk down (not just the top level). This might be worthwhile as a standalone JSON function though handling it during output would be more efficient as it'd only be read once.COPY (SELECT json_build_object('foo', CASE WHEN x > 1 THEN x END) FROM generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON, SOME_OPTION_TO_NOT_ENVELOPE, JSON_SKIP_NULLS){}{\"foo\":2}{\"foo\":3}3. Reverse of #2 when copying data in to allow defaulting missing fields to NULL.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Wed, 6 Dec 2023 11:28:33 -0500", "msg_from": "Sehrope Sarkuni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-12-06 We 10:44, Tom Lane wrote:\n>> In particular, has anyone done any performance testing?\n>> I'm concerned about that because composite_to_json() has\n>> zero capability to cache any metadata across calls, meaning\n>> there is going to be a large amount of duplicated work\n>> per row.\n\n> Yeah, that's hard to deal with, too, as it can be called recursively.\n\nRight. On the plus side, if we did improve this it would presumably\nalso benefit other callers of composite_to_json[b].\n\n> OTOH I'd rather have a version of this that worked slowly than none at all.\n\nIt might be acceptable to plan on improving the performance later,\ndepending on just how bad it is now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 11:28:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 06, 2023 at 11:28:59AM -0500, Tom Lane wrote:\n> It might be acceptable to plan on improving the performance later,\n> depending on just how bad it is now.\n\nOn 10M rows with 11 integers each, I'm seeing the following:\n\n\t(format text)\n\tTime: 10056.311 ms (00:10.056)\n\tTime: 8789.331 ms (00:08.789)\n\tTime: 8755.070 ms (00:08.755)\n\n\t(format csv)\n\tTime: 12295.480 ms (00:12.295)\n\tTime: 12311.059 ms (00:12.311)\n\tTime: 12305.469 ms (00:12.305)\n\n\t(format json)\n\tTime: 24568.621 ms (00:24.569)\n\tTime: 23756.234 ms (00:23.756)\n\tTime: 24265.730 ms (00:24.266)\n\n'perf top' tends to look a bit like this:\n\n 13.31% postgres [.] appendStringInfoString\n 7.57% postgres [.] datum_to_json_internal\n 6.82% postgres [.] SearchCatCache1\n 5.35% [kernel] [k] intel_gpio_irq\n 3.57% postgres [.] composite_to_json\n 3.31% postgres [.] 
IsValidJsonNumber\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 10:33:49 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 06, 2023 at 10:33:49AM -0600, Nathan Bossart wrote:\n> \t(format csv)\n> \tTime: 12295.480 ms (00:12.295)\n> \tTime: 12311.059 ms (00:12.311)\n> \tTime: 12305.469 ms (00:12.305)\n> \n> \t(format json)\n> \tTime: 24568.621 ms (00:24.569)\n> \tTime: 23756.234 ms (00:23.756)\n> \tTime: 24265.730 ms (00:24.266)\n\nI should also note that the json output is 85% larger than the csv output.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 10:44:39 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\tAndrew Dunstan wrote:\n\n> IMNSHO, we should produce either a single JSON \n> document (the ARRAY case) or a series of JSON documents, one per row \n> (the LINES case).\n\n\"COPY Operations\" in the doc says:\n\n\" The backend sends a CopyOutResponse message to the frontend, followed\n by zero or more CopyData messages (always one per row), followed by\n CopyDone\".\n\nIn the ARRAY case, the first messages with the copyjsontest\nregression test look like this (tshark output):\n\nPostgreSQL\n Type: CopyOut response\n Length: 13\n Format: Text (0)\n Columns: 3\n\tFormat: Text (0)\nPostgreSQL\n Type: Copy data\n Length: 6\n Copy data: 5b0a\nPostgreSQL\n Type: Copy data\n Length: 76\n Copy data:\n207b226964223a312c226631223a226c696e652077697468205c2220696e2069743a2031…\n\nThe first Copy data message with contents \"5b0a\" does not qualify\nas a row of data with 3 columns as advertised in the CopyOut\nmessage. Isn't that a problem?\n\nAt least the json non-ARRAY case (\"json lines\") doesn't have\nthis issue, since every CopyData message corresponds effectively\nto a row in the table.\n\n\n[1] https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-COPY\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 06 Dec 2023 19:59:11 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 13:59, Daniel Verite wrote:\n> \tAndrew Dunstan wrote:\n> \n>> IMNSHO, we should produce either a single JSON \n>> document (the ARRAY case) or a series of JSON documents, one per row \n>> (the LINES case).\n> \n> \"COPY Operations\" in the doc says:\n> \n> \" The backend sends a CopyOutResponse message to the frontend, followed\n> by zero or more CopyData messages (always one per row), followed by\n> CopyDone\".\n> \n> In the ARRAY case, the first messages with the copyjsontest\n> regression test look like this (tshark output):\n> \n> PostgreSQL\n> Type: CopyOut response\n> Length: 13\n> Format: Text (0)\n> Columns: 3\n> \tFormat: Text (0)\n> PostgreSQL\n> Type: Copy data\n> Length: 6\n> Copy data: 5b0a\n> PostgreSQL\n> Type: Copy data\n> Length: 76\n> Copy data:\n> 207b226964223a312c226631223a226c696e652077697468205c2220696e2069743a2031…\n> \n> The first Copy data message with contents \"5b0a\" does not qualify\n> as a row of data with 3 columns as advertised in the CopyOut\n> message. 
Isn't that a problem?\n\n\nIs it a real problem, or just a bit of documentation change that I missed?\n\nAnything receiving this and looking for a json array should know how to \nassemble the data correctly despite the extra CopyData messages.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 14:47:25 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 11:44, Nathan Bossart wrote:\n> On Wed, Dec 06, 2023 at 10:33:49AM -0600, Nathan Bossart wrote:\n>> \t(format csv)\n>> \tTime: 12295.480 ms (00:12.295)\n>> \tTime: 12311.059 ms (00:12.311)\n>> \tTime: 12305.469 ms (00:12.305)\n>> \n>> \t(format json)\n>> \tTime: 24568.621 ms (00:24.569)\n>> \tTime: 23756.234 ms (00:23.756)\n>> \tTime: 24265.730 ms (00:24.266)\n> \n> I should also note that the json output is 85% larger than the csv output.\n\nI'll see if I can add some caching to composite_to_json(), but based on \nthe relative data size it does not sound like there is much performance \nleft on the table to go after, no?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 14:48:52 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> I'll see if I can add some caching to composite_to_json(), but based on \n> the relative data size it does not sound like there is much performance \n> left on the table to go after, no?\n\nIf Nathan's perf results hold up elsewhere, it seems like some\nmicro-optimization around the text-pushing (appendStringInfoString)\nmight be more useful than caching. The 7% spent in cache lookups\ncould be worth going after later, but it's not the top of the list.\n\nThe output size difference does say that maybe we should pay some\nattention to the nearby request to not always label every field.\nPerhaps there should be an option for each row to transform to\na JSON array rather than an object?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 15:20:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\nOn 2023-12-06 We 15:20, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> I'll see if I can add some caching to composite_to_json(), but based on\n>> the relative data size it does not sound like there is much performance\n>> left on the table to go after, no?\n> If Nathan's perf results hold up elsewhere, it seems like some\n> micro-optimization around the text-pushing (appendStringInfoString)\n> might be more useful than caching. The 7% spent in cache lookups\n> could be worth going after later, but it's not the top of the list.\n>\n> The output size difference does say that maybe we should pay some\n> attention to the nearby request to not always label every field.\n> Perhaps there should be an option for each row to transform to\n> a JSON array rather than an object?\n>\n> \t\t\t\n\n\nI doubt it. People who want this are likely to want pretty much what \nthis patch is providing, not something they would have to transform in \norder to get it. If they want space-efficient data they won't really be \nwanting JSON. 
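For comparison, the alternative being floated would turn each row from an object into a bare array, something like this (hypothetical output, not what the patch emits today):\n\n8<-------------------------------\n-- per-row objects, as the patch produces:\n{\"i\":1,\"v\":\"val1\"}\n-- per-row arrays, the hypothetical space-saving variant:\n[1,\"val1\"]\n8<-------------------------------\n\n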
Maybe they want Protocol Buffers or something in that vein.\n\nI see there's  nearby proposal to make this area pluggable at \n<https://postgr.es/m/[email protected]>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 16:03:12 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 11:28, Sehrope Sarkuni wrote:\n> Big +1 to this overall feature.\n\ncool!\n\n> Regarding the defaults for the output, I think JSON lines (rather than a \n> JSON array of objects) would be preferred. It's more natural to combine \n> them and generate that type of data on the fly rather than forcing \n> aggregation into a single object.\n\nSo that is +2 (Sehrope and me) for the status quo (JSON lines), and +2 \n(Andrew and Davin) for defaulting to json arrays. Anyone else want to \nweigh in on that issue?\n\n> Couple more features / use cases come to mind as well. Even if they're \n> not part of a first round of this feature I think it'd be helpful to \n> document them now as it might give some ideas for what does make that \n> first cut:\n> \n> 1. Outputting a top level JSON object without the additional column \n> keys. IIUC, the top level keys are always the column names. A common use \n> case would be a single json/jsonb column that is already formatted \n> exactly as the user would like for output. Rather than enveloping it in \n> an object with a dedicated key, it would be nice to be able to output it \n> directly. This would allow non-object results to be outputted as well \n> (e.g., lines of JSON arrays, numbers, or strings). Due to how JSON is \n> structured, I think this would play nice with the JSON lines v.s. array \n> concept.\n> \n> COPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM \n> generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON, \n> SOME_OPTION_TO_NOT_ENVELOPE)\n> {\"foo\":1}\n> {\"foo\":2}\n> {\"foo\":3}\n\nYour example does not match what you describe, or do I misunderstand? I \nthought your goal was to eliminate the repeated \"foo\" from each row...\n\n> 2. An option to ignore null fields so they are excluded from the output. \n> This would not be a default but would allow shrinking the total size of \n> the output data in many situations. This would be recursive to allow \n> nested objects to be shrunk down (not just the top level). This might be \n> worthwhile as a standalone JSON function though handling it during \n> output would be more efficient as it'd only be read once.\n> \n> COPY (SELECT json_build_object('foo', CASE WHEN x > 1 THEN x END) FROM \n> generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON, \n> SOME_OPTION_TO_NOT_ENVELOPE, JSON_SKIP_NULLS)\n> {}\n> {\"foo\":2}\n> {\"foo\":3}\n\nclear enough I think\n\n> 3. 
Reverse of #2 when copying data in to allow defaulting missing fields \n> to NULL.\n\ngood to record the ask, but applies to a different feature (COPY FROM \ninstead of COPY TO).\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 16:28:59 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 4:03 PM Andrew Dunstan <[email protected]> wrote:\n\n> > The output size difference does say that maybe we should pay some\n> > attention to the nearby request to not always label every field.\n> > Perhaps there should be an option for each row to transform to\n> > a JSON array rather than an object?\n>\n> I doubt it. People who want this are likely to want pretty much what\n> this patch is providing, not something they would have to transform in\n> order to get it. If they want space-efficient data they won't really be\n> wanting JSON. Maybe they want Protocol Buffers or something in that vein.\n>\n\nFor arrays v.s. objects, it's not just about data size. There are plenty of\nsituations where a JSON array is superior to an object (e.g. duplicate\ncolumn names). Lines of JSON arrays of strings is pretty much CSV with JSON\nescaping rules and a pair of wrapping brackets. It's common for tabular\ndata in node.js environments as you don't need a separate CSV parser.\n\nEach one has its place and a default of the row_to_json(...) representation\nof the row still makes sense. But if the user has the option of outputting\na single json/jsonb field for each row without an object or array wrapper,\nthen it's possible to support all of these use cases as the user can\nexplicitly pick whatever envelope makes sense:\n\n-- Lines of JSON arrays:\nCOPY (SELECT json_build_array('test-' || a, b) FROM generate_series(1, 3)\na, generate_series(5,6) b) TO STDOUT WITH (FORMAT JSON,\nSOME_OPTION_TO_DISABLE_ENVELOPE);\n[\"test-1\", 5]\n[\"test-2\", 5]\n[\"test-3\", 5]\n[\"test-1\", 6]\n[\"test-2\", 6]\n[\"test-3\", 6]\n\n-- Lines of JSON strings:\nCOPY (SELECT to_json('test-' || x) FROM generate_series(1, 5) x) TO STDOUT\nWITH (FORMAT JSON, SOME_OPTION_TO_DISABLE_ENVELOPE);\n\"test-1\"\n\"test-2\"\n\"test-3\"\n\"test-4\"\n\"test-5\"\n\nI'm not sure how I feel about the behavior being automatic if it's a single\ntop level json / jsonb field rather than requiring the explicit option.\nIt's probably what a user would want but it also feels odd to change the\noutput wrapper automatically based on the fields in the response. If it is\nautomatic and the user wants the additional envelope, the option always\nexists to wrap it further in another: json_build_object('some_field\",\nmy_field_i_want_wrapped)\n\nThe duplicate field names would be a good test case too. I haven't gone\nthrough this patch but I'm guessing it doesn't filter out duplicates so the\nbehavior would match up with row_to_json(...), i.e. duplicates are\npreserved:\n\n=> SELECT row_to_json(t.*) FROM (SELECT 1 AS a, 2 AS a) t;\n row_to_json\n---------------\n {\"a\":1,\"a\":2}\n\nIf so, that's a good test case to add as however that's handled should be\ndeterministic.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/\n\nOn Wed, Dec 6, 2023 at 4:03 PM Andrew Dunstan <[email protected]> wrote:> The output size difference does say that maybe we should pay some\n> attention to the nearby request to not always label every field.\n> Perhaps there should be an option for each row to transform to\n> a JSON array rather than an object?\nI doubt it. People who want this are likely to want pretty much what \nthis patch is providing, not something they would have to transform in \norder to get it. If they want space-efficient data they won't really be \nwanting JSON. Maybe they want Protocol Buffers or something in that vein.For arrays v.s. objects, it's not just about data size. There are plenty of situations where a JSON array is superior to an object (e.g. duplicate column names). Lines of JSON arrays of strings is pretty much CSV with JSON escaping rules and a pair of wrapping brackets. It's common for tabular data in node.js environments as you don't need a separate CSV parser.Each one has its place and a default of the row_to_json(...) representation of the row still makes sense. But if the user has the option of outputting a single json/jsonb field for each row without an object or array wrapper, then it's possible to support all of these use cases as the user can explicitly pick whatever envelope makes sense:-- Lines of JSON arrays:COPY (SELECT json_build_array('test-' || a, b) FROM generate_series(1, 3) a, generate_series(5,6) b) TO STDOUT WITH (FORMAT JSON, SOME_OPTION_TO_DISABLE_ENVELOPE);[\"test-1\", 5][\"test-2\", 5][\"test-3\", 5][\"test-1\", 6][\"test-2\", 6][\"test-3\", 6]-- Lines of JSON strings:COPY (SELECT to_json('test-' || x) FROM generate_series(1, 5) x) TO STDOUT WITH (FORMAT JSON, SOME_OPTION_TO_DISABLE_ENVELOPE);\"test-1\"\"test-2\"\"test-3\"\"test-4\"\"test-5\"I'm not sure how I feel about the behavior being automatic if it's a single top level json / jsonb field rather than requiring the explicit option. It's probably what a user would want but it also feels odd to change the output wrapper automatically based on the fields in the response. If it is automatic and the user wants the additional envelope, the option always exists to wrap it further in another: json_build_object('some_field\", my_field_i_want_wrapped)The duplicate field names would be a good test case too. I haven't gone through this patch but I'm guessing it doesn't filter out duplicates so the behavior would match up with row_to_json(...), i.e. duplicates are preserved:=> SELECT row_to_json(t.*) FROM (SELECT 1 AS a, 2 AS a) t;  row_to_json  --------------- {\"a\":1,\"a\":2}If so, that's a good test case to add as however that's handled should be deterministic.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Wed, 6 Dec 2023 16:36:02 -0500", "msg_from": "Sehrope Sarkuni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 06, 2023 at 03:20:46PM -0500, Tom Lane wrote:\n> If Nathan's perf results hold up elsewhere, it seems like some\n> micro-optimization around the text-pushing (appendStringInfoString)\n> might be more useful than caching. 
The 7% spent in cache lookups\n> could be worth going after later, but it's not the top of the list.\n\nAgreed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 15:41:11 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 4:29 PM Joe Conway <[email protected]> wrote:\n\n> > 1. Outputting a top level JSON object without the additional column\n> > keys. IIUC, the top level keys are always the column names. A common use\n> > case would be a single json/jsonb column that is already formatted\n> > exactly as the user would like for output. Rather than enveloping it in\n> > an object with a dedicated key, it would be nice to be able to output it\n> > directly. This would allow non-object results to be outputted as well\n> > (e.g., lines of JSON arrays, numbers, or strings). Due to how JSON is\n> > structured, I think this would play nice with the JSON lines v.s. array\n> > concept.\n> >\n> > COPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM\n> > generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON,\n> > SOME_OPTION_TO_NOT_ENVELOPE)\n> > {\"foo\":1}\n> > {\"foo\":2}\n> > {\"foo\":3}\n>\n> Your example does not match what you describe, or do I misunderstand? I\n> thought your goal was to eliminate the repeated \"foo\" from each row...\n>\n\nThe \"foo\" in this case is explicit as I'm adding it when building the\nobject. What I was trying to show was not adding an additional object\nwrapper / envelope.\n\nSo each row is:\n\n{\"foo\":1}\n\nRather than:\n\n\"{\"json_build_object\":{\"foo\":1}}\n\nIf each row has exactly one json / jsonb field, then the user has already\nindicated the format for each row.\n\nThat same mechanism can be used to remove the \"foo\" entirely via a\njson/jsonb array.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nOn Wed, Dec 6, 2023 at 4:29 PM Joe Conway <[email protected]> wrote:> 1. Outputting a top level JSON object without the additional column \n> keys. IIUC, the top level keys are always the column names. A common use \n> case would be a single json/jsonb column that is already formatted \n> exactly as the user would like for output. Rather than enveloping it in \n> an object with a dedicated key, it would be nice to be able to output it \n> directly. This would allow non-object results to be outputted as well \n> (e.g., lines of JSON arrays, numbers, or strings). Due to how JSON is \n> structured, I think this would play nice with the JSON lines v.s. array \n> concept.\n> \n> COPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM \n> generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON, \n> SOME_OPTION_TO_NOT_ENVELOPE)\n> {\"foo\":1}\n> {\"foo\":2}\n> {\"foo\":3}\n\nYour example does not match what you describe, or do I misunderstand? I \nthought your goal was to eliminate the repeated \"foo\" from each row...The \"foo\" in this case is explicit as I'm adding it when building the object. What I was trying to show was not adding an additional object wrapper / envelope.So each row is:{\"foo\":1}Rather than:\"{\"json_build_object\":{\"foo\":1}}If each row has exactly one json / jsonb field, then the user has already indicated the format for each row.That same mechanism can be used to remove the \"foo\" entirely via a json/jsonb array.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/", "msg_date": "Wed, 6 Dec 2023 16:42:11 -0500", "msg_from": "Sehrope Sarkuni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 16:42, Sehrope Sarkuni wrote:\n> On Wed, Dec 6, 2023 at 4:29 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > 1. Outputting a top level JSON object without the additional column\n> > keys. IIUC, the top level keys are always the column names. A\n> common use\n> > case would be a single json/jsonb column that is already formatted\n> > exactly as the user would like for output. Rather than enveloping\n> it in\n> > an object with a dedicated key, it would be nice to be able to\n> output it\n> > directly. This would allow non-object results to be outputted as\n> well\n> > (e.g., lines of JSON arrays, numbers, or strings). Due to how\n> JSON is\n> > structured, I think this would play nice with the JSON lines v.s.\n> array\n> > concept.\n> >\n> > COPY (SELECT json_build_object('foo', x) AS i_am_ignored FROM\n> > generate_series(1, 3) x) TO STDOUT WITH (FORMAT JSON,\n> > SOME_OPTION_TO_NOT_ENVELOPE)\n> > {\"foo\":1}\n> > {\"foo\":2}\n> > {\"foo\":3}\n> \n> Your example does not match what you describe, or do I misunderstand? I\n> thought your goal was to eliminate the repeated \"foo\" from each row...\n> \n> \n> The \"foo\" in this case is explicit as I'm adding it when building the \n> object. What I was trying to show was not adding an additional object \n> wrapper / envelope.\n> \n> So each row is:\n> \n> {\"foo\":1}\n> \n> Rather than:\n> \n> \"{\"json_build_object\":{\"foo\":1}}\n\nI am still getting confused ;-)\n\nLet's focus on the current proposed patch with a \"minimum required \nfeature set\".\n\nRight now the default behavior is \"JSON lines\":\n8<-------------------------------\nCOPY (SELECT x.i, 'val' || x.i as v FROM\n generate_series(1, 3) x(i))\nTO STDOUT WITH (FORMAT JSON);\n{\"i\":1,\"v\":\"val1\"}\n{\"i\":2,\"v\":\"val2\"}\n{\"i\":3,\"v\":\"val3\"}\n8<-------------------------------\n\nand the other, non-default option is \"JSON array\":\n8<-------------------------------\nCOPY (SELECT x.i, 'val' || x.i as v FROM\n generate_series(1, 3) x(i))\nTO STDOUT WITH (FORMAT JSON, FORCE_ARRAY);\n[\n {\"i\":1,\"v\":\"val1\"}\n,{\"i\":2,\"v\":\"val2\"}\n,{\"i\":3,\"v\":\"val3\"}\n]\n8<-------------------------------\n\nSo the questions are:\n1. Do those two formats work for the initial implementation?\n2. Is the default correct or should it be switched\n e.g. rather than specifying FORCE_ARRAY to get an\n array, something like FORCE_NO_ARRAY to get JSON lines\n and the JSON array is default?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 17:38:21 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 3:38 PM Joe Conway <[email protected]> wrote:\n\n> So the questions are:\n> 1. Do those two formats work for the initial implementation?\n>\n\nYes. We provide a stream-oriented format and one atomic-import format.\n\n2. Is the default correct or should it be switched\n> e.g. 
rather than specifying FORCE_ARRAY to get an\n> array, something like FORCE_NO_ARRAY to get JSON lines\n> and the JSON array is default?\n>\n>\nNo default?\n\nRequire explicit of a sub-format when the main format is JSON.\n\nJSON_OBJECT_ROWS\nJSON_ARRAY_OF_OBJECTS\n\nFor a future compact array-structured-composites sub-format:\nJSON_ARRAY_OF_ARRAYS\nJSON_ARRAY_ROWS\n\nDavid J.\n\nOn Wed, Dec 6, 2023 at 3:38 PM Joe Conway <[email protected]> wrote:So the questions are:\n1. Do those two formats work for the initial implementation?Yes.  We provide a stream-oriented format and one atomic-import format.\n2. Is the default correct or should it be switched\n    e.g. rather than specifying FORCE_ARRAY to get an\n    array, something like FORCE_NO_ARRAY to get JSON lines\n    and the JSON array is default?No default?Require explicit of a sub-format when the main format is JSON.JSON_OBJECT_ROWSJSON_ARRAY_OF_OBJECTSFor a future compact array-structured-composites sub-format:JSON_ARRAY_OF_ARRAYSJSON_ARRAY_ROWSDavid J.", "msg_date": "Wed, 6 Dec 2023 15:56:22 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 14:47, Joe Conway wrote:\n> On 12/6/23 13:59, Daniel Verite wrote:\n>> \tAndrew Dunstan wrote:\n>> \n>>> IMNSHO, we should produce either a single JSON \n>>> document (the ARRAY case) or a series of JSON documents, one per row \n>>> (the LINES case).\n>> \n>> \"COPY Operations\" in the doc says:\n>> \n>> \" The backend sends a CopyOutResponse message to the frontend, followed\n>> by zero or more CopyData messages (always one per row), followed by\n>> CopyDone\".\n>> \n>> In the ARRAY case, the first messages with the copyjsontest\n>> regression test look like this (tshark output):\n>> \n>> PostgreSQL\n>> Type: CopyOut response\n>> Length: 13\n>> Format: Text (0)\n>> Columns: 3\n>> \tFormat: Text (0)\n>> PostgreSQL\n>> Type: Copy data\n>> Length: 6\n>> Copy data: 5b0a\n>> PostgreSQL\n>> Type: Copy data\n>> Length: 76\n>> Copy data:\n>> 207b226964223a312c226631223a226c696e652077697468205c2220696e2069743a2031…\n>> \n>> The first Copy data message with contents \"5b0a\" does not qualify\n>> as a row of data with 3 columns as advertised in the CopyOut\n>> message. 
Isn't that a problem?\n> \n> \n> Is it a real problem, or just a bit of documentation change that I missed?\n> \n> Anything receiving this and looking for a json array should know how to\n> assemble the data correctly despite the extra CopyData messages.\n\nHmm, maybe the real problem here is that Columns do not equal \"3\" for \nthe json mode case -- that should really say \"1\" I think, because the \nrow is not represented as 3 columns but rather 1 json object.\n\nDoes that sound correct?\n\nAssuming yes, there is still maybe an issue that there are two more \n\"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those \nare less likely to cause some hazard?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 18:09:30 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 4:09 PM Joe Conway <[email protected]> wrote:\n\n> On 12/6/23 14:47, Joe Conway wrote:\n> > On 12/6/23 13:59, Daniel Verite wrote:\n> >> Andrew Dunstan wrote:\n> >>\n> >>> IMNSHO, we should produce either a single JSON\n> >>> document (the ARRAY case) or a series of JSON documents, one per row\n> >>> (the LINES case).\n> >>\n> >> \"COPY Operations\" in the doc says:\n> >>\n> >> \" The backend sends a CopyOutResponse message to the frontend, followed\n> >> by zero or more CopyData messages (always one per row), followed by\n> >> CopyDone\".\n> >>\n> >> In the ARRAY case, the first messages with the copyjsontest\n> >> regression test look like this (tshark output):\n> >>\n> >> PostgreSQL\n> >> Type: CopyOut response\n> >> Length: 13\n> >> Format: Text (0)\n> >> Columns: 3\n> >> Format: Text (0)\n>\n> > Anything receiving this and looking for a json array should know how to\n> > assemble the data correctly despite the extra CopyData messages.\n>\n> Hmm, maybe the real problem here is that Columns do not equal \"3\" for\n> the json mode case -- that should really say \"1\" I think, because the\n> row is not represented as 3 columns but rather 1 json object.\n>\n> Does that sound correct?\n>\n> Assuming yes, there is still maybe an issue that there are two more\n> \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those\n> are less likely to cause some hazard?\n>\n>\nWhat is the limitation, if any, of introducing new type codes for these. n\n= 2..N for the different variants? Or even -1 for \"raw text\"? And\ndocument that columns and structural rows need to be determined\nout-of-band. Continuing to use 1 (text) for this non-csv data seems like a\nhack even if we can technically make it function. 
The semantics,\nespecially for the array case, are completely discarded or wrong.\n\nDavid J.\n", "msg_date": "Wed, 6 Dec 2023 16:28:06 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 4:28 PM David G. 
Johnston <[email protected]>\nwrote:\n\n> On Wed, Dec 6, 2023 at 4:09 PM Joe Conway <[email protected]> wrote:\n>\n>> On 12/6/23 14:47, Joe Conway wrote:\n>> > On 12/6/23 13:59, Daniel Verite wrote:\n>> >> Andrew Dunstan wrote:\n>> >>\n>> >>> IMNSHO, we should produce either a single JSON\n>> >>> document (the ARRAY case) or a series of JSON documents, one per row\n>> >>> (the LINES case).\n>> >>\n>> >> \"COPY Operations\" in the doc says:\n>> >>\n>> >> \" The backend sends a CopyOutResponse message to the frontend, followed\n>> >> by zero or more CopyData messages (always one per row), followed by\n>> >> CopyDone\".\n>> >>\n>> >> In the ARRAY case, the first messages with the copyjsontest\n>> >> regression test look like this (tshark output):\n>> >>\n>> >> PostgreSQL\n>> >> Type: CopyOut response\n>> >> Length: 13\n>> >> Format: Text (0)\n>> >> Columns: 3\n>> >> Format: Text (0)\n>>\n>> > Anything receiving this and looking for a json array should know how to\n>> > assemble the data correctly despite the extra CopyData messages.\n>>\n>> Hmm, maybe the real problem here is that Columns do not equal \"3\" for\n>> the json mode case -- that should really say \"1\" I think, because the\n>> row is not represented as 3 columns but rather 1 json object.\n>>\n>> Does that sound correct?\n>>\n>> Assuming yes, there is still maybe an issue that there are two more\n>> \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those\n>> are less likely to cause some hazard?\n>>\n>>\n> What is the limitation, if any, of introducing new type codes for these.\n> n = 2..N for the different variants? Or even -1 for \"raw text\"? And\n> document that columns and structural rows need to be determined\n> out-of-band. Continuing to use 1 (text) for this non-csv data seems like a\n> hack even if we can technically make it function. The semantics,\n> especially for the array case, are completely discarded or wrong.\n>\n>\nAlso, it seems like this answer would be easier to make if we implement\nCOPY FROM now since how is the server supposed to deal with decomposing\nthis data into tables without accurate type information? I don't see\nimplementing only half of the feature being a good idea. I've had much\nmore desire for FROM compared to TO personally.\n\nDavid J.\n\nOn Wed, Dec 6, 2023 at 4:28 PM David G. 
Johnston <[email protected]> wrote:On Wed, Dec 6, 2023 at 4:09 PM Joe Conway <[email protected]> wrote:On 12/6/23 14:47, Joe Conway wrote:\n> On 12/6/23 13:59, Daniel Verite wrote:\n>>      Andrew Dunstan wrote:\n>> \n>>> IMNSHO, we should produce either a single JSON \n>>> document (the ARRAY case) or a series of JSON documents, one per row \n>>> (the LINES case).\n>> \n>> \"COPY Operations\" in the doc says:\n>> \n>> \" The backend sends a CopyOutResponse message to the frontend, followed\n>>     by zero or more CopyData messages (always one per row), followed by\n>>     CopyDone\".\n>> \n>> In the ARRAY case, the first messages with the copyjsontest\n>> regression test look like this (tshark output):\n>> \n>> PostgreSQL\n>>      Type: CopyOut response\n>>      Length: 13\n>>      Format: Text (0)\n>>      Columns: 3\n>>      Format: Text (0)\n> Anything receiving this and looking for a json array should know how to\n> assemble the data correctly despite the extra CopyData messages.\n\nHmm, maybe the real problem here is that Columns do not equal \"3\" for \nthe json mode case -- that should really say \"1\" I think, because the \nrow is not represented as 3 columns but rather 1 json object.\n\nDoes that sound correct?\n\nAssuming yes, there is still maybe an issue that there are two more \n\"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those \nare less likely to cause some hazard?What is the limitation, if any, of introducing new type codes for these.  n = 2..N for the different variants?  Or even -1 for \"raw text\"?  And document that columns and structural rows need to be determined out-of-band.  Continuing to use 1 (text) for this non-csv data seems like a hack even if we can technically make it function.  The semantics, especially for the array case, are completely discarded or wrong.Also, it seems like this answer would be easier to make if we implement COPY FROM now since how is the server supposed to deal with decomposing this data into tables without accurate type information?  I don't see implementing only half of the feature being a good idea.  I've had much more desire for FROM compared to TO personally.David J.", "msg_date": "Wed, 6 Dec 2023 16:38:32 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 18:28, David G. 
Johnston wrote:\n> On Wed, Dec 6, 2023 at 4:09 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 12/6/23 14:47, Joe Conway wrote:\n> > On 12/6/23 13:59, Daniel Verite wrote:\n> >>      Andrew Dunstan wrote:\n> >>\n> >>> IMNSHO, we should produce either a single JSON\n> >>> document (the ARRAY case) or a series of JSON documents, one\n> per row\n> >>> (the LINES case).\n> >>\n> >> \"COPY Operations\" in the doc says:\n> >>\n> >> \" The backend sends a CopyOutResponse message to the frontend,\n> followed\n> >>     by zero or more CopyData messages (always one per row),\n> followed by\n> >>     CopyDone\".\n> >>\n> >> In the ARRAY case, the first messages with the copyjsontest\n> >> regression test look like this (tshark output):\n> >>\n> >> PostgreSQL\n> >>      Type: CopyOut response\n> >>      Length: 13\n> >>      Format: Text (0)\n> >>      Columns: 3\n> >>      Format: Text (0)\n> \n> > Anything receiving this and looking for a json array should know\n> how to\n> > assemble the data correctly despite the extra CopyData messages.\n> \n> Hmm, maybe the real problem here is that Columns do not equal \"3\" for\n> the json mode case -- that should really say \"1\" I think, because the\n> row is not represented as 3 columns but rather 1 json object.\n> \n> Does that sound correct?\n> \n> Assuming yes, there is still maybe an issue that there are two more\n> \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those\n> are less likely to cause some hazard?\n> \n> \n> What is the limitation, if any, of introducing new type codes for \n> these.  n = 2..N for the different variants?  Or even -1 for \"raw \n> text\"?  And document that columns and structural rows need to be \n> determined out-of-band.  Continuing to use 1 (text) for this non-csv \n> data seems like a hack even if we can technically make it function.  The \n> semantics, especially for the array case, are completely discarded or wrong.\n\nI am not following you here. SendCopyBegin looks like this currently:\n\n8<--------------------------------\nSendCopyBegin(CopyToState cstate)\n{\n\tStringInfoData buf;\n\tint\t\t\tnatts = list_length(cstate->attnumlist);\n\tint16\t\tformat = (cstate->opts.binary ? 1 : 0);\n\tint\t\t\ti;\n\n\tpq_beginmessage(&buf, PqMsg_CopyOutResponse);\n\tpq_sendbyte(&buf, format);\t/* overall format */\n\tpq_sendint16(&buf, natts);\n\tfor (i = 0; i < natts; i++)\n\t\tpq_sendint16(&buf, format); /* per-column formats */\n\tpq_endmessage(&buf);\n\tcstate->copy_dest = COPY_FRONTEND;\n}\n8<--------------------------------\n\nThe \"1\" is saying are we binary mode or not. JSON mode will never be \nsending in binary in the current implementation at least. And it always \naggregates all the columns as one json object. So the correct answer is \n(I think):\n8<--------------------------------\n*************** SendCopyBegin(CopyToState cstate)\n*** 146,154 ****\n\n \tpq_beginmessage(&buf, PqMsg_CopyOutResponse);\n \tpq_sendbyte(&buf, format);\t/* overall format */\n! \tpq_sendint16(&buf, natts);\n! \tfor (i = 0; i < natts; i++)\n! \t\tpq_sendint16(&buf, format); /* per-column formats */\n \tpq_endmessage(&buf);\n \tcstate->copy_dest = COPY_FRONTEND;\n }\n--- 150,169 ----\n\n \tpq_beginmessage(&buf, PqMsg_CopyOutResponse);\n \tpq_sendbyte(&buf, format);\t/* overall format */\n! \tif (!cstate->opts.json_mode)\n! \t{\n! \t\tpq_sendint16(&buf, natts);\n! \t\tfor (i = 0; i < natts; i++)\n! \t\t\tpq_sendint16(&buf, format); /* per-column formats */\n! \t}\n! \telse\n! \t{\n! 
\t\t/*\n! \t\t * JSON mode is always one non-binary column\n! \t\t */\n! \t\tpq_sendint16(&buf, 1);\n! \t\tpq_sendint16(&buf, 0);\n! \t}\n \tpq_endmessage(&buf);\n \tcstate->copy_dest = COPY_FRONTEND;\n }\n8<--------------------------------\n\nThat still leaves the need to fix the documentation:\n\n\" The backend sends a CopyOutResponse message to the frontend, followed\n by zero or more CopyData messages (always one per row), followed by\n CopyDone\"\n\nprobably \"always one per row\" would be changed to note that json array \nformat outputs two extra rows for the start/end bracket.\n\nIn fact, as written the patch does this:\n8<--------------------------------\nCOPY (SELECT x.i, 'val' || x.i as v FROM\n generate_series(1, 3) x(i) WHERE false)\nTO STDOUT WITH (FORMAT JSON, FORCE_ARRAY);\n[\n]\n8<--------------------------------\n\nNot sure if that is a problem or not.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 18:45:52 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 18:38, David G. Johnston wrote:\n> On Wed, Dec 6, 2023 at 4:28 PM David G. Johnston \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On Wed, Dec 6, 2023 at 4:09 PM Joe Conway <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On 12/6/23 14:47, Joe Conway wrote:\n> > On 12/6/23 13:59, Daniel Verite wrote:\n> >>      Andrew Dunstan wrote:\n> >>\n> >>> IMNSHO, we should produce either a single JSON\n> >>> document (the ARRAY case) or a series of JSON documents,\n> one per row\n> >>> (the LINES case).\n> >>\n> >> \"COPY Operations\" in the doc says:\n> >>\n> >> \" The backend sends a CopyOutResponse message to the\n> frontend, followed\n> >>     by zero or more CopyData messages (always one per row),\n> followed by\n> >>     CopyDone\".\n> >>\n> >> In the ARRAY case, the first messages with the copyjsontest\n> >> regression test look like this (tshark output):\n> >>\n> >> PostgreSQL\n> >>      Type: CopyOut response\n> >>      Length: 13\n> >>      Format: Text (0)\n> >>      Columns: 3\n> >>      Format: Text (0)\n> \n> > Anything receiving this and looking for a json array should\n> know how to\n> > assemble the data correctly despite the extra CopyData messages.\n> \n> Hmm, maybe the real problem here is that Columns do not equal\n> \"3\" for\n> the json mode case -- that should really say \"1\" I think,\n> because the\n> row is not represented as 3 columns but rather 1 json object.\n> \n> Does that sound correct?\n> \n> Assuming yes, there is still maybe an issue that there are two more\n> \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe\n> those\n> are less likely to cause some hazard?\n> \n> \n> What is the limitation, if any, of introducing new type codes for\n> these.  n = 2..N for the different variants?  Or even -1 for \"raw\n> text\"?  And document that columns and structural rows need to be\n> determined out-of-band.  Continuing to use 1 (text) for this non-csv\n> data seems like a hack even if we can technically make it function. \n> The semantics, especially for the array case, are completely\n> discarded or wrong.\n> \n> Also, it seems like this answer would be easier to make if we implement \n> COPY FROM now since how is the server supposed to deal with decomposing \n> this data into tables without accurate type information?  
I don't see \n> implementing only half of the feature being a good idea.  I've had much \n> more desire for FROM compared to TO personally.\n\nSeveral people have weighed in on the side of getting COPY TO done by \nitself first. Given how long this discussion has already become for a \nrelatively small and simple feature, I am a big fan of not expanding the \nscope now.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 18:48:54 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected]> wrote:\n\n>\n> \" The backend sends a CopyOutResponse message to the frontend, followed\n> by zero or more CopyData messages (always one per row), followed by\n> CopyDone\"\n>\n> probably \"always one per row\" would be changed to note that json array\n> format outputs two extra rows for the start/end bracket.\n>\n\nFair, I was ascribing much more semantic meaning to this than it wants.\n\nI don't see any real requirement, given the lack of semantics, to mention\nJSON at all. It is one CopyData per row, regardless of the contents. We\ndon't delineate between the header and non-header data in CSV. It isn't a\nprotocol concern.\n\nBut I still cannot shake the belief that using a format code of 1 - which\nreally could be interpreted as meaning \"textual csv\" in practice - for this\nJSON output is unwise and we should introduce a new integer value for the\nnew fundamental output format.\n\nDavid J.\n\nOn Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected]> wrote:\n\" The backend sends a CopyOutResponse message to the frontend, followed\n    by zero or more CopyData messages (always one per row), followed by\n    CopyDone\"\n\nprobably \"always one per row\" would be changed to note that json array \nformat outputs two extra rows for the start/end bracket.Fair, I was ascribing much more semantic meaning to this than it wants.I don't see any real requirement, given the lack of semantics, to mention JSON at all.  It is one CopyData per row, regardless of the contents.  We don't delineate between the header and non-header data in CSV.  It isn't a protocol concern.But I still cannot shake the belief that using a format code of 1 - which really could be interpreted as meaning \"textual csv\" in practice - for this JSON output is unwise and we should introduce a new integer value for the new fundamental output format.David J.", "msg_date": "Wed, 6 Dec 2023 17:39:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 19:39, David G. Johnston wrote:\n> On Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> \n> \" The backend sends a CopyOutResponse message to the frontend, followed\n>     by zero or more CopyData messages (always one per row), followed by\n>     CopyDone\"\n> \n> probably \"always one per row\" would be changed to note that json array\n> format outputs two extra rows for the start/end bracket.\n> \n> \n> Fair, I was ascribing much more semantic meaning to this than it wants.\n> \n> I don't see any real requirement, given the lack of semantics, to \n> mention JSON at all.  It is one CopyData per row, regardless of the \n> contents.  
We don't delineate between the header and non-header data in \n> CSV.  It isn't a protocol concern.\n\ngood point\n\n> But I still cannot shake the belief that using a format code of 1 - \n> which really could be interpreted as meaning \"textual csv\" in practice - \n> for this JSON output is unwise and we should introduce a new integer \n> value for the new fundamental output format.\n\nNo, I am pretty sure you still have that wrong. The \"1\" means binary \nmode. As in\n8<----------------------\nFORMAT\n\n Selects the data format to be read or written: text, csv (Comma \nSeparated Values), or binary. The default is text.\n8<----------------------\n\nThat is completely separate from text and csv. It literally means to use \nthe binary output functions instead of the usual ones:\n\n8<----------------------\n if (cstate->opts.binary)\n getTypeBinaryOutputInfo(attr->atttypid,\n &out_func_oid,\n &isvarlena);\n else\n getTypeOutputInfo(attr->atttypid,\n &out_func_oid,\n &isvarlena);\n8<----------------------\n\nBoth \"text\" and \"csv\" mode use are non-binary output formats. I believe \nthe JSON output format is also non-binary.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 19:57:42 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 5:57 PM Joe Conway <[email protected]> wrote:\n\n> On 12/6/23 19:39, David G. Johnston wrote:\n> > On Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected]\n> > <mailto:[email protected]>> wrote:\n>\n> > But I still cannot shake the belief that using a format code of 1 -\n> > which really could be interpreted as meaning \"textual csv\" in practice -\n> > for this JSON output is unwise and we should introduce a new integer\n> > value for the new fundamental output format.\n>\n> No, I am pretty sure you still have that wrong. The \"1\" means binary\n> mode\n\n\nOk. I made the same typo twice, I did mean to write 0 instead of 1. But\nthe point that we should introduce a 2 still stands. The new code would\nmean: use text output functions but that there is no inherent tabular\nstructure in the underlying contents. Instead the copy format was JSON and\nthe output layout is dependent upon the json options in the copy command\nand that there really shouldn't be any attempt to turn the contents\ndirectly into a tabular data structure like you presently do with the CSV\ndata under format 0. Ignore the column count and column formats as they\nare fixed or non-existent.\n\nDavid J.\n\nOn Wed, Dec 6, 2023 at 5:57 PM Joe Conway <[email protected]> wrote:On 12/6/23 19:39, David G. Johnston wrote:\n> On Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n\n> But I still cannot shake the belief that using a format code of 1 - \n> which really could be interpreted as meaning \"textual csv\" in practice - \n> for this JSON output is unwise and we should introduce a new integer \n> value for the new fundamental output format.\n\nNo, I am pretty sure you still have that wrong. The \"1\" means binary \nmodeOk.  I made the same typo twice, I did mean to write 0 instead of 1.  But the point that we should introduce a 2 still stands.  The new code would mean: use text output functions but that there is no inherent tabular structure in the underlying contents.  
Instead the copy format was JSON and the output layout is dependent upon the json options in the copy command and that there really shouldn't be any attempt to turn the contents directly into a tabular data structure like you presently do with the CSV data under format 0.  Ignore the column count and column formats as they are fixed or non-existent.David J.", "msg_date": "Wed, 6 Dec 2023 18:09:22 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 18:09, Joe Conway wrote:\n> On 12/6/23 14:47, Joe Conway wrote:\n>> On 12/6/23 13:59, Daniel Verite wrote:\n>>> \tAndrew Dunstan wrote:\n>>> \n>>>> IMNSHO, we should produce either a single JSON \n>>>> document (the ARRAY case) or a series of JSON documents, one per row \n>>>> (the LINES case).\n>>> \n>>> \"COPY Operations\" in the doc says:\n>>> \n>>> \" The backend sends a CopyOutResponse message to the frontend, followed\n>>> by zero or more CopyData messages (always one per row), followed by\n>>> CopyDone\".\n>>> \n>>> In the ARRAY case, the first messages with the copyjsontest\n>>> regression test look like this (tshark output):\n>>> \n>>> PostgreSQL\n>>> Type: CopyOut response\n>>> Length: 13\n>>> Format: Text (0)\n>>> Columns: 3\n>>> \tFormat: Text (0)\n>>> PostgreSQL\n>>> Type: Copy data\n>>> Length: 6\n>>> Copy data: 5b0a\n>>> PostgreSQL\n>>> Type: Copy data\n>>> Length: 76\n>>> Copy data:\n>>> 207b226964223a312c226631223a226c696e652077697468205c2220696e2069743a2031…\n>>> \n>>> The first Copy data message with contents \"5b0a\" does not qualify\n>>> as a row of data with 3 columns as advertised in the CopyOut\n>>> message. Isn't that a problem?\n>> \n>> \n>> Is it a real problem, or just a bit of documentation change that I missed?\n>> \n>> Anything receiving this and looking for a json array should know how to\n>> assemble the data correctly despite the extra CopyData messages.\n> \n> Hmm, maybe the real problem here is that Columns do not equal \"3\" for\n> the json mode case -- that should really say \"1\" I think, because the\n> row is not represented as 3 columns but rather 1 json object.\n> \n> Does that sound correct?\n> \n> Assuming yes, there is still maybe an issue that there are two more\n> \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those\n> are less likely to cause some hazard?\n\n\nThe attached should fix the CopyOut response to say one column. I.e. it \nought to look something like:\n\nPostgreSQL\n Type: CopyOut response\n Length: 13\n Format: Text (0)\n Columns: 1\n Format: Text (0)\nPostgreSQL\n Type: Copy data\n Length: 6\n Copy data: 5b0a\nPostgreSQL\n Type: Copy data\n Length: 76\n Copy data: [...]\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Dec 2023 20:10:21 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 20:09, David G. Johnston wrote:\n> On Wed, Dec 6, 2023 at 5:57 PM Joe Conway <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 12/6/23 19:39, David G. 
Johnston wrote:\n> > On Wed, Dec 6, 2023 at 4:45 PM Joe Conway <[email protected]\n> <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>> wrote:\n> \n> > But I still cannot shake the belief that using a format code of 1 -\n> > which really could be interpreted as meaning \"textual csv\" in\n> practice -\n> > for this JSON output is unwise and we should introduce a new integer\n> > value for the new fundamental output format.\n> \n> No, I am pretty sure you still have that wrong. The \"1\" means binary\n> mode\n> \n> \n> Ok.  I made the same typo twice, I did mean to write 0 instead of 1.\n\nFair enough.\n\n> But the point that we should introduce a 2 still stands.  The new code \n> would mean: use text output functions but that there is no inherent \n> tabular structure in the underlying contents.  Instead the copy format \n> was JSON and the output layout is dependent upon the json options in the \n> copy command and that there really shouldn't be any attempt to turn the \n> contents directly into a tabular data structure like you presently do \n> with the CSV data under format 0.  Ignore the column count and column \n> formats as they are fixed or non-existent.\n\nI think that amounts to a protocol change, which we tend to avoid at all \ncosts.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 20:14:09 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023 at 6:14 PM Joe Conway <[email protected]> wrote:\n\n>\n> > But the point that we should introduce a 2 still stands. The new code\n> > would mean: use text output functions but that there is no inherent\n> > tabular structure in the underlying contents. Instead the copy format\n> > was JSON and the output layout is dependent upon the json options in the\n> > copy command and that there really shouldn't be any attempt to turn the\n> > contents directly into a tabular data structure like you presently do\n> > with the CSV data under format 0. Ignore the column count and column\n> > formats as they are fixed or non-existent.\n>\n> I think that amounts to a protocol change, which we tend to avoid at all\n> costs.\n>\n>\nI wasn't sure on that point but figured it might be the case. It is a\nvalue change, not structural, which seems like it is the kind of\nmodification any living system might allow and be expected to have. But I\nalso don't see any known problem with the current change of content\nsemantics without the format identification change. Most of the relevant\ncontext ends up out-of-band in the copy command itself.\n\nDavid J.\n\nOn Wed, Dec 6, 2023 at 6:14 PM Joe Conway <[email protected]> wrote:\n> But the point that we should introduce a 2 still stands.  The new code \n> would mean: use text output functions but that there is no inherent \n> tabular structure in the underlying contents.  Instead the copy format \n> was JSON and the output layout is dependent upon the json options in the \n> copy command and that there really shouldn't be any attempt to turn the \n> contents directly into a tabular data structure like you presently do \n> with the CSV data under format 0.  
Ignore the column count and column \n> formats as they are fixed or non-existent.\n\nI think that amounts to a protocol change, which we tend to avoid at all \ncosts.I wasn't sure on that point but figured it might be the case.  It is a value change, not structural, which seems like it is the kind of modification any living system might allow and be expected to have.  But I also don't see any known problem with the current change of content semantics without the format identification change.  Most of the relevant context ends up out-of-band in the copy command itself.David J.", "msg_date": "Wed, 6 Dec 2023 18:21:28 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 6, 2023, at 3:59 PM, Daniel Verite wrote:\n> The first Copy data message with contents \"5b0a\" does not qualify\n> as a row of data with 3 columns as advertised in the CopyOut\n> message. Isn't that a problem?\n> \n> At least the json non-ARRAY case (\"json lines\") doesn't have\n> this issue, since every CopyData message corresponds effectively\n> to a row in the table.\n\nMoreover, if your interface wants to process the COPY data stream while\nreceiving it, you cannot provide \"json array\" format because each row (plus all\nof the received ones) is not a valid JSON. Hence, a JSON parser cannot be\nexecuted until you receive the whole data set. (wal2json format 1 has this\ndisadvantage. Format 2 was born to provide a better alternative -- each row is\na valid JSON.) I'm not saying that \"json array\" is not useful but that for\nlarge data sets, it is less useful.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Dec 6, 2023, at 3:59 PM, Daniel Verite wrote:The first Copy data message with contents \"5b0a\" does not qualifyas a row of data with 3 columns as advertised in the CopyOutmessage. Isn't that a problem?At least the json non-ARRAY case (\"json lines\") doesn't havethis issue, since every CopyData message corresponds effectivelyto a row in the table.Moreover, if your interface wants to process the COPY data stream whilereceiving it, you cannot provide \"json array\" format because each row (plus allof the received ones) is not a valid JSON. Hence, a JSON parser cannot beexecuted until you receive the whole data set. (wal2json format 1 has thisdisadvantage. Format 2 was born to provide a better alternative -- each row isa valid JSON.) I'm not saying that \"json array\" is not useful but that forlarge data sets, it is less useful.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 06 Dec 2023 23:42:06 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Dec 06, 2023 at 03:20:46PM -0500, Tom Lane wrote:\n> If Nathan's perf results hold up elsewhere, it seems like some\n> micro-optimization around the text-pushing (appendStringInfoString)\n> might be more useful than caching. The 7% spent in cache lookups\n> could be worth going after later, but it's not the top of the list.\n\nHah, it turns out my benchmark of 110M integers really stresses the\nJSONTYPE_NUMERIC path in datum_to_json_internal(). That particular path\ncalls strlen() twice: once for IsValidJsonNumber(), and once in\nappendStringInfoString(). 
If I save the result from IsValidJsonNumber()\nand give it to appendBinaryStringInfo() instead, the COPY goes ~8% faster.\nIt's probably worth giving datum_to_json_internal() a closer look in a new\nthread.\n\ndiff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c\nindex 71ae53ff97..1951e93d9d 100644\n--- a/src/backend/utils/adt/json.c\n+++ b/src/backend/utils/adt/json.c\n@@ -180,6 +180,7 @@ datum_to_json_internal(Datum val, bool is_null, StringInfo result,\n {\n char *outputstr;\n text *jsontext;\n+ int len;\n \n check_stack_depth();\n \n@@ -223,8 +224,8 @@ datum_to_json_internal(Datum val, bool is_null, StringInfo result,\n * Don't call escape_json for a non-key if it's a valid JSON\n * number.\n */\n- if (!key_scalar && IsValidJsonNumber(outputstr, strlen(outputstr)))\n- appendStringInfoString(result, outputstr);\n+ if (!key_scalar && IsValidJsonNumber(outputstr, (len = strlen(outputstr))))\n+ appendBinaryStringInfo(result, outputstr, len);\n else\n escape_json(result, outputstr);\n pfree(outputstr);\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 20:56:22 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/6/23 21:56, Nathan Bossart wrote:\n> On Wed, Dec 06, 2023 at 03:20:46PM -0500, Tom Lane wrote:\n>> If Nathan's perf results hold up elsewhere, it seems like some\n>> micro-optimization around the text-pushing (appendStringInfoString)\n>> might be more useful than caching. The 7% spent in cache lookups\n>> could be worth going after later, but it's not the top of the list.\n> \n> Hah, it turns out my benchmark of 110M integers really stresses the\n> JSONTYPE_NUMERIC path in datum_to_json_internal(). That particular path\n> calls strlen() twice: once for IsValidJsonNumber(), and once in\n> appendStringInfoString(). If I save the result from IsValidJsonNumber()\n> and give it to appendBinaryStringInfo() instead, the COPY goes ~8% faster.\n> It's probably worth giving datum_to_json_internal() a closer look in a new\n> thread.\n\nYep, after looking through that code I was going to make the point that \nyour 11 integer test was over indexing on that one type. I am sure there \nare other micro-optimizations to be made here, but I also think that it \nis outside the scope of the COPY TO JSON patch.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 07:15:28 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 2023-12-06 We 17:56, David G. Johnston wrote:\n> On Wed, Dec 6, 2023 at 3:38 PM Joe Conway <[email protected]> wrote:\n>\n> So the questions are:\n> 1. Do those two formats work for the initial implementation?\n>\n>\n> Yes.  We provide a stream-oriented format and one atomic-import format.\n>\n> 2. Is the default correct or should it be switched\n>     e.g. 
rather than specifying FORCE_ARRAY to get an\n>     array, something like FORCE_NO_ARRAY to get JSON lines\n>     and the JSON array is default?\n>\n>\n> No default?\n>\n> Require explicit of a sub-format when the main format is JSON.\n>\n> JSON_OBJECT_ROWS\n> JSON_ARRAY_OF_OBJECTS\n>\n> For a future compact array-structured-composites sub-format:\n> JSON_ARRAY_OF_ARRAYS\n> JSON_ARRAY_ROWS\n>\n>\n\nNo default seems unlike the way we treat other COPY options. I'm not \nterribly fussed about which format to have as the default, but I think \nwe should have one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-12-06 We 17:56, David G.\n Johnston wrote:\n\n\n\n\n\nOn Wed, Dec\n 6, 2023 at 3:38 PM Joe Conway <[email protected]>\n wrote:\n\n\n\nSo the questions are:\n 1. Do those two formats work for the initial implementation?\n\n\n\n\nYes.  We\n provide a stream-oriented format and one atomic-import\n format.\n\n\n\n\n 2. Is the default correct or should it be switched\n     e.g. rather than specifying FORCE_ARRAY to get an\n     array, something like FORCE_NO_ARRAY to get JSON lines\n     and the JSON array is default?\n\n\n\n\nNo default?\n\n\nRequire\n explicit of a sub-format when the main format is JSON.\n\n\nJSON_OBJECT_ROWS\nJSON_ARRAY_OF_OBJECTS\n\n\nFor a future\n compact array-structured-composites sub-format:\nJSON_ARRAY_OF_ARRAYS\nJSON_ARRAY_ROWS\n\n\n\n\n\n\n\n\n\nNo default seems unlike the way we treat other COPY options. I'm\n not terribly fussed about which format to have as the default, but\n I think we should have one. \n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 7 Dec 2023 08:19:19 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\tJoe Conway wrote:\n\n> The attached should fix the CopyOut response to say one column. I.e. it \n> ought to look something like:\n\nSpending more time with the doc I came to the opinion that in this bit\nof the protocol, in CopyOutResponse (B)\n...\nInt16\nThe number of columns in the data to be copied (denoted N below).\n...\n\nthis number must be the number of columns in the source.\nThat is for COPY table(a,b,c)\tthe number is 3, independently\non whether the result is formatted in text, cvs, json or binary.\n\nI think that changing it for json can reasonably be interpreted\nas a protocol break and we should not do it.\n\nThe fact that this value does not help parsing the CopyData\nmessages that come next is not a new issue. 
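As a sketch of what keeping that count means for a libpq client (this is
only an illustration of the behavior argued for above, not something the
patch adds -- the table name "mytable" and the already-established
connection "conn" are assumptions for the example):

#include <stdio.h>
#include <libpq-fe.h>

/* Illustration only: mirrors the COPY table(a,b,c) example above, with a
 * non-reserved table name.  Under the interpretation argued for here,
 * PQnfields() keeps reporting the source column count (3) whether the
 * stream is text, csv or json; the CopyData payload still has to be
 * drained with PQgetCopyData() afterwards. */
static void
show_copy_columns(PGconn *conn)
{
	PGresult   *res;

	res = PQexec(conn, "COPY mytable(a,b,c) TO STDOUT (FORMAT json)");
	if (PQresultStatus(res) == PGRES_COPY_OUT)
		printf("columns to be copied: %d\n", PQnfields(res));
	PQclear(res);
}
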
A reader that\ndoesn't know the field separator and whether it's text or csv\ncannot parse these messages into fields anyway.\nBut just knowing how much columns there are in the original\ndata might be useful by itself and we don't want to break that.\n\n\nThe other question for me is, in the CopyData message, this\nbit:\n\" Messages sent from the backend will always correspond to single data rows\"\n\nISTM that considering that the \"[\" starting the json array is a\n\"data row\" is a stretch.\nThat might be interpreted as a protocol break, depending\non how strict the interpretation is.\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 07 Dec 2023 14:35:52 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Thursday, December 7, 2023, Daniel Verite <[email protected]>\nwrote:\n\n> Joe Conway wrote:\n>\n> > The attached should fix the CopyOut response to say one column. I.e. it\n> > ought to look something like:\n>\n> Spending more time with the doc I came to the opinion that in this bit\n> of the protocol, in CopyOutResponse (B)\n> ...\n> Int16\n> The number of columns in the data to be copied (denoted N below).\n> ...\n>\n> this number must be the number of columns in the source.\n> That is for COPY table(a,b,c) the number is 3, independently\n> on whether the result is formatted in text, cvs, json or binary.\n>\n> I think that changing it for json can reasonably be interpreted\n> as a protocol break and we should not do it.\n>\n> The fact that this value does not help parsing the CopyData\n> messages that come next is not a new issue. A reader that\n> doesn't know the field separator and whether it's text or csv\n> cannot parse these messages into fields anyway.\n> But just knowing how much columns there are in the original\n> data might be useful by itself and we don't want to break that.\n\n\nThis argument for leaving 3 as the column count makes sense to me. I agree\nthis content is not meant to facilitate interpreting the contents at a\nprotocol level.\n\n\n>\n>\n> The other question for me is, in the CopyData message, this\n> bit:\n> \" Messages sent from the backend will always correspond to single data\n> rows\"\n>\n> ISTM that considering that the \"[\" starting the json array is a\n> \"data row\" is a stretch.\n> That might be interpreted as a protocol break, depending\n> on how strict the interpretation is.\n>\n\nWe already effectively interpret this as “one content line per copydata\nmessage” in the csv text with header line case. I’d probably reword it to\nstate that explicitly and then we again don’t have to worry about the\nprotocol caring about any data semantics of the underlying content, only\nphysical semantics.\n\nDavid J.\n\nOn Thursday, December 7, 2023, Daniel Verite <[email protected]> wrote:        Joe Conway wrote:\n\n> The attached should fix the CopyOut response to say one column. I.e. 
it \n> ought to look something like:\n\nSpending more time with the doc I came to the opinion that in this bit\nof the protocol, in CopyOutResponse (B)\n...\nInt16\nThe number of columns in the data to be copied (denoted N below).\n...\n\nthis number must be the number of columns in the source.\nThat is for COPY table(a,b,c)   the number is 3, independently\non whether the result is formatted in text, cvs, json or binary.\n\nI think that changing it for json can reasonably be interpreted\nas a protocol break and we should not do it.\n\nThe fact that this value does not help parsing the CopyData\nmessages that come next is not a new issue. A reader that\ndoesn't know the field separator and whether it's text or csv\ncannot parse these messages into fields anyway.\nBut just knowing how much columns there are in the original\ndata might be useful by itself and we don't want to break that.This argument for leaving 3 as the column count makes sense to me.  I agree this content is not meant to facilitate interpreting the contents at a protocol level. \n\n\nThe other question for me is, in the CopyData message, this\nbit:\n\" Messages sent from the backend will always correspond to single data rows\"\n\nISTM that considering that the \"[\" starting the json array is a\n\"data row\" is a stretch.\nThat might be interpreted as a protocol break, depending\non how strict the interpretation is.\nWe already effectively interpret this as “one content line per copydata message” in the csv text with header line case.  I’d probably reword it to state that explicitly and then we again don’t have to worry about the protocol caring about any data semantics of the underlying content, only physical semantics.David J.", "msg_date": "Thu, 7 Dec 2023 06:47:10 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/7/23 08:35, Daniel Verite wrote:\n> \tJoe Conway wrote:\n> \n>> The attached should fix the CopyOut response to say one column. I.e. it \n>> ought to look something like:\n> \n> Spending more time with the doc I came to the opinion that in this bit\n> of the protocol, in CopyOutResponse (B)\n> ...\n> Int16\n> The number of columns in the data to be copied (denoted N below).\n> ...\n> \n> this number must be the number of columns in the source.\n> That is for COPY table(a,b,c)\tthe number is 3, independently\n> on whether the result is formatted in text, cvs, json or binary.\n> \n> I think that changing it for json can reasonably be interpreted\n> as a protocol break and we should not do it.\n> \n> The fact that this value does not help parsing the CopyData\n> messages that come next is not a new issue. 
A reader that\n> doesn't know the field separator and whether it's text or csv\n> cannot parse these messages into fields anyway.\n> But just knowing how much columns there are in the original\n> data might be useful by itself and we don't want to break that.\n\nOk, that sounds reasonable to me -- I will revert that change.\n\n> The other question for me is, in the CopyData message, this\n> bit:\n> \" Messages sent from the backend will always correspond to single data rows\"\n> \n> ISTM that considering that the \"[\" starting the json array is a\n> \"data row\" is a stretch.\n> That might be interpreted as a protocol break, depending\n> on how strict the interpretation is.\n\nIf we really think that is a problem I can see about changing it to this \nformat for json array:\n\n8<------------------\ncopy\n(\n with ss(f1, f2) as\n (\n select 1, g.i from generate_series(1, 3) g(i)\n )\n select ss from ss\n) to stdout (format json, force_array);\n[{\"ss\":{\"f1\":1,\"f2\":1}}\n,{\"ss\":{\"f1\":1,\"f2\":2}}\n,{\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\n\nIs this acceptable to everyone?\n\nOr maybe this is preferred?\n8<------------------\n[{\"ss\":{\"f1\":1,\"f2\":1}},\n {\"ss\":{\"f1\":1,\"f2\":2}},\n {\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\n\nOr as long as we are painting the shed, maybe this?\n8<------------------\n[{\"ss\":{\"f1\":1,\"f2\":1}},\n{\"ss\":{\"f1\":1,\"f2\":2}},\n{\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 08:52:39 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/7/23 08:52, Joe Conway wrote:\n> Or maybe this is preferred?\n> 8<------------------\n> [{\"ss\":{\"f1\":1,\"f2\":1}},\n> {\"ss\":{\"f1\":1,\"f2\":2}},\n> {\"ss\":{\"f1\":1,\"f2\":3}}]\n> 8<------------------\n\nI don't know why my mail client keeps adding extra spaces, but the \nintention here is a single space in front of row 2 and 3 in order to \nline the json objects up at column 2.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 08:56:41 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Thursday, December 7, 2023, Joe Conway <[email protected]> wrote:\n\n> On 12/7/23 08:35, Daniel Verite wrote:\n>\n>> Joe Conway wrote:\n>>\n>> The attached should fix the CopyOut response to say one column. I.e. it\n>>> ought to look something like:\n>>>\n>>\n>> Spending more time with the doc I came to the opinion that in this bit\n>> of the protocol, in CopyOutResponse (B)\n>> ...\n>> Int16\n>> The number of columns in the data to be copied (denoted N below).\n>> ...\n>>\n>> this number must be the number of columns in the source.\n>> That is for COPY table(a,b,c) the number is 3, independently\n>> on whether the result is formatted in text, cvs, json or binary.\n>>\n>> I think that changing it for json can reasonably be interpreted\n>> as a protocol break and we should not do it.\n>>\n>> The fact that this value does not help parsing the CopyData\n>> messages that come next is not a new issue. 
A reader that\n>> doesn't know the field separator and whether it's text or csv\n>> cannot parse these messages into fields anyway.\n>> But just knowing how much columns there are in the original\n>> data might be useful by itself and we don't want to break that.\n>>\n>\n> Ok, that sounds reasonable to me -- I will revert that change.\n>\n> The other question for me is, in the CopyData message, this\n>> bit:\n>> \" Messages sent from the backend will always correspond to single data\n>> rows\"\n>>\n>> ISTM that considering that the \"[\" starting the json array is a\n>> \"data row\" is a stretch.\n>> That might be interpreted as a protocol break, depending\n>> on how strict the interpretation is.\n>>\n>\n> If we really think that is a problem I can see about changing it to this\n> format for json array:\n>\n> 8<------------------\n> copy\n> (\n> with ss(f1, f2) as\n> (\n> select 1, g.i from generate_series(1, 3) g(i)\n> )\n> select ss from ss\n> ) to stdout (format json, force_array);\n> [{\"ss\":{\"f1\":1,\"f2\":1}}\n> ,{\"ss\":{\"f1\":1,\"f2\":2}}\n> ,{\"ss\":{\"f1\":1,\"f2\":3}}]\n> 8<------------------\n>\n> Is this acceptable to everyone?\n>\n> Or maybe this is preferred?\n> 8<------------------\n> [{\"ss\":{\"f1\":1,\"f2\":1}},\n> {\"ss\":{\"f1\":1,\"f2\":2}},\n> {\"ss\":{\"f1\":1,\"f2\":3}}]\n> 8<------------------\n>\n> Or as long as we are painting the shed, maybe this?\n> 8<------------------\n> [{\"ss\":{\"f1\":1,\"f2\":1}},\n> {\"ss\":{\"f1\":1,\"f2\":2}},\n> {\"ss\":{\"f1\":1,\"f2\":3}}]\n> 8<------------------\n>\n\nThose are all the same breakage though - if truly interpreted as data rows\nthe protocol is basically written such that the array format is not\nsupportable and only the lines format can be used. Hence my “format 0\ndoesn’t work” comment for array output and we should explicitly add format\n2 where we explicitly decouple lines of output from rows of data. That\nsaid, it would seem in practice format 0 already decouples them and so the\ncurrent choice of the brackets on their own lines is acceptable.\n\nI’d prefer to keep them on their own line.\n\nI also don’t know why you introduced another level of object nesting here.\nThat seems quite undesirable.\n\nDavid J.\n\nOn Thursday, December 7, 2023, Joe Conway <[email protected]> wrote:On 12/7/23 08:35, Daniel Verite wrote:\n\n        Joe Conway wrote:\n\n\nThe attached should fix the CopyOut response to say one column. I.e. it ought to look something like:\n\n\nSpending more time with the doc I came to the opinion that in this bit\nof the protocol, in CopyOutResponse (B)\n...\nInt16\nThe number of columns in the data to be copied (denoted N below).\n...\n\nthis number must be the number of columns in the source.\nThat is for COPY table(a,b,c)   the number is 3, independently\non whether the result is formatted in text, cvs, json or binary.\n\nI think that changing it for json can reasonably be interpreted\nas a protocol break and we should not do it.\n\nThe fact that this value does not help parsing the CopyData\nmessages that come next is not a new issue. 
A reader that\ndoesn't know the field separator and whether it's text or csv\ncannot parse these messages into fields anyway.\nBut just knowing how much columns there are in the original\ndata might be useful by itself and we don't want to break that.\n\n\nOk, that sounds reasonable to me -- I will revert that change.\n\n\nThe other question for me is, in the CopyData message, this\nbit:\n\" Messages sent from the backend will always correspond to single data rows\"\n\nISTM that considering that the \"[\" starting the json array is a\n\"data row\" is a stretch.\nThat might be interpreted as a protocol break, depending\non how strict the interpretation is.\n\n\nIf we really think that is a problem I can see about changing it to this format for json array:\n\n8<------------------\ncopy\n(\n  with ss(f1, f2) as\n  (\n    select 1, g.i from generate_series(1, 3) g(i)\n  )\n  select ss from ss\n) to stdout (format json, force_array);\n[{\"ss\":{\"f1\":1,\"f2\":1}}\n,{\"ss\":{\"f1\":1,\"f2\":2}}\n,{\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\n\nIs this acceptable to everyone?\n\nOr maybe this is preferred?\n8<------------------\n[{\"ss\":{\"f1\":1,\"f2\":1}},\n {\"ss\":{\"f1\":1,\"f2\":2}},\n {\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\n\nOr as long as we are painting the shed, maybe this?\n8<------------------\n[{\"ss\":{\"f1\":1,\"f2\":1}},\n{\"ss\":{\"f1\":1,\"f2\":2}},\n{\"ss\":{\"f1\":1,\"f2\":3}}]\n8<------------------\nThose are all the same breakage though - if truly interpreted as data rows the protocol is basically written such that the array format is not supportable and only the lines format can be used.  Hence my “format 0 doesn’t work” comment for array output and we should explicitly add format 2 where we explicitly decouple lines of output from rows of data.  That said, it would seem in practice format 0 already decouples them and so the current choice of the brackets on their own lines is acceptable.I’d prefer to keep them on their own line.I also don’t know why you introduced another level of object nesting here.  That seems quite undesirable.David J.", "msg_date": "Thu, 7 Dec 2023 07:11:08 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/7/23 09:11, David G. Johnston wrote:\n> Those are all the same breakage though - if truly interpreted as data \n> rows the protocol is basically written such that the array format is not \n> supportable and only the lines format can be used.  Hence my “format 0 \n> doesn’t work” comment for array output and we should explicitly add \n> format 2 where we explicitly decouple lines of output from rows of \n> data.  That said, it would seem in practice format 0 already decouples \n> them and so the current choice of the brackets on their own lines is \n> acceptable.\n> \n> I’d prefer to keep them on their own line.\n\nWFM ¯\\_(ツ)_/¯\n\nI am merely responding with options to the many people opining on the \nthread.\n\n> I also don’t know why you introduced another level of object nesting \n> here.  That seems quite undesirable.\n\nI didn't add anything. 
It is an artifact of the particular query I wrote \nin the copy to statement (I did \"select ss from ss\" instead of \"select * \nfrom ss\"), mea culpa.\n\nThis is what the latest patch, as written today, outputs:\n8<----------------------\ncopy\n(select 1, g.i from generate_series(1, 3) g(i))\nto stdout (format json, force_array);\n[\n {\"?column?\":1,\"i\":1}\n,{\"?column?\":1,\"i\":2}\n,{\"?column?\":1,\"i\":3}\n]\n8<----------------------\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 10:07:59 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Thu, 7 Dec 2023 at 08:47, David G. Johnston <[email protected]>\nwrote:\n\n> On Thursday, December 7, 2023, Daniel Verite <[email protected]>\n> wrote:\n>\n>> Joe Conway wrote:\n>>\n>> > The attached should fix the CopyOut response to say one column. I.e. it\n>> > ought to look something like:\n>>\n>> Spending more time with the doc I came to the opinion that in this bit\n>> of the protocol, in CopyOutResponse (B)\n>> ...\n>> Int16\n>> The number of columns in the data to be copied (denoted N below).\n>> ...\n>>\n>> this number must be the number of columns in the source.\n>> That is for COPY table(a,b,c) the number is 3, independently\n>> on whether the result is formatted in text, cvs, json or binary.\n>>\n>> I think that changing it for json can reasonably be interpreted\n>> as a protocol break and we should not do it.\n>>\n>> The fact that this value does not help parsing the CopyData\n>> messages that come next is not a new issue. A reader that\n>> doesn't know the field separator and whether it's text or csv\n>> cannot parse these messages into fields anyway.\n>> But just knowing how much columns there are in the original\n>> data might be useful by itself and we don't want to break that.\n>\n>\n> This argument for leaving 3 as the column count makes sense to me. I\n> agree this content is not meant to facilitate interpreting the contents at\n> a protocol level.\n>\n\nI'd disagree. From my POV if the data comes back as a JSON Array this is\none object and this should be reflected in the column count.\n\n>\n>\n>>\n>>\n>> The other question for me is, in the CopyData message, this\n>> bit:\n>> \" Messages sent from the backend will always correspond to single data\n>> rows\"\n>>\n>> ISTM that considering that the \"[\" starting the json array is a\n>> \"data row\" is a stretch.\n>> That might be interpreted as a protocol break, depending\n>> on how strict the interpretation is.\n>>\n>\nWell technically it is a single row if you send an array.\n\nRegardless, I expect Euler's comment above that JSON lines format is going\nto be the preferred format as the client doesn't have to wait for the\nentire object before starting to parse.\n\nDave\n\n>\n\nOn Thu, 7 Dec 2023 at 08:47, David G. Johnston <[email protected]> wrote:On Thursday, December 7, 2023, Daniel Verite <[email protected]> wrote:        Joe Conway wrote:\n\n> The attached should fix the CopyOut response to say one column. I.e. 
it \n> ought to look something like:\n\nSpending more time with the doc I came to the opinion that in this bit\nof the protocol, in CopyOutResponse (B)\n...\nInt16\nThe number of columns in the data to be copied (denoted N below).\n...\n\nthis number must be the number of columns in the source.\nThat is for COPY table(a,b,c)   the number is 3, independently\non whether the result is formatted in text, cvs, json or binary.\n\nI think that changing it for json can reasonably be interpreted\nas a protocol break and we should not do it.\n\nThe fact that this value does not help parsing the CopyData\nmessages that come next is not a new issue. A reader that\ndoesn't know the field separator and whether it's text or csv\ncannot parse these messages into fields anyway.\nBut just knowing how much columns there are in the original\ndata might be useful by itself and we don't want to break that.This argument for leaving 3 as the column count makes sense to me.  I agree this content is not meant to facilitate interpreting the contents at a protocol level.I'd disagree. From my POV if the data comes back as a JSON Array this is one object and this should be reflected in the column count.  \n\n\nThe other question for me is, in the CopyData message, this\nbit:\n\" Messages sent from the backend will always correspond to single data rows\"\n\nISTM that considering that the \"[\" starting the json array is a\n\"data row\" is a stretch.\nThat might be interpreted as a protocol break, depending\non how strict the interpretation is.Well technically it is a single row if you send an array.Regardless, I expect Euler's comment above that JSON lines format is going to be the preferred format as the client doesn't have to wait for the entire object before starting to parse.Dave", "msg_date": "Fri, 8 Dec 2023 09:01:07 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\tJoe Conway wrote:\n\n> copyto_json.007.diff\n\nWhen the source has json fields with non-significant line feeds, the COPY\noutput has these line feeds too, which makes the output incompatible\nwith rule #2 at https://jsonlines.org (\"2. Each Line is a Valid JSON\nValue\").\n\ncreate table j(f json);\n\ninsert into j values('{\"a\":1,\n\"b\":2\n}');\n\ncopy j to stdout (format json);\n\nResult:\n{\"f\":{\"a\":1,\n\"b\":2\n}}\n\nIs that expected? copy.sgml in 007 doesn't describe the output\nin terms of lines so it's hard to tell from the doc.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 08 Dec 2023 20:45:23 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "\tDave Cramer wrote:\n\n> > This argument for leaving 3 as the column count makes sense to me. I\n> > agree this content is not meant to facilitate interpreting the contents at\n> > a protocol level.\n> >\n> \n> I'd disagree. From my POV if the data comes back as a JSON Array this is\n> one object and this should be reflected in the column count.\n\nThe doc says this:\n\"Int16\n The number of columns in the data to be copied (denoted N below).\"\n\nand this formulation is repeated in PQnfields() for libpq:\n\n\"PQnfields\n Returns the number of columns (fields) to be copied.\"\n\nHow to interpret that sentence? 
\n\"to be copied\" from what, into what, and by what way?\n\nA plausible interpretation is \"to be copied from the source data\ninto the COPY stream, by the backend\".\tSo the number of columns\nto be copied likely refers to the columns of the dataset, not the\n\"in-transit form\" that is text or csv or json.\n\nThe interpetation you're proposing also makes sense, that it's just\none json column per row, or even a single-row single-column for the\nentire dataset in the force_array case, but then the question is why\nisn't that number of columns always 1 for the original \"text\" format,\nsince each row is represented in the stream as a single long piece of\ntext?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 08 Dec 2023 21:35:39 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 12/8/23 14:45, Daniel Verite wrote:\n> \tJoe Conway wrote:\n> \n>> copyto_json.007.diff\n> \n> When the source has json fields with non-significant line feeds, the COPY\n> output has these line feeds too, which makes the output incompatible\n> with rule #2 at https://jsonlines.org (\"2. Each Line is a Valid JSON\n> Value\").\n> \n> create table j(f json);\n> \n> insert into j values('{\"a\":1,\n> \"b\":2\n> }');\n> \n> copy j to stdout (format json);\n> \n> Result:\n> {\"f\":{\"a\":1,\n> \"b\":2\n> }}\n> \n> Is that expected? copy.sgml in 007 doesn't describe the output\n> in terms of lines so it's hard to tell from the doc.\n\nThe patch as-is just does the equivalent of row_to_json():\n8<----------------------------\nselect row_to_json(j) from j;\n row_to_json\n--------------\n {\"f\":{\"a\":1,+\n \"b\":2 +\n }}\n(1 row)\n8<----------------------------\n\nSo yeah, that is how it works today. I will take a look at what it would \ntake to fix it.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 8 Dec 2023 16:26:40 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": ">\n\nOn Sat, Dec 2, 2023 at 4:11 PM Tom Lane <[email protected]> wrote:\n>\n> Joe Conway <[email protected]> writes:\n> >> I noticed that, with the PoC patch, \"json\" is the only format that must be\n> >> quoted. 
Without quotes, I see a syntax error.\n\n\nIn longer term we should move any specific COPY flag names and values\nout of grammar and their checking into the parts that actually\nimplement whatever the flag is influencing\n\nSimilar to what we do with OPTIONS in all levels of FDW definitions\n(WRAPPER itself, SERVER, USER MAPPING, FOREIGN TABLE)\n\n[*] https://www.postgresql.org/docs/current/sql-createforeigndatawrapper.html\n\n> >> I'm assuming there's a\n> >> conflict with another json-related rule somewhere in gram.y, but I haven't\n> >> tracked down exactly which one is causing it.\n>\n> While I've not looked too closely, I suspect this might be due to the\n> FORMAT_LA hack in base_yylex:\n>\n> /* Replace FORMAT by FORMAT_LA if it's followed by JSON */\n> switch (next_token)\n> {\n> case JSON:\n> cur_token = FORMAT_LA;\n> break;\n> }\n\nMy hope is that turning the WITH into a fully independent part with no\ngrammar-defined keys or values would also solve the issue of quoting\n\"json\".\n\nFor backwards compatibility we may even go the route of keeping the\nWITH as is but add the OPTIONS which can take any values at grammar\nlevel.\n\nI shared my \"Pluggable Copy \" talk slides from Berlin '22 in another thread\n\n--\nHannu\n\n\n", "msg_date": "Sat, 9 Dec 2023 12:56:23 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Thu, 7 Dec 2023 at 01:10, Joe Conway <[email protected]> wrote:\n>\n> The attached should fix the CopyOut response to say one column.\n>\n\nPlaying around with this, I found a couple of cases that generate an error:\n\nCOPY (SELECT 1 UNION ALL SELECT 2) TO stdout WITH (format json);\n\nCOPY (VALUES (1), (2)) TO stdout WITH (format json);\n\nboth of those generate the following:\n\nERROR: record type has not been registered\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 8 Jan 2024 19:36:34 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 1/8/24 14:36, Dean Rasheed wrote:\n> On Thu, 7 Dec 2023 at 01:10, Joe Conway <[email protected]> wrote:\n>>\n>> The attached should fix the CopyOut response to say one column.\n>>\n> \n> Playing around with this, I found a couple of cases that generate an error:\n> \n> COPY (SELECT 1 UNION ALL SELECT 2) TO stdout WITH (format json);\n> \n> COPY (VALUES (1), (2)) TO stdout WITH (format json);\n> \n> both of those generate the following:\n> \n> ERROR: record type has not been registered\n\n\nThanks -- will have a look\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 8 Jan 2024 15:40:23 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Tue, Jan 9, 2024 at 4:40 AM Joe Conway <[email protected]> wrote:\n>\n> On 1/8/24 14:36, Dean Rasheed wrote:\n> > On Thu, 7 Dec 2023 at 01:10, Joe Conway <[email protected]> wrote:\n> >>\n> >> The attached should fix the CopyOut response to say one column.\n> >>\n> >\n> > Playing around with this, I found a couple of cases that generate an error:\n> >\n> > COPY (SELECT 1 UNION ALL SELECT 2) TO stdout WITH (format json);\n> >\n> > COPY (VALUES (1), (2)) TO stdout WITH (format json);\n> >\n> > both of those generate the following:\n> >\n> > ERROR: record type has not been registered\n>\n>\n> 
Thanks -- will have a look\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n>\n\nIn the function CopyOneRowTo, I try to call the function BlessTupleDesc again.\n\n+BlessTupleDesc(slot->tts_tupleDescriptor);\nrowdata = ExecFetchSlotHeapTupleDatum(slot);\n\nPlease check the attachment. (one test.sql file, one patch, one bless twice).\n\nNow the error cases are gone, less cases return error.\nbut the new result is not the expected.\n\n`COPY (SELECT g from generate_series(1,1) g) TO stdout WITH (format json);`\nreturns\n{\"\":1}\nThe expected result would be `{\"g\":1}`.\n\nI think the reason is maybe related to the function copy_dest_startup.", "msg_date": "Tue, 16 Jan 2024 11:46:59 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Tue, Jan 16, 2024 at 11:46 AM jian he <[email protected]> wrote:\n>\n>\n> I think the reason is maybe related to the function copy_dest_startup.\nI was wrong about this sentence.\n\nin the function CopyOneRowTo `if (!cstate->opts.json_mode)` else branch\nchange to the following:\n\nelse\n{\nDatum rowdata;\nStringInfo result;\nif (slot->tts_tupleDescriptor->natts == 1)\n{\n/* Flat-copy the attribute array */\nmemcpy(TupleDescAttr(slot->tts_tupleDescriptor, 0),\nTupleDescAttr(cstate->queryDesc->tupDesc, 0),\n1 * sizeof(FormData_pg_attribute));\n}\nBlessTupleDesc(slot->tts_tupleDescriptor);\nrowdata = ExecFetchSlotHeapTupleDatum(slot);\nresult = makeStringInfo();\ncomposite_to_json(rowdata, result, false);\nif (json_row_delim_needed &&\ncstate->opts.force_array)\n{\nCopySendChar(cstate, ',');\n}\nelse if (cstate->opts.force_array)\n{\n/* first row needs no delimiter */\nCopySendChar(cstate, ' ');\njson_row_delim_needed = true;\n}\nCopySendData(cstate, result->data, result->len);\n}\n\nall the cases work, more like a hack.\nbecause I cannot fully explain it to you why it works.\n-------------------------------------------------------------------------------\ndemo\n\n\ndrop function if exists execute_into_test cascade;\nNOTICE: function execute_into_test() does not exist, skipping\nDROP FUNCTION\ndrop type if exists execute_into_test cascade;\nNOTICE: type \"execute_into_test\" does not exist, skipping\nDROP TYPE\ncreate type eitype as (i integer, y integer);\nCREATE TYPE\ncreate or replace function execute_into_test() returns eitype as $$\ndeclare\n _v eitype;\nbegin\n execute 'select 1,2' into _v;\n return _v;\nend; $$ language plpgsql;\nCREATE FUNCTION\n\nCOPY (SELECT 1 from generate_series(1,1) g) TO stdout WITH (format json);\n{\"?column?\":1}\nCOPY (SELECT g from generate_series(1,1) g) TO stdout WITH (format json);\n{\"g\":1}\nCOPY (SELECT g,1 from generate_series(1,1) g) TO stdout WITH (format json);\n{\"g\":1,\"?column?\":1}\nCOPY (select * from execute_into_test()) TO stdout WITH (format json);\n{\"i\":1,\"y\":2}\nCOPY (select * from execute_into_test() sub) TO stdout WITH (format json);\n{\"i\":1,\"y\":2}\nCOPY (select sub from execute_into_test() sub) TO stdout WITH (format json);\n{\"sub\":{\"i\":1,\"y\":2}}\nCOPY (select sub.i from execute_into_test() sub) TO stdout WITH (format json);\n{\"i\":1}\nCOPY (select sub.y from execute_into_test() sub) TO stdout WITH (format json);\n{\"y\":2}\nCOPY (VALUES (1), (2)) TO stdout WITH (format json);\n{\"column1\":1}\n{\"column1\":2}\n COPY (SELECT 1 UNION ALL SELECT 2) TO stdout WITH (format 
json);\n{\"?column?\":1}\n{\"?column?\":2}\n\n\n", "msg_date": "Tue, 16 Jan 2024 15:45:29 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Thu, Dec 7, 2023 at 10:10 AM Joe Conway <[email protected]> wrote:\n>\n> On 12/6/23 18:09, Joe Conway wrote:\n> > On 12/6/23 14:47, Joe Conway wrote:\n> >> On 12/6/23 13:59, Daniel Verite wrote:\n> >>> Andrew Dunstan wrote:\n> >>>\n> >>>> IMNSHO, we should produce either a single JSON\n> >>>> document (the ARRAY case) or a series of JSON documents, one per row\n> >>>> (the LINES case).\n> >>>\n> >>> \"COPY Operations\" in the doc says:\n> >>>\n> >>> \" The backend sends a CopyOutResponse message to the frontend, followed\n> >>> by zero or more CopyData messages (always one per row), followed by\n> >>> CopyDone\".\n> >>>\n> >>> In the ARRAY case, the first messages with the copyjsontest\n> >>> regression test look like this (tshark output):\n> >>>\n> >>> PostgreSQL\n> >>> Type: CopyOut response\n> >>> Length: 13\n> >>> Format: Text (0)\n> >>> Columns: 3\n> >>> Format: Text (0)\n> >>> PostgreSQL\n> >>> Type: Copy data\n> >>> Length: 6\n> >>> Copy data: 5b0a\n> >>> PostgreSQL\n> >>> Type: Copy data\n> >>> Length: 76\n> >>> Copy data:\n> >>> 207b226964223a312c226631223a226c696e652077697468205c2220696e2069743a2031…\n> >>>\n> >>> The first Copy data message with contents \"5b0a\" does not qualify\n> >>> as a row of data with 3 columns as advertised in the CopyOut\n> >>> message. Isn't that a problem?\n> >>\n> >>\n> >> Is it a real problem, or just a bit of documentation change that I missed?\n> >>\n> >> Anything receiving this and looking for a json array should know how to\n> >> assemble the data correctly despite the extra CopyData messages.\n> >\n> > Hmm, maybe the real problem here is that Columns do not equal \"3\" for\n> > the json mode case -- that should really say \"1\" I think, because the\n> > row is not represented as 3 columns but rather 1 json object.\n> >\n> > Does that sound correct?\n> >\n> > Assuming yes, there is still maybe an issue that there are two more\n> > \"rows\" that actual output rows (the \"[\" and the \"]\"), but maybe those\n> > are less likely to cause some hazard?\n>\n>\n> The attached should fix the CopyOut response to say one column. I.e. it\n> ought to look something like:\n>\n> PostgreSQL\n> Type: CopyOut response\n> Length: 13\n> Format: Text (0)\n> Columns: 1\n> Format: Text (0)\n> PostgreSQL\n> Type: Copy data\n> Length: 6\n> Copy data: 5b0a\n> PostgreSQL\n> Type: Copy data\n> Length: 76\n> Copy data: [...]\n>\n\nIf I'm not missing, copyto_json.007.diff is the latest patch but it\nneeds to be rebased to the current HEAD. 
Here are random comments:\n\n---\n if (opts_out->json_mode)\n+ {\n+ if (is_from)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot use JSON mode in COPY FROM\")));\n+ }\n+ else if (opts_out->force_array)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"COPY FORCE_ARRAY requires JSON mode\")));\n\nI think that flatting these two condition make the code more readable:\n\nif (opts_out->json_mode && is_from)\nereport(ERROR, ...);\n\nif (!opts_out->json_mode && opts_out->force_array)\nereport(ERROR, ...);\n\nAlso these checks can be moved close to other checks at the end of\nProcessCopyOptions().\n\n---\n@@ -3395,6 +3395,10 @@ copy_opt_item:\n {\n $$ = makeDefElem(\"format\", (Node *) makeString(\"csv\"), @1);\n }\n+ | JSON\n+ {\n+ $$ = makeDefElem(\"format\", (Node *) makeString(\"json\"), @1);\n+ }\n | HEADER_P\n {\n $$ = makeDefElem(\"header\", (Node *) makeBoolean(true), @1);\n@@ -3427,6 +3431,10 @@ copy_opt_item:\n {\n $$ = makeDefElem(\"encoding\", (Node *) makeString($2), @1);\n }\n+ | FORCE ARRAY\n+ {\n+ $$ = makeDefElem(\"force_array\", (Node *)\nmakeBoolean(true), @1);\n+ }\n ;\n\nI believe we don't need to support new options in old-style syntax.\n\n---\n@@ -3469,6 +3477,10 @@ copy_generic_opt_elem:\n {\n $$ = makeDefElem($1, $2, @1);\n }\n+ | FORMAT_LA copy_generic_opt_arg\n+ {\n+ $$ = makeDefElem(\"format\", $2, @1);\n+ }\n ;\n\nI think it's not necessary. \"format\" option is already handled in\ncopy_generic_opt_elem.\n\n---\n+/* need delimiter to start next json array element */\n+static bool json_row_delim_needed = false;\n\nI think it's cleaner to include json_row_delim_needed into CopyToStateData.\n\n---\nSplitting the patch into two patches: add json format and add\nforce_array option would make reviews easy.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 17:09:47 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Fri, Jan 19, 2024 at 4:10 PM Masahiko Sawada <[email protected]> wrote:\n>\n> If I'm not missing, copyto_json.007.diff is the latest patch but it\n> needs to be rebased to the current HEAD. Here are random comments:\n>\n\nplease check the latest version.\n\n> if (opts_out->json_mode)\n> + {\n> + if (is_from)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot use JSON mode in COPY FROM\")));\n> + }\n> + else if (opts_out->force_array)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"COPY FORCE_ARRAY requires JSON mode\")));\n>\n> I think that flatting these two condition make the code more readable:\n\nI make it two condition check\n\n> if (opts_out->json_mode && is_from)\n> ereport(ERROR, ...);\n>\n> if (!opts_out->json_mode && opts_out->force_array)\n> ereport(ERROR, ...);\n>\n> Also these checks can be moved close to other checks at the end of\n> ProcessCopyOptions().\n>\nYes. 
I did it, please check it.\n\n> @@ -3395,6 +3395,10 @@ copy_opt_item:\n> {\n> $$ = makeDefElem(\"format\", (Node *) makeString(\"csv\"), @1);\n> }\n> + | JSON\n> + {\n> + $$ = makeDefElem(\"format\", (Node *) makeString(\"json\"), @1);\n> + }\n> | HEADER_P\n> {\n> $$ = makeDefElem(\"header\", (Node *) makeBoolean(true), @1);\n> @@ -3427,6 +3431,10 @@ copy_opt_item:\n> {\n> $$ = makeDefElem(\"encoding\", (Node *) makeString($2), @1);\n> }\n> + | FORCE ARRAY\n> + {\n> + $$ = makeDefElem(\"force_array\", (Node *)\n> makeBoolean(true), @1);\n> + }\n> ;\n>\n> I believe we don't need to support new options in old-style syntax.\n>\n> ---\n> @@ -3469,6 +3477,10 @@ copy_generic_opt_elem:\n> {\n> $$ = makeDefElem($1, $2, @1);\n> }\n> + | FORMAT_LA copy_generic_opt_arg\n> + {\n> + $$ = makeDefElem(\"format\", $2, @1);\n> + }\n> ;\n>\n> I think it's not necessary. \"format\" option is already handled in\n> copy_generic_opt_elem.\n>\n\ntest it, I found out this part is necessary.\nbecause a query with WITH like `copy (select 1) to stdout with\n(format json, force_array false); ` will fail.\n\n> ---\n> +/* need delimiter to start next json array element */\n> +static bool json_row_delim_needed = false;\n>\n> I think it's cleaner to include json_row_delim_needed into CopyToStateData.\nyes. I agree. So I did it.\n\n> ---\n> Splitting the patch into two patches: add json format and add\n> force_array option would make reviews easy.\n>\ndone. one patch for json format, another one for force_array option.\n\nI also made the following cases fail.\ncopy copytest to stdout (format csv, force_array false);\nERROR: specify COPY FORCE_ARRAY is only allowed in JSON mode.\n\nIf copy to table then call table_scan_getnextslot no need to worry\nabout the Tupdesc.\nhowever if we copy a query output as format json, we may need to consider it.\n\ncstate->queryDesc->tupDesc is the output of Tupdesc, we can rely on this.\nfor copy a query result to json, I memcpy( cstate->queryDesc->tupDesc)\nto the the slot's slot->tts_tupleDescriptor\nso composite_to_json can use cstate->queryDesc->tupDesc to do the work.\nI guess this will make it more bullet-proof.", "msg_date": "Tue, 23 Jan 2024 13:31:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hi hackers,\n\nKou-san(CCed) has been working on *Make COPY format extendable[1]*, so\nI think making *copy to json* based on that work might be the right direction.\n\nI write an extension for that purpose, and here is the patch set together\nwith Kou-san's *extendable copy format* implementation:\n\n0001-0009 is the implementation of extendable copy format\n00010 is the pg_copy_json extension\n\nI also created a PR[2] if anybody likes the github review style.\n\nThe *extendable copy format* feature is still being developed, I post this\nemail in case the patch set in this thread is committed without knowing\nthe *extendable copy format* feature.\n\nI'd like to hear your opinions.\n\n[1]: https://www.postgresql.org/message-id/20240124.144936.67229716500876806.kou%40clear-code.com\n[2]: https://github.com/zhjwpku/postgres/pull/2/files\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sat, 27 Jan 2024 13:55:23 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, 27 Jan 2024 at 11:25, Junwang Zhao <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> Kou-san(CCed) has been working on 
*Make COPY format extendable[1]*, so\n> I think making *copy to json* based on that work might be the right direction.\n>\n> I write an extension for that purpose, and here is the patch set together\n> with Kou-san's *extendable copy format* implementation:\n>\n> 0001-0009 is the implementation of extendable copy format\n> 00010 is the pg_copy_json extension\n>\n> I also created a PR[2] if anybody likes the github review style.\n>\n> The *extendable copy format* feature is still being developed, I post this\n> email in case the patch set in this thread is committed without knowing\n> the *extendable copy format* feature.\n>\n> I'd like to hear your opinions.\n\nCFBot shows that one of the test is failing as in [1]:\n[05:46:41.678] /bin/sh: 1: cannot open\n/tmp/cirrus-ci-build/contrib/pg_copy_json/sql/test_copy_format.sql: No\nsuch file\n[05:46:41.678] diff:\n/tmp/cirrus-ci-build/contrib/pg_copy_json/expected/test_copy_format.out:\nNo such file or directory\n[05:46:41.678] diff:\n/tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out:\nNo such file or directory\n[05:46:41.678] # diff command failed with status 512: diff\n\"/tmp/cirrus-ci-build/contrib/pg_copy_json/expected/test_copy_format.out\"\n\"/tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out\"\n> \"/tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out.diff\"\n[05:46:41.678] Bail out!make[2]: *** [../../src/makefiles/pgxs.mk:454:\ncheck] Error 2\n[05:46:41.679] make[1]: *** [Makefile:96: check-pg_copy_json-recurse] Error 2\n[05:46:41.679] make: *** [GNUmakefile:71: check-world-contrib-recurse] Error 2\n\nPlease post an updated version for the same.\n\n[1] - https://cirrus-ci.com/task/5322439115145216\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 31 Jan 2024 15:19:51 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hi Vignesh,\n\nOn Wed, Jan 31, 2024 at 5:50 PM vignesh C <[email protected]> wrote:\n>\n> On Sat, 27 Jan 2024 at 11:25, Junwang Zhao <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > Kou-san(CCed) has been working on *Make COPY format extendable[1]*, so\n> > I think making *copy to json* based on that work might be the right direction.\n> >\n> > I write an extension for that purpose, and here is the patch set together\n> > with Kou-san's *extendable copy format* implementation:\n> >\n> > 0001-0009 is the implementation of extendable copy format\n> > 00010 is the pg_copy_json extension\n> >\n> > I also created a PR[2] if anybody likes the github review style.\n> >\n> > The *extendable copy format* feature is still being developed, I post this\n> > email in case the patch set in this thread is committed without knowing\n> > the *extendable copy format* feature.\n> >\n> > I'd like to hear your opinions.\n>\n> CFBot shows that one of the test is failing as in [1]:\n> [05:46:41.678] /bin/sh: 1: cannot open\n> /tmp/cirrus-ci-build/contrib/pg_copy_json/sql/test_copy_format.sql: No\n> such file\n> [05:46:41.678] diff:\n> /tmp/cirrus-ci-build/contrib/pg_copy_json/expected/test_copy_format.out:\n> No such file or directory\n> [05:46:41.678] diff:\n> /tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out:\n> No such file or directory\n> [05:46:41.678] # diff command failed with status 512: diff\n> \"/tmp/cirrus-ci-build/contrib/pg_copy_json/expected/test_copy_format.out\"\n> \"/tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out\"\n> > 
\"/tmp/cirrus-ci-build/contrib/pg_copy_json/results/test_copy_format.out.diff\"\n> [05:46:41.678] Bail out!make[2]: *** [../../src/makefiles/pgxs.mk:454:\n> check] Error 2\n> [05:46:41.679] make[1]: *** [Makefile:96: check-pg_copy_json-recurse] Error 2\n> [05:46:41.679] make: *** [GNUmakefile:71: check-world-contrib-recurse] Error 2\n>\n> Please post an updated version for the same.\n\nThanks for the reminder, the patch set I posted is not for commit but\nfor further discussion.\n\nI will post more information about the *extendable copy* feature\nwhen it's about to be committed.\n\n>\n> [1] - https://cirrus-ci.com/task/5322439115145216\n>\n> Regards,\n> Vignesh\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 31 Jan 2024 17:58:00 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 2024-Jan-23, jian he wrote:\n\n> > + | FORMAT_LA copy_generic_opt_arg\n> > + {\n> > + $$ = makeDefElem(\"format\", $2, @1);\n> > + }\n> > ;\n> >\n> > I think it's not necessary. \"format\" option is already handled in\n> > copy_generic_opt_elem.\n> \n> test it, I found out this part is necessary.\n> because a query with WITH like `copy (select 1) to stdout with\n> (format json, force_array false); ` will fail.\n\nRight, because \"FORMAT JSON\" is turned into FORMAT_LA JSON by parser.c\n(see base_yylex there). I'm not really sure but I think it might be\nbetter to make it \"| FORMAT_LA JSON\" instead of invoking the whole\ncopy_generic_opt_arg syntax. Not because of performance, but just\nbecause it's much clearer what's going on.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 31 Jan 2024 14:26:28 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Wed, Jan 31, 2024 at 9:26 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jan-23, jian he wrote:\n>\n> > > + | FORMAT_LA copy_generic_opt_arg\n> > > + {\n> > > + $$ = makeDefElem(\"format\", $2, @1);\n> > > + }\n> > > ;\n> > >\n> > > I think it's not necessary. \"format\" option is already handled in\n> > > copy_generic_opt_elem.\n> >\n> > test it, I found out this part is necessary.\n> > because a query with WITH like `copy (select 1) to stdout with\n> > (format json, force_array false); ` will fail.\n>\n> Right, because \"FORMAT JSON\" is turned into FORMAT_LA JSON by parser.c\n> (see base_yylex there). I'm not really sure but I think it might be\n> better to make it \"| FORMAT_LA JSON\" instead of invoking the whole\n> copy_generic_opt_arg syntax. Not because of performance, but just\n> because it's much clearer what's going on.\n>\n\nsorry to bother you.\nNow I didn't apply any patch, just at the master.\nI don't know much about gram.y.\n\ncopy (select 1) to stdout with (format json1);\nERROR: COPY format \"json1\" not recognized\nLINE 1: copy (select 1) to stdout with (format json1);\n ^\ncopy (select 1) to stdout with (format json);\nERROR: syntax error at or near \"format\"\nLINE 1: copy (select 1) to stdout with (format json);\n ^\n\njson is a keyword. 
Is it possible to escape it?\nmake `copy (select 1) to stdout with (format json)` error message the same as\n`copy (select 1) to stdout with (format json1)`\n\n\n", "msg_date": "Fri, 2 Feb 2024 16:25:28 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 2024-Feb-02, jian he wrote:\n\n> copy (select 1) to stdout with (format json);\n> ERROR: syntax error at or near \"format\"\n> LINE 1: copy (select 1) to stdout with (format json);\n> ^\n> \n> json is a keyword. Is it possible to escape it?\n> make `copy (select 1) to stdout with (format json)` error message the same as\n> `copy (select 1) to stdout with (format json1)`\n\nSure, you can use \n copy (select 1) to stdout with (format \"json\");\nand then you get\nERROR: COPY format \"json\" not recognized\n\nis that what you meant?\n\nIf you want the server to send this message when the JSON word is not in\nquotes, I'm afraid that's not possible, due to the funny nature of the\nFORMAT keyword when the JSON keyword appears after it. But why do you\ncare? If you use the patch, then you no longer need to have the \"not\nrecognized\" error messages anymore, because the JSON format is indeed\na recognized one.\n\nMaybe I didn't understand your question.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 2 Feb 2024 10:47:59 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Fri, Feb 2, 2024 at 5:48 PM Alvaro Herrera <[email protected]> wrote:\n>\n> If you want the server to send this message when the JSON word is not in\n> quotes, I'm afraid that's not possible, due to the funny nature of the\n> FORMAT keyword when the JSON keyword appears after it. But why do you\n> care? 
If you use the patch, then you no longer need to have the \"not\n> recognized\" error messages anymore, because the JSON format is indeed\n> a recognized one.\n>\n\n\"JSON word is not in quotes\" is my intention.\n\nNow it seems when people implement any custom format for COPY,\nif the format_name is a keyword then we need single quotes.\n\nThanks for clarifying!\n\n\n", "msg_date": "Fri, 2 Feb 2024 18:05:53 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Fri, Jan 19, 2024 at 4:10 PM Masahiko Sawada <[email protected]> wrote:\n>\n> if (opts_out->json_mode && is_from)\n> ereport(ERROR, ...);\n>\n> if (!opts_out->json_mode && opts_out->force_array)\n> ereport(ERROR, ...);\n>\n> Also these checks can be moved close to other checks at the end of\n> ProcessCopyOptions().\n>\n> ---\n> @@ -3395,6 +3395,10 @@ copy_opt_item:\n> {\n> $$ = makeDefElem(\"format\", (Node *) makeString(\"csv\"), @1);\n> }\n> + | JSON\n> + {\n> + $$ = makeDefElem(\"format\", (Node *) makeString(\"json\"), @1);\n> + }\n> | HEADER_P\n> {\n> $$ = makeDefElem(\"header\", (Node *) makeBoolean(true), @1);\n> @@ -3427,6 +3431,10 @@ copy_opt_item:\n> {\n> $$ = makeDefElem(\"encoding\", (Node *) makeString($2), @1);\n> }\n> + | FORCE ARRAY\n> + {\n> + $$ = makeDefElem(\"force_array\", (Node *)\n> makeBoolean(true), @1);\n> + }\n> ;\n>\n> I believe we don't need to support new options in old-style syntax.\n>\nyou are right about the force_array case.\nwe don't need to add force_array related changes in gram.y.\n\n\nOn Wed, Jan 31, 2024 at 9:26 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jan-23, jian he wrote:\n>\n> > > + | FORMAT_LA copy_generic_opt_arg\n> > > + {\n> > > + $$ = makeDefElem(\"format\", $2, @1);\n> > > + }\n> > > ;\n> > >\n> > > I think it's not necessary. \"format\" option is already handled in\n> > > copy_generic_opt_elem.\n> >\n> > test it, I found out this part is necessary.\n> > because a query with WITH like `copy (select 1) to stdout with\n> > (format json, force_array false); ` will fail.\n>\n> Right, because \"FORMAT JSON\" is turned into FORMAT_LA JSON by parser.c\n> (see base_yylex there). I'm not really sure but I think it might be\n> better to make it \"| FORMAT_LA JSON\" instead of invoking the whole\n> copy_generic_opt_arg syntax. Not because of performance, but just\n> because it's much clearer what's going on.\n>\nI am not sure what alternative you are referring to.\nI've rebased the patch, made some cosmetic changes.\nNow I think it's pretty neat.\nyou can, based on it, make your change, then I may understand the\nalternative you are referring to.", "msg_date": "Mon, 19 Feb 2024 11:43:39 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hello everyone!\n\nThanks for working on this, really nice feature!\n\n> On 9 Jan 2024, at 01:40, Joe Conway <[email protected]> wrote:\n> \n> Thanks -- will have a look\n\nJoe, recently folks proposed a lot of patches in this thread that seem like diverted from original way of implementation.\nAs an author of CF entry [0] can you please comment on which patch version needs review?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4716/\n\n", "msg_date": "Fri, 8 Mar 2024 22:28:04 +0500", "msg_from": "\"Andrey M. 
Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On 3/8/24 12:28, Andrey M. Borodin wrote:\n> Hello everyone!\n> \n> Thanks for working on this, really nice feature!\n> \n>> On 9 Jan 2024, at 01:40, Joe Conway <[email protected]> wrote:\n>> \n>> Thanks -- will have a look\n> \n> Joe, recently folks proposed a lot of patches in this thread that seem like diverted from original way of implementation.\n> As an author of CF entry [0] can you please comment on which patch version needs review?\n\n\nI don't know if I agree with the proposed changes, but I have also been \nwaiting to see how the parallel discussion regarding COPY extensibility \nshakes out.\n\nAnd there were a couple of issues found that need to be tracked down.\n\nAdditionally I have had time/availability challenges recently.\n\nOverall, chances seem slim that this will make it into 17, but I have \nnot quite given up hope yet either.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 8 Mar 2024 13:01:50 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, Mar 9, 2024 at 2:03 AM Joe Conway <[email protected]> wrote:\n>\n> On 3/8/24 12:28, Andrey M. Borodin wrote:\n> > Hello everyone!\n> >\n> > Thanks for working on this, really nice feature!\n> >\n> >> On 9 Jan 2024, at 01:40, Joe Conway <[email protected]> wrote:\n> >>\n> >> Thanks -- will have a look\n> >\n> > Joe, recently folks proposed a lot of patches in this thread that seem like diverted from original way of implementation.\n> > As an author of CF entry [0] can you please comment on which patch version needs review?\n>\n>\n> I don't know if I agree with the proposed changes, but I have also been\n> waiting to see how the parallel discussion regarding COPY extensibility\n> shakes out.\n>\n> And there were a couple of issues found that need to be tracked down.\n>\n> Additionally I have had time/availability challenges recently.\n>\n> Overall, chances seem slim that this will make it into 17, but I have\n> not quite given up hope yet either.\n\nHi.\nsummary changes I've made in v9 patches at [0]\n\nmeta: rebased. 
Now you need to use `git apply` or `git am`, previously\ncopyto_json.007.diff, you need to use GNU patch.\n\n\nat [1], Dean Rasheed found some corner cases when the returned slot's\ntts_tupleDescriptor\nfrom\n`\nExecutorRun(cstate->queryDesc, ForwardScanDirection, 0, true);\nprocessed = ((DR_copy *) cstate->queryDesc->dest)->processed;\n`\ncannot be used for composite_to_json.\ngenerally DestReceiver->rStartup is to send the TupleDesc to the DestReceiver,\nThe COPY TO DestReceiver's rStartup function is copy_dest_startup,\nhowever copy_dest_startup is a no-op.\nThat means to make the final TupleDesc of COPY TO (FORMAT JSON)\noperation bullet proof,\nwe need to copy the tupDesc from CopyToState's queryDesc.\nThis only applies to when the COPY TO source is a query (example:\ncopy (select 1) to stdout), not a table.\nThe above is my interpretation.\n\n\nat [2], Masahiko Sawada made several points.\nMainly split the patch to two, one for format json, second is for\noptions force_array.\nSplitting into two is easier to review, I think.\nMy changes also addressed all the points Masahiko Sawada had mentioned.\n\n\n\n[0] https://postgr.es/m/CACJufxHd6ZRmJJBsDOGpovaVAekMS-u6AOrcw0Ja-Wyi-0kGtA@mail.gmail.com\n[1] https://postgr.es/m/CAEZATCWh29787xf=4NgkoixeqRHrqi0Qd33Z6_-F8t2dZ0yLCQ@mail.gmail.com\n[2] https://postgr.es/m/CAD21AoCb02zhZM3vXb8HSw8fwOsL+iRdEFb--Kmunv8PjPAWjw@mail.gmail.com\n\n\n", "msg_date": "Sat, 9 Mar 2024 09:13:34 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Sat, Mar 9, 2024 at 9:13 AM jian he <[email protected]> wrote:\n>\n> On Sat, Mar 9, 2024 at 2:03 AM Joe Conway <[email protected]> wrote:\n> >\n> > On 3/8/24 12:28, Andrey M. Borodin wrote:\n> > > Hello everyone!\n> > >\n> > > Thanks for working on this, really nice feature!\n> > >\n> > >> On 9 Jan 2024, at 01:40, Joe Conway <[email protected]> wrote:\n> > >>\n> > >> Thanks -- will have a look\n> > >\n> > > Joe, recently folks proposed a lot of patches in this thread that seem like diverted from original way of implementation.\n> > > As an author of CF entry [0] can you please comment on which patch version needs review?\n> >\n> >\n> > I don't know if I agree with the proposed changes, but I have also been\n> > waiting to see how the parallel discussion regarding COPY extensibility\n> > shakes out.\n> >\n> > And there were a couple of issues found that need to be tracked down.\n> >\n> > Additionally I have had time/availability challenges recently.\n> >\n> > Overall, chances seem slim that this will make it into 17, but I have\n> > not quite given up hope yet either.\n>\n> Hi.\n> summary changes I've made in v9 patches at [0]\n>\n> meta: rebased. 
Now you need to use `git apply` or `git am`, previously\n> copyto_json.007.diff, you need to use GNU patch.\n>\n>\n> at [1], Dean Rasheed found some corner cases when the returned slot's\n> tts_tupleDescriptor\n> from\n> `\n> ExecutorRun(cstate->queryDesc, ForwardScanDirection, 0, true);\n> processed = ((DR_copy *) cstate->queryDesc->dest)->processed;\n> `\n> cannot be used for composite_to_json.\n> generally DestReceiver->rStartup is to send the TupleDesc to the DestReceiver,\n> The COPY TO DestReceiver's rStartup function is copy_dest_startup,\n> however copy_dest_startup is a no-op.\n> That means to make the final TupleDesc of COPY TO (FORMAT JSON)\n> operation bullet proof,\n> we need to copy the tupDesc from CopyToState's queryDesc.\n> This only applies to when the COPY TO source is a query (example:\n> copy (select 1) to stdout), not a table.\n> The above is my interpretation.\n>\n\ntrying to simplify the explanation.\nfirst refer to the struct DestReceiver.\nCOPY TO (FORMAT JSON), we didn't send the preliminary Tupdesc to the\nDestReceiver\nvia the rStartup function pointer within struct _DestReceiver.\n\n`CopyOneRowTo(CopyToState cstate, TupleTableSlot *slot)`\nthe slot is the final slot returned via query execution.\nbut we cannot use the tupdesc (slot->tts_tupleDescriptor) to do\ncomposite_to_json.\nbecause the final return slot Tupdesc may change during the query execution.\n\nso we need to copy the tupDesc from CopyToState's queryDesc.\n\naslo rebased, now we can apply it cleanly.", "msg_date": "Mon, 1 Apr 2024 20:00:11 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Mon, Apr 1, 2024 at 8:00 PM jian he <[email protected]> wrote:\n>\nrebased.\nminor cosmetic error message change.\n\nI think all the issues in this thread have been addressed.", "msg_date": "Mon, 19 Aug 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "On Mon, Aug 19, 2024 at 8:00 AM jian he <[email protected]> wrote:\n>\n> On Mon, Apr 1, 2024 at 8:00 PM jian he <[email protected]> wrote:\n> >\n> rebased.\n> minor cosmetic error message change.\n>\n> I think all the issues in this thread have been addressed.\n\nhi.\nI did some minor changes based on the v11.\n\nmainly changing some error code from\nERRCODE_FEATURE_NOT_SUPPORTED\nto\nERRCODE_INVALID_PARAMETER_VALUE.", "msg_date": "Thu, 22 Aug 2024 13:19:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" }, { "msg_contents": "Hi.\n\nin ExecutePlan\nwe have:\n\n for (;;)\n {\n ResetPerTupleExprContext(estate);\n slot = ExecProcNode(planstate);\n if (!TupIsNull(slot))\n {\n if((slot != NULL) && (slot->tts_tupleDescriptor != NULL)\n && (slot->tts_tupleDescriptor->natts > 0)\n && (slot->tts_tupleDescriptor->attrs->attname.data[0] == '\\0'))\n elog(INFO, \"%s:%d %s this slot first attribute attname is\nnull\", __FILE_NAME__, __LINE__, __func__);\n }\n if (TupIsNull(slot))\n break;\n if (sendTuples)\n {\n if (!dest->receiveSlot(slot, dest))\n break;\n }\n\n\ndest->receiveSlot(slot, dest) is responsible for sending values to destination,\nfor COPY TO it will call copy_dest_receive, CopyOneRowTo.\n\nFor the copy to format json, we need to make sure\nin \"dest->receiveSlot(slot, dest))\", the slot->tts_tupleDescriptor has\nproper information.\nbecause we *use* 
slot->tts_tupleDescriptor->attrs->attname as the json key.\n\nFor example, if (slot->tts_tupleDescriptor->attrs->attname.data[0] == '\\0')\nthen output json may look like: {\"\":12}\nwhich is not what we want.\n\n\n\nin ExecutePlan i use\nelog(INFO, \"%s:%d %s this slot first attribute attname is null\",\n__FILE_NAME__, __LINE__, __func__);\nto find sql queries that attribute name is not good.\n\nbased on that, i found out many COPY TO (FORMAT JSON) queries will either\nerror out or the output json key be empty string\nif in CopyOneRowTo we didn't copy the cstate->queryDesc->tupDesc\nto the slot->tts_tupleDescriptor\n\n\nYou can test it yourself.\nfirst `git am v12-0001-introduce-json-format-for-COPY-TO.patch`\nafter that, comment out the memcpy call in CopyOneRowTo, just like the\nfollowing:\n if(!cstate->rel)\n {\n // memcpy(TupleDescAttr(slot->tts_tupleDescriptor, 0),\n // TupleDescAttr(cstate->queryDesc->tupDesc, 0),\n // cstate->queryDesc->tupDesc->natts *\nsizeof(FormData_pg_attribute));\n\nbuild and test with the attached script.\nyou will see COPY TO FORMAT JSON, lots of cases where the json key\nbecomes an empty string.\n\n\nI think this thread related issues has been resolved.", "msg_date": "Fri, 13 Sep 2024 22:42:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Emitting JSON to file using COPY TO" } ]
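For reference, here is a minimal SQL sketch of the syntax debated in the thread above. It assumes the proposed, still-uncommitted FORMAT JSON and FORCE_ARRAY options from the patch under review; the row output shown in the comments is illustrative only, not a committed format, and copytest is the example table name used in the thread.

-- one JSON object per row; the keys come from the output column names (attname)
COPY (SELECT 1 AS id, 'foo'::text AS val) TO STDOUT WITH (FORMAT json);
-- {"id":1,"val":"foo"}

-- FORCE_ARRAY wraps all rows in a single top-level JSON array
COPY (SELECT x AS id FROM generate_series(1, 2) AS x) TO STDOUT WITH (FORMAT json, FORCE_ARRAY true);
-- [
--  {"id":1}
-- ,{"id":2}
-- ]

-- combinations the patch rejects, per the discussion above:
COPY copytest TO STDOUT WITH (FORMAT csv, FORCE_ARRAY true);  -- FORCE_ARRAY is only allowed in JSON mode
COPY copytest FROM STDIN WITH (FORMAT json);                  -- JSON mode is not supported in COPY FROM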
[ { "msg_contents": "Hello,\n\nWhen trying to use a custom dump with the test pg_upgrade/002_pg_upgrade,\nI observe the following test failure on Windows:\n >meson test --suite setup\n >echo create database regression>...\\dump.sql\n >set olddump=...\\dump.sql& set oldinstall=.../tmp_install/usr/local/pgsql& meson test pg_upgrade/002_pg_upgrade\n\n1/1 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade ERROR            11.38s   exit status 1\n\nregress_log_002_pg_upgrade.txt contains:\n...\n[09:07:06.704](3.793s) ok 11 - run of pg_upgrade for new instance\n...\n[09:07:07.301](0.001s) not ok 15 - old and new dumps match after pg_upgrade\n[09:07:07.301](0.000s) #   Failed test 'old and new dumps match after pg_upgrade'\n#   at .../src/bin/pg_upgrade/t/002_pg_upgrade.pl line 452.\n[09:07:07.301](0.000s) #          got: '1'\n#     expected: '0'\n=== diff of ...\\build\\testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8/dump1.sql and \n...\\build\\testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8/dump2.sql\n=== stdout ===\n=== stderr ===\n=== EOF ===\n\n\n >dir \"testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8/\"\n11/25/2023  09:06 AM             2,729 dump1.sql\n11/25/2023  09:07 AM             2,590 dump2.sql\n\n >diff -s \"testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8\\dump1.sql\" \n\"testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8\\dump2.sql\"\nFiles testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8\\dump1.sql and \ntestrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_ifk8\\dump2.sql are identical\n\nAs I can see, dump1.sql contains line endings 0d 0a, while dump2.sql — 0a.\n\nThe attached patch fixes the issue for me.\n\nBest regards,\nAlexander", "msg_date": "Sat, 25 Nov 2023 23:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "On Sat, Nov 25, 2023 at 11:00:01PM +0300, Alexander Lakhin wrote:\n> diff --git a/src/bin/pg_upgrade/t/002_pg_upgrade.pl b/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> index c6d83d3c21..d34b45e346 100644\n> --- a/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> +++ b/src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> @@ -293,6 +293,7 @@ if (defined($ENV{oldinstall}))\n> \t}\n> \n> \topen my $fh, \">\", $dump1_file or die \"could not open dump file\";\n> +\tbinmode $fh;\n> \tprint $fh $dump_data;\n> \tclose $fh;\n\nThere is something I don't get here. The old and new dump files\nshould be processed in filter_dump(), where\nAdjustUpgrade::adjust_old_dumpfile does the following so binmode\nshould not be needed:\n # use Unix newlines\n $dump =~ s/\\r\\n/\\n/g;\n\nOr you have used the test suite with an old installation that has the\nsame major version as the new installation, meaning that the filtering\nwas not happening, still you have detected some diffs? It sounds to\nme that we should just apply the filters to the dumps all the time if\nyou have used matching versions. 
The filtering would remove only the\ncomments, some extra newlines and replace the CRLFs in this case.\n--\nMichael", "msg_date": "Tue, 5 Dec 2023 16:56:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "Hi Michael,\n\n05.12.2023 10:56, Michael Paquier wrote:\n\n> Or you have used the test suite with an old installation that has the\n> same major version as the new installation, meaning that the filtering\n> was not happening, still you have detected some diffs? It sounds to\n> me that we should just apply the filters to the dumps all the time if\n> you have used matching versions. The filtering would remove only the\n> comments, some extra newlines and replace the CRLFs in this case.\n\nYes, my case is with the same version, literally:\nbuild>echo create database regression>c:\\temp\\dump.sql\nbuild>set olddump=c:\\temp\\dump.sql& set oldinstall=%CD%/tmp_install/usr/local/pgsql& meson test pg_upgrade/002_pg_upgrade\n\nSo removing the condition \"if ($oldnode->pg_version != $newnode->pg_version)\"\nworks here as well, but maybe switching the file mode (to preserve EOLs\nproduced by pg_dump) in the block \"After dumping, update references ...\"\nis more efficient than filtering dumps (on all OSes?).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 5 Dec 2023 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "On Tue, Dec 05, 2023 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> So removing the condition \"if ($oldnode->pg_version != $newnode->pg_version)\"\n> works here as well, but maybe switching the file mode (to preserve EOLs\n> produced by pg_dump) in the block \"After dumping, update references ...\"\n> is more efficient than filtering dumps (on all OSes?).\n\nWell, there's the argument that we replace the library references in\na SQL file that we are handling as a text file, so switching it to use\nthe binary mode is not right. 
A second argument is to apply the same\nfiltering logic across both the old and new dumps, even if we know\nthat the second dump file taken by pg_dump with not append CRLFs.\n\nAt the end, just applying the filtering all the time makes the most\nsense to me, so I've applied a patch doing just that.\n--\nMichael", "msg_date": "Wed, 6 Dec 2023 10:17:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "06.12.2023 04:17, Michael Paquier wrote:\n> At the end, just applying the filtering all the time makes the most\n> sense to me, so I've applied a patch doing just that.\n\nThank you for the fix!\n\nNow that test with the minimal dump passes fine, but when I tried to run\nit with a complete dump borrowed from a normal test run:\nset olddump=& set oldinstall=& set PG_TEST_NOCLEAN=1& meson test pg_upgrade/002_pg_upgrade\nREM this test succeeded\ncopy testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_*\\dump1.sql\nset olddump=c:\\temp\\dump1.sql& set oldinstall=%CD%/tmp_install/usr/local/pgsql& meson test pg_upgrade/002_pg_upgrade\n\nI encountered another failure:\n...\nCreating dump of database schemas                             ok\nChecking for presence of required libraries                   fatal\n\nYour installation references loadable libraries that are missing from the\nnew installation.  You can add these libraries to the new installation,\nor remove the functions using them from the old installation.  A list of\nproblem libraries is in the file:\n.../build/testrun/pg_upgrade/002_pg_upgrade/data/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20231205T223247.304/loadable_libraries.txt\nFailure, exiting\n[22:32:51.086](3.796s) not ok 11 - run of pg_upgrade for new instance\n...\n\nloadable_libraries.txt contains:\ncould not load library \".../src/test/regress/refint.dll\": ERROR: could not access file \n\".../src/test/regress/refint.dll\": No such file or directory\nIn database: regression\ncould not load library \".../src/test/regress/autoinc.dll\": ERROR: could not access file \n\".../src/test/regress/autoinc.dll\": No such file or directory\nIn database: regression\ncould not load library \".../src/test/regress/regress.dll\": ERROR: could not access file \n\".../src/test/regress/regress.dll\": No such file or directory\nIn database: regression\n\nReally, I can see refint.dll in ...\\build\\src\\test\\regress and in\n...\\build\\tmp_install\\usr\\local\\pgsql\\lib, but not in\n.../src/test/regress/regress.dll\n\nc:\\temp\\dump1.sql contains:\n...\nCREATE FUNCTION public.check_primary_key() RETURNS trigger\n     LANGUAGE c\n     AS '.../build/src/test/regress/refint.dll', 'check_primary_key';\n\nwhile ...\\build\\testrun\\pg_upgrade\\002_pg_upgrade\\data\\tmp_test_T6jE\\dump1.sql\n(for the failed test):\n...\nCREATE FUNCTION public.check_primary_key() RETURNS trigger\n     LANGUAGE c\n     AS '.../src/test/regress/refint.dll', 'check_primary_key';\n\nThe same is on Linux:\nPG_TEST_NOCLEAN=1 meson test pg_upgrade/002_pg_upgrade\ncp testrun/pg_upgrade/002_pg_upgrade/data/tmp_test_*/dump1.sql /tmp/\nolddump=/tmp/dump1.sql oldinstall=`pwd`/tmp_install/usr/local/pgsql meson test pg_upgrade/002_pg_upgrade\n\nSo it looks like\n     my $newregresssrc = \"$srcdir/src/test/regress\";\nis incorrect for meson.\nMaybe it should be?:\n     my $newregresssrc = dirname($ENV{REGRESS_SHLIB});\n(With this line the test passes for me on Windows and Linux).\n\nBest 
regards,\nAlexander\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "On Wed, Dec 06, 2023 at 11:00:01AM +0300, Alexander Lakhin wrote:\n> So it looks like\n>     my $newregresssrc = \"$srcdir/src/test/regress\";\n> is incorrect for meson.\n> Maybe it should be?:\n>     my $newregresssrc = dirname($ENV{REGRESS_SHLIB});\n> (With this line the test passes for me on Windows and Linux).\n\nHmm. Yes, it looks like you're right here. That should allow all the\nscenarios we expect to work to update the paths for the functions.\n--\nMichael", "msg_date": "Thu, 7 Dec 2023 17:44:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "On Thu, Dec 07, 2023 at 05:44:53PM +0900, Michael Paquier wrote:\n> Hmm. Yes, it looks like you're right here. That should allow all the\n> scenarios we expect to work to update the paths for the functions.\n\nAnd done this one as well down to v15, where not only meson, but also\nvpath could have been confused with an update to an incorrect path.\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 10:55:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" }, { "msg_contents": "On 2023-Dec-08, Michael Paquier wrote:\n\n> On Thu, Dec 07, 2023 at 05:44:53PM +0900, Michael Paquier wrote:\n> > Hmm. Yes, it looks like you're right here. That should allow all the\n> > scenarios we expect to work to update the paths for the functions.\n> \n> And done this one as well down to v15, where not only meson, but also\n> vpath could have been confused with an update to an incorrect path.\n\nArgh, yeah, this has caused me pain a couple of times. Thanks for fixing.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n", "msg_date": "Fri, 8 Dec 2023 11:51:53 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 002_pg_upgrade fails with olddump on Windows" } ]
[ { "msg_contents": "Hi,\n\nwhile working on a patch I noticed we do this in the SGML docs (for\nexample in indexam.sgml and a bunch of other files):\n\n <para>\n ... some text ...\n </para>\n\n <para>\n<programlisting>\nsome code\n</programlisting>\n ... description of the code.\n </para>\n\nThat is, the program listing is in a paragraph that starts immediately\nbefore it. I just noticed this ends up like this in the HTML:\n\n <p>... some text ...</p>\n\n <p></p>\n\n <pre>some code</pre>\n\n <p>... description of the code.</p>\n\nThat is, there's an empty <p></p> before <pre>, which seems a bit weird,\nbut it seems to render fine (at least in Firefox), so maybe it looks\nweird but is not a problem in practice ...\n\nI did search for what (X)HTML says about this, and the only thing I\nfound is HTML5 flow control section [1], which says\n\n ... elements whose content model allows any flow content should\n have either at least one descendant text node ...\n\nOfc, we're rendering into XHTML, not HTML5. However, W3 advises against\nthis in the HTML4.0 document too [2]:\n\n We discourage authors from using empty P elements. User agents\n should ignore empty P elements.\n\nSo it might be \"OK\" because browsers ignore those elements, but maybe we\nshould stop doing that anyway?\n\n\nregards\n\n\n[1]\nhttps://www.w3.org/TR/2011/WD-html5-20110525/content-models.html#flow-content-0\n\n[2] https://www.w3.org/TR/1998/REC-html40-19980424/struct/text.html#h-9.3.1\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 25 Nov 2023 21:44:38 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "strange para/programlisting pattern in sgml docs" }, { "msg_contents": "On 25.11.23 21:44, Tomas Vondra wrote:\n> while working on a patch I noticed we do this in the SGML docs (for\n> example in indexam.sgml and a bunch of other files):\n> \n> <para>\n> ... some text ...\n> </para>\n> \n> <para>\n> <programlisting>\n> some code\n> </programlisting>\n> ... description of the code.\n> </para>\n> \n> That is, the program listing is in a paragraph that starts immediately\n> before it. I just noticed this ends up like this in the HTML:\n> \n> <p>... some text ...</p>\n> \n> <p></p>\n> \n> <pre>some code</pre>\n> \n> <p>... description of the code.</p>\n> \n> That is, there's an empty <p></p> before <pre>, which seems a bit weird,\n> but it seems to render fine (at least in Firefox), so maybe it looks\n> weird but is not a problem in practice ...\n\nThis is because in HTML you can't have <pre> inside <p> but in DocBook \nyou can have <programlisting> inside <para> (and other similar cases). \nSo the DocBook XSLT stylesheets fix that up by splitting the <p> into \nseparate <p> elements before and after the <pre>. It's just a \ncoincidence that one of them is empty in this case.\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 11:16:46 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange para/programlisting pattern in sgml docs" } ]
[ { "msg_contents": "Your Email Content\n\n\n \n Greetings from the community of programmers of IIT Guwahati and IIT Patna! \n \n This is Gautam Sharma, pursuing a BTech in the Department of Computer Science and Engineering at the Indian Institute of Technology, Guwahati. \n I am also Competitive Programming Head at Coding Club, IIT Guwahati. We at Coding Club, IIT Guwahati were truly awe-inspired by your project PostgreSQL!\n Apart from the wonderful open-source community that constantly improves and develops PostgreSQL, we would like to invite you to put PostgreSQL at CodePeak-2023 for contributions!\n \n Every year, the Coding Club, IITG organizes CodePeak, which is a month-long open-source event organized in collaboration with NJACK, IIT Patna. The event invites participants to contribute to various projects, aiming to lay a foundation for future involvement in larger programs like GSoC and Outreachy. The event targets first-timers who wish to participate in Free and Open Source(FOSS) Contributions and the experienced developers who want to show their skills by contributing to real-world projects.\n \n CodePeak 2023, now in it's third iteration is one of the biggest open-source events for Indian college students, serving as a pre-cursor for many prestigious events like GSoC and Outreachy. We have had over 6000 participants, with over 150 projects, and 3800 pull requests in the past iterations of CodePeak! The contributors and mentors are also rewarded handsomely for their efforts towards open source by the organizers and our respected sponsors! This year, the winners will get a chance to visit Oxford!\n \n Mentoring PostgreSQL at CodePeak 2023 can go a long way, making it mainstream/known to the untapped talent pool of open-source enthusiasts who would definitely enjoy contributing to PostgreSQL during and after CodePeak 2023.\n \n Kindly refer to the mentor guide of CodePeak 2023 here.\n \n Looking forward to your reply and involvement in CodePeak 2023!\n \n CodePeak 2023: www.codepeak.tech\n\n Coding Club IITG Instagram Page: www.instagram.com/codingclubiitg/\n\n Warm Regards\n \n Gautam Sharma\n \n Competitive Programming Head, Coding Club IITG\n \n Department of Computer Science and Engineering\n \n Indian Institute of Technology, Guwahati\n \n Assam, India", "msg_date": "Sat, 25 Nov 2023 23:36:19 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Mentor PostgreSQL at CodePeak 2023!" } ]
[ { "msg_contents": "Greeting hackers,\n\nIn the operator precedence table[1] table, AT TIME ZONE isn't explicitly\nlisted out; that means it's to be interpreted in the \"any other operator\ncategory\".\n\nHowever, it seems that the precedence of AT TIME ZONE is actually higher\nthan that of the addition operator:\n\n-- Fails with \"function pg_catalog.timezone(unknown, interval) does not\nexist\nSELECT now() + INTERVAL '14 days' AT TIME ZONE 'UTC';\n\n-- Works:\nSELECT (now() + INTERVAL '14 days') AT TIME ZONE 'UTC';\n\nNote that missing parentheses for this were discussed in the context\nof pg_catalog.pg_get_viewdef[2].\n\nIs there a missing line in the operator precedence table in the docs?\n\nThanks,\n\nShay\n\n[1]\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-PRECEDENCE\n[2]\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nGreeting hackers,In the operator precedence table[1] table, AT TIME ZONE isn't explicitly listed out; that means it's to be interpreted in the \"any other operator category\".However, it seems that the precedence of AT TIME ZONE is actually higher than that of the addition operator:-- Fails with \"function pg_catalog.timezone(unknown, interval) does not existSELECT now() + INTERVAL '14 days' AT TIME ZONE 'UTC';-- Works:SELECT (now() + INTERVAL '14 days') AT TIME ZONE 'UTC';Note that missing parentheses for this were discussed in the context of pg_catalog.pg_get_viewdef[2].Is there a missing line in the operator precedence table in the docs?Thanks,Shay[1] https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-PRECEDENCE[2] https://www.postgresql.org/message-id/flat/[email protected]", "msg_date": "Sun, 26 Nov 2023 11:13:39 +0100", "msg_from": "Shay Rojansky <[email protected]>", "msg_from_op": true, "msg_subject": "Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "On Sun, Nov 26, 2023 at 11:13:39AM +0100, Shay Rojansky wrote:\n> Greeting hackers,\n> \n> In the operator precedence table[1] table, AT TIME ZONE isn't explicitly listed\n> out; that means it's to be interpreted in the \"any other operator category\".\n> \n> However, it seems that the precedence of AT TIME ZONE is actually higher than\n> that of the addition operator:\n> \n> -- Fails with \"function pg_catalog.timezone(unknown, interval) does not exist\n> SELECT now() + INTERVAL '14 days' AT TIME ZONE 'UTC';\n> \n> -- Works:\n> SELECT (now() + INTERVAL '14 days') AT TIME ZONE 'UTC';\n> \n> Note that missing parentheses for this were discussed in the context\n> of pg_catalog.pg_get_viewdef[2].\n> \n> Is there a missing line in the operator precedence table in the docs?\n\nI think the big question is whether AT TIME ZONE is significant enough\nto list there because there are many other clauses we could potentially\nadd there.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 26 Nov 2023 09:27:34 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" 
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Sun, Nov 26, 2023 at 11:13:39AM +0100, Shay Rojansky wrote:\n>> Is there a missing line in the operator precedence table in the docs?\n\n> I think the big question is whether AT TIME ZONE is significant enough\n> to list there because there are many other clauses we could potentially\n> add there.\n\nComparing the precedence list in the grammar with the doc table,\nthe only omissions I feel bad about are AT and COLLATE. There's\na group of keywords that have \"almost the same precedence as IDENT\"\nwhich probably don't need documentation; but these are not in that\ngroup.\n\nI am, however, feeling a little bit on the warpath about the\ngrammar comments for the SQL/JSON keyword precedences:\n\n/* SQL/JSON related keywords */\n%nonassoc\tUNIQUE JSON\n%nonassoc\tKEYS OBJECT_P SCALAR VALUE_P\n%nonassoc\tWITH WITHOUT\n\nEvery other case where we're doing this has a para of explanation\nin the block comment just below here. These not only have no\nmeaningful explanation, they are in the wrong place --- it looks\nlike they are unrelated to the block comment, whereas actually\n(I think) they are another instance of it. I consider this\nwell below project standard.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Nov 2023 11:35:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": ">> Is there a missing line in the operator precedence table in the docs?\n>\n> I think the big question is whether AT TIME ZONE is significant enough\n> to list there because there are many other clauses we could potentially\n> add there.\n\nJust to give more context, I'm a maintainer on Entity Framework Core (the\n.NET ORM), and this caused the provider to generate incorrect SQL etc.\n\nIf you decide to not have a comprehensive operator precedence table (though\nI do hope you do), I'd at least amend the \"any other operator\" and \"all\nother native and user-defined operators\" to clearly indicate that some\noperators aren't listed and have undocumented precedences, so implementers\ncan at least be aware and test the unlisted ones etc.\n\n>> Is there a missing line in the operator precedence table in the docs?>> I think the big question is whether AT TIME ZONE is significant enough> to list there because there are many other clauses we could potentially> add there.Just to give more context, I'm a maintainer on Entity Framework Core (the .NET ORM), and this caused the provider to generate incorrect SQL etc.If you decide to not have a comprehensive operator precedence table (though I do hope you do), I'd at least amend the \"any other operator\" and \"all other native and user-defined operators\" to clearly indicate that some operators aren't listed and have undocumented precedences, so implementers can at least be aware and test the unlisted ones etc.", "msg_date": "Sun, 26 Nov 2023 18:45:46 +0100", "msg_from": "Shay Rojansky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "I wrote:\n> Comparing the precedence list in the grammar with the doc table,\n> the only omissions I feel bad about are AT and COLLATE.\n\nConcretely, as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 26 Nov 2023 15:11:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" 
}, { "msg_contents": "On 2023-Nov-26, Tom Lane wrote:\n\n> I am, however, feeling a little bit on the warpath about the\n> grammar comments for the SQL/JSON keyword precedences:\n> \n> /* SQL/JSON related keywords */\n> %nonassoc\tUNIQUE JSON\n> %nonassoc\tKEYS OBJECT_P SCALAR VALUE_P\n> %nonassoc\tWITH WITHOUT\n> \n> Every other case where we're doing this has a para of explanation\n> in the block comment just below here. These not only have no\n> meaningful explanation, they are in the wrong place --- it looks\n> like they are unrelated to the block comment, whereas actually\n> (I think) they are another instance of it. I consider this\n> well below project standard.\n\nI introduced those in commit 6ee30209a6f1. That is the minimal set of\nkeywords for which the precedence had to be declared that was necessary\nso that the grammar would compile for the new feature; I extracted that\nfrom a much larger set that was in the original patch submission. I\nspent a long time trying to figure out whether the block comment\napplied, and I wasn't sure, so I ended up leaving the comment at what\nyou see there.\n\nLooking at it again:\n\nUNIQUE and KEYS are there for \"WITH UNIQUE KEYS\" (& WITHOUT), where KEYS\nis optional and the whole clause is optional in some rules. So as I\nunderstand it, we need to establish the relative precedence of UNIQUE\n(first group), KEYS (second group) and WITH/WITHOUT (third group).\nWe also have a \"%prec KEYS\" declaration in the\njson_key_uniqueness_constraint_opt rule for this.\n\nWe also need a relative precedence between JSON and the set below:\nVALUE, OBJECT, SCALAR, for the \"IS JSON {VALUE/OBJECT/SCALAR}\"\nconstruct.\n\nI put KEYS in the same set as the three above just because it was not a\nproblem to do so; likewise UNIQUE together with JSON. (I think it would\nalso work to put WITH and WITHOUT in the second group, but I only ran\nbison to verify this, didn't run any tests.)\n\nI am also not sure if the current location of those three groups (or\ntwo, if we merge those) relative to the rest of the groups below the\nlarge block comment is a good one. As far as compilability of the\ngrammar goes, it looks like they could even be at the very bottom of the\nprecedence list, below the join operators.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"¿Qué importan los años? Lo que realmente importa es comprobar que\na fin de cuentas la mejor edad de la vida es estar vivo\" (Mafalda)\n\n\n", "msg_date": "Mon, 27 Nov 2023 17:35:25 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "We could do something like this. Is this good?\n\nI tried to merge WITH and WITHOUT with the precedence class immediately\nabove, but that failed: the main grammar compiles fine and no tests\nfail, but ECPG does fail to compile the sqljson.pgc test, so there's\nsome problem there. Now, the ecpg grammar stuff *is* absolute black\nmagic to me, so I have no idea what to do about that.\n\n(TBH I don't think the added comments really explain the problems fully.\nThat's most likely because I don't actually understand what the problems\nare.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. 
(7th Commandment for C Programmers)", "msg_date": "Mon, 27 Nov 2023 18:32:54 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> (TBH I don't think the added comments really explain the problems fully.\n> That's most likely because I don't actually understand what the problems\n> are.)\n\nThe actual problem is that nobody has applied a cluestick to the SQL\ncommittee about writing an unambiguous grammar :-(. But I digress.\n\nI don't like the existing coding for more reasons than just\nunderdocumentation. Global assignment of precedence is a really,\nreally dangerous tool for solving ambiguous-grammar problems, because\nit can mask problems unrelated to the one you think you are solving:\nbasically, it eliminates bison's complaints about grammar ambiguities\nrelated to the token you mark. (Commits 12b716457 and 28a61fc6c are\nrelevant here.) Attaching precedence to individual productions is\nfar safer, because it won't have any effect that extends beyond that\nproduction. You still need a precedence attached to the lookahead\ntoken; but I think we should try very hard to not assign a precedence\ndifferent from IDENT's to any unreserved keywords.\n\nAfter a bit of fooling around I found a patch that seems to meet\nthat criterion; attached.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 27 Nov 2023 15:34:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "\nOn 2023-11-27 Mo 15:34, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> (TBH I don't think the added comments really explain the problems fully.\n>> That's most likely because I don't actually understand what the problems\n>> are.)\n> The actual problem is that nobody has applied a cluestick to the SQL\n> committee about writing an unambiguous grammar :-(. But I digress.\n>\n> I don't like the existing coding for more reasons than just\n> underdocumentation. Global assignment of precedence is a really,\n> really dangerous tool for solving ambiguous-grammar problems, because\n> it can mask problems unrelated to the one you think you are solving:\n> basically, it eliminates bison's complaints about grammar ambiguities\n> related to the token you mark. (Commits 12b716457 and 28a61fc6c are\n> relevant here.) Attaching precedence to individual productions is\n> far safer, because it won't have any effect that extends beyond that\n> production. You still need a precedence attached to the lookahead\n> token; but I think we should try very hard to not assign a precedence\n> different from IDENT's to any unreserved keywords.\n>\n> After a bit of fooling around I found a patch that seems to meet\n> that criterion; attached.\n>\n> \t\t\t\n\n\n\nLooks good. Perhaps the comments above the UNBOUNDED precedence setting \n(esp. the first paragraph) need strengthening, with a stern injunction \nto avoid different precedence for non-reserved keywords if at all possible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 16:09:10 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "On 2023-Nov-27, Tom Lane wrote:\n\n> I don't like the existing coding for more reasons than just\n> underdocumentation. 
Global assignment of precedence is a really,\n> really dangerous tool for solving ambiguous-grammar problems, because\n> it can mask problems unrelated to the one you think you are solving:\n> basically, it eliminates bison's complaints about grammar ambiguities\n> related to the token you mark. (Commits 12b716457 and 28a61fc6c are\n> relevant here.) Attaching precedence to individual productions is\n> far safer, because it won't have any effect that extends beyond that\n> production. You still need a precedence attached to the lookahead\n> token; but I think we should try very hard to not assign a precedence\n> different from IDENT's to any unreserved keywords.\n\nOoh, this is very useful, thank you.\n\n> After a bit of fooling around I found a patch that seems to meet\n> that criterion; attached.\n\nIt looks good and passes tests, including the ecpg ones.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n", "msg_date": "Tue, 28 Nov 2023 14:26:52 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Looks good. Perhaps the comments above the UNBOUNDED precedence setting \n> (esp. the first paragraph) need strengthening, with a stern injunction \n> to avoid different precedence for non-reserved keywords if at all possible.\n\nOK. How about rewriting that first para like this?\n\n * Sometimes it is necessary to assign precedence to keywords that are not\n * really part of the operator hierarchy, in order to resolve grammar\n * ambiguities. It's best to avoid doing so whenever possible, because such\n * assignments have global effect and may hide ambiguities besides the one\n * you intended to solve. (Attaching a precedence to a single rule with\n * %prec is far safer and should be preferred.) If you must give precedence\n * to a new keyword, try very hard to give it the same precedence as IDENT.\n * If the keyword has IDENT's precedence then it clearly acts the same as\n * non-keywords and other similar keywords, thus reducing the risk of\n * unexpected precedence effects.\n * \n * We used to need to assign IDENT an explicit precedence just less than Op,\n * to support target_el without AS. While that's not really necessary since\n * we removed postfix operators, we continue to do so because it provides a\n * reference point for a precedence level that we can assign to other\n * keywords that lack a natural precedence level.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:27:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "\nOn 2023-11-28 Tu 10:27, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> Looks good. Perhaps the comments above the UNBOUNDED precedence setting\n>> (esp. the first paragraph) need strengthening, with a stern injunction\n>> to avoid different precedence for non-reserved keywords if at all possible.\n> OK. How about rewriting that first para like this?\n>\n> * Sometimes it is necessary to assign precedence to keywords that are not\n> * really part of the operator hierarchy, in order to resolve grammar\n> * ambiguities. 
It's best to avoid doing so whenever possible, because such\n> * assignments have global effect and may hide ambiguities besides the one\n> * you intended to solve. (Attaching a precedence to a single rule with\n> * %prec is far safer and should be preferred.) If you must give precedence\n> * to a new keyword, try very hard to give it the same precedence as IDENT.\n> * If the keyword has IDENT's precedence then it clearly acts the same as\n> * non-keywords and other similar keywords, thus reducing the risk of\n> * unexpected precedence effects.\n> *\n> * We used to need to assign IDENT an explicit precedence just less than Op,\n> * to support target_el without AS. While that's not really necessary since\n> * we removed postfix operators, we continue to do so because it provides a\n> * reference point for a precedence level that we can assign to other\n> * keywords that lack a natural precedence level.\n>\n> \t\t\t\n\n\nLGTM. Thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:48:00 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-11-28 Tu 10:27, Tom Lane wrote:\n>> OK. How about rewriting that first para like this?\n\n> LGTM. Thanks.\n\nThanks for reviewing. While checking things over one more time,\nI noticed that there was an additional violation of this precept,\ndating back to long before we understood the hazards: SET is\ngiven its own priority, when it could perfectly well share that\nof IDENT. I adjusted that and pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 13:34:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missing docs on AT TIME ZONE precedence?" } ]
[ { "msg_contents": "Hello!\n\nFound that if i set a specific time zone for a template database,\nit will not be inherited in the database created from that template.\n\npsql (17devel)\nType \"help\" for help.\n\npostgres=# select now();\n now\n-------------------------------\n 2023-11-26 17:24:58.242086+03\n(1 row)\n\npostgres=# ALTER DATABASE template1 SET TimeZone = 'UTC';\nALTER DATABASE\npostgres=# \\c template1\nYou are now connected to database \"template1\" as user \"postgres\".\ntemplate1=# select now();\n now\n-------------------------------\n 2023-11-26 14:26:09.291082+00\n(1 row)\n\ntemplate1=# CREATE DATABASE test;\nCREATE DATABASE\ntemplate1=# \\c test\nYou are now connected to database \"test\" as user \"postgres\".\ntest=# select now();\n now\n-------------------------------\n 2023-11-26 17:29:05.487984+03\n(1 row)\n\nCould you clarify please. Is this normal, predictable behavior?\n\nWould be very grateful!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 26 Nov 2023 17:47:49 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Should timezone be inherited from template database?" }, { "msg_contents": "On Sun, Nov 26, 2023 at 7:47 AM Anton A. Melnikov <[email protected]>\nwrote:\n\n>\n> postgres=# ALTER DATABASE template1 SET TimeZone = 'UTC';\n>\n> Could you clarify please. Is this normal, predictable behavior?\n>\n>\nhttps://www.postgresql.org/docs/current/sql-createdatabase.html\n\n Database-level configuration parameters (set via ALTER DATABASE) and\ndatabase-level permissions (set via GRANT) are not copied from the template\ndatabase.\n\nDavid J.\n\nOn Sun, Nov 26, 2023 at 7:47 AM Anton A. Melnikov <[email protected]> wrote:\npostgres=# ALTER DATABASE template1 SET TimeZone = 'UTC';\nCould you clarify please. Is this normal, predictable behavior? https://www.postgresql.org/docs/current/sql-createdatabase.html Database-level configuration parameters (set via ALTER DATABASE) and database-level permissions (set via GRANT) are not copied from the template database.David J.", "msg_date": "Sun, 26 Nov 2023 08:53:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should timezone be inherited from template database?" }, { "msg_contents": "\nOn 26.11.2023 18:53, David G. Johnston wrote:\n> \n> https://www.postgresql.org/docs/current/sql-createdatabase.html <https://www.postgresql.org/docs/current/sql-createdatabase.html>\n> \n>  Database-level configuration parameters (set via ALTER DATABASE) and database-level permissions (set via GRANT) are not copied from the template database.\n> \n\nClear. Thank you very much!\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 26 Nov 2023 19:02:28 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should timezone be inherited from template database?" } ]
[ { "msg_contents": "Respected team,\nI am Kirtika Gautam, a third year civil engineering grad from NIT Durgapur.\nI am new to open source contributions but I am well aware of the\ntechnologies like C++/C,javascript,node.js,react.Basically I'm a MERN Stack\ndeveloper and I love to learn new technologies.\nI would love to contribute to your organisation if you could tell me how to\nget started and can guide me.\nHoping to hear from you soon.\nRegards\n\nRespected team,I am Kirtika Gautam, a third year civil engineering grad from NIT Durgapur. I am new to open source contributions but I am well aware of the technologies like C++/C,javascript,node.js,react.Basically I'm a MERN Stack developer and I love to learn new technologies. I would love to contribute to your organisation if you could tell me how to get started and can guide me.Hoping to hear from you soon.Regards", "msg_date": "Sun, 26 Nov 2023 20:44:35 +0530", "msg_from": "Kirtika Gautam <[email protected]>", "msg_from_op": true, "msg_subject": "How to get started with contributions" }, { "msg_contents": "On Mon, Nov 27, 2023 at 2:41 PM Kirtika Gautam <[email protected]> wrote:\n>\n> Respected team,\n> I am Kirtika Gautam, a third year civil engineering grad from NIT Durgapur. I am new to open source contributions but I am well aware of the technologies like C++/C,javascript,node.js,react.Basically I'm a MERN Stack developer and I love to learn new technologies.\n> I would love to contribute to your organisation if you could tell me how to get started and can guide me.\n> Hoping to hear from you soon.\n\nHi Gautam, https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\nmight help you. There are many ways to contribute to the PostgreSQL\nproject. Here are latest commitfests for code related contributions to\nstart with - https://commitfest.postgresql.org/45/,\nhttps://commitfest.postgresql.org/46/.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:52:37 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get started with contributions" }, { "msg_contents": "Hi Kirtika,\nThanks for your interest in the project.\n\nYou may want to start at\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F.\n\nOn Mon, Nov 27, 2023 at 2:41 PM Kirtika Gautam <[email protected]> wrote:\n>\n> Respected team,\n> I am Kirtika Gautam, a third year civil engineering grad from NIT Durgapur. I am new to open source contributions but I am well aware of the technologies like C++/C,javascript,node.js,react.Basically I'm a MERN Stack developer and I love to learn new technologies.\n> I would love to contribute to your organisation if you could tell me how to get started and can guide me.\n> Hoping to hear from you soon.\n> Regards\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:54:28 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get started with contributions" } ]
[ { "msg_contents": "Hi all,\n\nI have noticed that GetHeapamTableAmRoutine() is listed as being a\nmember of tableamapi.c but it is a convenience routine located in\nheapam_handler.c. Shouldn't the header be fixed with something like\nthe attached?\n\nThoughts or comments?\n--\nMichael", "msg_date": "Mon, 27 Nov 2023 14:50:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect comment in tableam.h regarding GetHeapamTableAmRoutine()" }, { "msg_contents": "On Mon, Nov 27, 2023 at 1:50 PM Michael Paquier <[email protected]> wrote:\n\n> I have noticed that GetHeapamTableAmRoutine() is listed as being a\n> member of tableamapi.c but it is a convenience routine located in\n> heapam_handler.c. Shouldn't the header be fixed with something like\n> the attached?\n\n\n+1. Nice catch.\n\nThanks\nRichard\n\nOn Mon, Nov 27, 2023 at 1:50 PM Michael Paquier <[email protected]> wrote:\nI have noticed that GetHeapamTableAmRoutine() is listed as being a\nmember of tableamapi.c but it is a convenience routine located in\nheapam_handler.c.  Shouldn't the header be fixed with something like\nthe attached?+1. Nice catch.ThanksRichard", "msg_date": "Mon, 27 Nov 2023 14:14:32 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in tableam.h regarding\n GetHeapamTableAmRoutine()" }, { "msg_contents": "On Mon, Nov 27, 2023 at 02:14:32PM +0800, Richard Guo wrote:\n> +1. Nice catch.\n\nThanks, applied it.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 08:45:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect comment in tableam.h regarding\n GetHeapamTableAmRoutine()" } ]
[ { "msg_contents": "Hi,\n\nSSL tests fail on OpenSSL v3.2.0. I tested both on macOS (CI) and\ndebian (my local) and both failed with the same errors. To trigger\nthese errors on CI, you may need to clear the repository cache;\notherwise macOS won't install the v3.2.0 of the OpenSSL.\n\n001_ssltests:\npsql exited with signal 6 (core dumped): 'psql: error: connection to\nserver at \"127.0.0.1\", port 56718 failed: server closed the connection\nunexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nSSL SYSCALL error: Connection reset by peer' while running 'psql -XAtq\n-d sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid\nsslcrldir=invalid user=ssltestuser dbname=trustdb hostaddr=127.0.0.1\nhost=common-name.pg-ssltest.test sslrootcert=invalid sslmode=require\n-f - -w' at /Users/admin/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm\n\n\n002_scram:\npsql exited with signal 6 (core dumped): 'psql: error: connection to\nserver at \"127.0.0.1\", port 54531 failed: server closed the connection\nunexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nSSL SYSCALL error: Connection reset by peer' while running 'psql -XAtq\n-d dbname=trustdb sslmode=require sslcert=invalid sslrootcert=invalid\nhostaddr=127.0.0.1 host=localhost user=ssltestuser -f - -w' at\n/Users/admin/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm line 1997.\n\n\n003_sslinfo:\npsql exited with signal 6 (core dumped): 'psql: error: connection to\nserver at \"127.0.0.1\", port 59337 failed: server closed the connection\nunexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nSSL SYSCALL error: Connection reset by peer' while running 'psql -XAtq\n-d sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid\nsslcrldir=invalid sslrootcert=ssl/root+server_ca.crt sslmode=require\ndbname=certdb hostaddr=127.0.0.1 host=localhost user=ssltestuser\nsslcert=ssl/client_ext.crt\nsslkey=/Users/admin/pgsql/build/testrun/ssl/003_sslinfo/data/tmp_test_q11O/client_ext.key\n-f - -w' at /Users/admin/pgsql/src/test/perl/PostgreSQL/Test/Cluster.pm\nline 1997.\n\nmacOS CI run: https://cirrus-ci.com/task/5128008789393408\n\nI couldn't find the cause yet but just wanted to inform you.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 27 Nov 2023 21:05:42 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Nazir,\n\nThanks for opening a thread. Was just about to start one, here what we \ncame up with so far.\n\nHomebrew users discovered a regression[0] when using Postgres compiled \nand linked against OpenSSL version 3.2.\n\n$ psql \"postgresql://$DB?sslmode=require\"\npsql: error: connection to server at \"redacted\" (redacted), port 5432 failed: ERROR: Parameter 'user' is missing in startup packet.\ndouble free or corruption (out)\nAborted (core dumped)\n\nAnalyzing the backtrace, OpenSSL was overwriting heap-allocated data in\nour PGconn struct because it thought BIO::ptr was a struct bss_sock_st\n*. OpenSSL then called a memset() on a member of that struct, and we\nzeroed out data in our PGconn struct.\n\nBIO_get_data(3) says the following:\n\n> These functions are mainly useful when implementing a custom BIO.\n>\n> The BIO_set_data() function associates the custom data pointed to by ptr\n> with the BIO a. 
This data can subsequently be retrieved via a call to\n> BIO_get_data(). This can be used by custom BIOs for storing\n> implementation specific information.\n\nIf you take a look at my_BIO_s_socket(), we create a partially custom\nBIO, but for the most part are defaulting to the methods defined by\nBIO_s_socket(). We need to set application-specific data and not BIO\nprivate data, so that the BIO implementation we rely on, can properly\nassert that its private data is what it expects.\n\nThe ssl test suite continues to pass with this patch. This patch should \nbe backported to every supported Postgres version most likely.\n\n[0]: https://github.com/Homebrew/homebrew-core/issues/155651\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 27 Nov 2023 12:17:45 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Here is a v2 which adds back a comment that was not meant to be removed.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 27 Nov 2023 12:33:49 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon, Nov 27, 2023 at 12:33:49PM -0600, Tristan Partin wrote:\n> -\t\tres = secure_raw_read(((Port *) BIO_get_data(h)), buf, size);\n> +\t\tres = secure_raw_read(((Port *) BIO_get_app_data(h)), buf, size);\n> \t\tBIO_clear_retry_flags(h);\n> \t\tif (res <= 0)\n\nInteresting. I have yet to look at that in details, but\nBIO_get_app_data() exists down to 0.9.8, which is the oldest version\nwe need to support for stable branches. So that looks like a safe\nbet.\n\n> -#ifndef HAVE_BIO_GET_DATA\n> -#define BIO_get_data(bio) (bio->ptr)\n> -#define BIO_set_data(bio, data) (bio->ptr = data)\n> -#endif\n\nShouldn't this patch do a refresh of configure.ac and remove the check\non BIO_get_data() if HAVE_BIO_GET_DATA is gone?\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 08:53:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon Nov 27, 2023 at 5:53 PM CST, Michael Paquier wrote:\n> On Mon, Nov 27, 2023 at 12:33:49PM -0600, Tristan Partin wrote:\n> > -#ifndef HAVE_BIO_GET_DATA\n> > -#define BIO_get_data(bio) (bio->ptr)\n> > -#define BIO_set_data(bio, data) (bio->ptr = data)\n> > -#endif\n>\n> Shouldn't this patch do a refresh of configure.ac and remove the check\n> on BIO_get_data() if HAVE_BIO_GET_DATA is gone?\n\nSee the attached v3. I am unfamiliar with autotools, so I just hand \nedited the configure.ac script instead of whatever \"refresh\" means.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 27 Nov 2023 18:00:00 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Interesting. I have yet to look at that in details, but\n> BIO_get_app_data() exists down to 0.9.8, which is the oldest version\n> we need to support for stable branches. So that looks like a safe\n> bet.\n\nWhat about LibreSSL? In general, I'm not too pleased with just assuming\nthat BIO_get_app_data exists. If we can do that, we can probably remove\nmost of the OpenSSL function probes that configure.ac has today. 
Even\nif that's a good idea in HEAD, I doubt we want to do it all the way back.\n\nI'd be inclined to form the patch more along the lines of\ns/BIO_get_data/BIO_get_app_data/g, with a configure check for\nBIO_get_app_data and falling back to the existing direct use of\nbio->ptr if it's not there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 19:21:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "It was first added in SSLeay 0.8.1 which predates OpenSSL let alone the LibreSSL fork.\r\n\r\nIt probably doesn’t exist in BoringSSL but neither does a lot of things.\r\n\r\n> On 28 Nov 2023, at 00:21, Tom Lane <[email protected]> wrote:\r\n> \r\n> Michael Paquier <[email protected]> writes:\r\n>> Interesting. I have yet to look at that in details, but\r\n>> BIO_get_app_data() exists down to 0.9.8, which is the oldest version\r\n>> we need to support for stable branches. So that looks like a safe\r\n>> bet.\r\n> \r\n> What about LibreSSL? In general, I'm not too pleased with just assuming\r\n> that BIO_get_app_data exists. If we can do that, we can probably remove\r\n> most of the OpenSSL function probes that configure.ac has today. Even\r\n> if that's a good idea in HEAD, I doubt we want to do it all the way back.\r\n> \r\n> I'd be inclined to form the patch more along the lines of\r\n> s/BIO_get_data/BIO_get_app_data/g, with a configure check for\r\n> BIO_get_app_data and falling back to the existing direct use of\r\n> bio->ptr if it's not there.\r\n> \r\n> regards, tom lane\r\n", "msg_date": "Tue, 28 Nov 2023 00:29:41 +0000", "msg_from": "Bo Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon Nov 27, 2023 at 6:21 PM CST, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n> > Interesting. I have yet to look at that in details, but\n> > BIO_get_app_data() exists down to 0.9.8, which is the oldest version\n> > we need to support for stable branches. So that looks like a safe\n> > bet.\n>\n> What about LibreSSL? In general, I'm not too pleased with just assuming\n> that BIO_get_app_data exists. If we can do that, we can probably remove\n> most of the OpenSSL function probes that configure.ac has today. Even\n> if that's a good idea in HEAD, I doubt we want to do it all the way back.\n\nAs Bo said, this has been available since before LibreSSL forked off of \nOpenSSL.\n\n> I'd be inclined to form the patch more along the lines of\n> s/BIO_get_data/BIO_get_app_data/g, with a configure check for\n> BIO_get_app_data and falling back to the existing direct use of\n> bio->ptr if it's not there.\n\nFalling back to what existed before is invalid. BIO::ptr is private data \nfor the BIO implementation. BIO_{get,set}_app_data() does\nsomething completely different than setting BIO::ptr. In Postgres we \ncall BIO_meth_set_create() with BIO_meth_get_create() from \nBIO_s_socket(). The create function we pass allocates bi->ptr to \na struct bss_sock_st * as previously stated, and that's been the case \nsince March 10, 2022[0]. Essentially Postgres only worked because the \nBIO implementation didn't use the private data section until the linked \ncommit. 
I don't see any reason to keep compatibility with what only \nworked by accident.\n\n[0]: https://github.com/openssl/openssl/commit/a3e53d56831adb60d6875297b3339a4251f735d2\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 27 Nov 2023 18:48:13 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Mon Nov 27, 2023 at 6:21 PM CST, Tom Lane wrote:\n>> What about LibreSSL? In general, I'm not too pleased with just assuming\n>> that BIO_get_app_data exists.\n\n> Falling back to what existed before is invalid.\n\nWell, sure it only worked by accident, but it did work with older\nOpenSSL versions. If we assume that BIO_get_app_data exists, and\nsomebody tries to use it with a version that hasn't got that,\nit won't work.\n\nHaving said that, my concern was mainly driven by the comments in\nconfigure.ac claiming that this was an OpenSSL 1.1.0 addition.\nLooking at the relevant commits, 593d4e47d and 5c6df67e0, it seems\nthat that was less about \"the function doesn't exist before 1.1.0\"\nand more about \"in 1.1.0 we have to use the function because we\ncan no longer directly access the ptr field\". If the function\ndoes exist in 0.9.8 then I concur that we don't need to test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 20:14:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon Nov 27, 2023 at 7:14 PM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Mon Nov 27, 2023 at 6:21 PM CST, Tom Lane wrote:\n> >> What about LibreSSL? In general, I'm not too pleased with just assuming\n> >> that BIO_get_app_data exists.\n>\n> > Falling back to what existed before is invalid.\n>\n> Well, sure it only worked by accident, but it did work with older\n> OpenSSL versions. If we assume that BIO_get_app_data exists, and\n> somebody tries to use it with a version that hasn't got that,\n> it won't work.\n>\n> Having said that, my concern was mainly driven by the comments in\n> configure.ac claiming that this was an OpenSSL 1.1.0 addition.\n> Looking at the relevant commits, 593d4e47d and 5c6df67e0, it seems\n> that that was less about \"the function doesn't exist before 1.1.0\"\n> and more about \"in 1.1.0 we have to use the function because we\n> can no longer directly access the ptr field\". If the function\n> does exist in 0.9.8 then I concur that we don't need to test.\n\nI have gone back all the way to 1.0.0 and confirmed that the function \nexists. Didn't choose to go further than that since Postgres doesn't \nsupport it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 27 Nov 2023 19:28:19 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Mon Nov 27, 2023 at 7:14 PM CST, Tom Lane wrote:\n>> ... If the function\n>> does exist in 0.9.8 then I concur that we don't need to test.\n\n> I have gone back all the way to 1.0.0 and confirmed that the function \n> exists. 
Didn't choose to go further than that since Postgres doesn't \n> support it.\n\nSince this is something we'd need to back-patch, OpenSSL 0.9.8\nand later are relevant: the v12 branch still supports those.\nIt's moot given Bo's claim about the origin of the function,\nthough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 20:32:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "I can confirm that we also fail when using up-to-date MacPorts, which\nseems to have started shipping 3.2.0 last week or so. I tried the v3\npatch, and while that stops the crash, it looks like 3.2.0 has also\nmade some random changes in error messages:\n\n# +++ tap check in src/test/ssl +++\nt/001_ssltests.pl .. 163/? \n# Failed test 'certificate authorization fails with revoked client cert: matches'\n# at t/001_ssltests.pl line 775.\n# 'psql: error: connection to server at \"127.0.0.1\", port 58332 failed: SSL error: ssl/tls alert certificate revoked'\n# doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n# Failed test 'certificate authorization fails with revoked client cert with server-side CRL directory: matches'\n# at t/001_ssltests.pl line 880.\n# 'psql: error: connection to server at \"127.0.0.1\", port 58332 failed: SSL error: ssl/tls alert certificate revoked'\n# doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n# Failed test 'certificate authorization fails with revoked UTF-8 client cert with server-side CRL directory: matches'\n# at t/001_ssltests.pl line 893.\n# 'psql: error: connection to server at \"127.0.0.1\", port 58332 failed: SSL error: ssl/tls alert certificate revoked'\n# doesn't match '(?^:SSL error: sslv3 alert certificate revoked)'\n# Looks like you failed 3 tests of 205.\nt/001_ssltests.pl .. Dubious, test returned 3 (wstat 768, 0x300)\nFailed 3/205 subtests \nt/002_scram.pl ..... ok \nt/003_sslinfo.pl ... ok \n\nGuess we'll need to adjust the test script a bit too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 21:04:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon, Nov 27, 2023 at 08:32:28PM -0500, Tom Lane wrote:\n> Since this is something we'd need to back-patch, OpenSSL 0.9.8\n> and later are relevant: the v12 branch still supports those.\n> It's moot given Bo's claim about the origin of the function,\n> though.\n\nYep, unfortunately this needs to be checked down to 0.9.8. I've just\ndone this exercise yesterday for another backpatch..\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 12:47:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Mon, Nov 27, 2023 at 09:04:23PM -0500, Tom Lane wrote:\n> I can confirm that we also fail when using up-to-date MacPorts, which\n> seems to have started shipping 3.2.0 last week or so. I tried the v3\n> patch, and while that stops the crash, it looks like 3.2.0 has also\n> made some random changes in error messages:\n> \n> Failed 3/205 subtests \n> t/002_scram.pl ..... ok \n> t/003_sslinfo.pl ... ok \n> \n> Guess we'll need to adjust the test script a bit too.\n\nSigh. We could use an extra check_pg_config() with a routine new in\n3.2.0. 
Looking at CHANGES.md, SSL_get0_group_name() seems to be one\ngeneric choice here.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 12:55:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue, Nov 28, 2023 at 12:55:37PM +0900, Michael Paquier wrote:\n> Sigh. We could use an extra check_pg_config() with a routine new in\n> 3.2.0. Looking at CHANGES.md, SSL_get0_group_name() seems to be one\n> generic choice here.\n\nOr even simpler: plant a (ssl\\/tls|sslv3) in these strings.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 12:58:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Or even simpler: plant a (ssl\\/tls|sslv3) in these strings.\n\nYeah, weakening the pattern match was what I had in mind.\nI was thinking of something like \"ssl[a-z0-9/]*\" but your\nproposal works too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 23:18:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "> On 28 Nov 2023, at 01:29, Bo Anderson <[email protected]> wrote:\n\n> It probably doesn’t exist in BoringSSL but neither does a lot of things.\n\nThats not an issue, we don't support building with BoringSSL.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:38:12 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> Thats not an issue, we don't support building with BoringSSL.\n\nRight. I'll work on getting this pushed, unless someone else\nis already on it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:00:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue Nov 28, 2023 at 9:00 AM CST, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n> > Thats not an issue, we don't support building with BoringSSL.\n>\n> Right. I'll work on getting this pushed, unless someone else\n> is already on it?\n\nWhen you say \"this\" are you referring to the patch I sent or adding \nsupport for BoringSSL?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 28 Nov 2023 09:16:31 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "How are you guys running the tests? I have PG_TEST_EXTRA=ssl and \neverything passes for me. Granted, I am using the Meson build.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 28 Nov 2023 09:17:31 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> How are you guys running the tests? I have PG_TEST_EXTRA=ssl and \n> everything passes for me. 
Granted, I am using the Meson build.\n\nI'm doing what it says in test/ssl/README:\n\n\tmake check PG_TEST_EXTRA=ssl\n\nI don't know whether the meson build has support for running these\nextra \"unsafe\" tests or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:31:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue Nov 28, 2023 at 9:31 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > How are you guys running the tests? I have PG_TEST_EXTRA=ssl and \n> > everything passes for me. Granted, I am using the Meson build.\n>\n> I'm doing what it says in test/ssl/README:\n>\n> \tmake check PG_TEST_EXTRA=ssl\n>\n> I don't know whether the meson build has support for running these\n> extra \"unsafe\" tests or not.\n\nThanks Tom. I'll check again. Maybe I didn't set LD_LIBRARY_PATH when \nrunning the tests. I have openssl installing to a non-default prefix.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 28 Nov 2023 09:33:04 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> When you say \"this\" are you referring to the patch I sent or adding \n> support for BoringSSL?\n\nI have no interest in supporting BoringSSL. I just replied to\nDaniel's comment because it seemed to resolve the last concern\nabout whether your patch is OK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:42:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue Nov 28, 2023 at 9:42 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > When you say \"this\" are you referring to the patch I sent or adding \n> > support for BoringSSL?\n>\n> I have no interest in supporting BoringSSL. I just replied to\n> Daniel's comment because it seemed to resolve the last concern\n> about whether your patch is OK.\n\nIf you haven't started fixing the tests, then I'll get to work on it and \nsend a new revision before the end of the day. Thanks!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 28 Nov 2023 09:44:18 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Tue Nov 28, 2023 at 9:42 AM CST, Tom Lane wrote:\n>> I have no interest in supporting BoringSSL. I just replied to\n>> Daniel's comment because it seemed to resolve the last concern\n>> about whether your patch is OK.\n\n> If you haven't started fixing the tests, then I'll get to work on it and \n> send a new revision before the end of the day. Thanks!\n\nNo need, I can finish it up from here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 11:06:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue Nov 28, 2023 at 10:06 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Tue Nov 28, 2023 at 9:42 AM CST, Tom Lane wrote:\n> >> I have no interest in supporting BoringSSL. 
I just replied to\n> >> Daniel's comment because it seemed to resolve the last concern\n> >> about whether your patch is OK.\n>\n> > If you haven't started fixing the tests, then I'll get to work on it and \n> > send a new revision before the end of the day. Thanks!\n>\n> No need, I can finish it up from here.\n\nSweet. I appreciate your help.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:07:18 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "FTR, I've pushed this and the buildfarm seems happy. In particular,\nI just updated indri to the latest MacPorts packages including\nOpenSSL 3.2.0, so we'll have coverage of that going forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 23:30:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Tue Nov 28, 2023 at 9:42 AM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > When you say \"this\" are you referring to the patch I sent or adding \n> > support for BoringSSL?\n>\n> I have no interest in supporting BoringSSL.\n\nFunnily enough, here[0] is BoringSSL adding the BIO_{get,set}_app_data() \nAPIs.\n\n[0]: https://github.com/google/boringssl/commit/2139aba2e3e28cd1cdefbd9b48e2c31a75441203\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 29 Nov 2023 09:21:31 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "> On 29 Nov 2023, at 16:21, Tristan Partin <[email protected]> wrote:\n> \n> On Tue Nov 28, 2023 at 9:42 AM CST, Tom Lane wrote:\n>> \"Tristan Partin\" <[email protected]> writes:\n>> > When you say \"this\" are you referring to the patch I sent or adding > support for BoringSSL?\n>> \n>> I have no interest in supporting BoringSSL.\n> \n> Funnily enough, here[0] is BoringSSL adding the BIO_{get,set}_app_data() APIs.\n\nStill doesn't seem like a good candidate for a postgres TLS library since they\nthemselves claim:\n\n \"Although BoringSSL is an open source project, it is not intended for\n general use, as OpenSSL is. We don't recommend that third parties depend\n upon it. Doing so is likely to be frustrating because there are no\n guarantees of API or ABI stability.\"\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 29 Nov 2023 16:26:16 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 29 Nov 2023, at 16:21, Tristan Partin <[email protected]> wrote:\n>> Funnily enough, here[0] is BoringSSL adding the BIO_{get,set}_app_data() APIs.\n\n> Still doesn't seem like a good candidate for a postgres TLS library since they\n> themselves claim:\n> \"Although BoringSSL is an open source project, it is not intended for\n> general use, as OpenSSL is. We don't recommend that third parties depend\n> upon it. Doing so is likely to be frustrating because there are no\n> guarantees of API or ABI stability.\"\n\nKind of odd that, with that mission statement, they are adding\nBIO_{get,set}_app_data on the justification that OpenSSL has it\nand Postgres is starting to use it. 
Nonetheless, that commit\nalso seems to prove the point about lack of API/ABI stability.\n\nI'm content to take their advice and not try to support BoringSSL.\nIt's not clear what benefit to us there would be, and we already\nhave our hands full coping with all the different OpenSSL and LibreSSL\nversions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Nov 2023 11:32:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Wed Nov 29, 2023 at 10:32 AM CST, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n> > On 29 Nov 2023, at 16:21, Tristan Partin <[email protected]> wrote:\n> >> Funnily enough, here[0] is BoringSSL adding the BIO_{get,set}_app_data() APIs.\n>\n> > Still doesn't seem like a good candidate for a postgres TLS library since they\n> > themselves claim:\n> > \"Although BoringSSL is an open source project, it is not intended for\n> > general use, as OpenSSL is. We don't recommend that third parties depend\n> > upon it. Doing so is likely to be frustrating because there are no\n> > guarantees of API or ABI stability.\"\n>\n> Kind of odd that, with that mission statement, they are adding\n> BIO_{get,set}_app_data on the justification that OpenSSL has it\n> and Postgres is starting to use it. Nonetheless, that commit\n> also seems to prove the point about lack of API/ABI stability.\n>\n> I'm content to take their advice and not try to support BoringSSL.\n> It's not clear what benefit to us there would be, and we already\n> have our hands full coping with all the different OpenSSL and LibreSSL\n> versions.\n\nYep, I just wanted to point it out in the interest of relevancy to our \nconversation yesterday :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 29 Nov 2023 10:48:23 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On 2023-Nov-29, Tom Lane wrote:\n\n> Kind of odd that, with that mission statement, they are adding\n> BIO_{get,set}_app_data on the justification that OpenSSL has it\n> and Postgres is starting to use it. Nonetheless, that commit\n> also seems to prove the point about lack of API/ABI stability.\n\nAs I understand it, this simply means that Google is already building\ntheir own fork of Postgres, patching it to use BoringSSL. (This makes\nsense, since they offer Postgres databases in their cloud offerings.)\nThey don't need PGDG to support BoringSSL, but they do need to make sure\nthat BoringSSL is able to support being used by Postgres.\n\n> I'm content to take their advice and not try to support BoringSSL.\n\nThat seems the right reaction. It is not our problem.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n", "msg_date": "Wed, 29 Nov 2023 17:58:22 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "I ran into an SSL issue when using the MSYS2/MINGW build of Postgres\nfor the PgBouncer test suite. Postgres crashed whenever you tried to\nopen an ssl connection to it.\nhttps://github.com/msys2/MINGW-packages/issues/19851\n\nI'm wondering if the issue described in this thread could be related\nto the issue I ran into. 
Afaict the merged patch has not been released\nyet.\n\n\n", "msg_date": "Wed, 24 Jan 2024 16:58:17 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" }, { "msg_contents": "On Wed Jan 24, 2024 at 9:58 AM CST, Jelte Fennema-Nio wrote:\n> I ran into an SSL issue when using the MSYS2/MINGW build of Postgres\n> for the PgBouncer test suite. Postgres crashed whenever you tried to\n> open an ssl connection to it.\n> https://github.com/msys2/MINGW-packages/issues/19851\n>\n> I'm wondering if the issue described in this thread could be related\n> to the issue I ran into. Afaict the merged patch has not been released\n> yet.\n\nDo you have a backtrace? Given that the version is 3.2.0, seems likely.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 24 Jan 2024 10:23:45 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL tests fail on OpenSSL v3.2.0" } ]
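For readers who reach this thread later, here is a hedged, self-contained sketch of the pattern the fix converged on: connection state is stored in the BIO's application-data slot via BIO_set_app_data()/BIO_get_app_data(), leaving the stock socket BIO's private data pointer (BIO::ptr) entirely to OpenSSL. The names my_conn_t, my_raw_read() and the other my_* identifiers below are placeholders invented for illustration, not PostgreSQL or OpenSSL symbols.

    #include <openssl/bio.h>

    typedef struct my_conn_t my_conn_t;                       /* placeholder connection type */
    extern int my_raw_read(my_conn_t *conn, void *buf, int len); /* placeholder raw I/O call */

    static BIO_METHOD *my_bio_methods = NULL;

    static int
    my_sock_read(BIO *h, char *buf, int size)
    {
        /* fetch our state from the *application* data slot, not the BIO's private ptr */
        my_conn_t  *conn = (my_conn_t *) BIO_get_app_data(h);
        int         res = my_raw_read(conn, buf, size);

        BIO_clear_retry_flags(h);
        /* retry/error handling elided in this sketch */
        return res;
    }

    static BIO_METHOD *
    my_BIO_s_socket(void)
    {
        if (my_bio_methods == NULL)
        {
            int         bio_index = BIO_get_new_index();

            my_bio_methods = BIO_meth_new(bio_index | BIO_TYPE_SOURCE_SINK,
                                          "my custom socket");
            /* delegate everything except read to the stock socket BIO */
            BIO_meth_set_write(my_bio_methods, BIO_meth_get_write(BIO_s_socket()));
            BIO_meth_set_ctrl(my_bio_methods, BIO_meth_get_ctrl(BIO_s_socket()));
            BIO_meth_set_create(my_bio_methods, BIO_meth_get_create(BIO_s_socket()));
            BIO_meth_set_destroy(my_bio_methods, BIO_meth_get_destroy(BIO_s_socket()));
            BIO_meth_set_read(my_bio_methods, my_sock_read);
        }
        return my_bio_methods;
    }

    static BIO *
    my_bio_new(int sock, my_conn_t *conn)
    {
        BIO        *bio = BIO_new(my_BIO_s_socket());

        if (bio == NULL)
            return NULL;
        BIO_set_fd(bio, sock, BIO_NOCLOSE);
        BIO_set_app_data(bio, conn);    /* app data is ours; BIO::ptr stays OpenSSL's */
        return bio;
    }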
[ { "msg_contents": "I had some interesting conversations with a couple PostgreSQL community\nmembers at PASS Data Summit the week before last about the collation\nproblem, and then - just in this last week - I saw two more people on\npublic channels hitting corruption problems. One person on the public\nPostgreSQL Slack, we eventually figured out they had upgraded from\nUbuntu 18.04 to 22.04 which hits glibc 2.28; a second person here on the\npgsql-general list reported by Daniel Westermann and I assume\nrepresenting a client of dbi services [1]. Everyone who's been tracking\nthis over the past few years has seen the steady stream of quiet\ncomplaints in public from people at almost every major PG company,\nlargely around the glibc 2.28 debacle.\n\nI've been tracking the discussions around collation here on the lists\nand I've had a number of conversations with folks working deeply in this\narea inside and outside of AWS, and I was part of the effort to address\nit at AWS since we first became aware of it many years ago.\n\nIt seems to me the general perspective on the mailing lists is that:\n\n1) \"collation changes are uncommon\" (which is relatively correct)\n2) \"most users would rather have ease-of-use than 100% safety, since\nit's uncommon\"\n\nAnd I think this led to the current behavior of issuing a warning rather\nthan an error, and providing a SQL command \"alter ... refresh collation\"\nwhich simply instructs the database to permanently forget the warning\nwithout changing anything. I agree that some users might prefer this\nbehavior, but I think businesses like banks or healthcare companies\nwould be appalled, and would prefer to do the extra work and have\ncertainty of avoiding small but known probabilities of silent data\ncorruption.\n\nAs I said on the pgsql-general thread: glibc 2.28 has certainly been the\nmost obvious and impactful case, so the focus is understandable, but\nthere's a bit of a myth in the general public that the problem is only\nwith glibc 2.28 (and not ICU or other glibc versions or data structures\nother than indexes). Unfortunately, contrary to current popular belief,\nthe only truly safe way to update an operating system under PosgreSQL is\nwith logical dump/load or logical replication, or continuing to compile\nand use the exact older version of ICU from the old OS (if you use ICU).\nI think the ICU folks are generally careful enough that it'll be far\nless likely for compiler changes and new compiler optimizations to\ninadvertently change collation on newer operating systems and newer\nbuild toolchains (eg. for strings with don't have linguistically defined\ncollation, like mixing characters from multiple languages and classes).\n\nIt's been two years now since I published the collation torture test\nover on github, which directly compares 10 years of both glibc and ICU\nchanges and demonstrates clearly that both ICU and glibc libraries have\nregular small changes, and both libraries have had at least one release\nwith a massive number of changes. [2]\n\nI also published a blog post this past March with a step-by-step\nreproducible demonstration of silent corruption without any indexes\ninvolved by using ICU (not glibc) with an OS upgrade from Ubuntu 20.04\nto 22.04. 
[3] The warning was not even displayed to the user, because\nit happened at connection time rather than query time.\n\nThat blog also listed many reasons that glibc & ICU regularly include\nsmall changes and linked to real examples from ICU: new characters\n(which use existing code points), fixing incorrect rules, governments or\nuniversities clarifying rules, general improvements, and unintentional\nchanges from code changes or refactors (like glibc in Ubuntu 15.04\nchanging sort order for 22 thousand CJK characters, many years prior to\n2.28).\n\nMy own personal opinion here is that PostgreSQL is on a clear trajectory\nto soon be the dominant database of businesses like banks and healthcare\ncompanies, and that the PostgreSQL default configuration with regard to\nsafety and durability should be bank defaults rather than \"easy\" defaults.\n\nFor this reason, I'd like to revisit two specific current behaviors of\nPostgreSQL and get a sense of how strongly everyone feels about them.\n\nFirst: I'd suggest that a collation version mismatch should cause an\nERROR rather than a WARNING by default. If we want to have a GUC that\nallows warning behavior, I think that's OK but I think it should be\nsuperuser-only and documented as a \"developer\" setting similar to\nzero_damaged_pages.\n\nSecond: I'd suggest that all of the \"alter ... refresh collation\"\ncommands should be strictly superuser-only rather than\nowner-of-collation-privs, and that they should be similarly documented\nas something that is generally advised against and exists for\nextraordinary circumstances.\n\nI know these things have been discussed before, and I realize the\nimplications are important and inconvenient for many users, and also I\nrealize that I'm not often involved in discussions here on the hackers\nemail list. (I usually catch up on hackers from archives irregularly,\nfor areas I'm interested in, and I'm involved more regularly with public\nslack and user groups.) But I'm a bit unsatisfied with the current state\nof things and want to bring up the topic here and see what happens.\n\nRespectfully,\nJeremy\n\n\n\n1:\nhttps://www.postgresql.org/message-id/flat/CA%2BfnDAZufFS-4-6%3DO3L%2BqG9iFT8tm6BvtZXNnSm1dkJ8GciCkA%40mail.gmail.com#beefde2f9e54dcee813a8f731993247d\n\n2: https://github.com/ardentperf/glibc-unicode-sorting/\n\n3: https://ardentperf.com/2023/03/26/did-postgres-lose-my-data/\n\n\n\n-- \nJeremy Schneider\nDatabase and Performance Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 27 Nov 2023 11:06:13 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": true, "msg_subject": "proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, 2023-11-27 at 11:06 -0800, Jeremy Schneider wrote:\n> First: I'd suggest that a collation version mismatch should cause an\n> ERROR rather than a WARNING by default. If we want to have a GUC that\n> allows warning behavior, I think that's OK but I think it should be\n> superuser-only and documented as a \"developer\" setting similar to\n> zero_damaged_pages.\n> \n> Second: I'd suggest that all of the \"alter ... refresh collation\"\n> commands should be strictly superuser-only rather than\n> owner-of-collation-privs, and that they should be similarly documented\n> as something that is generally advised against and exists for\n> extraordinary circumstances.\n\nThanks for spending thought on this painful subject.\n\nI can get behind changing the collation version mismatch warning into\nan error. 
It would cause more annoyance, but might avert bigger pain\nlater on.\n\nBut I don't think that ALTER DATABASE ... REFRESH COLLATION VERSION\nneed be superuser-only. Whoever creates an object may alter it in\nPostgreSQL, and system collations are owned by the bootstrap superuser\nanyway. The point of the warning (or proposed error) is that the user\nknows \"here is a potential problem, have a closer look\".\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 27 Nov 2023 20:17:44 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "I forgot to add that the problem will remain a problem until the\nday we start keeping our own copy of the ICU library in the source\ntree...\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 27 Nov 2023 20:19:45 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, 2023-11-27 at 11:06 -0800, Jeremy Schneider wrote:\n> I've been tracking the discussions around collation here on the lists\n> and I've had a number of conversations with folks working deeply in\n> this\n> area inside and outside of AWS, and I was part of the effort to\n> address\n> it at AWS since we first became aware of it many years ago.\n\nFor the record, I don't have a strong opinion on your specific\nproposals. Not because I don't care, but because the available options\nall seem pretty bad -- including the status quo.\n\nMy general opinion (not tied specifically to your proposals) is that we\nneed to pursue a lot of different approaches and hope to mitigate the\nproblem. With that in mind, I think your proposals have merit but we of\ncourse need to consider the downsides.\n\n> 2) \"most users would rather have ease-of-use than 100% safety, since\n> it's uncommon\"\n> \n> And I think this led to the current behavior of issuing a warning\n> rather\n> than an error\n\nThe elevel trade-off is *availability* vs safety, not ease-of-use vs\nsafety. It's harder to reason about what most users might want in that\nsituation.\n\n> First: I'd suggest that a collation version mismatch should cause an\n> ERROR rather than a WARNING by default.\n\nIs this proposal based on our current notion of collation version?\nThere's been a lot of reasonable skepticism that what's stored in\ndatcollversion is a good indicator.\n\n> If we want to have a GUC that\n> allows warning behavior, I think that's OK but I think it should be\n> superuser-only and documented as a \"developer\" setting similar to\n> zero_damaged_pages.\n\nA GUC seems sensible to express the availability-vs-safety trade-off. I\nsuspect we can get a GUC that defaults to \"warning\" committed, but\nanything else will be controversial.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 12:29:47 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, 2023-11-27 at 20:19 +0100, Laurenz Albe wrote:\n> I forgot to add that the problem will remain a problem until the\n> day we start keeping our own copy of the ICU library in the source\n> tree...\n\nAnother option is for packagers to keep specific ICU versions around\nfor an extended time, and make it possible for Postgres to link to the\nright one more flexibly (e.g. 
tie at initdb time, or some kind of\nmulti-lib system).\n\nEven if ICU is available, we still have the problem of defaults. initdb\ndefaults to libc, and so does CREATE COLLATION (even if the database\ncollation is ICU). So it will be a long time before it's used widely\nenough to consider the problem solved.\n\nAnd even after all of that, ICU is not perfect, and our support for it\nstill has various rough edges.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 12:39:57 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, Nov 27, 2023 at 9:30 PM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2023-11-27 at 11:06 -0800, Jeremy Schneider wrote:\n> > If we want to have a GUC that\n> > allows warning behavior, I think that's OK but I think it should be\n> > superuser-only and documented as a \"developer\" setting similar to\n> > zero_damaged_pages.\n>\n> A GUC seems sensible to express the availability-vs-safety trade-off. I\n> suspect we can get a GUC that defaults to \"warning\" committed, but\n> anything else will be controversial.\n\nA guc like this would bring a set of problems similar to what we have\ne.g. with fsync.\n\nThat is, set it to \"warnings only\", insert a single row into the table\nwith an \"unlucky\" key, set it back to \"errors always\" and you now have\na corrupt database, but your setting reflects that it shouldn't be\ncorrupt. Sure, people shouldn't do that - but people will, and it will\nmake things harder to debug.\n\nThere's been talk before about adding a \"tainted\" flag or similar to\npg_control that gets set if you ever start the system with fsync=off.\nSimilar things could be done here of course, but I'd worry a bit about\nadding another flag like this which can lead to\nhard-to-determine-state without resolving that.\n\n(The fact that we have \"fsync\" under WAL config and not developer\noptions is an indication that we can't really use the classification\nof the config parameters are a good indicator of what's safe and not\nsafe to set)\n\nI could get behind turning it into an error though :)\n\n//Magnus\n\n\n", "msg_date": "Mon, 27 Nov 2023 22:37:06 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, 2023-11-27 at 22:37 +0100, Magnus Hagander wrote:\n> That is, set it to \"warnings only\", insert a single row into the\n> table\n> with an \"unlucky\" key, set it back to \"errors always\" and you now\n> have\n> a corrupt database, but your setting reflects that it shouldn't be\n> corrupt.\n\nYou would be giving the setting too much credit if you assume that\nconsistently keeping it on \"error\" is a guarantee against corruption.\n\nIt only affects what we do when we detect potential corruption, but our\ndetection is subject to both false positives and false negatives.\n\nWe'd need to document the setting so that users understand the\nconsequences and limitations.\n\nI won't push strongly for such a setting to exist because I know that\nit's far from a complete solution. 
But I believe it would be sensible\nconsidering that this problem is going to take a while to resolve.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 14:21:59 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On 11/27/23 12:29 PM, Jeff Davis wrote:\n>> 2) \"most users would rather have ease-of-use than 100% safety, since\n>> it's uncommon\"\n>>\n>> And I think this led to the current behavior of issuing a warning\n>> rather\n>> than an error\n> The elevel trade-off is *availability* vs safety, not ease-of-use vs\n> safety. It's harder to reason about what most users might want in that\n> situation.\n\nI'm not in agreement with the idea that this is hard to reason about;\nI've always thought durability & correctness is generally supposed to be\nprioritized over availability in databases. For many enterprise\ncustomers, if they ask why their database wouldn't accept connections\nafter an OS upgrade and we explained that durability & correctness is\nprioritized over availability, I think they would agree we're doing the\nright thing.\n\nIn practice this always happens after a major operating system update of\nsome kind (it would be an unintentional bug in a minor OS upgrade).  In\nmost cases, I hope the error will happen immediately because users\nideally won't even be able to connect (for DB-level glibc and for ICU\ndefault setting).  Giving a hard error quickly after an OS upgrade is\nactually pretty easy for most people to deal with. For most users,\nthey'll immediately understand that something went wrong related to the\nOS upgrade.  And basic testing would turn up connection errors before\nthe production upgrade as long as a connection was attempted as part of\nthe test.\n\nIt seems to me that much of the hand-wringing is around taking a hard\nline on not allowing in-place OS upgrades. We're all aware that when\nyou're talking about tens of terrabytes, in-place upgrade is just a lot\nmore convenient and easy than the alternatives. And we're aware that\nsome other relational databases support this (and also bundle collation\nlibs directly in the DB rather than using external libraries).\n\nI myself wouldn't frame this as an availability issue, I think it's more\nabout ease-of-use in the sense of allowing low-downtime major OS\nupgrades without the complexity of logical replication (but perhaps with\na risk of data loss, because with unicode nobody can actually be 100%\nsure there's no risky characters stored in the DB, and even those of us\nwith extensive expert knowledge struggle to accurately characterize the\nrisk level).\n\nThe hand-wringing often comes down to the argument \"but MAYBE en_US\ndidn't change in those 3 major version releases of ICU that you jumped\nacross to land a new Ubuntu LTS release\" ~~ however I believe it's one\nthing to make this argument with ISO 8859 but in the unicode world en_US\nhas default sort rules for japanese, chinese, arabic, cyrilic, nepalese,\nand all kinds of strings with nonsensical combinations of all these\ncharacters.  
After some years of ICU and PG, I'm just coming to a\nconclusion that the right thing to do is stay safe and don't change ICU\nversions (or glibc versions) for existing databases in-place.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nPerformance Engineer\nAmazon Web Services\n\n\n\n\n\n\nOn 11/27/23 12:29 PM, Jeff Davis wrote:\n\n\n\n\n2) \"most users would rather have ease-of-use than 100% safety, since\nit's uncommon\"\n\nAnd I think this led to the current behavior of issuing a warning\nrather\nthan an error\n\n\n\nThe elevel trade-off is *availability* vs safety, not ease-of-use vs\nsafety. It's harder to reason about what most users might want in that\nsituation.\n\n\n I'm not in agreement with the idea that this is hard to reason\n about; I've always thought durability & correctness is generally\n supposed to be prioritized over availability in databases. For many\n enterprise customers, if they ask why their database wouldn't accept\n connections after an OS upgrade and we explained that durability\n & correctness is prioritized over availability, I think they\n would agree we're doing the right thing.\n\n In practice this always happens after a major operating system\n update of some kind (it would be an unintentional bug in a minor OS\n upgrade).  In most cases, I hope the error will happen immediately\n because users ideally won't even be able to connect (for DB-level\n glibc and for ICU default setting).  Giving a hard error quickly\n after an OS upgrade is actually pretty easy for most people to deal\n with. For most users, they'll immediately understand that something\n went wrong related to the OS upgrade.  And basic testing would turn\n up connection errors before the production upgrade as long as a\n connection was attempted as part of the test.\n\n It seems to me that much of the hand-wringing is around taking a\n hard line on not allowing in-place OS upgrades. We're all aware that\n when you're talking about tens of terrabytes, in-place upgrade is\n just a lot more convenient and easy than the alternatives. And we're\n aware that some other relational databases support this (and also\n bundle collation libs directly in the DB rather than using external\n libraries).\n\n I myself wouldn't frame this as an availability issue, I think it's\n more about ease-of-use in the sense of allowing low-downtime major\n OS upgrades without the complexity of logical replication (but\n perhaps with a risk of data loss, because with unicode nobody can\n actually be 100% sure there's no risky characters stored in the DB,\n and even those of us with extensive expert knowledge struggle to\n accurately characterize the risk level).\n\n The hand-wringing often comes down to the argument \"but MAYBE en_US\n didn't change in those 3 major version releases of ICU that you\n jumped across to land a new Ubuntu LTS release\" ~~ however I believe\n it's one thing to make this argument with ISO 8859 but in the\n unicode world en_US has default sort rules for japanese, chinese,\n arabic, cyrilic, nepalese, and all kinds of strings with nonsensical\n combinations of all these characters.  
After some years of ICU and\n PG, I'm just coming to a conclusion that the right thing to do is\n stay safe and don't change ICU versions (or glibc versions) for\n existing databases in-place.\n\n -Jeremy\n\n\n-- \nJeremy Schneider\nPerformance Engineer\nAmazon Web Services", "msg_date": "Mon, 27 Nov 2023 15:35:19 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On Mon, 2023-11-27 at 15:35 -0800, Jeremy Schneider wrote:\n\n> For many enterprise customers, if they ask why their database\n> wouldn't accept connections after an OS upgrade and we explained that\n> durability & correctness is prioritized over availability, I think\n> they would agree we're doing the right thing.\n\nThey may agree, but their database is still down, and they'll be\nlooking for a clear process to get it going again, ideally within their\nmaintenance window.\n\nIt would be really nice to agree on such a process, or even better, to\nimplement it in code.\n\n> After some years of ICU and PG, I'm just coming to a conclusion that\n> the right thing to do is stay safe and don't change ICU versions (or\n> glibc versions) for existing databases in-place.\n\nI don't disagree, but for a lot of users that depend on their operating\nsystem and packaging infrastructure, that is not very practical.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 18:28:20 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "\tJeremy Schneider wrote:\n\n> 1) \"collation changes are uncommon\" (which is relatively correct)\n> 2) \"most users would rather have ease-of-use than 100% safety, since\n> it's uncommon\"\n> \n> And I think this led to the current behavior of issuing a warning rather\n> than an error,\n\nThere's a technical reason for this being a warning.\nIf it was an error, any attempt to do anything with the collation\nwould fail, which includes REINDEX on indexes using\nthat collation.\nAnd yet that's precisely what you're supposed to do in that\nsituation.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 28 Nov 2023 11:12:52 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: proposal: change behavior on collation version mismatch" }, { "msg_contents": "On 11/28/23 2:12 AM, Daniel Verite wrote:\n> Jeremy Schneider wrote:\n>> 1) \"collation changes are uncommon\" (which is relatively correct)\n>> 2) \"most users would rather have ease-of-use than 100% safety, since\n>> it's uncommon\"\n>>\n>> And I think this led to the current behavior of issuing a warning rather\n>> than an error,\n> There's a technical reason for this being a warning.\n> If it was an error, any attempt to do anything with the collation\n> would fail, which includes REINDEX on indexes using\n> that collation.\n> And yet that's precisely what you're supposed to do in that\n> situation.\n\n\nIndexes are the most obvious and impactful corruption, so the focus is\nunderstandable, but there's a bit of a myth in the general public that\nREINDEX means you fixed your database.  I'm concerned that too many\npeople believe this falsehood, and don't realize that things like\nconstraints and partitions can also be affected by a major OS update\nwhen leaving PG data files in place.  
Also there's a tendancy to use\namcheck and validate btree indexes, but skip other index types.  And of\ncourse none of this is possible when people mistakenly use a different\nmajor OS for the hot standby (but Postgres willingly sends incorrect\nquery results to users).\n\nThis is why my original proposal included an update to the ALTER ...\nREFRESH/COLLATION docs.  Today's conventional wisdom suggests this is a\nsafe command.  It's really not, if you're using unicode (which everyone\nis). Fifteen years ago, you needed to buy a french keyboard to type\nfrench accented characters.  Today it's a quick tap on your phone to get\nchinese, russian, tibetan, emojis, and any other character you can dream\nof.  All of those surprising characters eventually get stored in Postres\ndatabases, often to the surprise of devs and admins, after they discover\ncorruption from an OS upgrade.\n\nAnd to recap some data about historical ICU versions from the torture test:\n\nICU Version | OS Version | en-US characters changed collation |\nzh-Hans-CN characters changed collation | fr-FR characters changed collation\n55.1-7ubuntu0.5 | Ubuntu 16.04.7 LTS | 286,654 | 286,654 | 286,654\n60.2-3ubuntu3.1 | Ubuntu 18.04.6 LTS | 23,741 | 24,415 | 23,741\n63.1-6 | Ubuntu 19.04 | 688 | 688 | 688\n66.1-2ubuntu2 | Ubuntu 20.04.3 LTS | 6,497 | 6,531 | 6,497\n70.1-2 | Ubuntu 22.04 LTS | 879 | 887 | 879\n\nThe very clear trend here is that most changes are made in the root\ncollation rules, affecting all locales.  This means that worrying about\nspecific collation versions of different locales is really focusing on\nan irrelevant edge case.  In ICU development, all the locales tend to\nchange.\n\nIf anyone thinks the Collation Apocalypse is bad now, I predict the\nKubernetes wave will be mayhem.  Fifteen years ago it was rare to\nphysically move PG datafiles to a new major OS.  Most people would dump\nand load their databases, sized in GBs.  Today's multi-TB Postgres\ndatabases have meant an increase of in-place OS upgrades in recent\nyears.  People started to either detach/attach their storage, or they\nused a hot standby. Kubernetes will make these moves across major OS's a\ndaily, effortless occurrence.\n\nICU doesn't fix anything directly.  We do need ICU - only because it\nfinally enables us to compile that old version of ICU forever on every\nnew OS we move to going forward. This was simply impossible with glibc.\nOver the past couple decades, not even Oracle or IBM has managed to\ndeprecate a single version of ICU from a relational database, and not\nfor lack of desire.\n\n-Jeremy\n\n-- \nJeremy Schneider\nPerformance Engineer\nAmazon Web Services\n\n\n\n\n\n\nOn 11/28/23 2:12 AM, Daniel Verite\n wrote:\n\n\n Jeremy Schneider wrote:\n\n\n1) \"collation changes are uncommon\" (which is relatively correct)\n2) \"most users would rather have ease-of-use than 100% safety, since\nit's uncommon\"\n\nAnd I think this led to the current behavior of issuing a warning rather\nthan an error,\n\n\n\nThere's a technical reason for this being a warning.\nIf it was an error, any attempt to do anything with the collation\nwould fail, which includes REINDEX on indexes using\nthat collation.\nAnd yet that's precisely what you're supposed to do in that\nsituation.\n\n\n\n\n Indexes are the most obvious and impactful corruption, so the focus\n is understandable, but there's a bit of a myth in the general public\n that REINDEX means you fixed your database.  
I'm concerned that too\n many people believe this falsehood, and don't realize that things\n like constraints and partitions can also be affected by a major OS\n update when leaving PG data files in place.  Also there's a tendancy\n to use amcheck and validate btree indexes, but skip other index\n types.  And of course none of this is possible when people\n mistakenly use a different major OS for the hot standby (but\n Postgres willingly sends incorrect query results to users).\n\n This is why my original proposal included an update to the ALTER ...\n REFRESH/COLLATION docs.  Today's conventional wisdom suggests this\n is a safe command.  It's really not, if you're using unicode (which\n everyone is). Fifteen years ago, you needed to buy a french keyboard\n to type french accented characters.  Today it's a quick tap on your\n phone to get chinese, russian, tibetan, emojis, and any other\n character you can dream of.  All of those surprising characters\n eventually get stored in Postres databases, often to the surprise of\n devs and admins, after they discover corruption from an OS upgrade.\n\n And to recap some data about historical ICU versions from the\n torture test:\n\nICU Version | OS Version | en-US characters\n changed collation | zh-Hans-CN characters changed collation |\n fr-FR characters changed collation\n 55.1-7ubuntu0.5 | Ubuntu 16.04.7 LTS | 286,654 | 286,654 | 286,654\n 60.2-3ubuntu3.1 | Ubuntu 18.04.6 LTS | 23,741 | 24,415 | 23,741\n 63.1-6 | Ubuntu 19.04 | 688 | 688 | 688\n 66.1-2ubuntu2 | Ubuntu 20.04.3 LTS | 6,497 | 6,531 | 6,497\n 70.1-2 | Ubuntu 22.04 LTS | 879 | 887 | 879\n\n The very clear trend here is that most changes are made in the root\n collation rules, affecting all locales.  This means that worrying\n about specific collation versions of different locales is really\n focusing on an irrelevant edge case.  In ICU development, all the\n locales tend to change.\n\n If anyone thinks the Collation Apocalypse is bad now, I predict the\n Kubernetes wave will be mayhem.  Fifteen years ago it was rare to\n physically move PG datafiles to a new major OS.  Most people would\n dump and load their databases, sized in GBs.  Today's multi-TB\n Postgres databases have meant an increase of in-place OS upgrades in\n recent years.  People started to either detach/attach their storage,\n or they used a hot standby. Kubernetes will make these moves across\n major OS's a daily, effortless occurrence.\n\n ICU doesn't fix anything directly.  We do need ICU - only because it\n finally enables us to compile that old version of ICU forever on\n every new OS we move to going forward. This was simply impossible\n with glibc. Over the past couple decades, not even Oracle or IBM has\n managed to deprecate a single version of ICU from a relational\n database, and not for lack of desire.\n\n -Jeremy\n\n-- \nJeremy Schneider\nPerformance Engineer\nAmazon Web Services", "msg_date": "Wed, 29 Nov 2023 17:03:45 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": true, "msg_subject": "Re: proposal: change behavior on collation version mismatch" } ]
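(For readers who reach this thread after an OS or ICU upgrade: the check-and-refresh workflow the warning/error debate is about looks roughly like the sketch below. It assumes the PostgreSQL 15 catalogs; the database name mydb and the collation name are placeholders, and, as argued above, REINDEX alone does not cover constraints or partition bounds.)

    -- collations whose recorded version no longer matches the provider
    SELECT collname, collversion,
           pg_collation_actual_version(oid) AS provider_version
    FROM pg_collation
    WHERE collversion IS DISTINCT FROM pg_collation_actual_version(oid);

    -- same check for the database default collation (PostgreSQL 15+)
    SELECT datname, datcollversion,
           pg_database_collation_actual_version(oid) AS provider_version
    FROM pg_database
    WHERE datname = current_database();

    -- only after dependent objects have been rebuilt/validated:
    REINDEX DATABASE mydb;
    ALTER COLLATION "en-US-x-icu" REFRESH VERSION;
    ALTER DATABASE mydb REFRESH COLLATION VERSION;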
[ { "msg_contents": "Hi,\n\nI want to dynamically generate a nested json file. I have written a\nfunction for it in PL/PGSQL that accepts 3 arrays. First one is an array of\nall json fields, second one is an array of all json fields with columns\nfrom tables present in db, third one mentions the type for all the fields\ninside the json file.\n\nThis what I have so for that is working:\n\ndeclare outputs text;\n begin\n outputs = '';\n for i in 1 .. array_upper(fieldtype, 1) loop\n select case\n when lower(fieldtype[i]) = 'field' then (outputs || '' ||\njsonb_build_object( fname[i], tcolumn[i] )::text)::text\n\nwhen lower(fieldtype[i]) = 'json object' then (outputs || '' ||\njsonb_build_object( fname[i], jsonb_build_object() )::text)::text\n\n when lower(fieldtype[i]) = 'json array' then (outputs || '' ||\njson_build_array( fname[i], json_build_array() )::text)::text\n\n else 'It is not field, object or an array'::text\nend case into outputs\n from tblname;\nend loop;\n return outputs;\nend;\n\nSo, not for example the input for my function is:\nfname: [‘passenger’, ‘firstname’, ‘lastname’, ‘address’, ‘city’, ‘state’,\n‘country’]\ntcolumn: [,’pass.fname’, ‘pass.lname’, , ‘address.city’, ‘address.state’,\n‘address.country’]\nftype: [‘json object’, ‘field’, ‘field’, ‘json array’, ‘field’, ‘field’,\n‘field’]\n\nThis is what I want my output to look like:\n{\n passenger: {\n “firstname”: “john”,\n “lastname”: “smith”,\n “address”: [\n {\n “city”: “Houston”,\n “state”: “Texas”,\n “country”: “USA”\n }\n ]\n }\n}\n\nBut currently I am having difficulty adding firstname inside passenger json\nobject.\n\nI know that I need to again loop through the json field names array to go\nto next one inside jsonb_build_object() function to get the fields and\narrays inside but that would make my function very big. This is what I need\nsome assistance with.\n\nThanks for all the help.\n\nHi,I want to dynamically generate a nested json file. I have written a function for it in PL/PGSQL that accepts 3 arrays. First one is an array of all json fields, second one is an array of all json fields with columns from tables present in db, third one mentions the type for all the fields inside the json file. This what I have so for that is working:declare\n\toutputs text;  begin  outputs = '';  for i in 1 .. 
array_upper(fieldtype, 1) loop  select \n\t\t\tcase  when lower(fieldtype[i]) = 'field' then \n\t\t\t\t\t(outputs || '' || jsonb_build_object(\n\t\t\t\t\t\tfname[i], tcolumn[i]\n\t\t\t\t\t)::text)::text when lower(fieldtype[i]) = 'json object' then \n\t\t\t \t\t(outputs || '' || jsonb_build_object(\n\t\t\t\t\t\tfname[i], jsonb_build_object()\n\t\t\t\t\t)::text)::text  when lower(fieldtype[i]) = 'json array' then \n\t\t\t \t\t(outputs || '' || json_build_array(\n\t\t\t\t\t\tfname[i], json_build_array()\n\t\t\t\t\t)::text)::text  else 'It is not field, object or an array'::text end case\n\t\tinto outputs  from tblname; end loop;  return outputs; end;So, not for example the input for my function is: fname: [‘passenger’, ‘firstname’, ‘lastname’, ‘address’, ‘city’, ‘state’, ‘country’]tcolumn: [,’pass.fname’, ‘pass.lname’, , ‘address.city’, ‘address.state’, ‘address.country’]ftype: [‘json object’, ‘field’, ‘field’, ‘json array’, ‘field’, ‘field’, ‘field’]This is what I want my output to look like:{  passenger: {       “firstname”: “john”,       “lastname”: “smith”,       “address”: [         {           “city”: “Houston”,           “state”: “Texas”,           “country”: “USA”         }        ]    }}But currently I am having difficulty adding firstname inside passenger json object.I know that I need to again loop through the json field names array to go to next one inside jsonb_build_object() function to get the fields and arrays inside but that would make my function very big. This is what I need some assistance with. Thanks for all the help.", "msg_date": "Mon, 27 Nov 2023 13:09:59 -0800", "msg_from": "Rushabh Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Dynamically generate a nested JSON file" }, { "msg_contents": "On Mon, Nov 27, 2023 at 2:10 PM Rushabh Shah <[email protected]> wrote:\n\n>\n> I want to dynamically generate a nested json file. I have written a\n> function for it in PL/PGSQL that accepts 3 arrays. First one is an array of\n> all json fields, second one is an array of all json fields with columns\n> from tables present in db, third one mentions the type for all the fields\n> inside the json file.\n>\n>\nThis is a completely inappropriate question for this list - it is for\ndiscussions related to writing patches to the project source code.\n\nThe -general mailing list is where you can solicit help for problems you\nare having while using PostgreSQL.\n\nDavid J.\n\nOn Mon, Nov 27, 2023 at 2:10 PM Rushabh Shah <[email protected]> wrote:I want to dynamically generate a nested json file. I have written a function for it in PL/PGSQL that accepts 3 arrays. First one is an array of all json fields, second one is an array of all json fields with columns from tables present in db, third one mentions the type for all the fields inside the json file. This is a completely inappropriate question for this list - it is for discussions related to writing patches to the project source code. The -general mailing list is where you can solicit help for problems you are having while using PostgreSQL.David J.", "msg_date": "Mon, 27 Nov 2023 15:27:58 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamically generate a nested JSON file" } ]
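(The nesting asked about above is usually simpler to express directly in SQL than by looping over arrays of field names in PL/pgSQL. Below is a rough sketch using jsonb_build_object/jsonb_agg with the table and column names taken from the question; the pass/address join column is an assumption.)

    SELECT jsonb_build_object(
             'passenger', jsonb_build_object(
               'firstname', pass.fname,
               'lastname',  pass.lname,
               'address',   jsonb_agg(
                 jsonb_build_object(
                   'city',    address.city,
                   'state',   address.state,
                   'country', address.country))))
    FROM pass
    JOIN address ON address.pass_id = pass.id  -- assumed join condition
    GROUP BY pass.fname, pass.lname;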
[ { "msg_contents": "Hi,\n\nWhile adding some TAP tests, I realized that set_query_timer_restart()\nin BackgroundPsql may not work. Specifically, it seems not to work\nunless we pass an argument to the function. Here is the test script I\nused:\n\nuse strict;\nuse warnings;\nuse PostgreSQL::Test::Cluster;\nuse PostgreSQL::Test::Utils;\nuse Test::More;\n\nmy $node = PostgreSQL::Test::Cluster->new('main');\n$node->init;\n$node->start;\n\n$PostgreSQL::Test::Utils::timeout_default = 5;\nmy $bg_psql = $node->background_psql('postgres', on_error_stop => 1);\n\n$bg_psql->query_safe(\"select pg_sleep(3)\");\n$bg_psql->set_query_timer_restart();\n$bg_psql->query_safe(\"select pg_sleep(3)\");\n$bg_psql->quit;\nis(1,1,\"dummy\");\n\n$node->stop;\ndone_testing();\n\n\nIf calling set_query_timer_restart() works properly, this test would\npass since we reset the query timeout before executing the second\npg_sleep(3). However, this test fail on my environment unless I use\nset_query_timer_restart(1) (i.e. passing something to the function).\n\nCurrently authentication/t/001_password.pl is the sole user of\nset_query_timer_restart() function. I think we should define a value\nto query_timer_restart in set_query_timer_restart() function even if\nno argument is passed, like the patch attached, or we should change\nthe caller to pass 1 to the function.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 28 Nov 2023 10:17:32 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "On Tue, Nov 28, 2023 at 6:48 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> While adding some TAP tests, I realized that set_query_timer_restart()\n> in BackgroundPsql may not work. Specifically, it seems not to work\n> unless we pass an argument to the function. Here is the test script I\n> used:\n>\n> If calling set_query_timer_restart() works properly, this test would\n> pass since we reset the query timeout before executing the second\n> pg_sleep(3). However, this test fail on my environment unless I use\n> set_query_timer_restart(1) (i.e. passing something to the function).\n\nRight.\n\n> Currently authentication/t/001_password.pl is the sole user of\n> set_query_timer_restart() function. 
I think we should define a value\n> to query_timer_restart in set_query_timer_restart() function even if\n> no argument is passed, like the patch attached, or we should change\n> the caller to pass 1 to the function.\n\nIt is added by commit 664d7575 and I agree that calling the function\nwithout argument doesn't reset the query_timer_restart as intended.\n\nA nitpick on the patch - how about honoring the passed-in parameter\nwith something like $self->{query_timer_restart} = 1 if !defined\n$self->{query_timer_restart}; instead of just setting it to 1 (a value\nother than undef) $self->{query_timer_restart} = 1;?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 11:58:27 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "Bharath Rupireddy <[email protected]> writes:\n> A nitpick on the patch - how about honoring the passed-in parameter\n> with something like $self->{query_timer_restart} = 1 if !defined\n> $self->{query_timer_restart}; instead of just setting it to 1 (a value\n> other than undef) $self->{query_timer_restart} = 1;?\n\nI wondered about that too, but the evidence of existing callers is\nthat nobody cares. If we did make the code do something like that,\n(a) I don't think your fragment is right, and (b) we'd need to rewrite\nthe function's comment to explain it. I'm not seeing a reason to\nthink it's worth spending effort on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Nov 2023 01:53:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "On Tue, Nov 28, 2023 at 12:23 PM Tom Lane <[email protected]> wrote:\n>\n> Bharath Rupireddy <[email protected]> writes:\n> > A nitpick on the patch - how about honoring the passed-in parameter\n> > with something like $self->{query_timer_restart} = 1 if !defined\n> > $self->{query_timer_restart}; instead of just setting it to 1 (a value\n> > other than undef) $self->{query_timer_restart} = 1;?\n>\n> I wondered about that too, but the evidence of existing callers is\n> that nobody cares. If we did make the code do something like that,\n> (a) I don't think your fragment is right, and (b) we'd need to rewrite\n> the function's comment to explain it. I'm not seeing a reason to\n> think it's worth spending effort on.\n\nHm. 
I don't mind doing just the $self->{query_timer_restart} = 1; like\nin Sawada-san's patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 29 Nov 2023 13:00:32 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "On Wed, Nov 29, 2023 at 4:30 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Nov 28, 2023 at 12:23 PM Tom Lane <[email protected]> wrote:\n> >\n> > Bharath Rupireddy <[email protected]> writes:\n> > > A nitpick on the patch - how about honoring the passed-in parameter\n> > > with something like $self->{query_timer_restart} = 1 if !defined\n> > > $self->{query_timer_restart}; instead of just setting it to 1 (a value\n> > > other than undef) $self->{query_timer_restart} = 1;?\n> >\n> > I wondered about that too, but the evidence of existing callers is\n> > that nobody cares. If we did make the code do something like that,\n> > (a) I don't think your fragment is right, and (b) we'd need to rewrite\n> > the function's comment to explain it. I'm not seeing a reason to\n> > think it's worth spending effort on.\n\nAgreed.\n\n> Hm. I don't mind doing just the $self->{query_timer_restart} = 1; like\n> in Sawada-san's patch.\n\nOkay, I've attached the patch that I'm going to push through v16,\nbarring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 29 Nov 2023 17:19:11 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "On Wed, Nov 29, 2023 at 1:49 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 4:30 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Tue, Nov 28, 2023 at 12:23 PM Tom Lane <[email protected]> wrote:\n> > >\n> > > Bharath Rupireddy <[email protected]> writes:\n> > > > A nitpick on the patch - how about honoring the passed-in parameter\n> > > > with something like $self->{query_timer_restart} = 1 if !defined\n> > > > $self->{query_timer_restart}; instead of just setting it to 1 (a value\n> > > > other than undef) $self->{query_timer_restart} = 1;?\n> > >\n> > > I wondered about that too, but the evidence of existing callers is\n> > > that nobody cares. If we did make the code do something like that,\n> > > (a) I don't think your fragment is right, and (b) we'd need to rewrite\n> > > the function's comment to explain it. I'm not seeing a reason to\n> > > think it's worth spending effort on.\n>\n> Agreed.\n>\n> > Hm. I don't mind doing just the $self->{query_timer_restart} = 1; like\n> > in Sawada-san's patch.\n>\n> Okay, I've attached the patch that I'm going to push through v16,\n> barring any objections.\n\nHow about the commit message summary 'Fix TAP function\nset_query_timer_restart() issue without argument.'? 
Also, it's good to\nspecify the commit 664d7575 that introduced the TAP function in the\ncommit message description.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 29 Nov 2023 16:18:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" }, { "msg_contents": "On Wed, Nov 29, 2023 at 7:48 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 1:49 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Nov 29, 2023 at 4:30 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Tue, Nov 28, 2023 at 12:23 PM Tom Lane <[email protected]> wrote:\n> > > >\n> > > > Bharath Rupireddy <[email protected]> writes:\n> > > > > A nitpick on the patch - how about honoring the passed-in parameter\n> > > > > with something like $self->{query_timer_restart} = 1 if !defined\n> > > > > $self->{query_timer_restart}; instead of just setting it to 1 (a value\n> > > > > other than undef) $self->{query_timer_restart} = 1;?\n> > > >\n> > > > I wondered about that too, but the evidence of existing callers is\n> > > > that nobody cares. If we did make the code do something like that,\n> > > > (a) I don't think your fragment is right, and (b) we'd need to rewrite\n> > > > the function's comment to explain it. I'm not seeing a reason to\n> > > > think it's worth spending effort on.\n> >\n> > Agreed.\n> >\n> > > Hm. I don't mind doing just the $self->{query_timer_restart} = 1; like\n> > > in Sawada-san's patch.\n> >\n> > Okay, I've attached the patch that I'm going to push through v16,\n> > barring any objections.\n>\n> How about the commit message summary 'Fix TAP function\n> set_query_timer_restart() issue without argument.'? Also, it's good to\n> specify the commit 664d7575 that introduced the TAP function in the\n> commit message description.\n\nThanks! I've incorporated your comment and pushed the patch.\n\nRegards,\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 10:18:50 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BackgroundPsql's set_query_timer_restart() may not work" } ]
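(For completeness, the behaviour agreed on in this thread amounts to the following shape for the TAP helper; this is only a sketch, and the committed function and its comment may differ.)

    # BackgroundPsql::set_query_timer_restart - always assign a defined
    # value, so that calling it without an argument also arms the restart.
    sub set_query_timer_restart
    {
        my $self = shift;
        $self->{query_timer_restart} = 1;
        return;
    }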
[ { "msg_contents": "Hi\n\nI posted this question already on pgsql-general, but it got no answers.\nMaybe the topic is too technical? So I'm trying it here. Maybe a SSI\nspecialist is here on the list.\n\nWe have a PostgreSql 15 server serving around 30 databases, one schema each\nwith the same layout. Each database is used by one application instance.\nThe application consistently uses transactions with isolation level\nserializable to access the database, optimizing by using explicit read only\ntransactions, where applicable. Once the server reaches 100% CPU load we\nget an increased amount of serialize conflict errors. This is expected, due\nto more concurrent access. But I fail to explain this kind of error:\n\nERROR: could not serialize access due to read/write dependencies among\ntransactions\n Detail: Reason code: Canceled on identification as a pivot, with conflict\nout to old committed transaction 61866959.\n\nThere is a variation of the error:\n\nPSQLException: ERROR: could not serialize access due to read/write\ndependencies among transactions\n Detail: Reason code: Canceled on conflict out to old pivot 61940806.\n\nWe're logging the id, begin and end of every transaction. Transaction\n61940806 was committed without errors. The transaction responsible for the\nabove error was started 40min later (and failed immediately). With 61866959\nit is even more extreme: the first conflict error occurred 2.5h after\n61866959 was committed.\n\nThe DB table access pattern is too complex to lay out here. There are like\n20 tables that are read/written to. Transactions are usually short living.\nThe longest transaction that could occur is 1 min long. My understanding of\nserializable isolation is that only overlapping transactions can conflict.\nI can be pretty sure that in the above cases there is no single\ntransaction, which overlaps with 61940806 and with the failing transaction\n40 min later. Such long running transactions would cause different types of\nerrors in our system (\"out of shared memory\", \"You might need to increase\nmax_pred_locks_per_transaction\").\n\nWhy does PostgreSql detect a conflict with a transaction which was\ncommitted more than 1h before? Can there be a long dependency chain between\nmany short running transactions? Does the high load prevent Postgres from\ndoing some clean up?\n\nCheers,\nEduard\n\nHiI posted this question already on pgsql-general, but it got no answers. Maybe the topic is too technical? So I'm trying it here. Maybe a SSI specialist is here on the list.We have a PostgreSql 15 server serving around 30 databases, one schema each with the same layout. Each database is used by one application instance. The application consistently uses transactions with isolation level serializable to access the database, optimizing by using explicit read only transactions, where applicable. Once the server reaches 100% CPU load we get an increased amount of serialize conflict errors. This is expected, due to more concurrent access. But I fail to explain this kind of error:ERROR: could not serialize access due to read/write dependencies among transactions  Detail: Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction 61866959.There is a variation of the error:PSQLException: ERROR: could not serialize access due to read/write dependencies among transactions  Detail: Reason code: Canceled on conflict out to old pivot 61940806.We're logging the id, begin and end of every transaction. 
Transaction 61940806 was committed without errors. The transaction responsible for the above error was started 40min later (and failed immediately). With 61866959 it is even more extreme: the first conflict error occurred 2.5h after 61866959 was committed.The DB table access pattern is too complex to lay out here. There are like 20 tables that are read/written to. Transactions are usually short living. The longest transaction that could occur is 1 min long. My understanding of serializable isolation is that only overlapping transactions can conflict. I can be pretty sure that in the above cases there is no single transaction, which overlaps with 61940806 and with the failing transaction 40 min later. Such long running transactions would cause different types of errors in our system (\"out of shared memory\", \"You might need to increase max_pred_locks_per_transaction\").Why does PostgreSql detect a conflict with a transaction which was committed more than 1h before? Can there be a long dependency chain between many short running transactions? Does the high load prevent Postgres from doing some clean up?Cheers,Eduard", "msg_date": "Tue, 28 Nov 2023 06:41:31 +0100", "msg_from": "\"Wirch, Eduard\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "Hi,\n\n> On 28. Nov 2023, at 06:41, Wirch, Eduard <[email protected]> wrote:\n> \n> \n> \n> :\n> \n> ERROR: could not serialize access due to read/write dependencies among transactions\n> Detail: Reason code: Canceled on identification as a pivot, with conflict out to old committed transaction 61866959.\n> \n> There is a variation of the error:\n> \n> PSQLException: ERROR: could not serialize access due to read/write dependencies among transactions\n> Detail: Reason code: Canceled on conflict out to old pivot 61940806.\n\nCould you show explain analyze output for those queries which fail with such errors?\n\n\n> \n> \n\n\n", "msg_date": "Tue, 28 Nov 2023 07:14:38 +0100", "msg_from": "Ilya Kosmodemiansky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "On 28/11/2023 07:41, Wirch, Eduard wrote:\n> ERROR: could not serialize access due to read/write dependencies among \n> transactions\n>   Detail: Reason code: Canceled on identification as a pivot, with \n> conflict out to old committed transaction 61866959.\n> \n> There is a variation of the error:\n> \n> PSQLException: ERROR: could not serialize access due to read/write \n> dependencies among transactions\n>   Detail: Reason code: Canceled on conflict out to old pivot 61940806.\n\nBoth of these errors are coming from CheckForSerializableConflictOut(), \nand are indeed variations of the same kind of conflict.\n\n> We're logging the id, begin and end of every transaction. Transaction \n> 61940806 was committed without errors. The transaction responsible for \n> the above error was started 40min later (and failed immediately). With \n> 61866959 it is even more extreme: the first conflict error occurred 2.5h \n> after 61866959 was committed.\n\nWeird indeed. There is only one caller of \nCheckForSerializableConflictOut(), and it does this:\n\n> \t/*\n> \t * Find top level xid. 
Bail out if xid is too early to be a conflict, or\n> \t * if it's our own xid.\n> \t */\n> \tif (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))\n> \t\treturn;\n> \txid = SubTransGetTopmostTransaction(xid);\n> \tif (TransactionIdPrecedes(xid, TransactionXmin))\n> \t\treturn;\n> \n> \tCheckForSerializableConflictOut(relation, xid, snapshot);\n\nThat check with TransactionXmin is very clear: if 'xid' precedes the \nxmin of the current transaction, IOW if there were no transactions with \n'xid' or older running when the current transcaction started, \nCheckForSerializableConflictOut() is not called.\n\n> The DB table access pattern is too complex to lay out here. There are \n> like 20 tables that are read/written to. Transactions are usually short \n> living. The longest transaction that could occur is 1 min long. My \n> understanding of serializable isolation is that only overlapping \n> transactions can conflict. I can be pretty sure that in the above cases \n> there is no single transaction, which overlaps with 61940806 and with \n> the failing transaction 40 min later.\n\nI hate to drill on this, but are you very sure about that? I don't see \nhow this could happen if there are no long-running transactions. Maybe a \nforgotten two-phase commit transaction? A transaction in a different \ndatabase? A developer who did \"begin;\" in psql and went for lunch?\n\n> Such long running transactions \n> would cause different types of errors in our system (\"out of shared \n> memory\", \"You might need to increase max_pred_locks_per_transaction\").\n\nI don't see why that would necessarily be the case, unless it's \nsomething very specific to your application.\n\n> Why does PostgreSql detect a conflict with a transaction which was \n> committed more than 1h before? Can there be a long dependency chain \n> between many short running transactions? Does the high load prevent \n> Postgres from doing some clean up?\n\nThe dependencies don't chain like that, but there is a system of \n\"summarizing\" old transactions to limit the shared memory usage. When a \ntransaction has dependencies on other transactions, we track those \ndependencies in shared memory. But if we run short on the space reserved \nfor that, we summarize the dependencies, losing granularity. We lose \ninformation of which relations/pages/tuples the xid accessed and which \ntransactions exactly it had a dependency on. That is safe, but can cause \nfalse positives.\n\nThe amount of shared memory reserved for tracking the dependencies is \ndetermined by max_pred_locks_per_transaction, so you could try \nincreasing that to reduce those false positives, even if you never get \nthe \"out of shared memory\" error.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:53:41 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "Thanks for the detailed answer, Heikki.\n\n> > The longest transaction that could occur is 1 min long.\n> I hate to drill on this, but are you very sure about that? A transaction\nin a different database?\n\nDon't be sorry for that, drilling down is important. ;) It took me so long\nto reply because I had to prepare the information carefully. You're right,\non that day I observed the behavior, there were indeed long running\ntransactions in different DBs! 
My understanding of serializable isolation\nis that only transactions which can somehow affect each other can conflict.\nIt should be clear for PostgreSql, that transactions belonging to different\ndatabases cannot affect each other. Why do they cause serializable\nconflicts?\n\nIf you want something visual, I prepared a SO question with similar content\nlike this mail, but added an image of the tx flow:\nhttps://stackoverflow.com/questions/77544821/postgresql-canceled-on-conflict-out-to-old-pivot\n\nCheers,\nEduard\n\n\nAm Di., 28. Nov. 2023 um 09:53 Uhr schrieb Heikki Linnakangas <\[email protected]>:\n\n> On 28/11/2023 07:41, Wirch, Eduard wrote:\n> > ERROR: could not serialize access due to read/write dependencies among\n> > transactions\n> > Detail: Reason code: Canceled on identification as a pivot, with\n> > conflict out to old committed transaction 61866959.\n> >\n> > There is a variation of the error:\n> >\n> > PSQLException: ERROR: could not serialize access due to read/write\n> > dependencies among transactions\n> > Detail: Reason code: Canceled on conflict out to old pivot 61940806.\n>\n> Both of these errors are coming from CheckForSerializableConflictOut(),\n> and are indeed variations of the same kind of conflict.\n>\n> > We're logging the id, begin and end of every transaction. Transaction\n> > 61940806 was committed without errors. The transaction responsible for\n> > the above error was started 40min later (and failed immediately). With\n> > 61866959 it is even more extreme: the first conflict error occurred 2.5h\n> > after 61866959 was committed.\n>\n> Weird indeed. There is only one caller of\n> CheckForSerializableConflictOut(), and it does this:\n>\n> > /*\n> > * Find top level xid. Bail out if xid is too early to be a\n> conflict, or\n> > * if it's our own xid.\n> > */\n> > if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))\n> > return;\n> > xid = SubTransGetTopmostTransaction(xid);\n> > if (TransactionIdPrecedes(xid, TransactionXmin))\n> > return;\n> >\n> > CheckForSerializableConflictOut(relation, xid, snapshot);\n>\n> That check with TransactionXmin is very clear: if 'xid' precedes the\n> xmin of the current transaction, IOW if there were no transactions with\n> 'xid' or older running when the current transcaction started,\n> CheckForSerializableConflictOut() is not called.\n>\n> > The DB table access pattern is too complex to lay out here. There are\n> > like 20 tables that are read/written to. Transactions are usually short\n> > living. The longest transaction that could occur is 1 min long. My\n> > understanding of serializable isolation is that only overlapping\n> > transactions can conflict. I can be pretty sure that in the above cases\n> > there is no single transaction, which overlaps with 61940806 and with\n> > the failing transaction 40 min later.\n>\n> I hate to drill on this, but are you very sure about that? I don't see\n> how this could happen if there are no long-running transactions. Maybe a\n> forgotten two-phase commit transaction? A transaction in a different\n> database? 
A developer who did \"begin;\" in psql and went for lunch?\n>\n> > Such long running transactions\n> > would cause different types of errors in our system (\"out of shared\n> > memory\", \"You might need to increase max_pred_locks_per_transaction\").\n>\n> I don't see why that would necessarily be the case, unless it's\n> something very specific to your application.\n>\n> > Why does PostgreSql detect a conflict with a transaction which was\n> > committed more than 1h before? Can there be a long dependency chain\n> > between many short running transactions? Does the high load prevent\n> > Postgres from doing some clean up?\n>\n> The dependencies don't chain like that, but there is a system of\n> \"summarizing\" old transactions to limit the shared memory usage. When a\n> transaction has dependencies on other transactions, we track those\n> dependencies in shared memory. But if we run short on the space reserved\n> for that, we summarize the dependencies, losing granularity. We lose\n> information of which relations/pages/tuples the xid accessed and which\n> transactions exactly it had a dependency on. That is safe, but can cause\n> false positives.\n>\n> The amount of shared memory reserved for tracking the dependencies is\n> determined by max_pred_locks_per_transaction, so you could try\n> increasing that to reduce those false positives, even if you never get\n> the \"out of shared memory\" error.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n\nThanks for the detailed answer, Heikki.> > The longest transaction that could occur is 1 min long.> I hate to drill on this, but are you very sure about that? A transaction in a different database?Don't be sorry for that, drilling down is important. ;) It took me so long to reply because I had to prepare the information carefully. You're right, on that day I observed the behavior, there were indeed long running transactions in different DBs! My understanding of serializable isolation is that only transactions which can somehow affect each other can conflict. It should be clear for PostgreSql, that transactions belonging to different databases cannot affect each other. Why do they cause serializable conflicts?If you want something visual, I prepared a SO question with similar content like this mail, but added an image of the tx flow: https://stackoverflow.com/questions/77544821/postgresql-canceled-on-conflict-out-to-old-pivotCheers,EduardAm Di., 28. Nov. 2023 um 09:53 Uhr schrieb Heikki Linnakangas <[email protected]>:On 28/11/2023 07:41, Wirch, Eduard wrote:\n> ERROR: could not serialize access due to read/write dependencies among \n> transactions\n>    Detail: Reason code: Canceled on identification as a pivot, with \n> conflict out to old committed transaction 61866959.\n> \n> There is a variation of the error:\n> \n> PSQLException: ERROR: could not serialize access due to read/write \n> dependencies among transactions\n>    Detail: Reason code: Canceled on conflict out to old pivot 61940806.\n\nBoth of these errors are coming from CheckForSerializableConflictOut(), \nand are indeed variations of the same kind of conflict.\n\n> We're logging the id, begin and end of every transaction. Transaction \n> 61940806 was committed without errors. The transaction responsible for \n> the above error was started 40min later (and failed immediately). With \n> 61866959 it is even more extreme: the first conflict error occurred 2.5h \n> after 61866959 was committed.\n\nWeird indeed. 
There is only one caller of \nCheckForSerializableConflictOut(), and it does this:\n\n>       /*\n>        * Find top level xid.  Bail out if xid is too early to be a conflict, or\n>        * if it's our own xid.\n>        */\n>       if (TransactionIdEquals(xid, GetTopTransactionIdIfAny()))\n>               return;\n>       xid = SubTransGetTopmostTransaction(xid);\n>       if (TransactionIdPrecedes(xid, TransactionXmin))\n>               return;\n> \n>       CheckForSerializableConflictOut(relation, xid, snapshot);\n\nThat check with TransactionXmin is very clear: if 'xid' precedes the \nxmin of the current transaction, IOW if there were no transactions with \n'xid' or older running when the current transcaction started, \nCheckForSerializableConflictOut() is not called.\n\n> The DB table access pattern is too complex to lay out here. There are \n> like 20 tables that are read/written to. Transactions are usually short \n> living. The longest transaction that could occur is 1 min long. My \n> understanding of serializable isolation is that only overlapping \n> transactions can conflict. I can be pretty sure that in the above cases \n> there is no single transaction, which overlaps with 61940806 and with \n> the failing transaction 40 min later.\n\nI hate to drill on this, but are you very sure about that? I don't see \nhow this could happen if there are no long-running transactions. Maybe a \nforgotten two-phase commit transaction? A transaction in a different \ndatabase? A developer who did \"begin;\" in psql and went for lunch?\n\n> Such long running transactions \n> would cause different types of errors in our system (\"out of shared \n> memory\", \"You might need to increase max_pred_locks_per_transaction\").\n\nI don't see why that would necessarily be the case, unless it's \nsomething very specific to your application.\n\n> Why does PostgreSql detect a conflict with a transaction which was \n> committed more than 1h before? Can there be a long dependency chain \n> between many short running transactions? Does the high load prevent \n> Postgres from doing some clean up?\n\nThe dependencies don't chain like that, but there is a system of \n\"summarizing\" old transactions to limit the shared memory usage. When a \ntransaction has dependencies on other transactions, we track those \ndependencies in shared memory. But if we run short on the space reserved \nfor that, we summarize the dependencies, losing granularity. We lose \ninformation of which relations/pages/tuples the xid accessed and which \ntransactions exactly it had a dependency on. That is safe, but can cause \nfalse positives.\n\nThe amount of shared memory reserved for tracking the dependencies is \ndetermined by max_pred_locks_per_transaction, so you could try \nincreasing that to reduce those false positives, even if you never get \nthe \"out of shared memory\" error.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 30 Nov 2023 17:24:57 +0100", "msg_from": "\"Wirch, Eduard\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "On 30/11/2023 18:24, Wirch, Eduard wrote:\n> > > The longest transaction that could occur is 1 min long.\n>> I hate to drill on this, but are you very sure about that? A transaction \n> in a different database?\n> \n> Don't be sorry for that, drilling down is important. ;) It took me so \n> long to reply because I had to prepare the information carefully. 
You're \n> right, on that day I observed the behavior, there were indeed long \n> running transactions in different DBs!\n\nA-ha! :-D\n\n> My understanding of serializable isolation is that only transactions\n> which can somehow affect each other can conflict. It should be clear\n> for PostgreSql, that transactions belonging to different databases\n> cannot affect each other. Why do they cause serializable conflicts?\n\nWhen the system runs low on the memory reserved to track the potential \nconflicts, it \"summarizes\" old transactions and writes them to disk. The \nsummarization loses a lot of information: all we actually store on disk \nis the commit sequence number of the earliest \"out-conflicting\" \ntransaction. We don't store the database oid that the transaction ran in.\n\nThe whole SSI mechanism is conservative so that you can get false \nserialization errors even when there is no actual problem. This is just \nan extreme case of that.\n\nPerhaps we should keep more information in the on-disk summary format. \nPreserving the database OID would make a lot of sense, it would be only \n4 bytes extra per transaction. But we don't preserve it today.\n\n> If you want something visual, I prepared a SO question with similar \n> content like this mail, but added an image of the tx flow: \n> https://stackoverflow.com/questions/77544821/postgresql-canceled-on-conflict-out-to-old-pivot <https://stackoverflow.com/questions/77544821/postgresql-canceled-on-conflict-out-to-old-pivot>\n\nNice graphs!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 1 Dec 2023 01:35:09 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 30/11/2023 18:24, Wirch, Eduard wrote:\n>> My understanding of serializable isolation is that only transactions\n>> which can somehow affect each other can conflict. It should be clear\n>> for PostgreSql, that transactions belonging to different databases\n>> cannot affect each other. Why do they cause serializable conflicts?\n\nOn what grounds do you assert that? Operations on shared catalogs\nare visible across databases. Admittedly they can't be written by\nordinary DML, and I'm not sure that we make any promises about DDL\nwrites honoring serializability. But I'm unwilling to add\n\"optimizations\" that assume that that will never happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Nov 2023 18:51:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "Hi,\n\nOn 2023-11-30 18:51:35 -0500, Tom Lane wrote:\n> On what grounds do you assert that? Operations on shared catalogs\n> are visible across databases. Admittedly they can't be written by\n> ordinary DML, and I'm not sure that we make any promises about DDL\n> writes honoring serializability. But I'm unwilling to add\n> \"optimizations\" that assume that that will never happen.\n\nI'd say the issue is more that it's quite expensive to collect the\ninformation. 
I tried in the past to make the xmin computation in\nGetSnapshotData() be database specific, but it quickly shows in profiles, and\nGetSnapshotData() unfortunately is really performance / scalability critical.\n\nIf that weren't the case, we could check a shared horizon for shared tables,\nand a non-shared horizon otherwise.\n\nIn some cases we can compute a \"narrower\" horizon when it's worth the cost,\nbut quite often we lack the necessary data, because various backends have\nstored the \"global\" xmin in the procarray.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:31:38 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" }, { "msg_contents": "Thanks guys for the valuable info. The key take away for me is clear: keep\ntransactions short under all circumstances.\n\nCheers,\nEduard\n\nAm Fr., 1. Dez. 2023 um 01:31 Uhr schrieb Andres Freund <[email protected]\n>:\n\n> Hi,\n>\n> On 2023-11-30 18:51:35 -0500, Tom Lane wrote:\n> > On what grounds do you assert that? Operations on shared catalogs\n> > are visible across databases. Admittedly they can't be written by\n> > ordinary DML, and I'm not sure that we make any promises about DDL\n> > writes honoring serializability. But I'm unwilling to add\n> > \"optimizations\" that assume that that will never happen.\n>\n> I'd say the issue is more that it's quite expensive to collect the\n> information. I tried in the past to make the xmin computation in\n> GetSnapshotData() be database specific, but it quickly shows in profiles,\n> and\n> GetSnapshotData() unfortunately is really performance / scalability\n> critical.\n>\n> If that weren't the case, we could check a shared horizon for shared\n> tables,\n> and a non-shared horizon otherwise.\n>\n> In some cases we can compute a \"narrower\" horizon when it's worth the cost,\n> but quite often we lack the necessary data, because various backends have\n> stored the \"global\" xmin in the procarray.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nThanks guys for the valuable info. The key take away for me is clear: keep transactions short under all circumstances.Cheers,EduardAm Fr., 1. Dez. 2023 um 01:31 Uhr schrieb Andres Freund <[email protected]>:Hi,\n\nOn 2023-11-30 18:51:35 -0500, Tom Lane wrote:\n> On what grounds do you assert that?  Operations on shared catalogs\n> are visible across databases.  Admittedly they can't be written by\n> ordinary DML, and I'm not sure that we make any promises about DDL\n> writes honoring serializability.  But I'm unwilling to add\n> \"optimizations\" that assume that that will never happen.\n\nI'd say the issue is more that it's quite expensive to collect the\ninformation. I tried in the past to make the xmin computation in\nGetSnapshotData() be database specific, but it quickly shows in profiles, and\nGetSnapshotData() unfortunately is really performance / scalability critical.\n\nIf that weren't the case, we could check a shared horizon for shared tables,\nand a non-shared horizon otherwise.\n\nIn some cases we can compute a \"narrower\" horizon when it's worth the cost,\nbut quite often we lack the necessary data, because various backends have\nstored the \"global\" xmin in the procarray.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 1 Dec 2023 09:55:34 +0100", "msg_from": "\"Wirch, Eduard\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSql: Canceled on conflict out to old pivot" } ]
[ { "msg_contents": "Hi hackers,\r\n \r\nI found a problem when doing the test shown below:\r\n \r\nTime\r\n \r\nSession A\r\n \r\nSession B\r\n \r\n \r\nT1\r\n \r\npostgres=# create table test(a int);\r\n \r\nCREATE TABLE\r\n \r\npostgres=# insert into test values (1);\r\n \r\nINSERT 0 1\r\n \r\n&nbsp;\r\n \r\n \r\nT2\r\n \r\npostgres=# begin;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \r\n \r\nBEGIN&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \r\n \r\npostgres=*# lock table test in access exclusive mode ; \r\n \r\nLOCK TABLE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \r\n \r\n&nbsp;\r\n \r\n \r\nT3\r\n \r\n&nbsp;\r\n \r\npostgres=# begin;\r\n \r\nBEGIN\r\n \r\npostgres=*# lock table test in exclusive mode ;\r\n \r\n \r\nT4\r\n \r\nCase 1:\r\n \r\npostgres=*# lock table test in share row exclusive mode nowait; \r\n \r\nERROR: &nbsp;could not obtain lock on relation \"test\"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; \r\n \r\n--------------------------------------------\r\n \r\nCase 2:\r\n \r\npostgres=*# lock table test in share row exclusive mode;\r\n \r\nLOCK TABLE\r\n \r\n&nbsp;\r\n \r\n \r\nAt T4 moment in session A, (case 1) when executing SQL “lock table test in share row exclusive mode nowait;”, an error occurs with message “could not obtain lock on relation test\";However, (case 2) when executing the SQL above without nowait, lock can be obtained successfully. \r\n \r\nDigging into the source code, I find that in case 2 the lock was obtained in the function ProcSleep instead of LockAcquireExtended. Due to nowait logic processed before WaitOnLock-&gt;ProcSleep, acquiring lock failed in case 1. Can any changes be made so that the act of such lock granted occurs before WaitOnLock?\r\n \r\n&nbsp;\r\n \r\nProviding a more universal case:\r\n \r\nTransaction A already holds an n-mode lock on table test. If then transaction A requests an m-mode lock on table Test, m and n have the following constraints:\r\n \r\n(lockMethodTable-&gt;conflictTab[n] &amp; lockMethodTable-&gt;conflictTab[m]) == lockMethodTable-&gt;conflictTab[m]\r\n \r\nObviously, in this case, m<=n.\r\n \r\nShould the m-mode lock be granted before WaitOnLock?\r\n \r\n&nbsp;\r\n \r\nIn the case of m=n (i.e. 
we already hold the lock), the m-mode lock is immediately granted in the LocalLock path, without the need of lock conflict check.\r\n \r\nBased on the facts above, can we obtain a weaker lock (m<n) on the same object within the same transaction without doing lock conflict check?\r\n \r\nSince m=n works, m<n should certainly work too.\r\n \r\n&nbsp;\r\n \r\nI am attaching a patch here with which the problem in case 1 fixed.\r\n\r\n\r\n\r\n\r\nWith&nbsp;Regards,\r\nJingxian Li.\r\n\r\n\r\n&nbsp;", "msg_date": "Tue, 28 Nov 2023 20:52:31 +0800", "msg_from": "\"=?gb18030?B?SmluZ3hpYW4gTGk=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hi,\n\nOn 2023-11-28 20:52:31 +0800, Jingxian Li wrote:\n> postgres=*# lock table test in exclusive mode ;\n>\n>\n> T4\n>\n> Case 1:\n>\n> postgres=*# lock table test in share row exclusive mode nowait;\n>\n> ERROR: &nbsp;could not obtain lock on relation \"test\"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n>\n> --------------------------------------------\n>\n> Case 2:\n>\n> postgres=*# lock table test in share row exclusive mode;\n>\n> LOCK TABLE\n>\n> &nbsp;\n>\n>\n> At T4 moment in session A, (case 1) when executing SQL “lock table test in share row exclusive mode nowait;”, an error occurs with message “could not obtain lock on relation test\";However, (case 2) when executing the SQL above without nowait, lock can be obtained successfully.\n>\n> Digging into the source code, I find that in case 2 the lock was obtained in\n> the function ProcSleep instead of LockAcquireExtended. Due to nowait logic\n> processed before WaitOnLock-&gt;ProcSleep, acquiring lock failed in case\n> 1. Can any changes be made so that the act of such lock granted occurs\n> before WaitOnLock?\n\nI don't think that'd make sense - lock reordering is done to prevent deadlocks\nand is quite expensive. Why should NOWAIT incur that cost?\n\n\n> &nbsp;\n>\n> Providing a more universal case:\n>\n> Transaction A already holds an n-mode lock on table test. If then transaction A requests an m-mode lock on table Test, m and n have the following constraints:\n>\n> (lockMethodTable-&gt;conflictTab[n] &amp; lockMethodTable-&gt;conflictTab[m]) == lockMethodTable-&gt;conflictTab[m]\n>\n> Obviously, in this case, m<=n.\n>\n> Should the m-mode lock be granted before WaitOnLock?\n>\n> &nbsp;\n>\n> In the case of m=n (i.e. we already hold the lock), the m-mode lock is\n> immediately granted in the LocalLock path, without the need of lock conflict\n> check.\n\nSure - it'd not help anybody to wait for a lock we already hold - in fact it'd\ncreate a lot of deadlocks.\n\n\n> Based on the facts above, can we obtain a weaker lock (m<n) on the same\n> object within the same transaction without doing lock conflict check?\n\nPerhaps. 
There's no inherent \"lock strength\" ordering for all locks though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Nov 2023 08:51:39 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hi Andres, Thanks for your quick reply!\n\nOn 2023/11/29 0:51, Andres Freund wrote:\n> Hi,\n>\n> On 2023-11-28 20:52:31 +0800, Jingxian Li wrote:\n>> postgres=*# lock table test in exclusive mode ;\n>>\n>>\n>> T4\n>>\n>> Case 1:\n>>\n>> postgres=*# lock table test in share row exclusive mode nowait;\n>>\n>> ERROR: &nbsp;could not obtain lock on relation \"test\"\n>>\n>> --------------------------------------------\n>>\n>> Case 2:\n>>\n>> postgres=*# lock table test in share row exclusive mode;\n>>\n>> LOCK TABLE\n>>\n>>\n>>\n>> At T4 moment in session A, (case 1) when executing SQL “lock table test in share row exclusive mode nowait;”, an error occurs with message “could not obtain lock on relation test\";However, (case 2) when executing the SQL above without nowait, lock can be obtained successfully.\n>>\n>> Digging into the source code, I find that in case 2 the lock was obtained in\n>> the function ProcSleep instead of LockAcquireExtended. Due to nowait logic\n>> processed before WaitOnLock-&gt;ProcSleep, acquiring lock failed in case\n>> 1. Can any changes be made so that the act of such lock granted occurs\n>> before WaitOnLock?\n> I don't think that'd make sense - lock reordering is done to prevent deadlocks\n> and is quite expensive. Why should NOWAIT incur that cost?\n>\n>\n>>\n>> Providing a more universal case:\n>>\n>> Transaction A already holds an n-mode lock on table test. If then transaction A requests an m-mode lock on table Test, m and n have the following constraints:\n>>\n>> (lockMethodTable-&gt;conflictTab[n] &amp; lockMethodTable-&gt;conflictTab[m]) == lockMethodTable-&gt;conflictTab[m]\n>>\n>> Obviously, in this case, m<=n.\n>>\n>> Should the m-mode lock be granted before WaitOnLock?\n>>\n>>\n>> In the case of m=n (i.e. we already hold the lock), the m-mode lock is\n>> immediately granted in the LocalLock path, without the need of lock conflict\n>> check.\n> Sure - it'd not help anybody to wait for a lock we already hold - in fact it'd\n> create a lot of deadlocks.\n>\n>\n>> Based on the facts above, can we obtain a weaker lock (m<n) on the same\n>> object within the same transaction without doing lock conflict check?\n> Perhaps. 
There's no inherent \"lock strength\" ordering for all locks though.\n\n\n\nI also noticed that there is no inherent \"lock strength\" orderingfor all locks.\nSo I use the following method in the code to determine the strength of the lock:\nif (m<n &&(lockMethodTable->conflictTab[n] &\nlockMethodTable->conflictTab[m]) == lockMethodTable->conflictTab[m])\nthen we can say that m-mode lock is weaker than n-mode lock.\n\n\nTransaction A already holds an n-mode lock on table test,\nthat is, there is no locks held conflicting with the n-mode lock on table test,\nIf then transaction A requests an m-mode lock on table test,\nas n's confilctTab covers m, it can be concluded that\nthere are no locks conflicting with the requested m-mode lock.\n\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nWith regards,\n\nJingxian Li\n", "msg_date": "Thu, 30 Nov 2023 11:43:19 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Tue, 28 Nov 2023 at 18:23, Jingxian Li <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I found a problem when doing the test shown below:\n>\n> Time\n>\n> Session A\n>\n> Session B\n>\n> T1\n>\n> postgres=# create table test(a int);\n>\n> CREATE TABLE\n>\n> postgres=# insert into test values (1);\n>\n> INSERT 0 1\n>\n>\n>\n> T2\n>\n> postgres=# begin;\n>\n> BEGIN\n>\n> postgres=*# lock table test in access exclusive mode ;\n>\n> LOCK TABLE\n>\n>\n>\n> T3\n>\n>\n>\n> postgres=# begin;\n>\n> BEGIN\n>\n> postgres=*# lock table test in exclusive mode ;\n>\n> T4\n>\n> Case 1:\n>\n> postgres=*# lock table test in share row exclusive mode nowait;\n>\n> ERROR: could not obtain lock on relation \"test\"\n>\n> --------------------------------------------\n>\n> Case 2:\n>\n> postgres=*# lock table test in share row exclusive mode;\n>\n> LOCK TABLE\n>\n>\n>\n> At T4 moment in session A, (case 1) when executing SQL “lock table test in share row exclusive mode nowait;”, an error occurs with message “could not obtain lock on relation test\";However, (case 2) when executing the SQL above without nowait, lock can be obtained successfully.\n>\n> Digging into the source code, I find that in case 2 the lock was obtained in the function ProcSleep instead of LockAcquireExtended. Due to nowait logic processed before WaitOnLock->ProcSleep, acquiring lock failed in case 1. Can any changes be made so that the act of such lock granted occurs before WaitOnLock?\n>\n>\n>\n> Providing a more universal case:\n>\n> Transaction A already holds an n-mode lock on table test. If then transaction A requests an m-mode lock on table Test, m and n have the following constraints:\n>\n> (lockMethodTable->conflictTab[n] & lockMethodTable->conflictTab[m]) == lockMethodTable->conflictTab[m]\n>\n> Obviously, in this case, m<=n.\n>\n> Should the m-mode lock be granted before WaitOnLock?\n>\n>\n>\n> In the case of m=n (i.e. 
we already hold the lock), the m-mode lock is immediately granted in the LocalLock path, without the need of lock conflict check.\n>\n> Based on the facts above, can we obtain a weaker lock (m<n) on the same object within the same transaction without doing lock conflict check?\n>\n> Since m=n works, m<n should certainly work too.\n>\n>\n>\n> I am attaching a patch here with which the problem in case 1 fixed.\n\nI did not see any test added for this, should we add a test case for this?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 20:51:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hello Jingxian Li!\n\nI agree with you that this behavior seems surprising. I don't think\nit's quite a bug, more of a limitation. However, I think it would be\nnice to fix it if we can find a good way to do that.\n\nOn Wed, Nov 29, 2023 at 10:43 PM Jingxian Li <[email protected]> wrote:\n> Transaction A already holds an n-mode lock on table test,\n> that is, there is no locks held conflicting with the n-mode lock on table test,\n> If then transaction A requests an m-mode lock on table test,\n> as n's confilctTab covers m, it can be concluded that\n> there are no locks conflicting with the requested m-mode lock.\n\nThis algorithm seems correct to me, but I think Andres is right to be\nconcerned about overhead. You're proposing to inject a call to\nCheckLocalLockConflictTabCover() into the main code path of\nLockAcquireExtended(), so practically every lock acquisition will pay\nthe cost of that function. And that function is not particularly\ncheap: every call to LockHeldByMe is a hash table lookup. That sounds\npretty painful. If we could incur the overhead of this only when we\nknew for certain that we would otherwise have to fail, that would be\nmore palatable, but doing it on every non-fastpath heavyweight lock\nacquisition seems way too expensive.\n\nEven aside from overhead, the approach the patch takes doesn't seem\nquite right to me. As you noted, ProcSleep() has logic to jump the\nqueue if adding ourselves at the end would inevitably result in\ndeadlock, which is why your test case doesn't need to wait until\ndeadlock_timeout for the lock acquisition to succeed. But because that\nlogic happens in ProcSleep(), it's not reached in the NOWAIT case,\nwhich means that it doesn't help any more once NOWAIT is specified. I\nthink that the right way to fix the problem would be to reach that\ncheck even in the NOWAIT case, which could be done either by hoisting\nsome of the logic in ProcSleep() up into LockAcquireExtended(), or by\npushing the nowait flag down into ProcSleep() so that we can fail only\nif we're definitely going to sleep. The former seems more elegant in\ntheory, but the latter looks easier to implement, at least at first\nglance.\n\nBut the patch as proposed instead invents a new way of making the test\ncase work, not leveraging the existing logic and, I suspect, not\nmatching the behavior in all cases.\n\nI also agree with Vignesh that a test case would be a good idea. 
It\nwould need to be an isolation test, since the regular regression\ntester isn't powerful enough for this (at least, I don't see how to\nmake it work).\n\nI hope that this input is helpful to you.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:07:13 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hello Robert,\n\nThank you for your advice. It is very helpful to me.\n\nOn 2024/1/16 3:07, Robert Haas wrote:\n> Hello Jingxian Li!\n>\n> I agree with you that this behavior seems surprising. I don't think\n> it's quite a bug, more of a limitation. However, I think it would be\n> nice to fix it if we can find a good way to do that.\n>\n> On Wed, Nov 29, 2023 at 10:43 PM Jingxian Li <[email protected]> wrote:\n>> Transaction A already holds an n-mode lock on table test,\n>> that is, there is no locks held conflicting with the n-mode lock on table test,\n>> If then transaction A requests an m-mode lock on table test,\n>> as n's confilctTab covers m, it can be concluded that\n>> there are no locks conflicting with the requested m-mode lock.\n> This algorithm seems correct to me, but I think Andres is right to be\n> concerned about overhead. You're proposing to inject a call to\n> CheckLocalLockConflictTabCover() into the main code path of\n> LockAcquireExtended(), so practically every lock acquisition will pay\n> the cost of that function. And that function is not particularly\n> cheap: every call to LockHeldByMe is a hash table lookup. That sounds\n> pretty painful. If we could incur the overhead of this only when we\n> knew for certain that we would otherwise have to fail, that would be\n> more palatable, but doing it on every non-fastpath heavyweight lock\n> acquisition seems way too expensive.\n>\n> Even aside from overhead, the approach the patch takes doesn't seem\n> quite right to me. As you noted, ProcSleep() has logic to jump the\n> queue if adding ourselves at the end would inevitably result in\n> deadlock, which is why your test case doesn't need to wait until\n> deadlock_timeout for the lock acquisition to succeed. But because that\n> logic happens in ProcSleep(), it's not reached in the NOWAIT case,\n> which means that it doesn't help any more once NOWAIT is specified. I\n> think that the right way to fix the problem would be to reach that\n> check even in the NOWAIT case, which could be done either by hoisting\n> some of the logic in ProcSleep() up into LockAcquireExtended(), or by\n> pushing the nowait flag down into ProcSleep() so that we can fail only\n> if we're definitely going to sleep. The former seems more elegant in\n> theory, but the latter looks easier to implement, at least at first\n> glance.\nAccording to what you said, I resubmitted a patch which splits the ProcSleep\nlogic into two parts, the former is responsible for inserting self to\nWaitQueue,\nthe latter is responsible for deadlock detection and processing, and the\nformer part is directly called by LockAcquireExtended before nowait fails.\nIn this way the nowait case can also benefit from adjusting the insertion\norder of WaitQueue.\n>\n> But the patch as proposed instead invents a new way of making the test\n> case work, not leveraging the existing logic and, I suspect, not\n> matching the behavior in all cases.\n>\n> I also agree with Vignesh that a test case would be a good idea. 
It\n> would need to be an isolation test, since the regular regression\n> tester isn't powerful enough for this (at least, I don't see how to\n> make it work).\n>\nA test case was also added in the dir src/test/isolation.\n\nJingxian Li", "msg_date": "Thu, 1 Feb 2024 15:16:35 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Thu, Feb 1, 2024 at 2:16 AM Jingxian Li <[email protected]> wrote:\n> According to what you said, I resubmitted a patch which splits the ProcSleep\n> logic into two parts, the former is responsible for inserting self to\n> WaitQueue,\n> the latter is responsible for deadlock detection and processing, and the\n> former part is directly called by LockAcquireExtended before nowait fails.\n> In this way the nowait case can also benefit from adjusting the insertion\n> order of WaitQueue.\n\nI don't have time for a full review of this patch right now\nunfortunately, but just looking at it quickly:\n\n- It will be helpful if you write a clear commit message. If it gets\ncommitted, there is a high chance the committer will rewrite your\nmessage, but in the meantime it will help understanding.\n\n- The comment for InsertSelfIntoWaitQueue needs improvement. It is\nonly one line. And it says \"Insert self into queue if dontWait is\nfalse\" but then someone will wonder why the function would ever be\ncalled with dontWait = true.\n\n- Between the comments and the commit message, the division of\nresponsibilities between InsertSelfIntoWaitQueue and ProcSleep needs\nto be clearly explained. Right now I don't think it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Feb 2024 16:05:18 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hello Robert,\n\nOn 2024/2/2 5:05, Robert Haas wrote:\n> On Thu, Feb 1, 2024 at 2:16 AM Jingxian Li <[email protected]> wrote:\n>> According to what you said, I resubmitted a patch which splits the ProcSleep\n>> logic into two parts, the former is responsible for inserting self to\n>> WaitQueue,\n>> the latter is responsible for deadlock detection and processing, and the\n>> former part is directly called by LockAcquireExtended before nowait fails.\n>> In this way the nowait case can also benefit from adjusting the insertion\n>> order of WaitQueue.\n>\n> I don't have time for a full review of this patch right now\n> unfortunately, but just looking at it quickly:\n>\n> - It will be helpful if you write a clear commit message. If it gets\n> committed, there is a high chance the committer will rewrite your\n> message, but in the meantime it will help understanding.\n>\n> - The comment for InsertSelfIntoWaitQueue needs improvement. It is\n> only one line. And it says \"Insert self into queue if dontWait is\n> false\" but then someone will wonder why the function would ever be\n> called with dontWait = true.\n>\n> - Between the comments and the commit message, the division of\n> responsibilities between InsertSelfIntoWaitQueue and ProcSleep needs\n> to be clearly explained. 
Right now I don't think it is.\n\nBased on your comments above, I improve the commit message and comment for \nInsertSelfIntoWaitQueue in new patch.\n\n--\nJingxian Li\n", "msg_date": "Thu, 8 Feb 2024 18:25:14 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hello Robert,\n\nOn 2024/2/2 5:05, Robert Haas wrote:\n> On Thu, Feb 1, 2024 at 2:16 AM Jingxian Li <[email protected]> wrote:\n>> According to what you said, I resubmitted a patch which splits the ProcSleep\n>> logic into two parts, the former is responsible for inserting self to\n>> WaitQueue,\n>> the latter is responsible for deadlock detection and processing, and the\n>> former part is directly called by LockAcquireExtended before nowait fails.\n>> In this way the nowait case can also benefit from adjusting the insertion\n>> order of WaitQueue.\n>\n> I don't have time for a full review of this patch right now\n> unfortunately, but just looking at it quickly:\n>\n> - It will be helpful if you write a clear commit message. If it gets\n> committed, there is a high chance the committer will rewrite your\n> message, but in the meantime it will help understanding.\n>\n> - The comment for InsertSelfIntoWaitQueue needs improvement. It is\n> only one line. And it says \"Insert self into queue if dontWait is\n> false\" but then someone will wonder why the function would ever be\n> called with dontWait = true.\n>\n> - Between the comments and the commit message, the division of\n> responsibilities between InsertSelfIntoWaitQueue and ProcSleep needs\n> to be clearly explained. Right now I don't think it is.\n\nBased on your comments above, I improve the commit message and comment for \nInsertSelfIntoWaitQueue in new patch.\n\n--\nJingxian Li", "msg_date": "Thu, 8 Feb 2024 18:28:09 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Thu, Feb 8, 2024 at 5:28 AM Jingxian Li <[email protected]> wrote:\n> Based on your comments above, I improve the commit message and comment for\n> InsertSelfIntoWaitQueue in new patch.\n\nWell, I had a look at this patch today, and even after reading the new\ncommit message, I couldn't really convince myself that it was correct.\nIt may well be entirely correct, but I simply find it hard to tell. It\nwould help if the comments had been adjusted a bit more, e.g.\n\n /* Skip the wait and just\ngrant myself the lock. */\n- GrantLock(lock, proclock, lockmode);\n- GrantAwaitedLock();\n return PROC_WAIT_STATUS_OK;\n\nSurely this is not an acceptable change. The comments says \"and just\ngrant myself the lock\" but the code no longer does that.\n\nBut instead of just complaining, I decided to try writing a version of\nthe patch that seemed acceptable to me. Here it is. I took a different\napproach than you. Instead of splitting up ProcSleep(), I just passed\ndown the dontWait flag to WaitOnLock() and ProcSleep(). In\nLockAcquireExtended(), I moved the existing code that handles giving\nup in the don't-wait case from before the call to ProcSleep() to\nafterward. As far as I can see, the major way this could be wrong is\nif calling ProcSleep() with dontWait = true and having it fail to\nacquire the lock changes the state in some way that makes the cleanup\ncode that I moved incorrect. 
I don't *think* that's the case, but I\nmight be wrong.\n\nWhat do you think of this version?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Mar 2024 12:02:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Hello Robert,\nOn 2024/3/8 1:02, Robert Haas wrote:\n>\n> But instead of just complaining, I decided to try writing a version of\n> the patch that seemed acceptable to me. Here it is. I took a different\n> approach than you. Instead of splitting up ProcSleep(), I just passed\n> down the dontWait flag to WaitOnLock() and ProcSleep(). In\n> LockAcquireExtended(), I moved the existing code that handles giving\n> up in the don't-wait case from before the call to ProcSleep() to\n> afterward. As far as I can see, the major way this could be wrong is\n> if calling ProcSleep() with dontWait = true and having it fail to\n> acquire the lock changes the state in some way that makes the cleanup\n> code that I moved incorrect. I don't *think* that's the case, but I\n> might be wrong.\n>\n> What do you think of this version?\n\nYour version changes less code than mine by pushing the nowait flag down \ninto ProcSleep(). This looks fine in general, except for a little advice,\nwhich I don't think there is necessary to add 'waiting' suffix to the \nprocess name in function WaitOnLock with dontwait being true, as follows:\n\n--- a/src/backend/storage/lmgr/lock.c\n+++ b/src/backend/storage/lmgr/lock.c\n@@ -1801,8 +1801,12 @@ WaitOnLock(LOCALLOCK *locallock, ResourceOwner owner, bool dontWait)\n \tLOCK_PRINT(\"WaitOnLock: sleeping on lock\",\n \t\t\t locallock->lock, locallock->tag.mode);\n \n-\t/* adjust the process title to indicate that it's waiting */\n-\tset_ps_display_suffix(\"waiting\");\n+\tif (!dontWait)\n+\t{\n+\t\t/* adjust the process title to indicate that it's waiting */\n+\t\tset_ps_display_suffix(\"waiting\");\t\t\t\n+\t}\n+\n \n \tawaitedLock = locallock;\n \tawaitedOwner = owner;\n@@ -1855,9 +1859,12 @@ WaitOnLock(LOCALLOCK *locallock, ResourceOwner owner, bool dontWait)\n \t{\n \t\t/* In this path, awaitedLock remains set until LockErrorCleanup */\n \n-\t\t/* reset ps display to remove the suffix */\n-\t\tset_ps_display_remove_suffix();\n-\n+\t\tif (!dontWait)\n+\t\t{\n+\t\t\t/* reset ps display to remove the suffix */\n+\t\t\tset_ps_display_remove_suffix();\n+\t\t}\n+\t\n \t\t/* and propagate the error */\n \t\tPG_RE_THROW();\n \t}\n@@ -1865,8 +1872,11 @@ WaitOnLock(LOCALLOCK *locallock, ResourceOwner owner, bool dontWait)\n \n \tawaitedLock = NULL;\n \n-\t/* reset ps display to remove the suffix */\n-\tset_ps_display_remove_suffix();\n+\tif (!dontWait)\n+\t{\n+\t\t/* reset ps display to remove the suffix */\n+\t\tset_ps_display_remove_suffix();\n+\t}\n \n \tLOCK_PRINT(\"WaitOnLock: wakeup on lock\",\n \t\t\t locallock->lock, locallock->tag.mode);\n\n\n--\nJingxian Li\n", "msg_date": "Tue, 12 Mar 2024 11:11:33 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Mon, Mar 11, 2024 at 11:11 PM Jingxian Li <[email protected]> wrote:\n> Your version changes less code than mine by pushing the nowait flag down\n> into ProcSleep(). 
This looks fine in general, except for a little advice,\n> which I don't think there is necessary to add 'waiting' suffix to the\n> process name in function WaitOnLock with dontwait being true, as follows:\n\nThat could be done, but in my opinion it's not necessary. The waiting\nsuffix will appear only very briefly, and probably only in relatively\nrare cases. It doesn't seem worth adding code to avoid it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 09:33:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Tue, Mar 12, 2024 at 9:33 AM Robert Haas <[email protected]> wrote:\n> On Mon, Mar 11, 2024 at 11:11 PM Jingxian Li <[email protected]> wrote:\n> > Your version changes less code than mine by pushing the nowait flag down\n> > into ProcSleep(). This looks fine in general, except for a little advice,\n> > which I don't think there is necessary to add 'waiting' suffix to the\n> > process name in function WaitOnLock with dontwait being true, as follows:\n>\n> That could be done, but in my opinion it's not necessary. The waiting\n> suffix will appear only very briefly, and probably only in relatively\n> rare cases. It doesn't seem worth adding code to avoid it.\n\nSeeing no further discussion, I have committed my version of this\npatch, with your test case.\n\nThanks for pursuing this improvement!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 09:15:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Thu, Mar 14, 2024 at 1:15 PM Robert Haas <[email protected]> wrote:\n> Seeing no further discussion, I have committed my version of this\n> patch, with your test case.\n\nThis comment on ProcSleep() seems to have the values of dontWait\nbackward (double negatives are tricky):\n\n * Result: PROC_WAIT_STATUS_OK if we acquired the lock,\nPROC_WAIT_STATUS_ERROR\n * if not (if dontWait = true, this is a deadlock; if dontWait = false, we\n * would have had to wait).\n\nAlso there's a minor typo in a comment in LockAcquireExtended():\n\n * Check the proclock entry status. If dontWait = true, this is an\n * expected case; otherwise, it will open happen if something in the\n * ipc communication doesn't work correctly.\n\n\"open\" should be \"only\".\n\n\n", "msg_date": "Tue, 26 Mar 2024 19:14:32 -0700", "msg_from": "Will Mortensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Tue, Mar 26, 2024 at 7:14 PM Will Mortensen <[email protected]> wrote:\n> This comment on ProcSleep() seems to have the values of dontWait\n> backward (double negatives are tricky):\n>\n> * Result: PROC_WAIT_STATUS_OK if we acquired the lock,\n> PROC_WAIT_STATUS_ERROR\n> * if not (if dontWait = true, this is a deadlock; if dontWait = false, we\n> * would have had to wait).\n>\n> Also there's a minor typo in a comment in LockAcquireExtended():\n>\n> * Check the proclock entry status. 
If dontWait = true, this is an\n> * expected case; otherwise, it will open happen if something in the\n> * ipc communication doesn't work correctly.\n>\n> \"open\" should be \"only\".\n\nHere's a patch fixing those typos.", "msg_date": "Fri, 17 May 2024 23:38:35 -0700", "msg_from": "Will Mortensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On 2024/5/18 14:38, Will Mortensen wrote:\n> On Tue, Mar 26, 2024 at 7:14 PM Will Mortensen <[email protected]> wrote:\n>> This comment on ProcSleep() seems to have the values of dontWait\n>> backward (double negatives are tricky):\n>>\n>> * Result: PROC_WAIT_STATUS_OK if we acquired the lock,\n>> PROC_WAIT_STATUS_ERROR\n>> * if not (if dontWait = true, this is a deadlock; if dontWait = false, we\n>> * would have had to wait).\n>>\n>> Also there's a minor typo in a comment in LockAcquireExtended():\n>>\n>> * Check the proclock entry status. If dontWait = true, this is an\n>> * expected case; otherwise, it will open happen if something in the\n>> * ipc communication doesn't work correctly.\n>>\n>> \"open\" should be \"only\".\n>\n> Here's a patch fixing those typos.\n\nNice catch! The patch looks good to me.\n\n\n--\nJingxian Li\n", "msg_date": "Sat, 18 May 2024 17:10:38 +0800", "msg_from": "\"Jingxian Li\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Fri, May 17, 2024 at 11:38:35PM -0700, Will Mortensen wrote:\n> On Tue, Mar 26, 2024 at 7:14 PM Will Mortensen <[email protected]> wrote:\n>> This comment on ProcSleep() seems to have the values of dontWait\n>> backward (double negatives are tricky):\n>>\n>> * Result: PROC_WAIT_STATUS_OK if we acquired the lock,\n>> PROC_WAIT_STATUS_ERROR\n>> * if not (if dontWait = true, this is a deadlock; if dontWait = false, we\n>> * would have had to wait).\n>>\n>> Also there's a minor typo in a comment in LockAcquireExtended():\n>>\n>> * Check the proclock entry status. If dontWait = true, this is an\n>> * expected case; otherwise, it will open happen if something in the\n>> * ipc communication doesn't work correctly.\n>>\n>> \"open\" should be \"only\".\n> \n> Here's a patch fixing those typos.\n\nPerhaps, this, err.. Should not have been named \"dontWait\" but\n\"doWait\" ;)\n\nAnyway, this goes way back in time and it is deep in the stack\n(LockAcquireExtended, etc.) so it is too late to change: the patch\nshould be OK as it is.\n--\nMichael", "msg_date": "Sat, 18 May 2024 20:37:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "On Sat, May 18, 2024 at 05:10:38PM +0800, Jingxian Li wrote:\n> Nice catch! The patch looks good to me.\n\nAnd fixed that as well.\n--\nMichael", "msg_date": "Thu, 23 May 2024 13:58:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" }, { "msg_contents": "Thanks! :-)\n\n\n", "msg_date": "Wed, 22 May 2024 22:21:53 -0700", "msg_from": "Will Mortensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] LockAcquireExtended improvement" } ]
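For readers following the LockAcquireExtended thread above, the "conflict coverage" test that Jingxian Li describes (a held mode n covers a requested mode m when n's conflict set is a superset of m's) can be written as a small standalone predicate. This is only an illustrative sketch of the check discussed in the thread, not the change that was eventually committed (which instead passes the dontWait flag down into ProcSleep()); the function name below is invented, while LockMethod, LOCKMODE, LOCKMASK and conflictTab are the existing lock.h names the thread refers to.

    #include "postgres.h"
    #include "storage/lock.h"

    /*
     * Sketch: does a lock mode we already hold ("held") make a conflict
     * check for "requested" redundant?  True when every mode that conflicts
     * with "requested" also conflicts with "held".
     */
    static bool
    conflict_tab_covers(LockMethod lockMethodTable, LOCKMODE held, LOCKMODE requested)
    {
        LOCKMASK    heldConflicts = lockMethodTable->conflictTab[held];
        LOCKMASK    reqConflicts = lockMethodTable->conflictTab[requested];

        return (heldConflicts & reqConflicts) == reqConflicts;
    }

Under this definition, holding AccessExclusiveLock covers a later ShareRowExclusiveLock request by the same backend, which is the situation in the NOWAIT test case at the top of the thread; the thread's concern about this approach is the per-acquisition overhead, not the correctness of the test itself.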
[ { "msg_contents": "Hi hackers,\n\nPostgreSQL hit the following assertion during error cleanup, after being \nOOM in dsa_allocate0():\n\nvoid dshash_detach(dshash_table *hash_table) { \nASSERT_NO_PARTITION_LOCKS_HELD_BY_ME(hash_table);\n\ncalled from pgstat_shutdown_hook(), called from shmem_exit(), called \nfrom proc_exit(), called from the exception handler.\n\nThe partition locks got previously acquired by\n\nAutoVacWorkerMain() pgstat_report_autovac() \npgstat_get_entry_ref_locked() pgstat_get_entry_ref() \ndshash_find_or_insert() resize() resize() locks all partitions so the \nhash table can safely be resized. Then it calls dsa_allocate0(). If \ndsa_allocate0() fails to allocate, it errors out. The exception handler \ncalls proc_exit() which normally calls LWLockReleaseAll() via \nAbortTransaction() but only if there's an active transaction. However, \npgstat_report_autovac() runs before a transaction got started and hence \nLWLockReleaseAll() doesn't run before pgstat_shutdown_hook() is called.\n\nSee attached patch for an attempt to fix this issue.\n\n-- \nDavid Geier\n(ServiceNow)", "msg_date": "Tue, 28 Nov 2023 19:00:16 +0100", "msg_from": "David Geier <[email protected]>", "msg_from_op": true, "msg_subject": "Fix assertion in autovacuum worker" }, { "msg_contents": "On Tue, Nov 28, 2023 at 07:00:16PM +0100, David Geier wrote:\n> PostgreSQL hit the following assertion during error cleanup, after being OOM\n> in dsa_allocate0():\n> \n> void dshash_detach(dshash_table *hash_table) {\n> ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME(hash_table);\n> \n> called from pgstat_shutdown_hook(), called from shmem_exit(), called from\n> proc_exit(), called from the exception handler.\n\nNice find.\n\n> AutoVacWorkerMain() pgstat_report_autovac() pgstat_get_entry_ref_locked()\n> pgstat_get_entry_ref() dshash_find_or_insert() resize() resize() locks all\n> partitions so the hash table can safely be resized. Then it calls\n> dsa_allocate0(). If dsa_allocate0() fails to allocate, it errors out. The\n> exception handler calls proc_exit() which normally calls LWLockReleaseAll()\n> via AbortTransaction() but only if there's an active transaction. However,\n> pgstat_report_autovac() runs before a transaction got started and hence\n> LWLockReleaseAll() doesn't run before pgstat_shutdown_hook() is called.\n\n From a glance, it looks to me like the problem is that pgstat_shutdown_hook\nis registered as a before_shmem_exit callback, while ProcKill is registered\nas an on_shmem_exit callback. 
However, IIUC even moving them to the same\nlist wouldn't be sufficient because the pg_stat_shutdown_hook is registered\nafter ProcKill, and the code that calls the callbacks walks backwards\nthrough the list.\n\nI would expect your patch to fix this particular issue, but I'm wondering\nwhether there's a bigger problem here.\n\n--\nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 16:05:16 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" }, { "msg_contents": "Hi,\n\nOn 2023-11-28 16:05:16 -0600, Nathan Bossart wrote:\n> On Tue, Nov 28, 2023 at 07:00:16PM +0100, David Geier wrote:\n> > PostgreSQL hit the following assertion during error cleanup, after being OOM\n> > in dsa_allocate0():\n> > \n> > void dshash_detach(dshash_table *hash_table) {\n> > ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME(hash_table);\n> > \n> > called from pgstat_shutdown_hook(), called from shmem_exit(), called from\n> > proc_exit(), called from the exception handler.\n> \n> Nice find.\n\n+1\n\n\n> > AutoVacWorkerMain() pgstat_report_autovac() pgstat_get_entry_ref_locked()\n> > pgstat_get_entry_ref() dshash_find_or_insert() resize() resize() locks all\n> > partitions so the hash table can safely be resized. Then it calls\n> > dsa_allocate0(). If dsa_allocate0() fails to allocate, it errors out. The\n> > exception handler calls proc_exit() which normally calls LWLockReleaseAll()\n> > via AbortTransaction() but only if there's an active transaction. However,\n> > pgstat_report_autovac() runs before a transaction got started and hence\n> > LWLockReleaseAll() doesn't run before pgstat_shutdown_hook() is called.\n> \n> From a glance, it looks to me like the problem is that pgstat_shutdown_hook\n> is registered as a before_shmem_exit callback, while ProcKill is registered\n> as an on_shmem_exit callback.\n\nThat's required, as pgstat_shutdown_hook() needs to acquire lwlocks, which you\ncan't after ProcKill(). It's also not unique to pgstat, several other\nbefore_shmem_exit() callbacks acquire lwlocks (e.g. AtProcExit_Twophase(),\nParallelWorkerShutdown(), do_pg_abort_backup(), the first three uses of\nbefore_shmem_exit when git grepping) - which makes sense, they are presumably\nbefore_shmem_exit() because they need to manage shared state, which often\nneeds locks.\n\nIn normal backends this is fine-ish, because ShutdownPostgres() is registered\nvery late (and thus is called early in the shutdown sequence), and the\nAbortOutOfAnyTransaction() contained therein indirectly calls\nLWLockReleaseAll() and very little happens outside of the transaction context.\n\n\n> I would expect your patch to fix this particular issue, but I'm wondering\n> whether there's a bigger problem here.\n\nYes, there is - our subsystem initialization, shutdown, error recovery\ninfrastructure is a mess. 
We've interwoven transaction handling far too\ntightly with error handling, the order of subystem initialization is basically\nrandom and differs between operating systems (due to EXEC_BACKEND) and \"mode\"\nof execution (the order differs when using single user mode) and we've\ndistributed error recovery into ~10 places (all the sigsetjmp()s in backend\ncode, xact.c and and a few other places like WalSndErrorCleanup()).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Nov 2023 16:03:49 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" }, { "msg_contents": "On Tue, Nov 28, 2023 at 04:03:49PM -0800, Andres Freund wrote:\n> On 2023-11-28 16:05:16 -0600, Nathan Bossart wrote:\n>> From a glance, it looks to me like the problem is that pgstat_shutdown_hook\n>> is registered as a before_shmem_exit callback, while ProcKill is registered\n>> as an on_shmem_exit callback.\n> \n> That's required, as pgstat_shutdown_hook() needs to acquire lwlocks, which you\n> can't after ProcKill(). It's also not unique to pgstat, several other\n> before_shmem_exit() callbacks acquire lwlocks (e.g. AtProcExit_Twophase(),\n> ParallelWorkerShutdown(), do_pg_abort_backup(), the first three uses of\n> before_shmem_exit when git grepping) - which makes sense, they are presumably\n> before_shmem_exit() because they need to manage shared state, which often\n> needs locks.\n> \n> In normal backends this is fine-ish, because ShutdownPostgres() is registered\n> very late (and thus is called early in the shutdown sequence), and the\n> AbortOutOfAnyTransaction() contained therein indirectly calls\n> LWLockReleaseAll() and very little happens outside of the transaction context.\n\nRight. Perhaps we could add a LWLockReleaseAll() to\npgstat_shutdown_hook() instead of the autovacuum code, but I'm afraid that\nis still just a hack.\n\n>> I would expect your patch to fix this particular issue, but I'm wondering\n>> whether there's a bigger problem here.\n> \n> Yes, there is - our subsystem initialization, shutdown, error recovery\n> infrastructure is a mess. We've interwoven transaction handling far too\n> tightly with error handling, the order of subystem initialization is basically\n> random and differs between operating systems (due to EXEC_BACKEND) and \"mode\"\n> of execution (the order differs when using single user mode) and we've\n> distributed error recovery into ~10 places (all the sigsetjmp()s in backend\n> code, xact.c and and a few other places like WalSndErrorCleanup()).\n\n:(\n\nI do remember looking into uniting all the various sigsetjmp() calls\nbefore. That could be worth another try. The rest will probably require\nadditional thought...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Nov 2023 20:42:47 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" }, { "msg_contents": "Hi,\n\nOn 2023-11-28 20:42:47 -0600, Nathan Bossart wrote:\n> On Tue, Nov 28, 2023 at 04:03:49PM -0800, Andres Freund wrote:\n> > On 2023-11-28 16:05:16 -0600, Nathan Bossart wrote:\n> >> From a glance, it looks to me like the problem is that pgstat_shutdown_hook\n> >> is registered as a before_shmem_exit callback, while ProcKill is registered\n> >> as an on_shmem_exit callback.\n> > \n> > That's required, as pgstat_shutdown_hook() needs to acquire lwlocks, which you\n> > can't after ProcKill(). 
It's also not unique to pgstat, several other\n> > before_shmem_exit() callbacks acquire lwlocks (e.g. AtProcExit_Twophase(),\n> > ParallelWorkerShutdown(), do_pg_abort_backup(), the first three uses of\n> > before_shmem_exit when git grepping) - which makes sense, they are presumably\n> > before_shmem_exit() because they need to manage shared state, which often\n> > needs locks.\n> > \n> > In normal backends this is fine-ish, because ShutdownPostgres() is registered\n> > very late (and thus is called early in the shutdown sequence), and the\n> > AbortOutOfAnyTransaction() contained therein indirectly calls\n> > LWLockReleaseAll() and very little happens outside of the transaction context.\n> \n> Right. Perhaps we could add a LWLockReleaseAll() to\n> pgstat_shutdown_hook() instead of the autovacuum code, but I'm afraid that\n> is still just a hack.\n\nYea, we'd need that in just about all before_shmem_exit() callbacks. I could\nsee an argument for doing it in proc_exit_prepare(). While that'd be a fairly\ngross layering violation, we already do reset a number a bunch of stuff in\nthere:\n\t/*\n\t * Forget any pending cancel or die requests; we're doing our best to\n\t * close up shop already. Note that the signal handlers will not set\n\t * these flags again, now that proc_exit_inprogress is set.\n\t */\n\tInterruptPending = false;\n\tProcDiePending = false;\n\tQueryCancelPending = false;\n\tInterruptHoldoffCount = 1;\n\tCritSectionCount = 0;\n\n\n> >> I would expect your patch to fix this particular issue, but I'm wondering\n> >> whether there's a bigger problem here.\n> > \n> > Yes, there is - our subsystem initialization, shutdown, error recovery\n> > infrastructure is a mess. We've interwoven transaction handling far too\n> > tightly with error handling, the order of subystem initialization is basically\n> > random and differs between operating systems (due to EXEC_BACKEND) and \"mode\"\n> > of execution (the order differs when using single user mode) and we've\n> > distributed error recovery into ~10 places (all the sigsetjmp()s in backend\n> > code, xact.c and and a few other places like WalSndErrorCleanup()).\n> \n> :(\n> \n> I do remember looking into uniting all the various sigsetjmp() calls\n> before. That could be worth another try. The rest will probably require\n> additional thought...\n\nIt'd definitely be worth some effort. I'm quite sure that we have a number of\nhard to find bugs around this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Nov 2023 18:48:59 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" }, { "msg_contents": "On Tue, Nov 28, 2023 at 06:48:59PM -0800, Andres Freund wrote:\n> On 2023-11-28 20:42:47 -0600, Nathan Bossart wrote:\n>> Right. Perhaps we could add a LWLockReleaseAll() to\n>> pgstat_shutdown_hook() instead of the autovacuum code, but I'm afraid that\n>> is still just a hack.\n> \n> Yea, we'd need that in just about all before_shmem_exit() callbacks. I could\n> see an argument for doing it in proc_exit_prepare(). 
While that'd be a fairly\n> gross layering violation, we already do reset a number a bunch of stuff in\n> there:\n\nGross layering violations aside, that at least seems more future-proof\nagainst other sigsetjmp() blocks that proc_exit() without doing any\npreliminary cleanup.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 29 Nov 2023 11:52:01 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" }, { "msg_contents": "Hi,\n\nOn 2023-11-29 11:52:01 -0600, Nathan Bossart wrote:\n> On Tue, Nov 28, 2023 at 06:48:59PM -0800, Andres Freund wrote:\n> > On 2023-11-28 20:42:47 -0600, Nathan Bossart wrote:\n> >> Right. Perhaps we could add a LWLockReleaseAll() to\n> >> pgstat_shutdown_hook() instead of the autovacuum code, but I'm afraid that\n> >> is still just a hack.\n> >\n> > Yea, we'd need that in just about all before_shmem_exit() callbacks. I could\n> > see an argument for doing it in proc_exit_prepare(). While that'd be a fairly\n> > gross layering violation, we already do reset a number a bunch of stuff in\n> > there:\n>\n> Gross layering violations aside, that at least seems more future-proof\n> against other sigsetjmp() blocks that proc_exit() without doing any\n> preliminary cleanup.\n\nIt's not just sigsetjmp() blocks not doing cleanup that are problematic -\nconsider what happens if there's either no sigsetjmp() block established (and\nthus ERROR is promoted to FATAL), or if a FATAL error is raised. Then cleanup\nin the sigsetjmp() site doesn't matter. So we really need a better answer\nhere.\n\nIf we don't want to add LWLockReleaseAll() to proc_exit_prepare(), ISTM we\nshould all at least add some assertion infrastructure verifying it's being\ncalled in relevant paths, triggering an assertion if there was no\nLWLockReleaseAll() before reaching important before_shmem_exit() routines,\neven if we don't actually hold an lwlock at the time of the error. Otherwise\nproblematic paths are way too hard to find.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Nov 2023 10:55:55 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix assertion in autovacuum worker" } ]
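To make the option Nathan floats in the thread above more concrete (releasing stray LWLocks at the top of the pgstat shutdown callback rather than in the autovacuum code), a minimal sketch could look like the following. The wrapper name is invented for illustration; LWLockReleaseAll(), before_shmem_exit() and the (int code, Datum arg) callback signature are the existing APIs discussed in the thread, and this is not presented as the agreed-upon or committed fix.

    #include "postgres.h"
    #include "storage/ipc.h"
    #include "storage/lwlock.h"

    /*
     * Sketch of a defensive before_shmem_exit() callback: an ERROR raised
     * while dshash holds partition locks (e.g. OOM in dsa_allocate0()) can
     * reach process exit without AbortTransaction() ever running, so drop
     * any LWLocks still held before doing shutdown work that asserts none
     * are held.
     */
    static void
    pgstat_shutdown_defensive(int code, Datum arg)
    {
        LWLockReleaseAll();

        /* ... the regular pgstat shutdown work would follow here ... */
    }

    /* registration would replace the existing hook registration, e.g.: */
    /* before_shmem_exit(pgstat_shutdown_defensive, (Datum) 0);          */

As Andres notes above, doing this in every individual callback does not scale, which is why the discussion moves toward proc_exit_prepare() or assertion infrastructure instead.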
[ { "msg_contents": "I noticed that under meson, the selection of the Python installation \nusing the 'PYTHON' option doesn't work completely. The 'PYTHON' option \ndetermined the Python binary that will be used to call the various build \nsupport programs. But it doesn't affect the Python installation used \nfor PL/Python. For that, we need to pass the program determined by the \n'PYTHON' option back into the find_installation() routine of the python \nmodule. (Otherwise, find_installation() will just search for an \ninstallation on its own.) See attached patch. I ran this through \nCirrus, seems to work.", "msg_date": "Tue, 28 Nov 2023 19:02:42 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Python installation selection in Meson" }, { "msg_contents": "Hi,\n\nOn 2023-11-28 19:02:42 +0100, Peter Eisentraut wrote:\n> I noticed that under meson, the selection of the Python installation using\n> the 'PYTHON' option doesn't work completely. The 'PYTHON' option determined\n> the Python binary that will be used to call the various build support\n> programs. But it doesn't affect the Python installation used for PL/Python.\n> For that, we need to pass the program determined by the 'PYTHON' option back\n> into the find_installation() routine of the python module. (Otherwise,\n> find_installation() will just search for an installation on its own.) See\n> attached patch. I ran this through Cirrus, seems to work.\n\nMakes sense!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 28 Nov 2023 10:16:29 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Python installation selection in Meson" }, { "msg_contents": "\nOn 2023-11-28 Tu 13:02, Peter Eisentraut wrote:\n> I noticed that under meson, the selection of the Python installation \n> using the 'PYTHON' option doesn't work completely.  The 'PYTHON' \n> option determined the Python binary that will be used to call the \n> various build support programs.  But it doesn't affect the Python \n> installation used for PL/Python.  For that, we need to pass the \n> program determined by the 'PYTHON' option back into the \n> find_installation() routine of the python module.  (Otherwise, \n> find_installation() will just search for an installation on its own.)  \n> See attached patch.  I ran this through Cirrus, seems to work.\n\n\nI noticed when working on the meson/windows stuff that meson would try \nto build plpython against its python installation, which failed \nmiserably. The workaround was to abandon separate meson/ninja \ninstallations via chocolatey, and instead install them using pip. Maybe \nthis was as a result of the above problem?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 29 Nov 2023 08:23:53 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Python installation selection in Meson" }, { "msg_contents": "On 29.11.23 14:23, Andrew Dunstan wrote:\n> \n> On 2023-11-28 Tu 13:02, Peter Eisentraut wrote:\n>> I noticed that under meson, the selection of the Python installation \n>> using the 'PYTHON' option doesn't work completely.  The 'PYTHON' \n>> option determined the Python binary that will be used to call the \n>> various build support programs.  But it doesn't affect the Python \n>> installation used for PL/Python.  
For that, we need to pass the \n>> program determined by the 'PYTHON' option back into the \n>> find_installation() routine of the python module.  (Otherwise, \n>> find_installation() will just search for an installation on its own.) \n>> See attached patch.  I ran this through Cirrus, seems to work.\n> \n> I noticed when working on the meson/windows stuff that meson would try \n> to build plpython against its python installation, which failed \n> miserably. The workaround was to abandon separate meson/ninja \n> installations via chocolatey, and instead install them using pip. Maybe \n> this was as a result of the above problem?\n\nThat sounds like it could be the case.\n\n\n\n", "msg_date": "Wed, 29 Nov 2023 14:34:24 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Python installation selection in Meson" }, { "msg_contents": "On 28.11.23 19:16, Andres Freund wrote:\n> On 2023-11-28 19:02:42 +0100, Peter Eisentraut wrote:\n>> I noticed that under meson, the selection of the Python installation using\n>> the 'PYTHON' option doesn't work completely. The 'PYTHON' option determined\n>> the Python binary that will be used to call the various build support\n>> programs. But it doesn't affect the Python installation used for PL/Python.\n>> For that, we need to pass the program determined by the 'PYTHON' option back\n>> into the find_installation() routine of the python module. (Otherwise,\n>> find_installation() will just search for an installation on its own.) See\n>> attached patch. I ran this through Cirrus, seems to work.\n> \n> Makes sense!\n\nI have committed this, and also backpatched to 16 to keep the behavior \nconsistent.\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 07:30:22 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Python installation selection in Meson" } ]
[ { "msg_contents": "On Wed, Nov 22, 2023 at 12:49:35PM -0600, Nathan Bossart wrote:\n> On Wed, Nov 22, 2023 at 02:54:13PM +0200, Ants Aasma wrote:\n>> For reference, executing the page checksum 10M times on a AMD 3900X CPU:\n>> \n>> clang-14 -O2 4.292s (17.8 GiB/s)\n>> clang-14 -O2 -msse4.1 2.859s (26.7 GiB/s)\n>> clang-14 -O2 -msse4.1 -mavx2 1.378s (55.4 GiB/s)\n> \n> Nice. I've noticed similar improvements with AVX2 intrinsics in simd.h.\n\nI've alluded to this a few times now, so I figured I'd park the patch and\npreliminary benchmarks in a new thread while we iron out how to support\nnewer instructions (see discussion here [0]).\n\nUsing the same benchmark as we did for the SSE2 linear searches in\nXidInMVCCSnapshot() (commit 37a6e5d) [1] [2], I see the following:\n\n writers sse2 avx2 %\n 256 1195 1188 -1\n 512 928 1054 +14\n 1024 633 716 +13\n 2048 332 420 +27\n 4096 162 203 +25\n 8192 162 182 +12\n\nIt's been a while since I ran these benchmarks, but I vaguely recall also\nseeing something like a 50% improvement for a dedicated pg_lfind32()\nbenchmark on long arrays.\n\nAs is, the patch likely won't do anything unless you add -mavx2 or\n-march=native to your CFLAGS. I don't intend for this patch to be\nseriously considered until we have better support for detecting/compiling\nAVX2 instructions and a buildfarm machine that uses them.\n\nI plan to start another thread for AVX2 support for the page checksums.\n\n[0] https://postgr.es/m/20231107024734.GB729644%40nathanxps13\n[1] https://postgr.es/m/[email protected]\n[2] https://postgr.es/m/20220713170950.GA3116318%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 29 Nov 2023 11:15:26 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Nov 30, 2023 at 12:15 AM Nathan Bossart\n<[email protected]> wrote:\n> I don't intend for this patch to be\n> seriously considered until we have better support for detecting/compiling\n> AVX2 instructions and a buildfarm machine that uses them.\n\nThat's completely understandable, yet I'm confused why there is a\ncommitfest entry for it marked \"needs review\".\n\n\n", "msg_date": "Mon, 1 Jan 2024 19:12:26 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Mon, Jan 01, 2024 at 07:12:26PM +0700, John Naylor wrote:\n> On Thu, Nov 30, 2023 at 12:15 AM Nathan Bossart\n> <[email protected]> wrote:\n>> I don't intend for this patch to be\n>> seriously considered until we have better support for detecting/compiling\n>> AVX2 instructions and a buildfarm machine that uses them.\n> \n> That's completely understandable, yet I'm confused why there is a\n> commitfest entry for it marked \"needs review\".\n\nPerhaps I was too optimistic about adding support for newer instructions...\n\nI'm tempted to propose that we move forward with this patch as-is after\nadding a buildfarm machine that compiles with -mavx2 or -march=x86-64-v3.\nThere is likely still follow-up work to make these improvements more\naccessible, but I'm not sure that is a strict prerequisite here.\n\n(In case it isn't clear, I'm volunteering to set up such a buildfarm\nmachine.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 10:11:23 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support 
to simd.h" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I'm tempted to propose that we move forward with this patch as-is after\n> adding a buildfarm machine that compiles with -mavx2 or -march=x86-64-v3.\n> There is likely still follow-up work to make these improvements more\n> accessible, but I'm not sure that is a strict prerequisite here.\n\nThe patch needs better comments (as in, more than \"none whatsoever\").\nIt doesn't need to be much though, perhaps like\n\n+#if defined(__AVX2__)\n+\n+/*\n+ * When compiled with -mavx2 or allied options, we prefer AVX2 instructions.\n+ */\n+#include <immintrin.h>\n+#define USE_AVX2\n+typedef __m256i Vector8;\n+typedef __m256i Vector32;\n\nAlso, do you really want to structure the header so that USE_SSE2\ndoesn't get defined? In that case you are committing to provide\nan AVX2 replacement every single place that there's USE_SSE2, which\ndoesn't seem like a great thing to require. OTOH, maybe there's\nno choice given than we need a different definition for Vector8 and\nVector32?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jan 2024 12:50:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 02, 2024 at 12:50:04PM -0500, Tom Lane wrote:\n> The patch needs better comments (as in, more than \"none whatsoever\").\n\nYes, will do.\n\n> Also, do you really want to structure the header so that USE_SSE2\n> doesn't get defined? In that case you are committing to provide\n> an AVX2 replacement every single place that there's USE_SSE2, which\n> doesn't seem like a great thing to require. OTOH, maybe there's\n> no choice given than we need a different definition for Vector8 and\n> Vector32?\n\nYeah, the precedent is to use these abstracted types elsewhere so that any\nSIMD-related improvements aren't limited to one architecture. There are a\ncouple of places that do explicitly check for USE_NO_SIMD, though. Maybe\nthere's an eventual use-case for using SSE2 intrinsics even when you have\nAVX2 support, but for now, ensuring we have an AVX2 replacement for\neverything doesn't seem particularly burdensome.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 16:00:18 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 2, 2024 at 11:11 PM Nathan Bossart <[email protected]> wrote:\n>\n> Perhaps I was too optimistic about adding support for newer instructions...\n>\n> I'm tempted to propose that we move forward with this patch as-is after\n> adding a buildfarm machine that compiles with -mavx2 or -march=x86-64-v3.\n\nThat means that we would be on the hook to fix it if it breaks, even\nthough nothing uses it yet in a normal build. 
I have pending patches\nthat will break, or get broken by, this, so minus-many from me until\nthere is an availability story.\n\n\n", "msg_date": "Wed, 3 Jan 2024 21:13:52 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Jan 03, 2024 at 09:13:52PM +0700, John Naylor wrote:\n> On Tue, Jan 2, 2024 at 11:11 PM Nathan Bossart <[email protected]> wrote:\n>> I'm tempted to propose that we move forward with this patch as-is after\n>> adding a buildfarm machine that compiles with -mavx2 or -march=x86-64-v3.\n> \n> That means that we would be on the hook to fix it if it breaks, even\n> though nothing uses it yet in a normal build. I have pending patches\n> that will break, or get broken by, this, so minus-many from me until\n> there is an availability story.\n\nHow will this break your patches? Is it just a matter of adding more AVX2\nsupport, or something else?\n\nIf the requirement is that normal builds use AVX2, then I fear we will be\nwaiting a long time. IIUC the current proposals (building multiple\nbinaries or adding a configuration option that maps to compiler flags)\nwould still be opt-in, and I'm not sure we can mandate AVX2 support for all\nx86_64 builds anytime soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Jan 2024 09:29:54 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 02, 2024 at 10:11:23AM -0600, Nathan Bossart wrote:\n> (In case it isn't clear, I'm volunteering to set up such a buildfarm\n> machine.)\n\nI set up \"akepa\" to run with -march=x86-64-v3.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 11:48:48 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Jan 3, 2024 at 10:29 PM Nathan Bossart <[email protected]> wrote:\n> If the requirement is that normal builds use AVX2, then I fear we will be\n> waiting a long time. IIUC the current proposals (building multiple\n> binaries or adding a configuration option that maps to compiler flags)\n> would still be opt-in,\n\nIf and when we get one of those, I would consider that a \"normal\"\nbuild. Since there are no concrete proposals yet, I'm still waiting\nfor you to justify imposing an immediate maintenance cost for zero\nbenefit.\n\n\n", "msg_date": "Fri, 5 Jan 2024 09:03:39 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Fri, Jan 05, 2024 at 09:03:39AM +0700, John Naylor wrote:\n> On Wed, Jan 3, 2024 at 10:29 PM Nathan Bossart <[email protected]> wrote:\n>> If the requirement is that normal builds use AVX2, then I fear we will be\n>> waiting a long time. IIUC the current proposals (building multiple\n>> binaries or adding a configuration option that maps to compiler flags)\n>> would still be opt-in,\n> \n> If and when we get one of those, I would consider that a \"normal\"\n> build. Since there are no concrete proposals yet, I'm still waiting\n> for you to justify imposing an immediate maintenance cost for zero\n> benefit.\n\nI've been thinking about the configuration option approach. ISTM that\nwould be the most feasible strategy, at least for v17. 
A couple things\ncome to mind:\n\n* This option would simply map to existing compiler flags. We already have\n ways to provide those (-Dc_args in meson, CFLAGS in autoconf). Perhaps\n we'd want to provide our own shorthand for certain platforms (e.g., ARM),\n but that will still just be shorthand for compiler flags.\n\n* Such an option would itself generate some maintenance cost. That could\n be worth it because it formalizes the Postgres support for those options,\n but it's still one more thing to track.\n\nAnother related option could be to simply document that we have support for\nsome newer instructions that can be enabled by setting the aforementioned\ncompiler flags. That's perhaps a little less user-friendly, but it'd avoid\nthe duplication and possibly reduce the maintenance cost. I also wonder if\nit'd help prevent confusion when CFLAGS and this extra option conflict.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 11:04:27 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Nov 30, 2023 at 12:15 AM Nathan Bossart\n<[email protected]> wrote:\n\n> Using the same benchmark as we did for the SSE2 linear searches in\n> XidInMVCCSnapshot() (commit 37a6e5d) [1] [2], I see the following:\n\nI've been antagonistic towards the patch itself, but it'd be more\nproductive if I paid some nuanced attention to the problem it's trying\nto solve. First, I'd like to understand the benchmark a bit better.\n\n> writers sse2 avx2 %\n> 256 1195 1188 -1\n> 512 928 1054 +14\n> 1024 633 716 +13\n> 2048 332 420 +27\n> 4096 162 203 +25\n> 8192 162 182 +12\n\nThere doesn't seem to be any benefit at 256 at all. Is that expected\nand/or fine?\n\n> It's been a while since I ran these benchmarks, but I vaguely recall also\n> seeing something like a 50% improvement for a dedicated pg_lfind32()\n> benchmark on long arrays.\n\nThe latest I see in\nhttps://www.postgresql.org/message-id/20220808223254.GA1393216%40nathanxps13\n\nwriters head patch\n8 672 680\n16 639 664\n32 701 689\n64 705 703\n128 628 653\n256 576 627\n512 530 584\n768 450 536\n1024 350 494\n\nHere, the peak throughput seems to be around 64 writers with or\nwithout the patch from a couple years ago, but the slope is shallower\nafter that. It would be good to make sure that it can't regress near\nthe peak, even with a \"long tail\" case (see next paragraph). The first\nbenchmark above starts at 256, so we can't tell where the peak is. It\nmight be worth it to also have a microbenchmark because the systemic\none has enough noise to obscure what's going on unless there are a\nvery large number of writers. We know what a systemic benchmark can\ntell us on extreme workloads past the peak, and the microbenchmark\nwould tell us \"we need to see X improvement here in order to see Y\nimprovement in the system benchmark\".\n\nI suspect that there could be a regression lurking for some inputs\nthat the benchmark doesn't look at: pg_lfind32() currently needs to be\nable to read 4 vector registers worth of elements before taking the\nfast path. There is then a tail of up to 15 elements that are now\nchecked one-by-one, but AVX2 would increase that to 31. That's getting\nbig enough to be noticeable, I suspect. It would be good to understand\nthat case (n*32 + 31), because it may also be relevant now. 
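(Concretely, assuming 16-byte SSE2 registers and 32-byte AVX2 registers:\nfour SSE2 registers hold 16 uint32 elements per iteration, so an array of\nn*16 + 15 elements finishes with up to 15 one-by-one comparisons, while the\nsame four-register coding on AVX2 covers 32 elements per iteration and\nleaves a one-by-one tail of up to 31 -- roughly double the worst-case\nscalar work.)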
It's also\neasy to improve for SSE2/NEON for v17.\n\nAlso, by reading 4 registers per loop iteration, that's 128 bytes on\nAVX2. I'm not sure that matters, but we shouldn't assume it doesn't.\nCode I've seen elsewhere reads a fixed 64-byte block, and then uses 1,\n2, or 4 registers to handle it, depending on architecture. Whether or\nnot that's worth it in this case, this patch does mean future patches\nwill have to wonder if they have to do anything differently depending\non vector length, whereas now they don't. That's not a deal-breaker,\nbut it is a trade-off to keep in mind.\n\n\n", "msg_date": "Mon, 8 Jan 2024 14:01:39 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Sat, Jan 6, 2024 at 12:04 AM Nathan Bossart <[email protected]> wrote:\n\n> I've been thinking about the configuration option approach. ISTM that\n> would be the most feasible strategy, at least for v17. A couple things\n> come to mind:\n>\n> * This option would simply map to existing compiler flags. We already have\n> ways to provide those (-Dc_args in meson, CFLAGS in autoconf). Perhaps\n> we'd want to provide our own shorthand for certain platforms (e.g., ARM),\n> but that will still just be shorthand for compiler flags.\n>\n> * Such an option would itself generate some maintenance cost. That could\n> be worth it because it formalizes the Postgres support for those options,\n> but it's still one more thing to track.\n>\n> Another related option could be to simply document that we have support for\n> some newer instructions that can be enabled by setting the aforementioned\n> compiler flags. That's perhaps a little less user-friendly, but it'd avoid\n> the duplication and possibly reduce the maintenance cost. I also wonder if\n> it'd help prevent confusion when CFLAGS and this extra option conflict.\n\nThe last one might offer more graceful forward compatibility if the\nmultiple-binaries idea gets any traction some day, because at that\npoint the additional config options are not needed, I think.\n\nAnother consideration is which way would touch the fewest places to\nwork with Windows, which uses the spelling /arch:AVX2 etc.\n\nOne small thing I would hope for from the finial version of this is\nthe ability to inline things where we currently indirect depending on\na run-time check. That seems like \"just work\" on top of everything\nelse, and I don't think it makes a case for either of the above.\n\n\n", "msg_date": "Mon, 8 Jan 2024 16:03:50 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Mon, Jan 08, 2024 at 02:01:39PM +0700, John Naylor wrote:\n> On Thu, Nov 30, 2023 at 12:15 AM Nathan Bossart\n> <[email protected]> wrote:\n>> writers sse2 avx2 %\n>> 256 1195 1188 -1\n>> 512 928 1054 +14\n>> 1024 633 716 +13\n>> 2048 332 420 +27\n>> 4096 162 203 +25\n>> 8192 162 182 +12\n> \n> There doesn't seem to be any benefit at 256 at all. Is that expected\n> and/or fine?\n\nMy unverified assumption is that the linear searches make up much less of\nthe benchmark at these lower client counts, so any improvements we make\nhere are unlikely to show up here. 
IIRC even the hash table approach that\nwe originally explored for XidInMVCCSnapshot() didn't do much, if anything,\nfor the benchmark at lower client counts.\n\n> Here, the peak throughput seems to be around 64 writers with or\n> without the patch from a couple years ago, but the slope is shallower\n> after that. It would be good to make sure that it can't regress near\n> the peak, even with a \"long tail\" case (see next paragraph). The first\n> benchmark above starts at 256, so we can't tell where the peak is. It\n> might be worth it to also have a microbenchmark because the systemic\n> one has enough noise to obscure what's going on unless there are a\n> very large number of writers. We know what a systemic benchmark can\n> tell us on extreme workloads past the peak, and the microbenchmark\n> would tell us \"we need to see X improvement here in order to see Y\n> improvement in the system benchmark\".\n\nYes, will do.\n\n> I suspect that there could be a regression lurking for some inputs\n> that the benchmark doesn't look at: pg_lfind32() currently needs to be\n> able to read 4 vector registers worth of elements before taking the\n> fast path. There is then a tail of up to 15 elements that are now\n> checked one-by-one, but AVX2 would increase that to 31. That's getting\n> big enough to be noticeable, I suspect. It would be good to understand\n> that case (n*32 + 31), because it may also be relevant now. It's also\n> easy to improve for SSE2/NEON for v17.\n\nGood idea. If it is indeed noticeable, we might be able to \"fix\" it by\nprocessing some of the tail with shorter vectors. But that probably means\nfinding a way to support multiple vector sizes on the same build, which\nwould require some work.\n\n> Also, by reading 4 registers per loop iteration, that's 128 bytes on\n> AVX2. I'm not sure that matters, but we shouldn't assume it doesn't.\n> Code I've seen elsewhere reads a fixed 64-byte block, and then uses 1,\n> 2, or 4 registers to handle it, depending on architecture. Whether or\n> not that's worth it in this case, this patch does mean future patches\n> will have to wonder if they have to do anything differently depending\n> on vector length, whereas now they don't. That's not a deal-breaker,\n> but it is a trade-off to keep in mind.\n\nYeah. Presently, this AVX2 patch just kicks the optimization down the road\na bit for the existing use-cases, so you don't start using the vector\nregisters until there's more data to work with, which might not even be\nnoticeable. But it's conceivable that vector length could matter at some\npoint, even if it doesn't matter much now.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:37:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 9, 2024 at 12:37 AM Nathan Bossart <[email protected]> wrote:\n>\n> > I suspect that there could be a regression lurking for some inputs\n> > that the benchmark doesn't look at: pg_lfind32() currently needs to be\n> > able to read 4 vector registers worth of elements before taking the\n> > fast path. There is then a tail of up to 15 elements that are now\n> > checked one-by-one, but AVX2 would increase that to 31. That's getting\n> > big enough to be noticeable, I suspect. It would be good to understand\n> > that case (n*32 + 31), because it may also be relevant now. It's also\n> > easy to improve for SSE2/NEON for v17.\n>\n> Good idea. 
If it is indeed noticeable, we might be able to \"fix\" it by\n> processing some of the tail with shorter vectors. But that probably means\n> finding a way to support multiple vector sizes on the same build, which\n> would require some work.\n\nWhat I had in mind was an overlapping pattern I've seen in various\nplaces: do one iteration at the beginning, then subtract the\naligned-down length from the end and do all those iterations. And\none-by-one is only used if the total length is small.\n\n\n", "msg_date": "Tue, 9 Jan 2024 09:20:09 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On 29.11.23 18:15, Nathan Bossart wrote:\n> Using the same benchmark as we did for the SSE2 linear searches in\n> XidInMVCCSnapshot() (commit 37a6e5d) [1] [2], I see the following:\n> \n> writers sse2 avx2 %\n> 256 1195 1188 -1\n> 512 928 1054 +14\n> 1024 633 716 +13\n> 2048 332 420 +27\n> 4096 162 203 +25\n> 8192 162 182 +12\n\nAFAICT, your patch merely provides an alternative AVX2 implementation \nfor where currently SSE2 is supported, but it doesn't provide any new \nAPI calls or new functionality. One might naively expect that these are \njust two different ways to call the underlying primitives in the CPU, so \nthese performance improvements are surprising to me. Or do the CPUs \nactually have completely separate machinery for SSE2 and AVX2, and just \nusing the latter to do the same thing is faster?\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 15:03:39 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, 9 Jan 2024 at 16:03, Peter Eisentraut <[email protected]> wrote:\n> On 29.11.23 18:15, Nathan Bossart wrote:\n> > Using the same benchmark as we did for the SSE2 linear searches in\n> > XidInMVCCSnapshot() (commit 37a6e5d) [1] [2], I see the following:\n> >\n> > writers sse2 avx2 %\n> > 256 1195 1188 -1\n> > 512 928 1054 +14\n> > 1024 633 716 +13\n> > 2048 332 420 +27\n> > 4096 162 203 +25\n> > 8192 162 182 +12\n>\n> AFAICT, your patch merely provides an alternative AVX2 implementation\n> for where currently SSE2 is supported, but it doesn't provide any new\n> API calls or new functionality. One might naively expect that these are\n> just two different ways to call the underlying primitives in the CPU, so\n> these performance improvements are surprising to me. Or do the CPUs\n> actually have completely separate machinery for SSE2 and AVX2, and just\n> using the latter to do the same thing is faster?\n\nThe AVX2 implementation uses a wider vector register. On most current\nprocessors the throughput of the instructions in question is the same\non 256bit vectors as on 128bit vectors. Basically, the chip has AVX2\nworth of machinery and using SSE2 leaves half of it unused. Notable\nexceptions are efficiency cores on recent Intel desktop CPUs and AMD\nCPUs pre Zen 2 where AVX2 instructions are internally split up into\ntwo 128bit wide instructions.\n\nFor AVX512 the picture is much more complicated. Some instructions run\nat half rate, some at full rate, but not on all ALU ports, some\ninstructions cause aggressive clock rate reduction on some\nmicroarchitectures. 
AVX-512 adds mask registers and masked vector\ninstructions that enable quite a bit simpler code in many cases.\nInterestingly I have seen Clang make quite effective use of these\nmasked instructions even when using AVX2 intrinsics, but targeting an\nAVX-512 capable platform.\n\nThe vector width independent approach used in the patch is nice for\nsimple cases by not needing a separate implementation for each vector\nwidth. However for more complicated cases where \"horizontal\"\noperations are needed it's going to be much less useful. But these\ncases can easily just drop down to using intrinsics directly.\n\n\n", "msg_date": "Tue, 9 Jan 2024 17:25:42 +0200", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 09, 2024 at 09:20:09AM +0700, John Naylor wrote:\n> On Tue, Jan 9, 2024 at 12:37 AM Nathan Bossart <[email protected]> wrote:\n>>\n>> > I suspect that there could be a regression lurking for some inputs\n>> > that the benchmark doesn't look at: pg_lfind32() currently needs to be\n>> > able to read 4 vector registers worth of elements before taking the\n>> > fast path. There is then a tail of up to 15 elements that are now\n>> > checked one-by-one, but AVX2 would increase that to 31. That's getting\n>> > big enough to be noticeable, I suspect. It would be good to understand\n>> > that case (n*32 + 31), because it may also be relevant now. It's also\n>> > easy to improve for SSE2/NEON for v17.\n>>\n>> Good idea. If it is indeed noticeable, we might be able to \"fix\" it by\n>> processing some of the tail with shorter vectors. But that probably means\n>> finding a way to support multiple vector sizes on the same build, which\n>> would require some work.\n> \n> What I had in mind was an overlapping pattern I've seen in various\n> places: do one iteration at the beginning, then subtract the\n> aligned-down length from the end and do all those iterations. And\n> one-by-one is only used if the total length is small.\n\nSorry, I'm not sure I understood this. Do you mean processing the first\nseveral elements individually or with SSE2 until the number of remaining\nelements can be processed with just the AVX2 instructions (a bit like how\npg_comp_crc32c_armv8() is structured for memory alignment)?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 10:20:09 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, 9 Jan 2024 at 18:20, Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Jan 09, 2024 at 09:20:09AM +0700, John Naylor wrote:\n> > On Tue, Jan 9, 2024 at 12:37 AM Nathan Bossart <[email protected]> wrote:\n> >>\n> >> > I suspect that there could be a regression lurking for some inputs\n> >> > that the benchmark doesn't look at: pg_lfind32() currently needs to be\n> >> > able to read 4 vector registers worth of elements before taking the\n> >> > fast path. There is then a tail of up to 15 elements that are now\n> >> > checked one-by-one, but AVX2 would increase that to 31. That's getting\n> >> > big enough to be noticeable, I suspect. It would be good to understand\n> >> > that case (n*32 + 31), because it may also be relevant now. It's also\n> >> > easy to improve for SSE2/NEON for v17.\n> >>\n> >> Good idea. If it is indeed noticeable, we might be able to \"fix\" it by\n> >> processing some of the tail with shorter vectors. 
But that probably means\n> >> finding a way to support multiple vector sizes on the same build, which\n> >> would require some work.\n> >\n> > What I had in mind was an overlapping pattern I've seen in various\n> > places: do one iteration at the beginning, then subtract the\n> > aligned-down length from the end and do all those iterations. And\n> > one-by-one is only used if the total length is small.\n>\n> Sorry, I'm not sure I understood this. Do you mean processing the first\n> several elements individually or with SSE2 until the number of remaining\n> elements can be processed with just the AVX2 instructions (a bit like how\n> pg_comp_crc32c_armv8() is structured for memory alignment)?\n\nFor some operations (min, max, = any) processing the same elements\nmultiple times doesn't change the result. So the vectors for first\nand/or last iterations can overlap with the main loop. In other cases\nit's possible to mask out the invalid elements and replace them with\nzeroes. Something along the lines of:\n\nstatic inline Vector8\nvector8_mask_right(int num_valid)\n{\n __m256i seq = _mm256_set_epi8(31, 30, 29, 28, 27, 26, 25, 24,\n 23, 22, 21, 20, 19, 18, 17, 16,\n 15, 14, 13, 12, 11, 10, 9, 8,\n 7, 6, 5, 4, 3, 2, 1, 0);\n return _mm256_cmpgt_epi8(_mm256_set1_epi8(num_valid), seq);\n}\n\n/* final incomplete iteration */\nVector8 mask = vector8_mask_right(end - cur);\nfinal_vec = vector8_and((Vector8*) (end - sizeof(Vector8), mask);\naccum = vector8_add(accum, final_vec);\n\nIt helps that on any halfway recent x86 unaligned loads only have a\nminor performance penalty and only when straddling cache line\nboundaries. Not sure what the state on ARM is. If we don't care about\nunaligned loads then we only need to care about the load not crossing\npage boundaries which could cause segfaults. Though I'm sure memory\nsanitizer tools will have plenty to complain about around such hacks.\n\n\n", "msg_date": "Tue, 9 Jan 2024 23:21:21 +0200", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Jan 9, 2024 at 11:20 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Jan 09, 2024 at 09:20:09AM +0700, John Naylor wrote:\n> > On Tue, Jan 9, 2024 at 12:37 AM Nathan Bossart <[email protected]> wrote:\n> >>\n> >> > I suspect that there could be a regression lurking for some inputs\n> >> > that the benchmark doesn't look at: pg_lfind32() currently needs to be\n> >> > able to read 4 vector registers worth of elements before taking the\n> >> > fast path. There is then a tail of up to 15 elements that are now\n> >> > checked one-by-one, but AVX2 would increase that to 31. That's getting\n> >> > big enough to be noticeable, I suspect. It would be good to understand\n> >> > that case (n*32 + 31), because it may also be relevant now. It's also\n> >> > easy to improve for SSE2/NEON for v17.\n> >>\n> >> Good idea. If it is indeed noticeable, we might be able to \"fix\" it by\n> >> processing some of the tail with shorter vectors. But that probably means\n> >> finding a way to support multiple vector sizes on the same build, which\n> >> would require some work.\n> >\n> > What I had in mind was an overlapping pattern I've seen in various\n> > places: do one iteration at the beginning, then subtract the\n> > aligned-down length from the end and do all those iterations. And\n> > one-by-one is only used if the total length is small.\n>\n> Sorry, I'm not sure I understood this. 
Do you mean processing the first\n> several elements individually or with SSE2 until the number of remaining\n> elements can be processed with just the AVX2 instructions (a bit like how\n> pg_comp_crc32c_armv8() is structured for memory alignment)?\n\nIf we have say 25 elements, I mean (for SSE2) check the first 16, then\nthe last 16. Some will be checked twice, but that's okay.\n\n\n", "msg_date": "Wed, 10 Jan 2024 09:06:08 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Jan 10, 2024 at 09:06:08AM +0700, John Naylor wrote:\n> If we have say 25 elements, I mean (for SSE2) check the first 16, then\n> the last 16. Some will be checked twice, but that's okay.\n\nI finally got around to trying this. 0001 adds this overlapping logic.\n0002 is a rebased version of the AVX2 patch (it needed some updates after\ncommit 9f225e9). And 0003 is a benchmark for test_lfind32(). It runs\npg_lfind32() on an array of the given size 100M times.\n\nI've also attached the results of running this benchmark on my machine at\nHEAD, after applying 0001, and after applying both 0001 and 0002. 0001\nappears to work pretty well. When there is a small \"tail,\" it regresses a\nsmall amount, but overall, it seems to improve more cases than it harms.\n0002 does regress searches on smaller arrays quite a bit, since it\npostpones the SIMD optimizations until the arrays are longer. It might be\npossible to mitigate by using 2 registers when the \"tail\" is long enough,\nbut I have yet to try that.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 15 Mar 2024 12:41:49 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Fri, Mar 15, 2024 at 12:41:49PM -0500, Nathan Bossart wrote:\n> I've also attached the results of running this benchmark on my machine at\n> HEAD, after applying 0001, and after applying both 0001 and 0002. 0001\n> appears to work pretty well. When there is a small \"tail,\" it regresses a\n> small amount, but overall, it seems to improve more cases than it harms.\n> 0002 does regress searches on smaller arrays quite a bit, since it\n> postpones the SIMD optimizations until the arrays are longer. It might be\n> possible to mitigate by using 2 registers when the \"tail\" is long enough,\n> but I have yet to try that.\n\nThe attached 0003 is a sketch of what such mitigation might look like. It\nappears to help with the regressions nicely. I omitted the benchmarking\npatch in v3 to appease cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 15 Mar 2024 14:40:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Sat, Mar 16, 2024 at 2:40 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Mar 15, 2024 at 12:41:49PM -0500, Nathan Bossart wrote:\n> > I've also attached the results of running this benchmark on my machine at\n> > HEAD, after applying 0001, and after applying both 0001 and 0002. 0001\n> > appears to work pretty well. When there is a small \"tail,\" it regresses a\n> > small amount, but overall, it seems to improve more cases than it harms.\n> > 0002 does regress searches on smaller arrays quite a bit, since it\n> > postpones the SIMD optimizations until the arrays are longer. 
It might be\n> > possible to mitigate by using 2 registers when the \"tail\" is long enough,\n> > but I have yet to try that.\n>\n> The attached 0003 is a sketch of what such mitigation might look like. It\n> appears to help with the regressions nicely. I omitted the benchmarking\n> patch in v3 to appease cfbot.\n\nI haven't looked at the patches, but the graphs look good.\n\n\n", "msg_date": "Sun, 17 Mar 2024 09:47:33 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Sun, Mar 17, 2024 at 09:47:33AM +0700, John Naylor wrote:\n> I haven't looked at the patches, but the graphs look good.\n\nI spent some more time on these patches. Specifically, I reordered them to\ndemonstrate the effects on systems without AVX2 support. I've also added a\nshortcut to jump to the one-by-one approach when there aren't many\nelements, as the overhead becomes quite noticeable otherwise. Finally, I\nran the same benchmarks again on x86 and Arm out to 128 elements.\n\nOverall, I think 0001 and 0002 are in decent shape, although I'm wondering\nif it's possible to improve the style a bit. 0003 at least needs a big\ncomment in simd.h, and it might need a note in the documentation, too. If\nthe approach in this patch set seems reasonable, I'll spend some time on\nthat.\n\nBTW I did try to add some other optimizations, such as processing remaining\nelements with only one vector and trying to use the overlapping strategy\nwith more registers if we know there are relatively many remaining\nelements. These other approaches all added a lot of complexity and began\nhurting performance, and I've probably already spent way too much time\noptimizing a linear search, so this is where I've decided to stop.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 18 Mar 2024 21:03:41 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 19, 2024 at 9:03 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Sun, Mar 17, 2024 at 09:47:33AM +0700, John Naylor wrote:\n> > I haven't looked at the patches, but the graphs look good.\n>\n> I spent some more time on these patches. Specifically, I reordered them to\n> demonstrate the effects on systems without AVX2 support. I've also added a\n> shortcut to jump to the one-by-one approach when there aren't many\n> elements, as the overhead becomes quite noticeable otherwise. Finally, I\n> ran the same benchmarks again on x86 and Arm out to 128 elements.\n>\n> Overall, I think 0001 and 0002 are in decent shape, although I'm wondering\n> if it's possible to improve the style a bit.\n\nI took a brief look, and 0001 isn't quite what I had in mind. I can't\nquite tell what it's doing with the additional branches and \"goto\nretry\", but I meant something pretty simple:\n\n- if short, do one element at a time and return\n- if long, do one block unconditionally, then round the start pointer\nup so that \"end - start\" is an exact multiple of blocks, and loop over\nthem\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:03:36 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 19, 2024 at 10:03:36AM +0700, John Naylor wrote:\n> I took a brief look, and 0001 isn't quite what I had in mind. 
I can't\n> quite tell what it's doing with the additional branches and \"goto\n> retry\", but I meant something pretty simple:\n\nDo you mean 0002? 0001 just adds a 2-register loop for remaining elements\nonce we've exhausted what can be processed with the 4-register loop.\n\n> - if short, do one element at a time and return\n\n0002 does this.\n\n> - if long, do one block unconditionally, then round the start pointer\n> up so that \"end - start\" is an exact multiple of blocks, and loop over\n> them\n\n0002 does the opposite of this. That is, after we've completed as many\nblocks as possible, we move the iterator variable back to \"end -\nblock_size\" and do one final iteration to cover all the remaining elements.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 22:16:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 19, 2024 at 10:16 AM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Tue, Mar 19, 2024 at 10:03:36AM +0700, John Naylor wrote:\n> > I took a brief look, and 0001 isn't quite what I had in mind. I can't\n> > quite tell what it's doing with the additional branches and \"goto\n> > retry\", but I meant something pretty simple:\n>\n> Do you mean 0002? 0001 just adds a 2-register loop for remaining elements\n> once we've exhausted what can be processed with the 4-register loop.\n\nSorry, I was looking at v2 at the time.\n\n> > - if short, do one element at a time and return\n>\n> 0002 does this.\n\nThat part looks fine.\n\n> > - if long, do one block unconditionally, then round the start pointer\n> > up so that \"end - start\" is an exact multiple of blocks, and loop over\n> > them\n>\n> 0002 does the opposite of this. That is, after we've completed as many\n> blocks as possible, we move the iterator variable back to \"end -\n> block_size\" and do one final iteration to cover all the remaining elements.\n\nSounds similar in principle, but it looks really complicated. I don't\nthink the additional loops and branches are a good way to go, either\nfor readability or for branch prediction. My sketch has one branch for\nwhich loop to do, and then performs only one loop. Let's do the\nsimplest thing that could work. (I think we might need a helper\nfunction to do the block, but the rest should be easy)\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:53:04 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 19, 2024 at 04:53:04PM +0700, John Naylor wrote:\n> On Tue, Mar 19, 2024 at 10:16 AM Nathan Bossart\n> <[email protected]> wrote:\n>> 0002 does the opposite of this. That is, after we've completed as many\n>> blocks as possible, we move the iterator variable back to \"end -\n>> block_size\" and do one final iteration to cover all the remaining elements.\n> \n> Sounds similar in principle, but it looks really complicated. I don't\n> think the additional loops and branches are a good way to go, either\n> for readability or for branch prediction. My sketch has one branch for\n> which loop to do, and then performs only one loop. Let's do the\n> simplest thing that could work. 
(I think we might need a helper\n> function to do the block, but the rest should be easy)\n\nI tried to trim some of the branches, and came up with the attached patch.\nI don't think this is exactly what you were suggesting, but I think it's\nrelatively close. My testing showed decent benefits from using 2 vectors\nwhen there aren't enough elements for 4, so I've tried to keep that part\nintact. This changes pg_lfind32() to something like:\n\n\tif not many elements\n\t\tprocess one by one\n\n\twhile enough elements for 4 registers remain\n\t\tprocess with 4 registers\n\n\tif no elements remain\n\t\treturn false\n\n\tif more than 2-registers-worth of elements remain\n\t\tdo one iteration with 2 registers\n\n\tdo another iteration on last 2-registers-worth of elements\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 19 Mar 2024 11:30:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 19, 2024 at 11:30 PM Nathan Bossart\n<[email protected]> wrote:\n> > Sounds similar in principle, but it looks really complicated. I don't\n> > think the additional loops and branches are a good way to go, either\n> > for readability or for branch prediction. My sketch has one branch for\n> > which loop to do, and then performs only one loop. Let's do the\n> > simplest thing that could work. (I think we might need a helper\n> > function to do the block, but the rest should be easy)\n>\n> I tried to trim some of the branches, and came up with the attached patch.\n> I don't think this is exactly what you were suggesting, but I think it's\n> relatively close. My testing showed decent benefits from using 2 vectors\n> when there aren't enough elements for 4, so I've tried to keep that part\n> intact.\n\nI would caution against that if the benchmark is repeatedly running\nagainst a static number of elements, because the branch predictor will\nbe right all the time (except maybe when it exits a loop, not sure).\nWe probably don't need to go to the trouble to construct a benchmark\nwith some added randomness, but we have be careful not to overfit what\nthe test is actually measuring.\n\n\n", "msg_date": "Wed, 20 Mar 2024 13:57:54 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Mar 20, 2024 at 01:57:54PM +0700, John Naylor wrote:\n> On Tue, Mar 19, 2024 at 11:30 PM Nathan Bossart\n> <[email protected]> wrote:\n>> I tried to trim some of the branches, and came up with the attached patch.\n>> I don't think this is exactly what you were suggesting, but I think it's\n>> relatively close. My testing showed decent benefits from using 2 vectors\n>> when there aren't enough elements for 4, so I've tried to keep that part\n>> intact.\n> \n> I would caution against that if the benchmark is repeatedly running\n> against a static number of elements, because the branch predictor will\n> be right all the time (except maybe when it exits a loop, not sure).\n> We probably don't need to go to the trouble to construct a benchmark\n> with some added randomness, but we have be careful not to overfit what\n> the test is actually measuring.\n\nI don't mind removing the 2-register stuff if that's what you think we\nshould do. 
I'm cautiously optimistic that it'd help more than the extra\nbranch prediction might hurt, and it'd at least help avoid regressing the\nlower end for the larger AVX2 registers, but I probably won't be able to\nprove that without constructing another benchmark. And TBH I'm not sure\nit'll significantly impact any real-world workload, anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 09:31:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Mar 20, 2024 at 09:31:16AM -0500, Nathan Bossart wrote:\n> On Wed, Mar 20, 2024 at 01:57:54PM +0700, John Naylor wrote:\n>> On Tue, Mar 19, 2024 at 11:30 PM Nathan Bossart\n>> <[email protected]> wrote:\n>>> I tried to trim some of the branches, and came up with the attached patch.\n>>> I don't think this is exactly what you were suggesting, but I think it's\n>>> relatively close. My testing showed decent benefits from using 2 vectors\n>>> when there aren't enough elements for 4, so I've tried to keep that part\n>>> intact.\n>> \n>> I would caution against that if the benchmark is repeatedly running\n>> against a static number of elements, because the branch predictor will\n>> be right all the time (except maybe when it exits a loop, not sure).\n>> We probably don't need to go to the trouble to construct a benchmark\n>> with some added randomness, but we have be careful not to overfit what\n>> the test is actually measuring.\n> \n> I don't mind removing the 2-register stuff if that's what you think we\n> should do. I'm cautiously optimistic that it'd help more than the extra\n> branch prediction might hurt, and it'd at least help avoid regressing the\n> lower end for the larger AVX2 registers, but I probably won't be able to\n> prove that without constructing another benchmark. And TBH I'm not sure\n> it'll significantly impact any real-world workload, anyway.\n\nHere's a new version of the patch set with the 2-register stuff removed,\nplus a fresh run of the benchmark. The weird spike for AVX2 is what led me\ndown the 2-register path earlier.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 20 Mar 2024 14:55:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Mar 21, 2024 at 2:55 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 09:31:16AM -0500, Nathan Bossart wrote:\n\n> > I don't mind removing the 2-register stuff if that's what you think we\n> > should do. I'm cautiously optimistic that it'd help more than the extra\n> > branch prediction might hurt, and it'd at least help avoid regressing the\n> > lower end for the larger AVX2 registers, but I probably won't be able to\n> > prove that without constructing another benchmark. And TBH I'm not sure\n> > it'll significantly impact any real-world workload, anyway.\n>\n> Here's a new version of the patch set with the 2-register stuff removed,\n\nI'm much happier about v5-0001. 
With a small tweak it would match what\nI had in mind:\n\n+ if (nelem < nelem_per_iteration)\n+ goto one_by_one;\n\nIf this were \"<=\" then the for long arrays we could assume there is\nalways more than one block, and wouldn't need to check if any elements\nremain -- first block, then a single loop and it's done.\n\nThe loop could also then be a \"do while\" since it doesn't have to\ncheck the exit condition up front.\n\n> plus a fresh run of the benchmark. The weird spike for AVX2 is what led me\n> down the 2-register path earlier.\n\nYes, that spike is weird, because it seems super-linear. However, the\nmore interesting question for me is: AVX2 isn't really buying much for\nthe numbers covered in this test. Between 32 and 48 elements, and\nbetween 64 and 80, it's indistinguishable from SSE2. The jumps to the\nnext shelf are postponed, but the jumps are just as high. From earlier\nsystem benchmarks, I recall it eventually wins out with hundreds of\nelements, right? Is that still true?\n\nFurther, now that the algorithm is more SIMD-appropriate, I wonder\nwhat doing 4 registers at a time is actually buying us for either SSE2\nor AVX2. It might just be a matter of scale, but that would be good to\nunderstand.\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:30:30 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:30:30AM +0700, John Naylor wrote:\n> I'm much happier about v5-0001. With a small tweak it would match what\n> I had in mind:\n> \n> + if (nelem < nelem_per_iteration)\n> + goto one_by_one;\n> \n> If this were \"<=\" then the for long arrays we could assume there is\n> always more than one block, and wouldn't need to check if any elements\n> remain -- first block, then a single loop and it's done.\n> \n> The loop could also then be a \"do while\" since it doesn't have to\n> check the exit condition up front.\n\nGood idea. That causes us to re-check all of the tail elements when the\nnumber of elements is evenly divisible by nelem_per_iteration, but that\nmight be worth the trade-off.\n\n> Yes, that spike is weird, because it seems super-linear. However, the\n> more interesting question for me is: AVX2 isn't really buying much for\n> the numbers covered in this test. Between 32 and 48 elements, and\n> between 64 and 80, it's indistinguishable from SSE2. The jumps to the\n> next shelf are postponed, but the jumps are just as high. From earlier\n> system benchmarks, I recall it eventually wins out with hundreds of\n> elements, right? Is that still true?\n\nIt does still eventually win, although not nearly to the same extent as\nbefore. I extended the benchmark a bit to show this. I wouldn't be\ndevastated if we only got 0001 committed for v17, given these results.\n\n> Further, now that the algorithm is more SIMD-appropriate, I wonder\n> what doing 4 registers at a time is actually buying us for either SSE2\n> or AVX2. It might just be a matter of scale, but that would be good to\n> understand.\n\nI'll follow up with these numbers shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Mar 2024 12:09:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Mar 21, 2024 at 12:09:44PM -0500, Nathan Bossart wrote:\n> It does still eventually win, although not nearly to the same extent as\n> before. 
I extended the benchmark a bit to show this. I wouldn't be\n> devastated if we only got 0001 committed for v17, given these results.\n\n(In case it isn't clear from the graph, after 128 elements, I only tested\nat 200, 300, 400, etc. elements.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 12:12:22 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Thu, Mar 21, 2024 at 12:09:44PM -0500, Nathan Bossart wrote:\n> On Thu, Mar 21, 2024 at 11:30:30AM +0700, John Naylor wrote:\n>> Further, now that the algorithm is more SIMD-appropriate, I wonder\n>> what doing 4 registers at a time is actually buying us for either SSE2\n>> or AVX2. It might just be a matter of scale, but that would be good to\n>> understand.\n> \n> I'll follow up with these numbers shortly.\n\nIt looks like the 4-register code still outperforms the 2-register code,\nexcept for a handful of cases where there aren't many elements.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Mar 2024 13:38:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "Here's a new version of 0001 with some added #ifdefs that cfbot revealed\nwere missing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 24 Mar 2024 15:53:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Sun, Mar 24, 2024 at 03:53:17PM -0500, Nathan Bossart wrote:\n> Here's a new version of 0001 with some added #ifdefs that cfbot revealed\n> were missing.\n\nSorry for the noise. cfbot revealed another silly mistake (forgetting to\nreset the \"i\" variable in the assertion path). That should be fixed in v8.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 24 Mar 2024 17:09:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Fri, Mar 22, 2024 at 12:09 AM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 11:30:30AM +0700, John Naylor wrote:\n\n> > If this were \"<=\" then the for long arrays we could assume there is\n> > always more than one block, and wouldn't need to check if any elements\n> > remain -- first block, then a single loop and it's done.\n> >\n> > The loop could also then be a \"do while\" since it doesn't have to\n> > check the exit condition up front.\n>\n> Good idea. That causes us to re-check all of the tail elements when the\n> number of elements is evenly divisible by nelem_per_iteration, but that\n> might be worth the trade-off.\n\nYeah, if there's no easy way to avoid that it's probably fine. I\nwonder if we can subtract one first to force even multiples to round\ndown, although I admit I haven't thought through the consequences of\nthat.\n\n> [v8]\n\nSeems pretty good. It'd be good to see the results of 2- vs.\n4-register before committing, because that might lead to some\nrestructuring, but maybe it won't, and v8 is already an improvement\nover HEAD.\n\n/* Process the remaining elements one at a time. 
*/\n\nThis now does all of them if that path is taken, so \"remaining\" can be removed.\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:03:27 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Mon, Mar 25, 2024 at 10:03:27AM +0700, John Naylor wrote:\n> Seems pretty good. It'd be good to see the results of 2- vs.\n> 4-register before committing, because that might lead to some\n> restructuring, but maybe it won't, and v8 is already an improvement\n> over HEAD.\n\nI tested this the other day [0] (only for x86). The results seemed to\nindicate that the 4-register approach was still quite a bit better.\n\n> /* Process the remaining elements one at a time. */\n> \n> This now does all of them if that path is taken, so \"remaining\" can be removed.\n\nRight, will do.\n\n[0] https://postgr.es/m/20240321183823.GA1800896%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:21:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "Here is what I have staged for commit. One notable difference in this\nversion of the patch is that I've changed\n\n +\tif (nelem <= nelem_per_iteration)\n +\t\tgoto one_by_one;\n\nto\n\n +\tif (nelem < nelem_per_iteration)\n +\t\tgoto one_by_one;\n\nI realized that there's no reason to jump to the one-by-one linear search\ncode when nelem == nelem_per_iteration, as the worst thing that will happen\nis that we'll process all the elements twice if the value isn't present in\nthe array. My benchmark that I've been using also shows a significant\nspeedup for this case with this change (on the order of 75%), which I\nimagine might be due to a combination of branch prediction, caching, fewer\ninstructions, etc.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 25 Mar 2024 16:37:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "I've committed v9, and I've marked the commitfest entry as \"Committed,\"\nalthough we may want to revisit AVX2, etc. in the future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 14:09:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I've committed v9, and I've marked the commitfest entry as \"Committed,\"\n> although we may want to revisit AVX2, etc. 
in the future.\n\nA significant fraction of the buildfarm is issuing warnings about\nthis.\n\n adder | 2024-03-26 21:04:33 | ../pgsql/src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n buri | 2024-03-26 21:16:09 | ../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n cavefish | 2024-03-26 22:53:23 | ../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n cisticola | 2024-03-26 22:20:07 | ../../../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n lancehead | 2024-03-26 21:48:17 | ../../src/include/port/pg_lfind.h:199:1: warning: unused label 'one_by_one' [-Wunused-label]\n nicator | 2024-03-26 21:08:14 | ../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n nuthatch | 2024-03-26 22:00:04 | ../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n rinkhals | 2024-03-26 19:51:32 | ../../src/include/port/pg_lfind.h:199:1: warning: unused label 'one_by_one' [-Wunused-label]\n siskin | 2024-03-26 19:59:29 | ../../src/include/port/pg_lfind.h:199:1: warning: label 'one_by_one' defined but not used [-Wunused-label]\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 19:28:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 26, 2024 at 07:28:24PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I've committed v9, and I've marked the commitfest entry as \"Committed,\"\n>> although we may want to revisit AVX2, etc. in the future.\n> \n> A significant fraction of the buildfarm is issuing warnings about\n> this.\n\nThanks for the heads-up. Will fix.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 18:55:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 26, 2024 at 06:55:54PM -0500, Nathan Bossart wrote:\n> On Tue, Mar 26, 2024 at 07:28:24PM -0400, Tom Lane wrote:\n>> A significant fraction of the buildfarm is issuing warnings about\n>> this.\n> \n> Thanks for the heads-up. Will fix.\n\nDone. I'll keep an eye on the farm.\n\nI just did the minimal fix for now, i.e., I moved the new label into the\nSIMD section of the function. I think it would be better stylistically to\nmove the one-by-one logic to an inline helper function, but I didn't do\nthat just in case it might negatively impact performance. I'll look into\nthis and will follow up with another patch if it looks good.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 20:36:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Tue, Mar 26, 2024 at 06:55:54PM -0500, Nathan Bossart wrote:\n>> On Tue, Mar 26, 2024 at 07:28:24PM -0400, Tom Lane wrote:\n>>> A significant fraction of the buildfarm is issuing warnings about\n>>> this.\n\n> Done. I'll keep an eye on the farm.\n\nThanks.\n\n> I just did the minimal fix for now, i.e., I moved the new label into the\n> SIMD section of the function. 
I think it would be better stylistically to\n> move the one-by-one logic to an inline helper function, but I didn't do\n> that just in case it might negatively impact performance. I'll look into\n> this and will follow up with another patch if it looks good.\n\nSounds like a plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Mar 2024 21:48:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Tue, Mar 26, 2024 at 09:48:57PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I just did the minimal fix for now, i.e., I moved the new label into the\n>> SIMD section of the function. I think it would be better stylistically to\n>> move the one-by-one logic to an inline helper function, but I didn't do\n>> that just in case it might negatively impact performance. I'll look into\n>> this and will follow up with another patch if it looks good.\n> \n> Sounds like a plan.\n\nHere's what I had in mind. My usual benchmark seems to indicate that this\nshouldn't impact performance.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 13:57:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Here's what I had in mind. My usual benchmark seems to indicate that this\n> shouldn't impact performance.\n\nShouldn't \"i\" be declared uint32, since nelem is?\n\nBTW, I wonder why these functions don't declare their array\narguments like \"const uint32 *base\".\n\nLGTM otherwise, and I like the fact that the #if structure\ngets a lot less messy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Mar 2024 17:10:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Mar 27, 2024 at 05:10:13PM -0400, Tom Lane wrote:\n> Shouldn't \"i\" be declared uint32, since nelem is?\n\nYes, that's a mistake.\n\n> BTW, I wonder why these functions don't declare their array\n> arguments like \"const uint32 *base\".\n\nThey probably should. I don't see any reason not to, and my compiler\ndoesn't complain, either.\n \n> LGTM otherwise, and I like the fact that the #if structure\n> gets a lot less messy.\n\nThanks for reviewing. I've attached a v2 that I intend to commit when I\nget a chance.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 16:37:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" }, { "msg_contents": "On Wed, Mar 27, 2024 at 04:37:35PM -0500, Nathan Bossart wrote:\n> On Wed, Mar 27, 2024 at 05:10:13PM -0400, Tom Lane wrote:\n>> LGTM otherwise, and I like the fact that the #if structure\n>> gets a lot less messy.\n> \n> Thanks for reviewing. I've attached a v2 that I intend to commit when I\n> get a chance.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 20:32:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: add AVX2 support to simd.h" } ]
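For illustration, the loop shape that the thread above converged on (short
arrays handled one-by-one, whole blocks in a do/while, and a final
overlapping block) can be sketched as below. This is only a simplified
sketch, not the committed pg_lfind.h code: BLOCK_SIZE stands in for "four
vector registers' worth of elements" and block_has_key() for the SIMD
comparison done with the simd.h primitives.

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 16			/* e.g. four 16-byte SSE2 registers of uint32 */

/* stand-in for the vectorized comparison of one block against key */
static bool
block_has_key(const uint32_t *base, uint32_t key)
{
	for (int i = 0; i < BLOCK_SIZE; i++)
	{
		if (base[i] == key)
			return true;
	}
	return false;
}

static bool
lfind32_sketch(uint32_t key, const uint32_t *base, uint32_t nelem)
{
	uint32_t	i = 0;

	/* short arrays: plain scalar search */
	if (nelem < BLOCK_SIZE)
	{
		for (; i < nelem; i++)
		{
			if (base[i] == key)
				return true;
		}
		return false;
	}

	/* whole blocks; nelem >= BLOCK_SIZE guarantees at least one iteration */
	do
	{
		if (block_has_key(&base[i], key))
			return true;
		i += BLOCK_SIZE;
	} while (i + BLOCK_SIZE <= nelem);

	/*
	 * Final, possibly overlapping block covering the last BLOCK_SIZE
	 * elements.  Some elements may be checked twice, which is harmless for
	 * a membership test; when nelem is an exact multiple of BLOCK_SIZE this
	 * simply re-checks the last block, as discussed above.
	 */
	return block_has_key(&base[nelem - BLOCK_SIZE], key);
}
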
[ { "msg_contents": "I enabled CI on my personal Postgres fork. I then tried to open a PR \nagainst my fork, and since GitHub defaults to creating PRs against \nupstream, I accidentally opened a PR against the Postgres mirror, which \nthe postgres-mirror bot then closed, which is good. Stupid me.\n\nWhat the bot didn't do however was cancel the Cirrus CI build that arose \nfrom my immediately closed PR. Here[0] is the current run. It seems like \nyou could very easily waste all the CI credits by creating a whole bunch \nof PRs against the mirror. \n\nIf I am just wasting my own credits, please ignore :).\n\n[0]: https://cirrus-ci.com/build/6235510532734976\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 29 Nov 2023 11:30:25 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Whose Cirrus CI credits are used when making a PR to the GitHub\n mirror?" }, { "msg_contents": "Hi,\n\nOn 2023-11-29 11:30:25 -0600, Tristan Partin wrote:\n> I enabled CI on my personal Postgres fork. I then tried to open a PR against\n> my fork, and since GitHub defaults to creating PRs against upstream, I\n> accidentally opened a PR against the Postgres mirror, which the\n> postgres-mirror bot then closed, which is good. Stupid me.\n\n> What the bot didn't do however was cancel the Cirrus CI build that arose\n> from my immediately closed PR. Here[0] is the current run. It seems like you\n> could very easily waste all the CI credits by creating a whole bunch of PRs\n> against the mirror.\n> \n> If I am just wasting my own credits, please ignore :).\n\nIt's currently using custom compute resources provided by google (formerly\nprovided by me), the same as cfbot. There is a hard limits to the number of\nconcurrent tasks and warnings when getting closer to those. So I am currently\nnot too worried about this threat.\n\nI do wish github would allow disabling PRs...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Nov 2023 09:41:11 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Whose Cirrus CI credits are used when making a PR to the GitHub\n mirror?" } ]
[ { "msg_contents": "Hi,\n\nThe just released meson 1.3 strongly deprecated a hack we were using, emitting\na noisy warning (the hack basically depended on an implementation detail to\nwork). Turns out there has been a better way available for a while, I just\nhadn't found it. 1.4 added a more convenient approach, but we can't rely on\nthat.\n\nEverything continues to work, but the warning is annoying.\n\nThe warning:\n\nMessage: checking for file conflicts between source and build directory\n../home/andres/src/postgresql/meson.build:2972: DEPRECATION: Project uses feature that was always broken, and is now deprecated since '1.3.0': str.format: Value other than strings, integers, bools, options, dictionaries and lists thereof..\n[...]\nWARNING: Broken features used:\n * 1.3.0: {'str.format: Value other than strings, integers, bools, options, dictionaries and lists thereof.'}\n\nI plan to apply this soon, unless I hear some opposition / better ideas / ....\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 29 Nov 2023 10:50:53 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "meson: Stop using deprecated way getting path of files" }, { "msg_contents": "This looks good to me. What is our limiting factor on bumping the \nminimum Meson version?\n\nWhile we are on the topic of Meson, it would be great if you could take \na look at this thread[0], where I am trying to compile Postgres with \n-fsanitize=address,undefined (-Db_sanitize=address,undefined).\n\n[0]: https://www.postgresql.org/message-id/CWTM35CAUKRT.1733OSMXUZW7%40neon.tech\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 29 Nov 2023 13:11:23 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "Hi,\n\nOn 2023-11-29 13:11:23 -0600, Tristan Partin wrote:\n> This looks good to me.\n\nCool.\n\n\n> What is our limiting factor on bumping the minimum Meson version?\n\nOld distro versions, particularly ones where the distro just has an older\npython. It's one thing to require installing meson but quite another to also\nrequire building python. There's some other ongoing discussion about\nestablishing the minimum baseline in a somewhat more, uh, formalized way:\nhttps://postgr.es/m/CA%2BhUKGLhNs5geZaVNj2EJ79Dx9W8fyWUU3HxcpZy55sMGcY%3DiA%40mail.gmail.com\n\n> While we are on the topic of Meson, it would be great if you could take a\n> look at this thread[0], where I am trying to compile Postgres with\n> -fsanitize=address,undefined (-Db_sanitize=address,undefined).\n\nDone. Not sure it helps you much though :)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 29 Nov 2023 11:42:24 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Wed Nov 29, 2023 at 1:42 PM CST, Andres Freund wrote:\n> Hi,\n>\n> On 2023-11-29 13:11:23 -0600, Tristan Partin wrote:\n> > What is our limiting factor on bumping the minimum Meson version?\n>\n> Old distro versions, particularly ones where the distro just has an older\n> python. It's one thing to require installing meson but quite another to also\n> require building python. 
There's some other ongoing discussion about\n> establishing the minimum baseline in a somewhat more, uh, formalized way:\n> https://postgr.es/m/CA%2BhUKGLhNs5geZaVNj2EJ79Dx9W8fyWUU3HxcpZy55sMGcY%3DiA%40mail.gmail.com\n\nI'll take a look there. According to Meson, the following versions had \nPython version bumps:\n\n0.61.5: 3.6\n0.56.2: 3.5\n0.45.1: 3.4\n\nTaking a look at pkgs.org, Debian 10, Ubuntu 20.04, and Oracle Linux \n7 (a RHEL re-spin), and CentOS 7, all have >= Python 3.6.8. Granted, \nthis isn't the whole picture of what Postgres supports from version 16+. \nTo put things in perspective, Python 3.6 was released on December 23, \n2016, which is coming up on 7 years. Python 3.6 reached end of life on \nthe same date in 2021.\n\nIs there a complete list somewhere that talks about what platforms each \nversion of Postgres supports?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 30 Nov 2023 15:00:22 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "\nOn 2023-11-30 Th 16:00, Tristan Partin wrote:\n> On Wed Nov 29, 2023 at 1:42 PM CST, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-11-29 13:11:23 -0600, Tristan Partin wrote:\n>> > What is our limiting factor on bumping the minimum Meson version?\n>>\n>> Old distro versions, particularly ones where the distro just has an \n>> older\n>> python. It's one thing to require installing meson but quite another \n>> to also\n>> require building python. There's some other ongoing discussion about\n>> establishing the minimum baseline in a somewhat more, uh, formalized \n>> way:\n>> https://postgr.es/m/CA%2BhUKGLhNs5geZaVNj2EJ79Dx9W8fyWUU3HxcpZy55sMGcY%3DiA%40mail.gmail.com \n>>\n>\n> I'll take a look there. According to Meson, the following versions had \n> Python version bumps:\n>\n> 0.61.5: 3.6\n> 0.56.2: 3.5\n> 0.45.1: 3.4\n>\n> Taking a look at pkgs.org, Debian 10, Ubuntu 20.04, and Oracle Linux 7 \n> (a RHEL re-spin), and CentOS 7, all have >= Python 3.6.8. Granted, \n> this isn't the whole picture of what Postgres supports from version \n> 16+. To put things in perspective, Python 3.6 was released on December \n> 23, 2016, which is coming up on 7 years. Python 3.6 reached end of \n> life on the same date in 2021.\n>\n> Is there a complete list somewhere that talks about what platforms \n> each version of Postgres supports?\n\n\nYou can look at animals in the buildfarm. 
For meson only release 16 and \nup matter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:46:06 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On 2023-11-29 10:50:53 -0800, Andres Freund wrote:\n> I plan to apply this soon, unless I hear some opposition / better ideas / ....\n\nPushed.\n\n\n", "msg_date": "Thu, 30 Nov 2023 20:06:01 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Thu Nov 30, 2023 at 3:46 PM CST, Andrew Dunstan wrote:\n>\n> On 2023-11-30 Th 16:00, Tristan Partin wrote:\n> > On Wed Nov 29, 2023 at 1:42 PM CST, Andres Freund wrote:\n> >> Hi,\n> >>\n> >> On 2023-11-29 13:11:23 -0600, Tristan Partin wrote:\n> >> > What is our limiting factor on bumping the minimum Meson version?\n> >>\n> >> Old distro versions, particularly ones where the distro just has an \n> >> older\n> >> python. It's one thing to require installing meson but quite another \n> >> to also\n> >> require building python. There's some other ongoing discussion about\n> >> establishing the minimum baseline in a somewhat more, uh, formalized \n> >> way:\n> >> https://postgr.es/m/CA%2BhUKGLhNs5geZaVNj2EJ79Dx9W8fyWUU3HxcpZy55sMGcY%3DiA%40mail.gmail.com \n> >>\n> >\n> > I'll take a look there. According to Meson, the following versions had \n> > Python version bumps:\n> >\n> > 0.61.5: 3.6\n> > 0.56.2: 3.5\n> > 0.45.1: 3.4\n> >\n> > Taking a look at pkgs.org, Debian 10, Ubuntu 20.04, and Oracle Linux 7 \n> > (a RHEL re-spin), and CentOS 7, all have >= Python 3.6.8. Granted, \n> > this isn't the whole picture of what Postgres supports from version \n> > 16+. To put things in perspective, Python 3.6 was released on December \n> > 23, 2016, which is coming up on 7 years. Python 3.6 reached end of \n> > life on the same date in 2021.\n> >\n> > Is there a complete list somewhere that talks about what platforms \n> > each version of Postgres supports?\n>\n>\n> You can look at animals in the buildfarm. For meson only release 16 and \n> up matter.\n\nOn the buildfarm page[0], it would be nice if more than just the \ncompiler versions were stated. It would be nice to have all \nbuild/runtime dependencies listed. For instance, it would be interesting \nif there was a json document for each build animal, and perhaps a root \njson document which was an amalgomation of the individual documents. \nThen I could use a tool like jq to query all the information rather \neasily. As-is, I don't know where to search for package versions for \nsome of the archaic operating systems in the farm. Perhaps other people \nhave had similar problems in the past. Having a full write-up of every \nbuild machine would also be good for debugging purposes. If I see \nopenssl tests suddenly failing on one machine, then I can just check the \nopenssl version, and try to reproduce locally.\n\nI know the buildfarm seems to be a volunteer thing, so asking more of \nthem seems like a hard ask. 
Just wanted to throw my thoughts into the \nvoid.\n\n[0]: https://buildfarm.postgresql.org/cgi-bin/show_members.pl\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 01 Dec 2023 10:18:38 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On the buildfarm page[0], it would be nice if more than just the \n> compiler versions were stated. It would be nice to have all \n> build/runtime dependencies listed.\n\nBy and large, we've attempted to address such concerns by extending\nthe configure script to emit info about versions of things it finds.\nSo you should look into the configure step's log to see what version\nof bison, openssl, etc is in use.\n\n> I know the buildfarm seems to be a volunteer thing, so asking more of \n> them seems like a hard ask.\n\nWe certainly aren't going to ask owners to maintain such information\nmanually. Even if they tried, it'd soon be impossibly out-of-date.\nThe logging method has the additional advantage that it's accurate\nfor historical runs as well as whatever the machine has installed\ntoday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 13:07:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Fri Dec 1, 2023 at 12:07 PM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On the buildfarm page[0], it would be nice if more than just the \n> > compiler versions were stated. It would be nice to have all \n> > build/runtime dependencies listed.\n>\n> By and large, we've attempted to address such concerns by extending\n> the configure script to emit info about versions of things it finds.\n> So you should look into the configure step's log to see what version\n> of bison, openssl, etc is in use.\n\nGood point. For some reason that slipped my mind. Off into the weeds \nI go...\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 01 Dec 2023 12:16:38 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Fri Dec 1, 2023 at 12:16 PM CST, Tristan Partin wrote:\n> On Fri Dec 1, 2023 at 12:07 PM CST, Tom Lane wrote:\n> > \"Tristan Partin\" <[email protected]> writes:\n> > > On the buildfarm page[0], it would be nice if more than just the \n> > > compiler versions were stated. It would be nice to have all \n> > > build/runtime dependencies listed.\n> >\n> > By and large, we've attempted to address such concerns by extending\n> > the configure script to emit info about versions of things it finds.\n> > So you should look into the configure step's log to see what version\n> > of bison, openssl, etc is in use.\n>\n> Good point. For some reason that slipped my mind. Off into the weeds \n> I go...\n\nOk, so what I found is that we still have build farm animals using \nPython 3.4, specifically the AIX machines. There was also at least one \nPython 3.5 user too. 
Note that this was a manual check.\n\nI think I'll probably work on a tool for querying information out of the \nbuild farm tonight to make tasks like this much more automated.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 04 Dec 2023 13:55:17 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Fri Dec 1, 2023 at 12:16 PM CST, Tristan Partin wrote:\n>>> Ok, so what I found is that we still have build farm animals using \n>>> Python 3.4, specifically the AIX machines. There was also at least one \n>>> Python 3.5 user too. Note that this was a manual check.\n\n> I think I'll probably work on a tool for querying information out of the \n> build farm tonight to make tasks like this much more automated.\n\nNot sure what you were using, but are you aware that SQL access to the\nbuildfarm database is available to project members? My own stock\napproach to checking on this sort of thing is like\n\nselect * from\n(select sysname, snapshot, unnest(string_to_array(log_text, E'\\n')) as l\n from build_status_log join snapshots using (sysname, snapshot)\n where log_stage = 'configure.log') ss\nwhere l like 'checking for builtin %'\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Dec 2023 15:10:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Mon Dec 4, 2023 at 2:10 PM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Fri Dec 1, 2023 at 12:16 PM CST, Tristan Partin wrote:\n> >>> Ok, so what I found is that we still have build farm animals using \n> >>> Python 3.4, specifically the AIX machines. There was also at least one \n> >>> Python 3.5 user too. Note that this was a manual check.\n>\n> > I think I'll probably work on a tool for querying information out of the \n> > build farm tonight to make tasks like this much more automated.\n>\n> Not sure what you were using, but are you aware that SQL access to the\n> buildfarm database is available to project members? My own stock\n> approach to checking on this sort of thing is like\n>\n> select * from\n> (select sysname, snapshot, unnest(string_to_array(log_text, E'\\n')) as l\n> from build_status_log join snapshots using (sysname, snapshot)\n> where log_stage = 'configure.log') ss\n> where l like 'checking for builtin %'\n\nThis looks useful. I had no idea about this. Can you send me any \nresources for setting this up? My idea was just to do some web scraping.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 04 Dec 2023 14:26:42 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Tue, Dec 5, 2023 at 3:27 AM Tristan Partin <[email protected]> wrote:\n>\n> On Mon Dec 4, 2023 at 2:10 PM CST, Tom Lane wrote:\n> > Not sure what you were using, but are you aware that SQL access to the\n> > buildfarm database is available to project members? 
My own stock\n> > approach to checking on this sort of thing is like\n> >\n> > select * from\n> > (select sysname, snapshot, unnest(string_to_array(log_text, E'\\n')) as l\n> > from build_status_log join snapshots using (sysname, snapshot)\n> > where log_stage = 'configure.log') ss\n> > where l like 'checking for builtin %'\n>\n> This looks useful. I had no idea about this. Can you send me any\n> resources for setting this up? My idea was just to do some web scraping.\n\n+1 -- I was vaguely aware of this, but can't find any mention of\nspecifics in the buildfarm how-to, or other places I thought to look.\n\n\n", "msg_date": "Mon, 18 Dec 2023 13:43:09 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" }, { "msg_contents": "On Mon Dec 18, 2023 at 12:43 AM CST, John Naylor wrote:\n> On Tue, Dec 5, 2023 at 3:27 AM Tristan Partin <[email protected]> wrote:\n> >\n> > On Mon Dec 4, 2023 at 2:10 PM CST, Tom Lane wrote:\n> > > Not sure what you were using, but are you aware that SQL access to the\n> > > buildfarm database is available to project members? My own stock\n> > > approach to checking on this sort of thing is like\n> > >\n> > > select * from\n> > > (select sysname, snapshot, unnest(string_to_array(log_text, E'\\n')) as l\n> > > from build_status_log join snapshots using (sysname, snapshot)\n> > > where log_stage = 'configure.log') ss\n> > > where l like 'checking for builtin %'\n> >\n> > This looks useful. I had no idea about this. Can you send me any\n> > resources for setting this up? My idea was just to do some web scraping.\n>\n> +1 -- I was vaguely aware of this, but can't find any mention of\n> specifics in the buildfarm how-to, or other places I thought to look.\n\n From my off-list conversations with Andrew, database access to the \nbuildfarm is for trusted contributors. I do not meet current criteria. \nI've thought about building a web-scraper to get at some of this \ninformation for non-trusted contributors. If that interests you, let me \nknow, and maybe I can build it out over the holiday. Or maybe you meet \nthe criteria! :)\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:08:10 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Stop using deprecated way getting path of files" } ]
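[Editor's note] The deprecation warning quoted at the top of the thread above comes from interpolating a files() object into a string via str.format(). The sketch below is illustrative only and is written in Meson's own DSL (rather than C) because the subject is the build file itself; it is not the exact change committed to PostgreSQL's meson.build, and 'some_script.pl' is a placeholder name.

# Illustrative sketch only -- not the committed PostgreSQL change.
# Interpolating a File object into a string relies on an implementation
# detail, and is exactly what meson 1.3 now warns about:
script = files('some_script.pl')        # 'some_script.pl' is a placeholder
path_old = '@0@'.format(script[0])      # deprecated since meson 1.3

# Building the path explicitly avoids the warning; the '/' path-join
# operator has been available since meson 0.49.  (Per the thread, meson 1.4
# adds a more convenient accessor, but that cannot be relied on while older
# meson versions must still be supported.)
path_new = meson.current_source_dir() / 'some_script.pl'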
[ { "msg_contents": "On Fri, Nov 10, 2023 at 08:55:29PM -0600, Nathan Bossart wrote:\n> On Fri, Nov 10, 2023 at 06:48:39PM -0800, Andres Freund wrote:\n>> Yes. We should optimize pg_atomic_exchange_u32() one of these days - it can be\n>> done *far* faster than a cmpxchg. When I was adding the atomic abstraction\n>> there was concern with utilizing too many different atomic instructions. I\n>> didn't really agree back then, but these days I really don't see a reason to\n>> not use a few more intrinsics.\n> \n> I might give this a try, if for no other reason than it'd force me to\n> improve my mental model of this stuff. :)\n\nHere's a first draft. I haven't attempted to add implementations for\nPowerPC, and I only added the __atomic version for gcc since\n__sync_lock_test_and_set() only supports setting the value to 1 on some\nplatforms. Otherwise, I tried to add specialized atomic exchange\nimplementations wherever there existed other specialized atomic\nimplementations.\n\nI haven't done any sort of performance testing on this yet. Some\npreliminary web searches suggest that there is unlikely to be much\ndifference between cmpxchg and xchg, but presumably there's some difference\nbetween xchg and doing cmpxchg in a while loop (as is done in\natomics/generic.h today). I'll report back once I've had a chance to do\nsome testing...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 29 Nov 2023 15:29:05 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "optimize atomic exchanges" }, { "msg_contents": "On Wed, Nov 29, 2023 at 03:29:05PM -0600, Nathan Bossart wrote:\n> I haven't done any sort of performance testing on this yet. Some\n> preliminary web searches suggest that there is unlikely to be much\n> difference between cmpxchg and xchg, but presumably there's some difference\n> between xchg and doing cmpxchg in a while loop (as is done in\n> atomics/generic.h today). I'll report back once I've had a chance to do\n> some testing...\n\nSome rudimentary tests show a >40% speedup with this patch on x86_64.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 21:18:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "Hi,\n\nOn 2023-11-30 21:18:15 -0600, Nathan Bossart wrote:\n> On Wed, Nov 29, 2023 at 03:29:05PM -0600, Nathan Bossart wrote:\n> > I haven't done any sort of performance testing on this yet. Some\n> > preliminary web searches suggest that there is unlikely to be much\n> > difference between cmpxchg and xchg, but presumably there's some difference\n> > between xchg and doing cmpxchg in a while loop (as is done in\n> > atomics/generic.h today). I'll report back once I've had a chance to do\n> > some testing...\n> \n> Some rudimentary tests show a >40% speedup with this patch on x86_64.\n\nOn bigger machines, with contention, the wins are likely much higher. I see\ntwo orders of magnitude higher throughput in a test program that I had around,\non a two socket cascade lake machine. 
Of course it's also much less\npowerfull...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 19:56:27 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "On Thu, Nov 30, 2023 at 07:56:27PM -0800, Andres Freund wrote:\n> On 2023-11-30 21:18:15 -0600, Nathan Bossart wrote:\n>> Some rudimentary tests show a >40% speedup with this patch on x86_64.\n> \n> On bigger machines, with contention, the wins are likely much higher. I see\n> two orders of magnitude higher throughput in a test program that I had around,\n> on a two socket cascade lake machine. Of course it's also much less\n> powerfull...\n\nNice. Thanks for trying it out.\n\nOne thing on my mind is whether we should bother with the inline assembly\nversions. It looks like gcc has had __atomic since 4.7.0 (2012), so I'm\nnot sure we gain much from them. OTOH they are pretty simple and seem\nunlikely to cause too much trouble.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 22:35:22 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "On Thu, Nov 30, 2023 at 10:35:22PM -0600, Nathan Bossart wrote:\n> One thing on my mind is whether we should bother with the inline assembly\n> versions. It looks like gcc has had __atomic since 4.7.0 (2012), so I'm\n> not sure we gain much from them. OTOH they are pretty simple and seem\n> unlikely to cause too much trouble.\n\nBarring objections or additional feedback, I think I'm inclined to press\nforward with this one and commit it in the next week or two. I'm currently\nplanning to keep the inline assembly, but I'm considering removing the\nconfiguration checks for __atomic_exchange_n() if the availability of\n__atomic_compare_exchange_n() seems like a reliable indicator of its\npresence. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Dec 2023 12:18:05 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "On Mon, Dec 04, 2023 at 12:18:05PM -0600, Nathan Bossart wrote:\n> Barring objections or additional feedback, I think I'm inclined to press\n> forward with this one and commit it in the next week or two. I'm currently\n> planning to keep the inline assembly, but I'm considering removing the\n> configuration checks for __atomic_exchange_n() if the availability of\n> __atomic_compare_exchange_n() seems like a reliable indicator of its\n> presence. Thoughts?\n\nConcretely, like this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 15:08:57 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "Hi,\n\nOn 2023-12-04 15:08:57 -0600, Nathan Bossart wrote:\n> On Mon, Dec 04, 2023 at 12:18:05PM -0600, Nathan Bossart wrote:\n> > Barring objections or additional feedback, I think I'm inclined to press\n> > forward with this one and commit it in the next week or two. I'm currently\n> > planning to keep the inline assembly, but I'm considering removing the\n> > configuration checks for __atomic_exchange_n() if the availability of\n> > __atomic_compare_exchange_n() seems like a reliable indicator of its\n> > presence. 
Thoughts?\n\nI don't think we need the inline asm. Otherwise looks good.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 15 Dec 2023 04:56:27 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize atomic exchanges" }, { "msg_contents": "On Fri, Dec 15, 2023 at 04:56:27AM -0800, Andres Freund wrote:\n> I don't think we need the inline asm. Otherwise looks good.\n\nCommitted with that change. Thanks for reviewing! I am going to watch the\nbuildfarm especially closely for this one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Dec 2023 10:57:43 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimize atomic exchanges" } ]
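[Editor's note] To make the optimization discussed above concrete, here is a minimal, self-contained sketch contrasting the generic fallback (a compare-and-swap retry loop, as in atomics/generic.h) with a direct exchange built on the GCC/Clang __atomic builtins, which compiles to a single XCHG on x86_64. The type and function names (my_atomic_u32, exchange_u32_*) are illustrative stand-ins, not the names used in the committed PostgreSQL patch.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct
{
	volatile uint32_t value;
} my_atomic_u32;				/* stand-in for pg_atomic_uint32 */

/* Generic fallback: emulate exchange with a compare-and-swap retry loop. */
static inline uint32_t
exchange_u32_generic(my_atomic_u32 *ptr, uint32_t newval)
{
	uint32_t	old = ptr->value;

	while (!__atomic_compare_exchange_n(&ptr->value, &old, newval,
										false, __ATOMIC_SEQ_CST,
										__ATOMIC_SEQ_CST))
	{
		/* 'old' was refreshed with the current value; just retry */
	}
	return old;
}

/* Specialized path: let the compiler emit a native exchange instruction. */
static inline uint32_t
exchange_u32_direct(my_atomic_u32 *ptr, uint32_t newval)
{
	return __atomic_exchange_n(&ptr->value, newval, __ATOMIC_SEQ_CST);
}

int
main(void)
{
	my_atomic_u32 a = {.value = 1};

	printf("old = %u\n", (unsigned) exchange_u32_generic(&a, 2));	/* prints 1 */
	printf("old = %u\n", (unsigned) exchange_u32_direct(&a, 3));	/* prints 2 */
	return 0;
}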
[ { "msg_contents": "The following query:\n\n SELECT U&'\\017D' ~ '[[:alpha:]]' collate \"en-US-x-icu\";\n\nreturns true if the server encoding is UTF8, and false if the server\nencoding is LATIN9. That's a bug -- any behavior involving ICU should\nbe encoding-independent.\n\nThe problem seems to be confusion between pg_wchar and a unicode code\npoint in pg_wc_isalpha() and related functions.\n\nIt might be good to introduce some infrastructure here that can convert\na pg_wchar into a Unicode code point, or decode a string of bytes into\na string of 32-bit code points. Right now, that's possible, but it\ninvolves pg_wchar2mb() followed by encoding conversion to UTF8,\nfollowed by decoding the UTF8 to a code point. (Is there an easier path\nthat I missed?)\n\nOne wrinkle is MULE_INTERNAL, which doesn't have any conversion path to\nUTF8. That's not important for ICU (because ICU is not allowed for that\nencoding), but I'd like it if we could make this infrastructure\nindependent of ICU, because I have some follow-up proposals to simplify\ncharacter classification here and in ts_locale.c.\n\nThoughts?\n\nRegards,\n Jeff Davis\n\n\n\n", "msg_date": "Wed, 29 Nov 2023 15:46:26 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "encoding affects ICU regex character classification" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> The problem seems to be confusion between pg_wchar and a unicode code\n> point in pg_wc_isalpha() and related functions.\n\nYeah, that's an ancient sore spot: we don't really know what the\nrepresentation of wchar is. We assume it's Unicode code points\nfor UTF8 locales, but libc isn't required to do that AFAIK. See\ncomment block starting about line 20 in regc_pg_locale.c.\n\nI doubt that ICU has much to do with this directly.\n\nWe'd have to find an alternate source of knowledge to replace the\n<wctype.h> functions if we wanted to fix it fully ... can ICU do that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Nov 2023 18:56:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Wed, 2023-11-29 at 18:56 -0500, Tom Lane wrote:\n> We'd have to find an alternate source of knowledge to replace the\n> <wctype.h> functions if we wanted to fix it fully ... can ICU do\n> that?\n\nMy follow-up proposal is exactly along those lines, except that we\ndon't even need ICU.\n\nBy adding a couple lookup tables generated from the Unicode data files,\nwe can offer a pg_u_isalpha() family of functions. As a bonus, I have\nsome exhaustive tests to compare with what ICU does so we can protect\nourselves from simple mistakes.\n\nI might as well send it now; patch attached (0003 is the interesting\none).\n\nI also tested against the iswalpha() family of functions, and those\nhave very similar behavior (apart from the \"C\" locale, of course).\nCharacter classification is not localized at all in libc or ICU as far\nas I can tell.\n\nThere are some differences, and I don't understand why those\ndifferences exist, so perhaps that's worth discussing. Some differences\nseem to be related to the titlecase/uppercase distinction. Others are\nstrange, like how glibc counts some digit characters (outside 0-9) as\nalphabetic. And some seem arbitrary, like excluding a few whitespace\ncharacters. 
I can try to post more details if that would be helpful.\n\nAnother issue is that right now we are doing the wrong thing with ICU:\nwe should be using the u_isUAlphabetic() family of functions, not the\nu_isalpha() family of functions.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 29 Nov 2023 16:23:22 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Thu, Nov 30, 2023 at 1:23 PM Jeff Davis <[email protected]> wrote:\n> Character classification is not localized at all in libc or ICU as far\n> as I can tell.\n\nReally? POSIX isalpha()/isalpha_l() and friends clearly depend on a\nlocale. See eg d522b05c for a case where that broke something.\nPerhaps you mean glibc wouldn't do that to you because you know that,\nas an unstandardised detail, it sucks in (some version of) Unicode's\ndata which shouldn't vary between locales. But you are allowed to\nmake your own locales, including putting whatever classifications you\nwant into the LC_TYPE file using POSIX-standardised tools like\nlocaledef. Perhaps that is a bit of a stretch, and no one really does\nthat in practice, but anyway it's still \"localized\".\n\nNot knowing anything about how glibc generates its charmaps, Unicode\nor pre-Unicode, I could take a wild guess that maybe in LATIN9 they\nhave an old hand-crafted table, but for UTF-8 encoding it's fully\noutsourced to Unicode, and that's why you see a difference. Another\nproblem seen in a few parts of our tree is that we sometimes feed\nindividual UTF-8 bytes to the isXXX() functions which is about as well\ndefined as trying to pay for a pint with the left half of a $10 bill.\n\nAs for ICU, it's \"not localized\" only if there is only one ICU library\nin the universe, but of course different versions of ICU might give\ndifferent answers because they correspond to different versions of\nUnicode (as do glibc versions, FreeBSD libc versions, etc) and also\nmight disagree with tables built by PostgreSQL. Maybe irrelevant for\nnow, but I think with thus-far-imagined variants of the multi-version\nICU proposal, you have to choose whether to call u_isUAlphabetic() in\nthe library we're linked against, or via the dlsym() we look up in a\nparticular dlopen'd library. So I guess we'd have to access it via\nour pg_locale_t, so again it'd be \"localized\" by some definitions.\n\nThinking about how to apply that thinking to libc, ... this is going\nto sound far fetched and handwavy but here goes: we could even\nimagine a multi-version system based on different base locale paths.\nInstead of using the system-provided locales under /usr/share/locale\nto look when we call newlocale(..., \"en_NZ.UTF-8\", ...), POSIX says\nwe're allowed to specify an absolute path eg newlocale(...,\n\"/foo/bar/unicode11/en_NZ.UTF-8\", ...). If it is possible to use\n$DISTRO's localedef to compile $OLD_DISTRO's locale sources to get\nhistorical behaviour, that might provide a way to get them without\nassuming the binary format is stable (it definitely isn't, but the\nsource format is nailed down by POSIX). One fly in the ointment is\nthat glibc failed to implement absolute path support, so you might\nneed to use versioned locale names instead, or see if the LOCPATH\nenvironment variable can be swizzled around without confusing glibc's\nlocale cache. 
Then wouldn't be fundamentally different than the\nhypothesised multi-version ICU case: you could probably come up with\ndifferent isalpha_l() results for different locales because you have\ndifferent LC_CTYPE versions (for example Unicode 15.0 added new\nextended Cyrillic characters 1E030..1E08F, they look alphabetical to\nme but what would I know). That is an extremely hypothetical\npie-in-the-sky thought and I don't know if it'd really work very well,\nbut it is a concrete way that someone might finish up getting\ndifferent answers out of isalpha_l(), to observe that it really is\nlocalised. And localized.\n\n\n", "msg_date": "Thu, 30 Nov 2023 15:10:40 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Thu, 2023-11-30 at 15:10 +1300, Thomas Munro wrote:\n> > On Thu, Nov 30, 2023 at 1:23 PM Jeff Davis <[email protected]>\n> > wrote:\n> > > > Character classification is not localized at all in libc or ICU\n> > > > as > > far\n> > > > as I can tell.\n> > \n> > Really?  POSIX isalpha()/isalpha_l() and friends clearly depend on\n> > a\n> > locale.  See eg d522b05c for a case where that broke something.\n\nI believe we're using different definitions of \"localized\". What I mean\nis \"varies from region to region or language to language\". I think you\nmean \"varies for any reason at all [perhaps for no reason?]\".\n\nFor instance, that commit indirectly links to:\n\nhttps://github.com/evanj/isspace_locale\n\nWhich says \"Mac OS X in a UTF-8 locale...\". I don't see any fundamental\nlocale-based concern there.\n\nI wrote a test program (attached) which compares any two given libc\nlocales using both the ordinary isalpha() family of functions, and also\nusing the iswalpha() family of functions. For the former, I only test\nup to 0x7f. For the latter, I went to some effort to properly translate\nthe code point to a wchar_t (encode as UTF8, then mbstowcs using a UTF-\n8 locale), and I test all unicode code points except the surrogate\nrange.\n\nUsing the test program, I compared the C.UTF-8 locale to every other\ninstalled locale on my system (attached list for reference) and the\nonly ones that show any differences are \"C\" and \"POSIX\". That, combined\nwith the fact that ICU doesn't even accept a locale argument to the\ncharacter classification functions, gives me a high degree of\nconfidence that character classification is not localized on my system\naccording to my definition of \"localized\". If someone else wants to run\nthe test program on their system, I'd be interested to see the results\n(some platform-specific modification may be required, e.g. handling 16-\nbit whcar_t, etc.).\n\nYour definition is too wide in my opinion, because it mixes together\ndifferent sources of variation that are best left separate:\n a. region/language\n b. technical requirements\n c. versioning\n d. implementation variance\n\n(a) is not a true source of variation (please correct me if I'm wrong)\n\n(b) is perhaps interesting. The \"C\" locale is one example, and perhaps\nthere are others, but I doubt very many others that we want to support.\n\n(c) is not a major concern in my opinion. The impact of Unicode changes\nis usually not dramatic, and it only affects regexes so it's much more\ncontained than collation, for example. And if you really care, just use\nthe \"C\" locale.\n\n(d) is mostly a bug. 
Most users would prefer standardization, platform-\nindependence, documentability, and testability. There are users who\nmight care a lot about compatibility, and I don't want to disrupt such\nusers, but long term I don't see a lot of value in bubbling up\nsemantics from libc into expressions when there's not a clear reason to\ndo so. (Note: getting semantics from libc is a bit dubious in the case\nof collation, as well, but at least for collation there are regional\nand linguistic differences that we can't handle internally.)\n\nI think we only need 2 main character classification schemes: \"C\" and\nUnicode (TR #18 Compatibility Properties[1], either the \"Standard\"\nvariant or the \"POSIX Compatible\" variant or both). The libc and ICU\nones should be there only for compatibility and discouraged and\nhopefully eventually removed.\n\n> > Not knowing anything about how glibc generates its charmaps,\n> > Unicode\n> > or pre-Unicode, I could take a wild guess that maybe in LATIN9 they\n> > have an old hand-crafted table, but for UTF-8 encoding it's fully\n> > outsourced to Unicode, and that's why you see a difference.\n\nNo, the problem is that we're passing a pg_wchar to an ICU function\nthat expects a 32-bit code point. Those two things are equivalent in\nthe UTF8 encoding, but not in the LATIN9 encoding.\n\nSee the comment at the top of regc_pg_locale.c, which should probably\nbe updated to describe what happens with ICU collations.\n\n> >   Another\n> > problem seen in a few parts of our tree is that we sometimes feed\n> > individual UTF-8 bytes to the isXXX() functions which is about as >\n> > well\n> > defined as trying to pay for a pint with the left half of a $10\n> > bill.\n\nIf we have built-in character classification systems as I propose (\"C\"\nand Unicode), then the callers can simply choose which well-defined one\nto use.\n\n> >  also\n> > might disagree with tables built by PostgreSQL.\n\nThe patch I provided (new version attached) exhaustively tests all the\nnew Unicode property tables, and also the class assignments based on\n[1] and [2]. Everything is identical for all assigned code points. The\ntest will run whenever you \"ninja update-unicode\", so any\ninconsistencies will be highly visible before release. Additionally,\nbecause the tables are checked in, you'll be able to see (in the diff)\nthe impact from a Unicode version update and consider that impact when\nwriting the release notes.\n\nYou may be wondering about differences in the version of Unicode\nbetween Postgres and ICU while the test is running. It only tests code\npoints that are assigned in both Unicode versions, and reports the\nnumber of code points that are skipped due to this check. The person\nrunning \"update-unicode\" may see a failing test or a large number of\nskipped codepoints if the Unicode versions don't match, in which case\nthey should try running against a more closely-matching version of ICU.\n\nRegards,\n\tJeff Davis\n\n[1] http://www.unicode.org/reports/tr18/#Compatibility_Properties\n[2]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uchar_8h.html#details", "msg_date": "Fri, 01 Dec 2023 12:49:46 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Sat, Dec 2, 2023 at 9:49 AM Jeff Davis <[email protected]> wrote:\n> Your definition is too wide in my opinion, because it mixes together\n> different sources of variation that are best left separate:\n> a. 
region/language\n> b. technical requirements\n> c. versioning\n> d. implementation variance\n>\n> (a) is not a true source of variation (please correct me if I'm wrong)\n>\n> (b) is perhaps interesting. The \"C\" locale is one example, and perhaps\n> there are others, but I doubt very many others that we want to support.\n>\n> (c) is not a major concern in my opinion. The impact of Unicode changes\n> is usually not dramatic, and it only affects regexes so it's much more\n> contained than collation, for example. And if you really care, just use\n> the \"C\" locale.\n>\n> (d) is mostly a bug\n\nI get you. I was mainly commenting on what POSIX APIs allow, which is\nmuch wider than what you might observe on <your local libc>, and also\nend-user-customisable. But I agree that Unicode is all-pervasive and\nauthoritative in practice, to the point that if your libc disagrees\nwith it, it's probably just wrong. (I guess site-local locales were\nessential for bootstrapping in the early days of computers in a\nlanguage/territory but I can't find much discussion of the tools being\nused by non-libc-maintainers today.)\n\n> I think we only need 2 main character classification schemes: \"C\" and\n> Unicode (TR #18 Compatibility Properties[1], either the \"Standard\"\n> variant or the \"POSIX Compatible\" variant or both). The libc and ICU\n> ones should be there only for compatibility and discouraged and\n> hopefully eventually removed.\n\nHow would you specify what you want? As with collating, I like the\nidea of keeping support for libc even if it is terrible (some libcs\nmore than others) and eventually not the default, because I think\noptional agreement with other software on the same host is a feature.\n\nIn the regex code we see not only class membership tests eg\niswlower_l(), but also conversions eg towlower_l(). Unless you also\nimplement built-in case mapping, you'd still have to call libc or ICU\nfor that, right? It seems a bit strange to use different systems for\nclassification and mapping. If you do implement mapping too, you have\nto decide if you believe it is language-dependent or not, I think?\n\nHmm, let's see what we're doing now... for ICU the regex code is using\n\"simple\" case mapping functions like u_toupper(c) that don't take a\nlocale, so no Turkish i/İ conversion for you, unlike our SQL\nupper()/lower(), which this is supposed to agree with according to the\ncomments at the top. I see why: POSIX can only do one-by-one\ncharacter mappings (which cannot handle Greek's context-sensitive\nΣ->σ/ς or German's multi-character ß->SS), while ICU offers only\nlanguage-aware \"full\" string conversation (which does not guarantee\n1:1 mapping for each character in a string) OR non-language-aware\n\"simple\" character conversion (which does not handle Turkish's i->İ).\nICU has no middle ground for language-aware mapping with just the 1:1\ncases only, probably because that doesn't really make total sense as a\nconcept (as I assume Greek speakers would agree).\n\n> > > Not knowing anything about how glibc generates its charmaps,\n> > > Unicode\n> > > or pre-Unicode, I could take a wild guess that maybe in LATIN9 they\n> > > have an old hand-crafted table, but for UTF-8 encoding it's fully\n> > > outsourced to Unicode, and that's why you see a difference.\n>\n> No, the problem is that we're passing a pg_wchar to an ICU function\n> that expects a 32-bit code point. 
Those two things are equivalent in\n> the UTF8 encoding, but not in the LATIN9 encoding.\n\nAh right, I get that now (sorry, I confused myself by forgetting we\nwere talking about ICU).\n\n\n", "msg_date": "Sun, 10 Dec 2023 10:39:37 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Sun, 2023-12-10 at 10:39 +1300, Thomas Munro wrote:\n\n> \n> How would you specify what you want?\n\nOne proposal would be to have a builtin collation provider:\n\nhttps://postgr.es/m/[email protected]\n\nI don't think there are very many ctype options, but they could be\nspecified as part of the locale, or perhaps even as some provider-\nspecific options specified at CREATE COLLATION time.\n\n> As with collating, I like the\n> idea of keeping support for libc even if it is terrible (some libcs\n> more than others) and eventually not the default, because I think\n> optional agreement with other software on the same host is a feature.\n\nOf course we should keep the libc support around. I'm not sure how\nrelevant such a feature is, but I don't think we actually have to\nremove it.\n\n> Unless you also\n> implement built-in case mapping, you'd still have to call libc or ICU\n> for that, right?\n\nWe can do built-in case mapping, see:\n\nhttps://postgr.es/m/[email protected]\n\n>   It seems a bit strange to use different systems for\n> classification and mapping.  If you do implement mapping too, you\n> have\n> to decide if you believe it is language-dependent or not, I think?\n\nA complete solution would need to do the language-dependent case\nmapping. But that seems to only be 3 locales (\"az\", \"lt\", and \"tr\"),\nand only a handful of mapping changes, so we can handle that with the\nbuiltin provider as well.\n\n> Hmm, let's see what we're doing now... for ICU the regex code is\n> using\n> \"simple\" case mapping functions like u_toupper(c) that don't take a\n> locale, so no Turkish i/İ conversion for you, unlike our SQL\n> upper()/lower(), which this is supposed to agree with according to\n> the\n> comments at the top.  I see why: POSIX can only do one-by-one\n> character mappings (which cannot handle Greek's context-sensitive\n> Σ->σ/ς or German's multi-character ß->SS)\n\nRegexes are inherently character-by-character, so transformations like\nß->SS are not going to work for case-insensitive regex matching\nregardless of the provider.\n\nΣ->σ/ς does make sense, and what we have seems to be just broken:\n\n select 'ς' ~* 'Σ'; -- false in both libc and ICU\n select 'Σ' ~* 'ς'; -- true in both libc and ICU\n\nSimilarly for titlecase variants:\n\n select 'Dž' ~* 'dž'; -- false in libc and ICU\n select 'dž' ~* 'Dž'; -- true in libc and ICU\n\nIf we do the case mapping ourselves, we can make those work. 
We'd just\nhave to modify the APIs a bit so that allcases() can actually get all\nof the case variants, rather than relying on just towupper/towlower.\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 12 Dec 2023 13:39:55 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On 12/12/23 1:39 PM, Jeff Davis wrote:\n> On Sun, 2023-12-10 at 10:39 +1300, Thomas Munro wrote:\n>> Unless you also\n>> implement built-in case mapping, you'd still have to call libc or ICU\n>> for that, right?\n> \n> We can do built-in case mapping, see:\n> \n> https://postgr.es/m/[email protected]\n> \n>>   It seems a bit strange to use different systems for\n>> classification and mapping.  If you do implement mapping too, you\n>> have\n>> to decide if you believe it is language-dependent or not, I think?\n> \n> A complete solution would need to do the language-dependent case\n> mapping. But that seems to only be 3 locales (\"az\", \"lt\", and \"tr\"),\n> and only a handful of mapping changes, so we can handle that with the\n> builtin provider as well.\n\nThis thread has me second-guessing the reply I just sent on the other\nthread.\n\nIs someone able to test out upper & lower functions on U+A7BA ... U+A7BF\nacross a few libs/versions? Theoretically the upper/lower behavior\nshould change in ICU between Ubuntu 18.04 LTS and Ubuntu 20.04 LTS\n(specifically in ICU 64 / Unicode 12). And I have no idea if or when\nglibc might have picked up the new unicode characters.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Tue, 12 Dec 2023 14:35:57 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Tue, 2023-12-12 at 14:35 -0800, Jeremy Schneider wrote:\n> Is someone able to test out upper & lower functions on U+A7BA ...\n> U+A7BF\n> across a few libs/versions?\n\nThose code points are unassigned in Unicode 11.0 and assigned in\nUnicode 12.0.\n\nIn ICU 63-2 (based on Unicode 11.0), they just get mapped to\nthemselves. In ICU 64-2 (based on Unicode 12.1) they get mapped the\nsame way the builtin CTYPE maps them (based on Unicode 15.1).\n\nThe concern over unassigned code points is misplaced. The application\nmay be aware of newly-assigned code points, and there's no way they\nwill be mapped correctly in Postgres if the provider is not aware of\nthose code points. The user can either proceed in using unassigned code\npoints and accept the risk of future changes, or wait for the provider\nto be upgraded.\n\nIf the user doesn't have many expression indexes dependent on ctype\nbehavior, it doesn't matter much. If they do have such indexes, the\nbest we can offer is a controlled process, and the builtin provider\nallows the most visibility and control.\n\n(Aside: case mapping has very strong compatibility guarantees, but not\nperfect. 
For better compatibility guarantees, we should support case\nfolding.)\n\n> And I have no idea if or when\n> glibc might have picked up the new unicode characters.\n\nThat's a strong argument in favor of a builtin provider.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 14 Dec 2023 07:12:27 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On 12/14/23 7:12 AM, Jeff Davis wrote:\n> The concern over unassigned code points is misplaced. The application\n> may be aware of newly-assigned code points, and there's no way they\n> will be mapped correctly in Postgres if the provider is not aware of\n> those code points. The user can either proceed in using unassigned code\n> points and accept the risk of future changes, or wait for the provider\n> to be upgraded.\n\nThis does not seem to me like a good way to view the situation.\n\nEarlier this summer, a day or two after writing a document, I was\ncompletely surprised to open it on my work computer and see \"unknown\ncharacter\" boxes. When I had previously written the document on my home\ncomputer and when I had viewed it from my cell phone, everything was\nfine. Apple does a very good job of always keeping iPhones and MacOS\nversions up-to-date with the latest versions of Unicode and latest\ncharacters. iPhone keyboards make it very easy to access any character.\nEmojis are the canonical example here. My work computer was one major\nversion of MacOS behind my home computer.\n\nAnd I'm probably one of a few people on this hackers email list who even\nunderstands what the words \"unassigned code point\" mean. Generally DBAs,\nsysadmins, architects and developers who are all part of the tangled web\nof building and maintaining systems which use PostgreSQL on their\nbackend are never going to think about unicode characters proactively.\n\nThis goes back to my other thread (which sadly got very little\ndiscussion): PosgreSQL really needs to be safe by /default/ ... having\nGUCs is fine though; we can put explanation in the docs about what users\nshould consider if they change a setting.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Fri, 15 Dec 2023 16:48:23 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Sat, Dec 16, 2023 at 1:48 PM Jeremy Schneider\n<[email protected]> wrote:\n> On 12/14/23 7:12 AM, Jeff Davis wrote:\n> > The concern over unassigned code points is misplaced. The application\n> > may be aware of newly-assigned code points, and there's no way they\n> > will be mapped correctly in Postgres if the provider is not aware of\n> > those code points. The user can either proceed in using unassigned code\n> > points and accept the risk of future changes, or wait for the provider\n> > to be upgraded.\n>\n> This does not seem to me like a good way to view the situation.\n>\n> Earlier this summer, a day or two after writing a document, I was\n> completely surprised to open it on my work computer and see \"unknown\n> character\" boxes. When I had previously written the document on my home\n> computer and when I had viewed it from my cell phone, everything was\n> fine. Apple does a very good job of always keeping iPhones and MacOS\n> versions up-to-date with the latest versions of Unicode and latest\n> characters. 
iPhone keyboards make it very easy to access any character.\n> Emojis are the canonical example here. My work computer was one major\n> version of MacOS behind my home computer.\n\nThat \"SQUARE ERA NAME REIWA\" codepoint we talked about in one of the\nmulti-version ICU threads was an interesting case study. It's not an\nemoji, it entered real/serious use suddenly, landed in a quickly\nwrapped minor release of Unicode, and then arrived in locale\ndefinitions via regular package upgrades on various OSes AFAICT (ie\ndidn't require a major version upgrade of the OS).\n\nhttps://en.wikipedia.org/wiki/Reiwa_era#Announcement\nhttps://en.wikipedia.org/wiki/Reiwa_era#Technology\nhttps://unicode.org/versions/Unicode12.1.0/\n\n\n", "msg_date": "Sat, 16 Dec 2023 14:23:53 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encoding affects ICU regex character classification" }, { "msg_contents": "On Fri, 2023-12-15 at 16:48 -0800, Jeremy Schneider wrote:\n> This goes back to my other thread (which sadly got very little\n> discussion): PosgreSQL really needs to be safe by /default/\n\nDoesn't a built-in provider help create a safer option?\n\nThe built-in provider's version of Unicode will be consistent with\nunicode_assigned(), which is a first step toward rejecting code points\nthat the provider doesn't understand. And by rejecting unassigned code\npoints, we get all kinds of Unicode compatibility guarantees that avoid\nthe kinds of change risks that you are worried about.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 18 Dec 2023 12:39:05 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encoding affects ICU regex character classification" } ]
[ { "msg_contents": "Hello.\n\nRecently, a new --filter option was added to pg_dump. I might be\nwrong, but the syntax of the help message for this feels off. Is the\nword 'on' not necessary after 'based'?\n\n> --filter=FILENAME include or exclude objects and data from dump\n> based expressions in FILENAME\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 30 Nov 2023 10:20:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "about help message for new pg_dump's --filter option" }, { "msg_contents": "At Thu, 30 Nov 2023 10:20:40 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Hello.\n> \n> Recently, a new --filter option was added to pg_dump. I might be\n> wrong, but the syntax of the help message for this feels off. Is the\n> word 'on' not necessary after 'based'?\n> \n> > --filter=FILENAME include or exclude objects and data from dump\n> > based expressions in FILENAME\n\nHmm. A similar message is spelled as \"based on expression\". Thus, the\nattached correct this message in this line.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 30 Nov 2023 10:52:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: about help message for new pg_dump's --filter option" }, { "msg_contents": "> On 30 Nov 2023, at 02:52, Kyotaro Horiguchi <[email protected]> wrote:\n> \n> At Thu, 30 Nov 2023 10:20:40 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n>> Hello.\n>> \n>> Recently, a new --filter option was added to pg_dump. I might be\n>> wrong, but the syntax of the help message for this feels off. Is the\n>> word 'on' not necessary after 'based'?\n>> \n>>> --filter=FILENAME include or exclude objects and data from dump\n>>> based expressions in FILENAME\n> \n> Hmm. A similar message is spelled as \"based on expression\". Thus, the\n> attached correct this message in this line.\n\nRight, that's an unfortunate miss, it should've been \"based on expression\" like\nyou propose. Fixed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:06:40 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about help message for new pg_dump's --filter option" }, { "msg_contents": "On 2023-Nov-30, Kyotaro Horiguchi wrote:\n\n> Hello.\n> \n> Recently, a new --filter option was added to pg_dump. I might be\n> wrong, but the syntax of the help message for this feels off. Is the\n> word 'on' not necessary after 'based'?\n> \n> > --filter=FILENAME include or exclude objects and data from dump\n> > based expressions in FILENAME\n\nIsn't this a bit too long? 
Maybe we can do something shorter, like\n \n --filter=FILENAME determine objects in dump based on FILENAME\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n", "msg_date": "Sat, 2 Dec 2023 17:02:47 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about help message for new pg_dump's --filter option" }, { "msg_contents": "> On 2 Dec 2023, at 17:02, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2023-Nov-30, Kyotaro Horiguchi wrote:\n> \n>> Hello.\n>> \n>> Recently, a new --filter option was added to pg_dump. I might be\n>> wrong, but the syntax of the help message for this feels off. Is the\n>> word 'on' not necessary after 'based'?\n>> \n>>> --filter=FILENAME include or exclude objects and data from dump\n>>> based expressions in FILENAME\n> \n> Isn't this a bit too long?\n\nI was trying to come up with a shorter description but didn't come up with one\nthat clearly enough described what it does.\n\n> Maybe we can do something shorter, like\n> \n> --filter=FILENAME determine objects in dump based on FILENAME\n\nI don't think that's an improvement really, it's not obvious what \"determine\nobjects\" means. How about a variation along these lines?:\n\n--filter=FILENAME include/exclude objects based on rules in FILENAME\n\nIf we want to use less horizontal space we can replace FILENAME with FILE,\nthough I'd prefer not to since FILENAME is already used in the help output for\n--file setting a precedent.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 2 Dec 2023 23:01:04 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about help message for new pg_dump's --filter option" } ]
[ { "msg_contents": "Hi,\n\nI noticed something that looks like a bug in pgbench when using the\nprepared protocol. pgbench assumes that all prepared statements are\nprepared correctly, even if they contain errors (e.g. syntax, column/table\ndoesn't exist, etc.).\n\nMy test script is just:\n\nSELECT one;\n\nThe output looks something like this:\n\n$ pgbench -f test.sql --protocol prepared -d postgres\n\n[...]\npgbench: client 0 executing script \"test.sql\"\npgbench: client 0 preparing P_0\npgbench: error: ERROR: column \"one\" does not exist\nLINE 1: SELECT one;\n ^\npgbench: client 0 sending P_0\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: error: client 0 script 0 aborted in command 0 query 0: ERROR:\n prepared statement \"P_0\" does not exist\ntransaction type: test.sql\n[...]\n\nNormally this wouldn't be a big deal, although by itself the output is\nconfusing, since the second error, while technically true, is not what\ncaused the test run to fail. In my case, I was using pgbench to validate\nthe correctness of prepared statements implementation in our pooler. Having\nthe second error sent me on quite a debugging session until I realized that\nmy fix was actually working.\n\nPatch attached, if there is any interest in fixing this small bug.\n\nCheers!\n\nLev\npostgresml.org", "msg_date": "Wed, 29 Nov 2023 17:38:13 -0800", "msg_from": "Lev Kokotov <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in pgbench prepared statements" }, { "msg_contents": "On Wed Nov 29, 2023 at 7:38 PM CST, Lev Kokotov wrote:\n> Patch attached, if there is any interest in fixing this small bug.\n\nI see prepareCommand() is called one more time in \nprepareCommandsInPipeline(). Should you also check the return value \nthere?\n\nIt may also be useful to throw this patch on the January commitfest if \nno one else comes along to review/commit it. This first set of changes \nlooks good to me.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:10:25 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in pgbench prepared statements" }, { "msg_contents": "> I see prepareCommand() is called one more time in\n> prepareCommandsInPipeline(). Should you also check the return value\n> there?\n\nYes, good catch. New patch attached.\n\n> It may also be useful to throw this patch on the January commitfest if\n> no one else comes along to review/commit it.\n\nFirst time contributing, not familiar with the process here, but happy to\nlearn.\n\nBest,\n\nLev\npostgresml.org\n\n\nOn Thu, Nov 30, 2023 at 2:10 PM Tristan Partin <[email protected]> wrote:\n\n> On Wed Nov 29, 2023 at 7:38 PM CST, Lev Kokotov wrote:\n> > Patch attached, if there is any interest in fixing this small bug.\n>\n> I see prepareCommand() is called one more time in\n> prepareCommandsInPipeline(). Should you also check the return value\n> there?\n>\n> It may also be useful to throw this patch on the January commitfest if\n> no one else comes along to review/commit it. This first set of changes\n> looks good to me.\n>\n> --\n> Tristan Partin\n> Neon (https://neon.tech)\n>", "msg_date": "Thu, 30 Nov 2023 19:15:54 -0800", "msg_from": "Lev Kokotov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in pgbench prepared statements" }, { "msg_contents": "On Thu, Nov 30, 2023 at 07:15:54PM -0800, Lev Kokotov wrote:\n>> I see prepareCommand() is called one more time in\n>> prepareCommandsInPipeline(). 
Should you also check the return value\n>> there?\n> \n> Yes, good catch. New patch attached.\n\nAgreed that this is not really helpful as it stands\n\n>> It may also be useful to throw this patch on the January commitfest if\n>> no one else comes along to review/commit it.\n> \n> First time contributing, not familiar with the process here, but happy to\n> learn.\n\nThe patch you have sent does not apply cleanly on the master branch,\ncould you rebase please?\n\nFWIW, I am a bit confused by the state of sendCommand(). Wouldn't it\nbetter to consume the errors from PQsendQueryPrepared() and\nPQsendQueryParams() when these fail?\n--\nMichael", "msg_date": "Fri, 1 Dec 2023 16:18:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in pgbench prepared statements" }, { "msg_contents": "> The patch you have sent does not apply cleanly on the master branch,\n> could you rebase please?\n\nAttached. PR against master also here\n<https://github.com/postgres/postgres/pull/147>, just to make sure it's\nmergeable <https://github.com/postgres/postgres/pull/147.patch>.\n\n> Wouldn't it\n> better to consume the errors from PQsendQueryPrepared() and\n> PQsendQueryParams() when these fail?\n\nThe error is returned in PQPrepare(), which happens only in QUERY_PREPARED\nmode, so PQsendQueryParams() does not apply, and before\nPQsendQueryPrepared() is called, so catching the error from\nPQsendQueryPrepared() is exactly what's causing the bug: ERROR: prepared\nstatement \"P_0\" does not exist.\n\nBest,\n\nLev\npostgresml.org\n\nOn Thu, Nov 30, 2023 at 11:19 PM Michael Paquier <[email protected]>\nwrote:\n\n> On Thu, Nov 30, 2023 at 07:15:54PM -0800, Lev Kokotov wrote:\n> >> I see prepareCommand() is called one more time in\n> >> prepareCommandsInPipeline(). Should you also check the return value\n> >> there?\n> >\n> > Yes, good catch. New patch attached.\n>\n> Agreed that this is not really helpful as it stands\n>\n> >> It may also be useful to throw this patch on the January commitfest if\n> >> no one else comes along to review/commit it.\n> >\n> > First time contributing, not familiar with the process here, but happy to\n> > learn.\n>\n> The patch you have sent does not apply cleanly on the master branch,\n> could you rebase please?\n>\n> FWIW, I am a bit confused by the state of sendCommand(). Wouldn't it\n> better to consume the errors from PQsendQueryPrepared() and\n> PQsendQueryParams() when these fail?\n> --\n> Michael\n>", "msg_date": "Fri, 1 Dec 2023 19:06:40 -0800", "msg_from": "Lev Kokotov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in pgbench prepared statements" }, { "msg_contents": "On Fri, Dec 01, 2023 at 07:06:40PM -0800, Lev Kokotov wrote:\n> Attached. PR against master also here\n> <https://github.com/postgres/postgres/pull/147>, just to make sure it's\n> mergeable <https://github.com/postgres/postgres/pull/147.patch>.\n\nThanks for the updated patch. It looks sensible seen from here.\n\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK) {\n pg_log_error(\"%s\", PQerrorMessage(st->con));\n+ return false;\n+ }\n\nEach bracket should be on its own line, that's the format used\nelsewhere in the code.\n\nDid you notice that this fails some of the regression tests? You can\nenable these with --enable-tap-tests. 
Here is the failure:\n[13:57:19.960](0.000s) not ok 241 - pgbench script error: sql syntax error stderr /(?^:prepared statement .* does not exist)/\n[13:57:19.960](0.000s) \n[13:57:19.960](0.000s) # Failed test 'pgbench script error: sql syntax error stderr /(?^:prepared statement .* does not exist)/'\n# at t/001_pgbench_with_server.pl line 1150.\n[13:57:19.960](0.000s) # 'pgbench: error: ERROR: syntax error at or near \";\"\n# LINE 1: SELECT 1 + ;\n# ^\n# pgbench: error: client 0 aborted in command 0 (SQL) of script 0; SQL command send failed\n# pgbench: error: Run was aborted; the above results are incomplete.\n# '\n# doesn't match '(?^:prepared statement .* does not exist)'\n\nThe test case expects a syntax error as per its name, but we've just\nbothered checking for the last pattern of the error generated since\nit has been introduced in ed8a7c6fcf92. I would be more restrictive\nand just update the error output in the test, to reflect the error we\nwant to show up, aka a \"syntax error\".\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 14:07:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in pgbench prepared statements" } ]
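To make the fix discussed in this thread concrete, here is a small standalone libpq sketch (deliberately not pgbench itself) of the check being added: inspect the result of PQprepare() before the prepared statement is ever executed. The connection string and the intentionally broken query are placeholders taken from the report; everything else uses only documented libpq calls.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* intentionally broken statement, mirroring the "SELECT one" report */
    res = PQprepare(conn, "P_0", "SELECT one", 0, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        /* report the real cause instead of failing later with
         * "prepared statement \"P_0\" does not exist" */
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

Surfacing PQerrorMessage() at the point where the prepare fails is what makes the underlying error (here, the unknown column) visible, rather than the misleading follow-up error about the missing prepared statement.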
[ { "msg_contents": "Sorry for the sequential mails.\n\nIn the bleeding-edge version of pg_dump, when a conditionspecifying an\nindex, for example, is described in an object filter file, the\nfollowing message is output. However, there is a period at the end of\nthe line. Shouldn't this be removed?\n\n$ pg_dump --filter=/tmp/hoge.filter\npg_dump: error: invalid format in filter read from \"/tmp/hoge.filter\" on line 1: include filter for \"index\" is not allowed.\n\nThe attached patch includes modifications related to the calls to\npg_log_filter_error().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 30 Nov 2023 10:39:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Extra periods in pg_dump messages" }, { "msg_contents": "> On 30 Nov 2023, at 02:39, Kyotaro Horiguchi <[email protected]> wrote:\n\n> In the bleeding-edge version of pg_dump, when a conditionspecifying an\n> index, for example, is described in an object filter file, the\n> following message is output. However, there is a period at the end of\n> the line. Shouldn't this be removed?\n\nYes, ending with a period is for detail and hint messages. Fixed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:07:21 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extra periods in pg_dump messages" } ]
[ { "msg_contents": "Hello.\n\nUpon reviewing my translation, I discovered that filter.c was not\nincluded in the nls.mk of pg_dump. Additional it appears that two '.h'\nfiles have been included for a long time, but they seem unnecessary as\ntheir removal does not affect the results. The attached patch is\nintended to correct these issues.\n\nFor Meson, on the other hand, I believe there is nothing in particular\nthat needs to be done.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 30 Nov 2023 12:00:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump/nls.mk is missing a file" }, { "msg_contents": "> On 30 Nov 2023, at 04:00, Kyotaro Horiguchi <[email protected]> wrote:\n\n> Upon reviewing my translation, I discovered that filter.c was not\n> included in the nls.mk of pg_dump.\n\nFixed. I did leave the other headers in there since I don't feel comfortable\nwith changing that part in an otherwise unrelated thread (and maybe there are\nmore in the tree such that a bigger cleanup is possible?).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:09:09 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump/nls.mk is missing a file" } ]
[ { "msg_contents": "Hi\n\none my customer migrated a pretty large application from Oracle, and when\ndid performance tests, he found very high memory usage related probably to\nunclosed cursors. The overhead is significantly bigger than on Oracle\n(probably Oracle closes cursors after leaving cursor's variable scope, I\ndon't know. Maybe it just uses a different pattern with shorter\ntransactions on Oracle). He cannot use FOR cycle, because he needs to hold\ncode in form that allows automatic translation from PL/SQL to PL/pgSQL for\nsome years (some years he will support both platforms).\n\nDECLARE qNAJUPOSPL refcursor;\nBEGIN\n OPEN qNAJUPOSPL FOR EXECUTE mSqNAJUPOSPL;\n LOOP\n FETCH qNAJUPOSPL INTO mID_NAJVUPOSPL , mID_NAJDATSPLT , mID_PREDPIS;\n EXIT WHEN NOT FOUND; /* apply on qNAJUPOSPL */\n END LOOP;\nEND;\n\nBecause plpgsql and postgres can be referenced just by name then it is not\npossible to use some reference counters and close cursors when the\nreference number is zero. Can we introduce some modifier that forces\nclosing the unclosed cursor before the related scope is left?\n\nSome like `DECLATE curvar refcursor LOCAL`\n\nAnother way to solve this issue is just warning when the number of opened\ncursors crosses some limit. Later this warning can be disabled, increased\nor solved. But investigation of related memory issues can be easy then.\n\nComments, notes?\n\nRegards\n\nPavel\n\nHione my customer migrated a pretty large application from Oracle, and when did performance tests, he found very high memory usage related probably to unclosed cursors. The overhead is significantly bigger than on Oracle (probably Oracle closes cursors after leaving cursor's variable scope, I don't know. Maybe it just uses a different pattern with shorter transactions on Oracle). He cannot use FOR cycle, because he needs to hold code in form that allows automatic translation from PL/SQL to PL/pgSQL for some years (some years he will support both platforms).DECLARE qNAJUPOSPL refcursor;BEGIN  OPEN qNAJUPOSPL FOR EXECUTE\n mSqNAJUPOSPL;\n   LOOP\n     FETCH qNAJUPOSPL INTO mID_NAJVUPOSPL , mID_NAJDATSPLT ,\n mID_PREDPIS;\n     EXIT WHEN NOT FOUND; /* apply on qNAJUPOSPL */\n   END LOOP;END;Because plpgsql and postgres can be referenced just by name then it is not possible to use some reference counters and close cursors when the reference number is zero. Can we introduce some modifier that forces closing the unclosed cursor before the related scope is left?Some like `DECLATE curvar refcursor LOCAL`Another way to solve this issue is just warning when the number of opened cursors crosses some limit. Later this warning can be disabled, increased or solved. But investigation of related memory issues can be easy then. Comments, notes?RegardsPavel", "msg_date": "Thu, 30 Nov 2023 06:45:23 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "possibility to define only local cursors" }, { "msg_contents": "čt 30. 11. 2023 v 6:45 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> one my customer migrated a pretty large application from Oracle, and when\n> did performance tests, he found very high memory usage related probably to\n> unclosed cursors. The overhead is significantly bigger than on Oracle\n> (probably Oracle closes cursors after leaving cursor's variable scope, I\n> don't know. Maybe it just uses a different pattern with shorter\n> transactions on Oracle). 
He cannot use FOR cycle, because he needs to hold\n> code in form that allows automatic translation from PL/SQL to PL/pgSQL for\n> some years (some years he will support both platforms).\n>\n> DECLARE qNAJUPOSPL refcursor;\n> BEGIN\n> OPEN qNAJUPOSPL FOR EXECUTE mSqNAJUPOSPL;\n> LOOP\n> FETCH qNAJUPOSPL INTO mID_NAJVUPOSPL , mID_NAJDATSPLT , mID_PREDPIS;\n> EXIT WHEN NOT FOUND; /* apply on qNAJUPOSPL */\n> END LOOP;\n> END;\n>\n> Because plpgsql and postgres can be referenced just by name then it is not\n> possible to use some reference counters and close cursors when the\n> reference number is zero. Can we introduce some modifier that forces\n> closing the unclosed cursor before the related scope is left?\n>\n> Some like `DECLATE curvar refcursor LOCAL`\n>\n> Another way to solve this issue is just warning when the number of opened\n> cursors crosses some limit. Later this warning can be disabled, increased\n> or solved. But investigation of related memory issues can be easy then.\n>\n\nit can be implemented like extra warning for OPEN statement.\n\n\n\n>\n> Comments, notes?\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n\nčt 30. 11. 2023 v 6:45 odesílatel Pavel Stehule <[email protected]> napsal:Hione my customer migrated a pretty large application from Oracle, and when did performance tests, he found very high memory usage related probably to unclosed cursors. The overhead is significantly bigger than on Oracle (probably Oracle closes cursors after leaving cursor's variable scope, I don't know. Maybe it just uses a different pattern with shorter transactions on Oracle). He cannot use FOR cycle, because he needs to hold code in form that allows automatic translation from PL/SQL to PL/pgSQL for some years (some years he will support both platforms).DECLARE qNAJUPOSPL refcursor;BEGIN  OPEN qNAJUPOSPL FOR EXECUTE\n mSqNAJUPOSPL;\n   LOOP\n     FETCH qNAJUPOSPL INTO mID_NAJVUPOSPL , mID_NAJDATSPLT ,\n mID_PREDPIS;\n     EXIT WHEN NOT FOUND; /* apply on qNAJUPOSPL */\n   END LOOP;END;Because plpgsql and postgres can be referenced just by name then it is not possible to use some reference counters and close cursors when the reference number is zero. Can we introduce some modifier that forces closing the unclosed cursor before the related scope is left?Some like `DECLATE curvar refcursor LOCAL`Another way to solve this issue is just warning when the number of opened cursors crosses some limit. Later this warning can be disabled, increased or solved. But investigation of related memory issues can be easy then. it can be implemented like extra warning for OPEN statement. Comments, notes?RegardsPavel", "msg_date": "Thu, 30 Nov 2023 06:47:57 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: possibility to define only local cursors" } ]
[ { "msg_contents": "Hi hackers,\n\n\n\nI found that dumped view SQL failed to execute due to the explicit cast\n\nof negative number, and I took a look at the defined SQL in view and then\n\nfound -1 in the group by clause. I suppose it’s the main reason the sql\n\ncannot be executed and raised ERROR \"GROUP BY position -1 is not in select list\"\n\n\n\npostgres=# create view v1 as select * from t1 group by a,b,-1::int;\n\nCREATE VIEW\n\npostgres=# \\d+ v1;\n\n View \"public.v1\"\n\n Column | Type | Collation | Nullable | Default | Storage | Description\n\n--------+---------+-----------+----------+---------+---------+-------------\n\n a | integer | | | | plain |\n\n b | integer | | | | plain |\n\nView definition:\n\n SELECT a,\n\n b\n\n FROM t1\n\n GROUP BY a, b, (- 1);\n\n\n\nAfter exploring codes, I suppose we should treat operator plus constant\n\nas -'nnn'::typename instead of const, my patch just did this by handling\n\nOpexpr especially, but I am not sure it’s the best way or not, BTW do you\n\nguys have any suggestions and another approach?\n\n\n\n--\nBest Regards,\nHaotian", "msg_date": "Thu, 30 Nov 2023 06:06:20 +0000", "msg_from": "Haotian Chen <[email protected]>", "msg_from_op": true, "msg_subject": "Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is not\n in select list\"" }, { "msg_contents": "Haotian Chen <[email protected]> writes:\n> postgres=# create view v1 as select * from t1 group by a,b,-1::int;\n> CREATE VIEW\n\nHmm, surely that is a contrived case?\n\n> After exploring codes, I suppose we should treat operator plus constant\n> as -'nnn'::typename instead of const, my patch just did this by handling\n> Opexpr especially, but I am not sure it's the best way or not,\n\nYeah, after some time looking at alternatives, I agree that hacking up\nget_rule_sortgroupclause some more is the least risky way to make this\nwork. We could imagine changing the parser instead but people might\nbe depending on the current parsing behavior.\n\nI don't like your patch too much though, particularly not the arbitrary\n(and undocumented) change in get_const_expr; that seems way too likely\nto have unexpected side-effects. Also, I think that relying on\ngenerate_operator_name to produce exactly '-' (and not, say,\n'pg_catalog.-') is unwise as well as unduly expensive.\n\nThere are, I think, precisely two operators we need to worry about here,\nnamely int4um and numeric_uminus. It'd be cheaper and more reliable to\nidentify those by OID. (If the underlying Const is neither int4 nor\nnumeric, it'll end up getting labeled with a typecast, so that we don't\nneed to worry about anything else.)\n\nAs for getting the right thing to be printed, I think what we might\nwant is to effectively const-fold the expression into a negative\nConst, and then we can just apply get_const_expr with showtype=1.\n(So we'd end with output like '-1'::integer or similar.)\n\nWe could do worse than to implement that by actual const-folding,\nie call expression_planner. Efficiency-wise that's not great, but\nthis is such a weird edge case that I don't think we need to sweat\nabout making it fast. 
The alternative of hand-rolled constant\nfolding code isn't very appealing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 14:57:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is\n not in select list\"" }, { "msg_contents": "> Hmm, surely that is a contrived case?\r\nSorry for forgetting to paste the reproduction. It should be as follows.\r\n\r\n###\r\npsql -d postgres -c 'create table t1(a int, b int, c int)'\r\npsql -d postgres -c 'create view v1 as select a,b,c, -1::int from t1 group by 1,2,3,4'\r\npg_dumpall > /tmp/ddl.sql\r\npsql -d postgres -c 'drop view v1'\r\npsql -d postgres -c 'drop table t1'\r\npsql -d postgres -f /tmp/ddl.sql\r\n\r\npsql:/tmp/ddl.sql:111: ERROR: GROUP BY position -1 is not in select list\r\nLINE 7: GROUP BY a, b, c, (- 1);\r\n ^\r\npsql:/tmp/ddl.sql:114: ERROR: relation \"public.v1\" does not exist\r\n###\r\n\r\n> There are, I think, precisely two operators we need to worry about here,\r\n> namely int4um and numeric_uminus. It'd be cheaper and more reliable to\r\n> identify those by OID.\r\nYes, I updated my patch and just used oid numbers 558 and 1751 stand for\r\nint4um and numeric_uminus. Maybe we could define a macro for them,\r\nbut seems unnecessary.\r\n\r\n> We could do worse than to implement that by actual const-folding,\r\n> ie call expression_planner.\r\nAfter exploring more codes, I also suppose expression_planner is a good choice.\r\n\r\nRegards,\r\nHaotian\r\n\r\n发件人: Tom Lane <[email protected]>\r\n日期: 星期六, 2023年12月2日 03:57\r\n收件人: Haotian Chen <[email protected]>\r\n抄送: [email protected] <[email protected]>\r\n主题: Re: Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is not in select list\"\r\nHaotian Chen <[email protected]> writes:\r\n> postgres=# create view v1 as select * from t1 group by a,b,-1::int;\r\n> CREATE VIEW\r\n\r\nHmm, surely that is a contrived case?\r\n\r\n> After exploring codes, I suppose we should treat operator plus constant\r\n> as -'nnn'::typename instead of const, my patch just did this by handling\r\n> Opexpr especially, but I am not sure it's the best way or not,\r\n\r\nYeah, after some time looking at alternatives, I agree that hacking up\r\nget_rule_sortgroupclause some more is the least risky way to make this\r\nwork. We could imagine changing the parser instead but people might\r\nbe depending on the current parsing behavior.\r\n\r\nI don't like your patch too much though, particularly not the arbitrary\r\n(and undocumented) change in get_const_expr; that seems way too likely\r\nto have unexpected side-effects. Also, I think that relying on\r\ngenerate_operator_name to produce exactly '-' (and not, say,\r\n'pg_catalog.-') is unwise as well as unduly expensive.\r\n\r\nThere are, I think, precisely two operators we need to worry about here,\r\nnamely int4um and numeric_uminus. It'd be cheaper and more reliable to\r\nidentify those by OID. (If the underlying Const is neither int4 nor\r\nnumeric, it'll end up getting labeled with a typecast, so that we don't\r\nneed to worry about anything else.)\r\n\r\nAs for getting the right thing to be printed, I think what we might\r\nwant is to effectively const-fold the expression into a negative\r\nConst, and then we can just apply get_const_expr with showtype=1.\r\n(So we'd end with output like '-1'::integer or similar.)\r\n\r\nWe could do worse than to implement that by actual const-folding,\r\nie call expression_planner. 
Efficiency-wise that's not great, but\r\nthis is such a weird edge case that I don't think we need to sweat\r\nabout making it fast. The alternative of hand-rolled constant\r\nfolding code isn't very appealing.\r\n\r\n regards, tom lane", "msg_date": "Sun, 3 Dec 2023 13:38:33 +0000", "msg_from": "Haotian Chen <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?gb2312?B?tPC4tDogRHVtcGVkIFNRTCBmYWlsZWQgdG8gZXhlY3V0ZSB3aXRoIEVSUk9S?=\n =?gb2312?Q?_\"GROUP_BY_position_-1_is_not_in_select_list\"?=" }, { "msg_contents": "On Mon, 4 Dec 2023 at 02:38, Haotian Chen <[email protected]> wrote:\n> Yes, I updated my patch and just used oid numbers 558 and 1751 stand for\n> int4um and numeric_uminus. Maybe we could define a macro for them,\n> but seems unnecessary.\n\nThe thing to do here is modify pg_operator.dat and give both of these\noperators an \"oid_symbol\". Perhaps Int4NegOperator is ok. (I think\nInt4UnaryMinusOperator might be on the verbose side.). The code that\nparses pg_operator.dat will then define that constant in\npg_operator_d.h. You can then use that and the other ones you defined\nfor the numeric operator instead of hard coding the Oids in the patch.\n\nDavid\n\n\n", "msg_date": "Mon, 4 Dec 2023 11:08:00 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is\n not in select list\"" }, { "msg_contents": "> The thing to do here is modify pg_operator.dat and give both of these\n> operators an \"oid_symbol\". Perhaps Int4NegOperator is ok.\nThanks for suggestions, I replaced hacking oids with Int4NegOperator and NumericNegOperator.\nAnd I also updated some comments and commit message.\n\nPlease feel free to review latest version patch v3-0001-Printing-const-folder-expression-in-ruleutils.c.patch\n\nBest regards,\nHaotian\n\n\nFrom: David Rowley <[email protected]>\nDate: Monday, December 4, 2023 at 06:08\nTo: Haotian Chen <[email protected]>\nCc: Tom Lane <[email protected]>, [email protected] <[email protected]>\nSubject: Re: Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is not in select list\"\nOn Mon, 4 Dec 2023 at 02:38, Haotian Chen <[email protected]> wrote:\n> Yes, I updated my patch and just used oid numbers 558 and 1751 stand for\n> int4um and numeric_uminus. Maybe we could define a macro for them,\n> but seems unnecessary.\n\nThe thing to do here is modify pg_operator.dat and give both of these\noperators an \"oid_symbol\". Perhaps Int4NegOperator is ok. (I think\nInt4UnaryMinusOperator might be on the verbose side.). The code that\nparses pg_operator.dat will then define that constant in\npg_operator_d.h. You can then use that and the other ones you defined\nfor the numeric operator instead of hard coding the Oids in the patch.\n\nDavid", "msg_date": "Mon, 4 Dec 2023 07:58:11 +0000", "msg_from": "Haotian Chen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Dumped SQL failed to execute with ERROR \"GROUP BY position -1 is\n not in select list\"" } ]
[ { "msg_contents": "Hi hackers,\nI found a problem when executing the plpython function:\nAfter the plpython function returns an error, in the same session, if we\ncontinue to execute\nplpython function, the server panic will be caused.\n\n*Reproduce*\npreparation\n\nSET max_parallel_workers_per_gather=4;\nSET parallel_setup_cost=1;\nSET min_parallel_table_scan_size ='4kB';\n\nCREATE TABLE t(i int);\nINSERT INTO t SELECT generate_series(1, 10000);\n\nCREATE EXTENSION plpython3u;\nCREATE OR REPLACE FUNCTION test_func() RETURNS SETOF int AS\n$$\nplpy.execute(\"select pg_backend_pid()\")\n\nfor i in range(0, 5):\n yield (i)\n\n$$ LANGUAGE plpython3u parallel safe;\n\nexecute the function twice in the same session\n\npostgres=# SELECT test_func() from t where i>10 and i<100;\nERROR: error fetching next item from iterator\nDETAIL: Exception: cannot start subtransactions during a parallel\noperation\nCONTEXT: Traceback (most recent call last):\nPL/Python function \"test_func\"\n\npostgres=# SELECT test_func();\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n\n*Analysis*\n\n - There is an SPI call in test_func(): plpy.execute().\n - Then the server will start a subtransaction by\n PLy_spi_subtransaction_begin(); BUT! The Python processor does not know\n whether an error happened during PLy_spi_subtransaction_begin().\n - If there is an error that occurs in PLy_spi_subtransaction_begin(),\n the SPI call will be terminated but the python error indicator won't be set\n and the PyObject won't be free.\n - Then the next plpython UDF in the same session will fail due to the\n wrong Python environment.\n\n\n*Solution*\nUse try-catch to catch the error that occurs in\nPLy_spi_subtransaction_begin(), and set the python error indicator.\n\nWith Regards\nHao Zhang", "msg_date": "Thu, 30 Nov 2023 15:15:55 +0800", "msg_from": "Hao Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] plpython function causes server panic" }, { "msg_contents": "Hao Zhang <[email protected]> writes:\n> I found a problem when executing the plpython function:\n> After the plpython function returns an error, in the same session, if we\n> continue to execute\n> plpython function, the server panic will be caused.\n\nThanks for the report! I see the problem is that we're not expecting\nBeginInternalSubTransaction to fail. However, I'm not sure I like\nthis solution, mainly because it's only covering a fraction of the\nproblem. There are similarly unsafe usages in plperl, pltcl, and\nvery possibly a lot of third-party PLs. I wonder if there's a way\nto deal with this issue without changing these API assumptions.\n\nThe only readily-reachable error case in BeginInternalSubTransaction\nis this specific one about IsInParallelMode, which was added later\nthan the original design and evidently with not a lot of thought or\ntesting. The comment for it speculates about whether we could get\nrid of it, so I wonder if our thoughts about this ought to go in that\ndirection.\n\nIn any case, if we do proceed along the lines of catching errors\nfrom BeginInternalSubTransaction, I think your patch is a bit shy\nof a load because it doesn't do all the same things that other callers\nof PLy_spi_exception_set do. 
Looking at that made me wonder why\nthe PLy_spi_exceptions lookup business was being duplicated by every\ncaller rather than being done once in PLy_spi_exception_set. So\n0001 attached is a quick refactoring patch to remove that code\nduplication, and then 0002 is your patch adapted to that.\n\nI also attempted to include a test case in 0002, but I'm not very\nsatisfied with that. Your original test case seemed pretty expensive\nfor the amount of code coverage it adds, so I tried to make it happen\nwith debug_parallel_query instead. That does exercise the new code,\nbut it does not exhibit the crash if run against unpatched code.\nThat's because with this test case the error is only thrown in worker\nprocesses not the leader, so we don't end up with corrupted Python\nstate in the leader. That result also points up that the original\ntest case isn't very reliable for this either: you have to have\nparallel_leader_participation on, and you have to have the leader\nprocess at least one row, which makes it pretty timing-sensitive.\nOn top of all that, the test would become useless if we do eventually\nget rid of the !IsInParallelMode restriction. So I'm kind of inclined\nto not bother with a test case if this gets to be committed in this\nform.\n\nThoughts anyone?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 01 Dec 2023 20:04:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Hi,\n\nOn 2023-12-01 20:04:15 -0500, Tom Lane wrote:\n> Hao Zhang <[email protected]> writes:\n> > I found a problem when executing the plpython function:\n> > After the plpython function returns an error, in the same session, if we\n> > continue to execute\n> > plpython function, the server panic will be caused.\n> \n> Thanks for the report! I see the problem is that we're not expecting\n> BeginInternalSubTransaction to fail. However, I'm not sure I like\n> this solution, mainly because it's only covering a fraction of the\n> problem. There are similarly unsafe usages in plperl, pltcl, and\n> very possibly a lot of third-party PLs. I wonder if there's a way\n> to deal with this issue without changing these API assumptions.\n\nThere are plenty other uses, but it's not clear to me that they are similarly\naffected by BeginInternalSubTransaction raising an error? It e.g. doesn't\nimmediately look like plperl's usage would be affected in a similar way?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Dec 2023 17:30:07 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-01 20:04:15 -0500, Tom Lane wrote:\n>> Thanks for the report! I see the problem is that we're not expecting\n>> BeginInternalSubTransaction to fail. However, I'm not sure I like\n>> this solution, mainly because it's only covering a fraction of the\n>> problem. There are similarly unsafe usages in plperl, pltcl, and\n>> very possibly a lot of third-party PLs. I wonder if there's a way\n>> to deal with this issue without changing these API assumptions.\n\n> There are plenty other uses, but it's not clear to me that they are similarly\n> affected by BeginInternalSubTransaction raising an error? It e.g. doesn't\n> immediately look like plperl's usage would be affected in a similar way?\n\nWhy not? 
We'd be longjmp'ing out from inside the Perl interpreter.\nMaybe Perl is so robust it doesn't care, but I'd be surprised if this\ncan't break it. The same for Tcl.\n\nI think that plpgsql indeed doesn't care because it has no non-PG\ninterpreter state to worry about. But it's in the minority I fear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 20:46:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "I wrote:\n> The only readily-reachable error case in BeginInternalSubTransaction\n> is this specific one about IsInParallelMode, which was added later\n> than the original design and evidently with not a lot of thought or\n> testing. The comment for it speculates about whether we could get\n> rid of it, so I wonder if our thoughts about this ought to go in that\n> direction.\n\nAfter thinking a bit more I wonder why we need that error check at all.\nWhy isn't it sufficient to rely on GetNewTransactionId()'s check that\nthrows an error if a parallelized subtransaction tries to obtain an XID?\nI don't see why we'd need to \"synchronize transaction state\" about\nanything that never acquires an XID.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 20:51:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Thanks for your reply. These patches look good to me!\n\n> The only readily-reachable error case in BeginInternalSubTransaction\n> is this specific one about IsInParallelMode, which was added later\n> than the original design and evidently with not a lot of thought or\n> testing. The comment for it speculates about whether we could get\n> rid of it, so I wonder if our thoughts about this ought to go in that\n> direction.\n\nIMHO, there are other error reports in the function\nBeginInternalSubTransaction(), like\n```\nereport(ERROR,\n (errcode(ERRCODE_OUT_OF_MEMORY),\n errmsg(\"out of memory\"),\n errdetail(\"Failed on request of size %zu in memory context\n\\\"%s\\\".\",\n size, context->name)));\n```\nwe cannot avoid this crash by just getting rid of IsInParallelMode().\n\nAnd in my test, the server won't crash in the plperl test.\n\nWith regards,\nHao Zhang\n\nTom Lane <[email protected]> 于2023年12月2日周六 09:51写道:\n\n> I wrote:\n> > The only readily-reachable error case in BeginInternalSubTransaction\n> > is this specific one about IsInParallelMode, which was added later\n> > than the original design and evidently with not a lot of thought or\n> > testing. The comment for it speculates about whether we could get\n> > rid of it, so I wonder if our thoughts about this ought to go in that\n> > direction.\n>\n> After thinking a bit more I wonder why we need that error check at all.\n> Why isn't it sufficient to rely on GetNewTransactionId()'s check that\n> throws an error if a parallelized subtransaction tries to obtain an XID?\n> I don't see why we'd need to \"synchronize transaction state\" about\n> anything that never acquires an XID.\n>\n> regards, tom lane\n>\n\nThanks for your reply. These patches look good to me!> The only readily-reachable error case in BeginInternalSubTransaction> is this specific one about IsInParallelMode, which was added later> than the original design and evidently with not a lot of thought or> testing.  
The comment for it speculates about whether we could get> rid of it, so I wonder if our thoughts about this ought to go in that> direction.IMHO, there are other error reports in the function BeginInternalSubTransaction(), like```ereport(ERROR,                (errcode(ERRCODE_OUT_OF_MEMORY),                 errmsg(\"out of memory\"),                 errdetail(\"Failed on request of size %zu in memory context \\\"%s\\\".\",                           size, context->name)));```we cannot avoid this crash by just getting rid of IsInParallelMode().And in my test, the server won't crash in the plperl test.With regards,Hao ZhangTom Lane <[email protected]> 于2023年12月2日周六 09:51写道:I wrote:\n> The only readily-reachable error case in BeginInternalSubTransaction\n> is this specific one about IsInParallelMode, which was added later\n> than the original design and evidently with not a lot of thought or\n> testing.  The comment for it speculates about whether we could get\n> rid of it, so I wonder if our thoughts about this ought to go in that\n> direction.\n\nAfter thinking a bit more I wonder why we need that error check at all.\nWhy isn't it sufficient to rely on GetNewTransactionId()'s check that\nthrows an error if a parallelized subtransaction tries to obtain an XID?\nI don't see why we'd need to \"synchronize transaction state\" about\nanything that never acquires an XID.\n\n                        regards, tom lane", "msg_date": "Mon, 4 Dec 2023 17:21:29 +0800", "msg_from": "Hao Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Hao Zhang <[email protected]> writes:\n>> The only readily-reachable error case in BeginInternalSubTransaction\n>> is this specific one about IsInParallelMode, which was added later\n>> than the original design and evidently with not a lot of thought or\n>> testing. The comment for it speculates about whether we could get\n>> rid of it, so I wonder if our thoughts about this ought to go in that\n>> direction.\n\n> IMHO, there are other error reports in the function\n> BeginInternalSubTransaction(), like\n\nSure, but all the other ones are extremely hard to hit, which is why\nwe didn't bother to worry about them to begin with. If we want to\nmake this more formally bulletproof, my inclination would be to\n(a) get rid of the IsInParallelMode restriction and then (b) turn\nthe function into a critical section, so that any other error gets\ntreated as a PANIC. Maybe at some point we'd be willing to make a\nvariant of BeginInternalSubTransaction that has a different API and\ncan manage such cases without a PANIC, but that seems far down the\nroad to me, and certainly not something to be back-patched.\n\nThe main reason for my caution here is that, by catching an error\nand allowing Python (or Perl, or something else) code to decide\nwhat to do next, we are very dependent on that code doing the right\nthing. This is already a bit of a leap of faith for run-of-the-mill\nerrors. For errors in transaction startup or shutdown, I think it's\na bigger leap than I care to make. 
We're pretty well hosed if we\ncan't make the transaction machinery work, so imagining that we can\nclean up after such an error and march merrily onwards seems mighty\noptimistic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Dec 2023 16:56:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "I wrote:\n> Hao Zhang <[email protected]> writes:\n>> IMHO, there are other error reports in the function\n>> BeginInternalSubTransaction(), like\n\n> Sure, but all the other ones are extremely hard to hit, which is why\n> we didn't bother to worry about them to begin with. If we want to\n> make this more formally bulletproof, my inclination would be to\n> (a) get rid of the IsInParallelMode restriction and then (b) turn\n> the function into a critical section, so that any other error gets\n> treated as a PANIC.\n\nHere's a draft patch along this line. Basically the idea is that\nsubtransactions used for error control are now legal in parallel\nmode (including in parallel workers) so long as they don't try to\nacquire their own XIDs. I had to clean up some error handling\nin xact.c, but really this is a pretty simple patch.\n\nRather than a true critical section (ie PANIC on failure), it seemed\nto me to be enough to force FATAL exit if BeginInternalSubTransaction\nfails. Given the likelihood that our transaction state is messed up\nif we get a failure partway through, it's not clear to me that we\ncould do much better than that even if we were willing to make an API\nchange for BeginInternalSubTransaction.\n\nI haven't thought hard about what new test cases we might want to\nadd for this. It gets through check-world as is, meaning that\nnobody has made any test cases exercising the previous restrictions\neither. There might be more documentation work to be done, too.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 29 Dec 2023 12:55:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Fri, Dec 29, 2023 at 12:56 PM Tom Lane <[email protected]> wrote:\n> Here's a draft patch along this line. Basically the idea is that\n> subtransactions used for error control are now legal in parallel\n> mode (including in parallel workers) so long as they don't try to\n> acquire their own XIDs. I had to clean up some error handling\n> in xact.c, but really this is a pretty simple patch.\n\nI agree with the general direction. A few comments:\n\n- Isn't it redundant to test if IsInParallelMode() ||\nIsParallelWorker()? We can't be in a parallel worker without also\nbeing in parallel mode, except during the worker startup sequence.\n\n- I don't think the documentation changes are entirely accurate. The\nwhole point of the patch is to allow parallel workers to make changes\nto the transaction state, but the documentation says you can't. Maybe\nwe should just delete \"change the transaction state\" entirely from the\nlist of things that you're not allowed to do, since \"write to the\ndatabase\" is already listed separately; or maybe we should replace it\nwith something like \"assign new transaction IDs or command IDs,\"\nalthough that's kind of low-level. 
I don't think we should just delete\nthe \"even temporarily\" bit, as you've done.\n\n- While I like the new comments in BeginInternalSubTransaction(), I\nthink the changes in ReleaseCurrentSubTransaction() and\nRollbackAndReleaseCurrentSubTransaction() need more thought. For one\nthing, it's got to be wildly optimistic to claim that we would have\ncaught *anything* that's forbidden in parallel mode; that would\nrequire solving the halting problem. I'd rather have no comment at all\nhere than one making such an ambitious claim, and I think that might\nbe a fine way to go. But if we do have a comment, I think it should be\nmore narrowly focused e.g. \"We do not check for parallel mode here.\nIt's permissible to start and end subtransactions while in parallel\nmode, as long as no new XIDs or command IDs are assigned.\" One\nadditional thing that might (or might not) be worth mentioning or\nchecking for here is that the leader shouldn't try to reduce the\nheight of the transaction state stack to anything less than what it\nwas when the parallel operation started; if it wants to do that, it\nneeds to clean up the parallel workers and exit parallel mode first.\nSimilarly a worker shouldn't ever end the toplevel transaction except\nduring backend cleanup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:39:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Dec 29, 2023 at 12:56 PM Tom Lane <[email protected]> wrote:\n>> Here's a draft patch along this line. Basically the idea is that\n>> subtransactions used for error control are now legal in parallel\n>> mode (including in parallel workers) so long as they don't try to\n>> acquire their own XIDs. I had to clean up some error handling\n>> in xact.c, but really this is a pretty simple patch.\n\n> I agree with the general direction. A few comments:\n\nThanks for looking at this! I was hoping you'd review it, because\nI thought there was a pretty significant chance that I'd missed some\nfundamental reason it couldn't work. I feel better now about it\nbeing worth pursuing.\n\nI consider the patch draft quality at this point: I didn't spend\nmuch effort on docs or comments, and none on test cases. I'll\nwork on those issues and come back with a v2.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:51:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I agree with the general direction. A few comments:\n\n> - Isn't it redundant to test if IsInParallelMode() ||\n> IsParallelWorker()? We can't be in a parallel worker without also\n> being in parallel mode, except during the worker startup sequence.\n\nHmm. The existing code in AssignTransactionId and\nCommandCounterIncrement tests both, so I figured that the conservative\ncourse was to make DefineSavepoint and friends test both. Are you\nsaying AssignTransactionId and CommandCounterIncrement are wrong?\nIf you're saying you don't believe that these routines are reachable\nduring parallel worker start, that could be true, but I'm not sure\nI want to make that assumption. In any case, surely the xxxSavepoint\nroutines are not hot enough to make it an interesting\nmicro-optimization. 
(Perhaps it is worthwhile in AssignTransactionId\nand CCI, but changing those seems like a job for another patch.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:52:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Fri, Mar 22, 2024 at 1:52 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > I agree with the general direction. A few comments:\n>\n> > - Isn't it redundant to test if IsInParallelMode() ||\n> > IsParallelWorker()? We can't be in a parallel worker without also\n> > being in parallel mode, except during the worker startup sequence.\n>\n> Hmm. The existing code in AssignTransactionId and\n> CommandCounterIncrement tests both, so I figured that the conservative\n> course was to make DefineSavepoint and friends test both. Are you\n> saying AssignTransactionId and CommandCounterIncrement are wrong?\n> If you're saying you don't believe that these routines are reachable\n> during parallel worker start, that could be true, but I'm not sure\n> I want to make that assumption. In any case, surely the xxxSavepoint\n> routines are not hot enough to make it an interesting\n> micro-optimization. (Perhaps it is worthwhile in AssignTransactionId\n> and CCI, but changing those seems like a job for another patch.)\n\nYeah, that's all fair enough. I went back and looked at the history of\nthis and found 94b4f7e2a635c3027a23b07086f740615b56aa64.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:02:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> - I don't think the documentation changes are entirely accurate. The\n> whole point of the patch is to allow parallel workers to make changes\n> to the transaction state, but the documentation says you can't. Maybe\n> we should just delete \"change the transaction state\" entirely from the\n> list of things that you're not allowed to do, since \"write to the\n> database\" is already listed separately; or maybe we should replace it\n> with something like \"assign new transaction IDs or command IDs,\"\n> although that's kind of low-level. I don't think we should just delete\n> the \"even temporarily\" bit, as you've done.\n\nFair enough. In the attached v2, I wrote \"change the transaction\nstate (other than by using a subtransaction for error recovery)\";\nwhat do you think of that?\n\nI dug around in the docs and couldn't really find anything about\nparallel-query transaction limitations other than this bit in\nparallel.sgml and the more or less copy-pasted text in\ncreate_function.sgml; did you have any other spots in mind?\n(I did find the commentary in README.parallel, but that's not\nexactly user-facing.)\n\n> - While I like the new comments in BeginInternalSubTransaction(), I\n> think the changes in ReleaseCurrentSubTransaction() and\n> RollbackAndReleaseCurrentSubTransaction() need more thought.\n\nYah. After studying the code a bit more, I realized that what\nI'd done would cause IsInParallelMode() to start returning false\nduring a subtransaction within parallel mode, which is surely not\nwhat we want. That state has to be heritable into subtransactions\nin some fashion. 
The attached keeps the current semantics of\nparallelModeLevel and adds a bool parallelChildXact field that is\ntrue if any outer transaction level has nonzero parallelModeLevel.\nThat's possibly more general than we need today, but it seems like\na reasonably clean definition.\n\n> One additional thing that might (or might not) be worth mentioning or\n> checking for here is that the leader shouldn't try to reduce the\n> height of the transaction state stack to anything less than what it\n> was when the parallel operation started; if it wants to do that, it\n> needs to clean up the parallel workers and exit parallel mode first.\n> Similarly a worker shouldn't ever end the toplevel transaction except\n> during backend cleanup.\n\nI think these things are already dealt with. However, one thing\nworth questioning is that CommitSubTransaction() will just silently\nkill any workers started during the current subxact, and likewise\nCommitTransaction() zaps workers without complaint. Shouldn't these\ninstead throw an error about how you didn't close parallel mode,\nand then the corresponding Abort function does the cleanup?\nI did not change that behavior here, but it seems dubious.\n\nv2 attached works a bit harder on the comments and adds a simplistic\ntest case. I feel that I don't want to incorporate the plpython\ncrash that started this thread, as it's weird and dependent on\nPython code outside our control (though I have checked that we\ndon't crash on that anymore).\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 22 Mar 2024 16:37:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Fri, Mar 22, 2024 at 4:37 PM Tom Lane <[email protected]> wrote:\n> Fair enough. In the attached v2, I wrote \"change the transaction\n> state (other than by using a subtransaction for error recovery)\";\n> what do you think of that?\n\nI think that's pretty good. I wonder if there are some bizarre cases\nwhere the patch would allow slightly more than that ... who is to say\nthat you must pop the subtransaction you pushed? But that sort of\npedantry is probably not worth worrying about for purposes of the\ndocumentation, especially because such a thing might not be a very\ngood idea anyway.\n\n> I dug around in the docs and couldn't really find anything about\n> parallel-query transaction limitations other than this bit in\n> parallel.sgml and the more or less copy-pasted text in\n> create_function.sgml; did you have any other spots in mind?\n> (I did find the commentary in README.parallel, but that's not\n> exactly user-facing.)\n\nI don't have anything else in mind at the moment.\n\n> I think these things are already dealt with. However, one thing\n> worth questioning is that CommitSubTransaction() will just silently\n> kill any workers started during the current subxact, and likewise\n> CommitTransaction() zaps workers without complaint. Shouldn't these\n> instead throw an error about how you didn't close parallel mode,\n> and then the corresponding Abort function does the cleanup?\n> I did not change that behavior here, but it seems dubious.\n\nI'm not sure. I definitely knew when I wrote this code that we often\nemit warnings about resources that aren't cleaned up at (sub)commit\ntime rather than just silently releasing them, and I feel like the\nfact that I didn't implement that behavior here was probably a\ndeliberate choice to avoid some problem. 
But I have no memory of what\nthat problem was, and it is entirely possible that it was eliminated\nat some later phase of development. I think that decision was made\nquite early, before much of anything was working.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Mar 2024 08:55:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 4:37 PM Tom Lane <[email protected]> wrote:\n>> I think these things are already dealt with. However, one thing\n>> worth questioning is that CommitSubTransaction() will just silently\n>> kill any workers started during the current subxact, and likewise\n>> CommitTransaction() zaps workers without complaint. Shouldn't these\n>> instead throw an error about how you didn't close parallel mode,\n>> and then the corresponding Abort function does the cleanup?\n>> I did not change that behavior here, but it seems dubious.\n\n> I'm not sure. I definitely knew when I wrote this code that we often\n> emit warnings about resources that aren't cleaned up at (sub)commit\n> time rather than just silently releasing them, and I feel like the\n> fact that I didn't implement that behavior here was probably a\n> deliberate choice to avoid some problem.\n\nAh, right, it's reasonable to consider this an end-of-xact resource\nleak, which we generally handle with WARNING not ERROR. And I see\nthat AtEOXact_Parallel and AtEOSubXact_Parallel already do\n\n if (isCommit)\n elog(WARNING, \"leaked parallel context\");\n\nHowever, the calling logic seems a bit shy of a load, in that it\ntrusts IsInParallelMode() completely to decide whether to check for\nleaked parallel contexts. So we'd miss the case where somebody did\nExitParallelMode without having cleaned up workers. It's not like\nAtEOXact_Parallel and AtEOSubXact_Parallel cost a lot when they have\nnothing to do, so I think we should call them unconditionally, and\nseparately from that issue a warning if parallelModeLevel isn't zero\n(and we're committing).\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 23 Mar 2024 12:31:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Sat, Mar 23, 2024 at 12:31 PM Tom Lane <[email protected]> wrote:\n> However, the calling logic seems a bit shy of a load, in that it\n> trusts IsInParallelMode() completely to decide whether to check for\n> leaked parallel contexts. So we'd miss the case where somebody did\n> ExitParallelMode without having cleaned up workers. It's not like\n> AtEOXact_Parallel and AtEOSubXact_Parallel cost a lot when they have\n> nothing to do, so I think we should call them unconditionally, and\n> separately from that issue a warning if parallelModeLevel isn't zero\n> (and we're committing).\n\nI wasn't worried about this case when I wrote this code. 
The general\nflow that I anticipated was that somebody would run a query, and\nExecMain.c would enter parallel mode, and then maybe eventually reach\nsome SQL-callable C function that hadn't gotten the memo about\nparallel query but had been mistakenly labelled as PARALLEL RESTRICTED\nor PARALLEL SAFE when it wasn't really, and so the goal was for core\nfunctions that such a function might reasonably attempt to call to\nnotice that something bad was happening.\n\nBut if the user puts a call to ExitParallelMode() inside such a\nfunction, it's hard to imagine what goal they have other than to\ndeliberately circumvent the safeguards. And they're always going to be\nable to do that somehow, if they're coding in C. So I'm not convinced\nthat the sanity checks you've added are really going to do anything\nother than burn a handful of CPU cycles. If there's some plausible\ncase in which they protect us against a user who has legitimately made\nan error, fine; but if we're just wandering down the slippery slope of\nbelieving we can defend against malicious C code, we absolutely should\nnot do that, not even a little bit. The first CPU instruction we burn\nin the service of a hopeless cause is already one too many.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:16:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Mar 23, 2024 at 12:31 PM Tom Lane <[email protected]> wrote:\n>> However, the calling logic seems a bit shy of a load, in that it\n>> trusts IsInParallelMode() completely to decide whether to check for\n>> leaked parallel contexts. So we'd miss the case where somebody did\n>> ExitParallelMode without having cleaned up workers.\n\n> But if the user puts a call to ExitParallelMode() inside such a\n> function, it's hard to imagine what goal they have other than to\n> deliberately circumvent the safeguards. And they're always going to be\n> able to do that somehow, if they're coding in C. So I'm not convinced\n> that the sanity checks you've added are really going to do anything\n> other than burn a handful of CPU cycles. If there's some plausible\n> case in which they protect us against a user who has legitimately made\n> an error, fine; but if we're just wandering down the slippery slope of\n> believing we can defend against malicious C code, we absolutely should\n> not do that, not even a little bit. The first CPU instruction we burn\n> in the service of a hopeless cause is already one too many.\n\nBy that logic, we should rip out every Assert in the system, as well\nas all of the (extensive) resource leak checking that already happens\nduring CommitTransaction. We've always felt that those leak checks\nwere worth the cost to help us find bugs --- which they have done and\nstill do from time to time. 
I don't see why this case is different,\nespecially when the added cost compared to HEAD is not much more than\none C function call.\n\nOr in other words: the point is not about stopping malicious C code,\nit's about recognizing that we make mistakes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:36:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:36 AM Tom Lane <[email protected]> wrote:\n> By that logic, we should rip out every Assert in the system, as well\n> as all of the (extensive) resource leak checking that already happens\n> during CommitTransaction. We've always felt that those leak checks\n> were worth the cost to help us find bugs --- which they have done and\n> still do from time to time. I don't see why this case is different,\n> especially when the added cost compared to HEAD is not much more than\n> one C function call.\n\nWell, I explained why *I* thought it was different, but obviously you\ndon't agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:50:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Mar 25, 2024 at 11:36 AM Tom Lane <[email protected]> wrote:\n>> ... I don't see why this case is different,\n>> especially when the added cost compared to HEAD is not much more than\n>> one C function call.\n\n> Well, I explained why *I* thought it was different, but obviously you\n> don't agree.\n\nAfter mulling it over for awhile, I still think the extra checking\nis appropriate, especially since this patch is enlarging the set of\nthings that can happen in parallel mode. How do you want to proceed?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Mar 2024 17:28:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Wed, Mar 27, 2024 at 5:28 PM Tom Lane <[email protected]> wrote:\n> After mulling it over for awhile, I still think the extra checking\n> is appropriate, especially since this patch is enlarging the set of\n> things that can happen in parallel mode. How do you want to proceed?\n\nI sort of assumed you were going to commit the patch as you had it.\nI'm not a huge fan of that, but I don't think that's it's catastrophe,\neither. It pains me a bit to add CPU cycles that I consider\nunnecessary to a very frequently taken code path, but as you say, it's\nnot a lot of CPU cycles, so maybe nobody will ever notice. 
I actually\nreally wish we could find some way of making subtransactions\nsignificantly lighter-wait, because I think the cost of spinning up\nand tearing down a trivial subtransaction is a real performance\nproblem, but fixing that is probably a pretty hard problem whether\nthis patch gets committed or not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 10:22:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I sort of assumed you were going to commit the patch as you had it.\n\nOK, I will move ahead on that.\n\n> I actually\n> really wish we could find some way of making subtransactions\n> significantly lighter-wait, because I think the cost of spinning up\n> and tearing down a trivial subtransaction is a real performance\n> problem, but fixing that is probably a pretty hard problem whether\n> this patch gets committed or not.\n\nYeah. The whole ResourceOwner mechanism is not exactly lightweight,\nbut it's hard to argue that we don't need it. I wonder whether we\ncould get anywhere by deeming that a \"small enough\" subtransaction\ndoesn't need to have its resources cleaned up instantly, and\ninstead re-use its ResourceOwner to accumulate resources of the\nnext subtransaction, and the next, until there's enough to be\nworth cleaning up.\n\nHaving said that, it's hard to see any regime under which tied-up\nparallel workers wouldn't count as a resource worth releasing ASAP.\nI started this mail with the idea of suggesting that parallel contexts\nought to become a ResourceOwner-managed resource, but maybe that\nwouldn't be an improvement after all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 10:59:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Thu, Mar 28, 2024 at 10:59 AM Tom Lane <[email protected]> wrote:\n> Yeah. The whole ResourceOwner mechanism is not exactly lightweight,\n> but it's hard to argue that we don't need it. I wonder whether we\n> could get anywhere by deeming that a \"small enough\" subtransaction\n> doesn't need to have its resources cleaned up instantly, and\n> instead re-use its ResourceOwner to accumulate resources of the\n> next subtransaction, and the next, until there's enough to be\n> worth cleaning up.\n\nHmm, I wonder if that's actually where the cycles are going. There's\nan awful lot of separate function calls inside CommitSubTransaction(),\nand in the common case, each one of them has to individually decide\nthat it doesn't need to do anything. Sure, they're all fast, but if\nyou have enough of them, it's still going to add up, at least a bit.\nIn that sense, the resource owner mechanism seems like it should, or\nat least could, be better. 
I'm not sure this is quite the way it works\nnow, but if you had one single list/array/thingamabob that listed all\nof the resources that needed releasing, that should in theory be\nbetter when there's a lot of kinds of resources that you COULD hold\nbut only a small number of kinds of resources that you actually do\nhold -- and it also shouldn't be any worse if it turns out that you\nhold a whole lot of resources of many different types.\n\nBut I haven't done any benchmarking of this area in a long time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 11:27:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Hmm, I wonder if that's actually where the cycles are going. There's\n> an awful lot of separate function calls inside CommitSubTransaction(),\n> and in the common case, each one of them has to individually decide\n> that it doesn't need to do anything. Sure, they're all fast, but if\n> you have enough of them, it's still going to add up, at least a bit.\n> In that sense, the resource owner mechanism seems like it should, or\n> at least could, be better.\n\nYeah, I was thinking about that too. The normal case is that you\ndon't hold any releasable resources except locks when arriving at\nCommitSubTransaction --- if you do, it's a bug and we're going to\nprint leak warnings. Seems like maybe it'd be worth trying to\nhave a fast path for that case. (Also, given that we probably\ndo need to release locks right away, this point invalidates my\nearlier idea of postponing the work.)\n\n> But I haven't done any benchmarking of this area in a long time.\n\nDitto.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 11:50:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Thu, Mar 28, 2024 at 11:50 AM Tom Lane <[email protected]> wrote:\n> Yeah, I was thinking about that too. The normal case is that you\n> don't hold any releasable resources except locks when arriving at\n> CommitSubTransaction --- if you do, it's a bug and we're going to\n> print leak warnings. Seems like maybe it'd be worth trying to\n> have a fast path for that case.\n\nWell, there's the abort case, too, which I think is almost equally important.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 11:59:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Mar 28, 2024 at 11:50 AM Tom Lane <[email protected]> wrote:\n>> Yeah, I was thinking about that too. The normal case is that you\n>> don't hold any releasable resources except locks when arriving at\n>> CommitSubTransaction --- if you do, it's a bug and we're going to\n>> print leak warnings. 
Seems like maybe it'd be worth trying to\n>> have a fast path for that case.\n\n> Well, there's the abort case, too, which I think is almost equally important.\n\nTrue, but in the abort case there probably *are* resources to be\ncleaned up, so I'm not seeing that the fast-path idea helps.\nAlthough maybe the idea of batching multiple cleanups would?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Mar 2024 12:01:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" }, { "msg_contents": "On Thu, Mar 28, 2024 at 12:01 PM Tom Lane <[email protected]> wrote:\n> > Well, there's the abort case, too, which I think is almost equally important.\n>\n> True, but in the abort case there probably *are* resources to be\n> cleaned up, so I'm not seeing that the fast-path idea helps.\n> Although maybe the idea of batching multiple cleanups would?\n\nYes, I think we should be trying to optimize for the case where the\n(sub)transaction being cleaned up holds a small but non-zero number of\nresources. I think if we just optimize the case where it's exactly\nzero, there will be enough cases where the optimization doesn't apply\nthat we'll feel like we haven't really solved the problem. Whether the\nspecific idea of trying to batch the cleanups could be made to help\nenough to matter, I'm not quite sure. Another idea I had at one point\nwas to have some kind of bitmask where each bit tells you whether or\nnot one particular resource type might be held, so that\n{Commit,Abort}{Sub,}Transaction would end up doing a bunch of stuff\nlike if (ResourcesNeedingCleanup & MIGHT_HOLD_THINGY)\nAtEO(Sub)Xact_Thingy(...). But I wasn't sure that would really move\nthe needle, either. This seems to be one of those annoying cases where\nthe problem is much more obvious than the solution.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 12:27:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] plpython function causes server panic" } ]
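To make the resource-bitmask idea floated at the end of that thread a bit more concrete, here is a minimal, self-contained toy in C. Everything in it is invented for illustration (the flag names, the release_* helpers, the single global); it is not PostgreSQL code and not a patch from the thread, just a model of "set a bit when a resource kind is first acquired, then visit only the flagged cleanup routines at subtransaction end":

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t ResourceFlags;

#define MIGHT_HOLD_BUFFER_PINS    (1u << 0)
#define MIGHT_HOLD_RELCACHE_REFS  (1u << 1)
#define MIGHT_HOLD_PARALLEL_CTX   (1u << 2)

static ResourceFlags resources_needing_cleanup = 0;

/* Stand-ins for the per-subsystem "AtEO(Sub)Xact_Thingy" routines. */
static void release_parallel_ctx(bool is_commit)  { (void) is_commit; puts("parallel contexts released"); }
static void release_buffer_pins(bool is_commit)   { (void) is_commit; puts("buffer pins released"); }
static void release_relcache_refs(bool is_commit) { (void) is_commit; puts("relcache refs released"); }

/* Each subsystem calls this the first time it hands out a resource. */
static void
note_resource_held(ResourceFlags flag)
{
    resources_needing_cleanup |= flag;
}

/*
 * End-of-subtransaction path: subsystems whose bit is clear are skipped
 * entirely, so the common "holds nothing but locks" case stays cheap.
 */
static void
cleanup_subxact_resources(bool is_commit)
{
    if (resources_needing_cleanup & MIGHT_HOLD_PARALLEL_CTX)
        release_parallel_ctx(is_commit);
    if (resources_needing_cleanup & MIGHT_HOLD_BUFFER_PINS)
        release_buffer_pins(is_commit);
    if (resources_needing_cleanup & MIGHT_HOLD_RELCACHE_REFS)
        release_relcache_refs(is_commit);
    resources_needing_cleanup = 0;
}

int main(void)
{
    note_resource_held(MIGHT_HOLD_BUFFER_PINS);
    cleanup_subxact_resources(true);   /* only the buffer-pin branch runs */
    return 0;
}
```

The caveat raised in the exchange still applies: an aborting subtransaction usually does hold resources, so the potential win is in skipping the many subsystems that hold nothing, not in skipping cleanup altogether.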
[ { "msg_contents": "Hi,\n\nDuring logical decoding, if there is a large write transaction, some\nspill files will be written to disk,\ndepending on the setting of max_changes_in_memory.\n\nThis behavior can effectively avoid OOM, but if the transaction\ngenerates a lot of change before commit,\na large number of files may fill the disk. For example, you can update\na TB-level table.\nOf course, this is also inevitable.\n\nBut I found an inelegant phenomenon. If the updated large table is not\npublished, its changes will also\nbe written with a large number of spill files. Look at an example below:\n\npublisher:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE PUBLICATION mypub FOR TABLE public.tbl_pub;\n```\n\nsubscriber:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE SUBSCRIPTION mysub CONNECTION 'host=127.0.0.1 port=5432\nuser=postgres dbname=postgres' PUBLICATION mypub;\n```\n\npublisher:\n```\nbegin;\ninsert into tbl_t1 select i,repeat('xyzzy', i),repeat('abcba',\ni),repeat('dfds', i) from generate_series(0,999999) i;\n```\n\nLater you will see a large number of spill files in the\n\"/$PGDATA/pg_replslot/mysub/\" directory.\n```\n$ll -sh\ntotal 4.5G\n4.0K -rw------- 1 postgres postgres 200 Nov 30 09:24 state\n17M -rw------- 1 postgres postgres 17M Nov 30 08:22 xid-750-lsn-0-10000000.spill\n12M -rw------- 1 postgres postgres 12M Nov 30 08:20 xid-750-lsn-0-1000000.spill\n17M -rw------- 1 postgres postgres 17M Nov 30 08:23 xid-750-lsn-0-11000000.spill\n......\n```\n\nWe can see that table tbl_t1 is not published in mypub. It is also not\nsent downstream because it is subscribed.\nAfter the transaction is reorganized, the pgoutput decoding plug-in\nfilters out these change of unpublished relation\nwhen sending logical changes. see function pgoutput_change.\n\nAbove all, if after constructing a change and before queuing a change\ninto a transaction, we filter out unpublished\nrelation-related changes, will it make logical decoding less laborious\nand avoid disk growth as much as possible?\n\n\nThis is just an immature idea. I haven't started to implement it yet.\nMaybe it was designed this way because there\nare key factors that I didn't consider. So I want to hear everyone's\nopinions, especially the designers of logic decoding.\n\n\n", "msg_date": "Thu, 30 Nov 2023 17:47:57 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "> This is just an immature idea. I haven't started to implement it yet.\n> Maybe it was designed this way because there\n> are key factors that I didn't consider. So I want to hear everyone's\n> opinions, especially the designers of logic decoding.\n\nAttached is the patch I used to implement this optimization.\nThe main designs are as follows:\n1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n\n2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\n Its main implementation is based on the table filter in the\npgoutput_change function.\n\n3. After constructing a change and before Queue a change into a transaction,\n use RelidByRelfilenumber to obtain the relation associated with the change,\n just like obtaining the relation in the ReorderBufferProcessTXN function.\n\n4. 
Relation may be a toast, and there is no good way to get its real\ntable relation\n based on toast relation. Here, I get the real table oid through\ntoast relname, and\n then get the real table relation.\n\n5. This filtering takes into account INSERT/UPDATE/INSERT. Other\nchanges have not\n been considered yet and can be expanded in the future.\n\n6. The table filter in pgoutput_change and the get relation in\nReorderBufferProcessTXN\n can be deleted. This has not been done yet. This is the next step.\n\nSincerely look forward to your feedback.\nRegards, lijie", "msg_date": "Fri, 1 Dec 2023 16:25:17 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "On Fri, Dec 1, 2023 at 1:55 PM li jie <[email protected]> wrote:\n>\n> > This is just an immature idea. I haven't started to implement it yet.\n> > Maybe it was designed this way because there\n> > are key factors that I didn't consider. So I want to hear everyone's\n> > opinions, especially the designers of logic decoding.\n>\n> Attached is the patch I used to implement this optimization.\n> The main designs are as follows:\n> 1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n>\n> 2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\n> Its main implementation is based on the table filter in the\n> pgoutput_change function.\n>\n> 3. After constructing a change and before Queue a change into a transaction,\n> use RelidByRelfilenumber to obtain the relation associated with the change,\n> just like obtaining the relation in the ReorderBufferProcessTXN function.\n>\n> 4. Relation may be a toast, and there is no good way to get its real\n> table relation\n> based on toast relation. Here, I get the real table oid through\n> toast relname, and\n> then get the real table relation.\n>\n\nThis may be helpful for the case you have mentioned but how about\ncases where there is nothing to filter by relation? It will add\noverhead related to the transaction start/end and others for each\nchange. Currently, we do that just once for all the changes that need\nto be processed. I wonder why the spilling can't be avoided with GUC\nlogical_decoding_work_mem?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 2 Dec 2023 09:41:08 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": ">\n> This may be helpful for the case you have mentioned but how about\n> cases where there is nothing to filter by relation?\n\nYou can see the complete antecedent in the email [1]. Relation that has\nnot been published will also generate changes and put them into the entire\ntransaction group, which will increase invalid memory or disk space.\n\n> It will add\n> overhead related to the transaction start/end and others for each\n> change. Currently, we do that just once for all the changes that need\n> to be processed.\n\nYes, it will only be processed once at present. It is done when applying\neach change when the transaction is committed. 
This patch hopes to\nadvance it to the time when constructing the change, and determines the\nchange queue into a based on whether the relation is published.\n\n> I wonder why the spilling can't be avoided with GUC\n> logical_decoding_work_mem?\n\nOf course you can, but this will only convert disk space into memory space.\n For details, please see the case in Email [1].\n\nRegards, lijie\n\n\n", "msg_date": "Mon, 4 Dec 2023 10:01:43 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "> This may be helpful for the case you have mentioned but how about\n> cases where there is nothing to filter by relation?\n\nYou can see the complete antecedent in the email [1]. Relation that has\nnot been published will also generate changes and put them into the entire\ntransaction group, which will increase invalid memory or disk space.\n\n> It will add\n> overhead related to the transaction start/end and others for each\n> change. Currently, we do that just once for all the changes that need\n> to be processed.\n\nYes, it will only be processed once at present. It is done when applying\neach change when the transaction is committed. This patch hopes to\nadvance it to the time when constructing the change, and determines the\nchange queue into a based on whether the relation is published.\n\n> I wonder why the spilling can't be avoided with GUC\n> logical_decoding_work_mem?\n\nOf course you can, but this will only convert disk space into memory space.\n For details, please see the case in Email [1].\n\n[1] https://www.postgresql.org/message-id/CAGfChW51P944nM5h0HTV9HistvVfwBxNaMt_s-OZ9t%3DuXz%2BZbg%40mail.gmail.com\n\nRegards, lijie\n\nAmit Kapila <[email protected]> 于2023年12月2日周六 12:11写道:\n>\n> On Fri, Dec 1, 2023 at 1:55 PM li jie <[email protected]> wrote:\n> >\n> > > This is just an immature idea. I haven't started to implement it yet.\n> > > Maybe it was designed this way because there\n> > > are key factors that I didn't consider. So I want to hear everyone's\n> > > opinions, especially the designers of logic decoding.\n> >\n> > Attached is the patch I used to implement this optimization.\n> > The main designs are as follows:\n> > 1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n> >\n> > 2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\n> > Its main implementation is based on the table filter in the\n> > pgoutput_change function.\n> >\n> > 3. After constructing a change and before Queue a change into a transaction,\n> > use RelidByRelfilenumber to obtain the relation associated with the change,\n> > just like obtaining the relation in the ReorderBufferProcessTXN function.\n> >\n> > 4. Relation may be a toast, and there is no good way to get its real\n> > table relation\n> > based on toast relation. Here, I get the real table oid through\n> > toast relname, and\n> > then get the real table relation.\n> >\n>\n> This may be helpful for the case you have mentioned but how about\n> cases where there is nothing to filter by relation? It will add\n> overhead related to the transaction start/end and others for each\n> change. Currently, we do that just once for all the changes that need\n> to be processed. 
I wonder why the spilling can't be avoided with GUC\n> logical_decoding_work_mem?\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Mon, 4 Dec 2023 10:05:58 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": ">\n> Of course you can, but this will only convert disk space into memory space.\n> For details, please see the case in Email [1].\n>\n> [1]\n> https://www.postgresql.org/message-id/CAGfChW51P944nM5h0HTV9HistvVfwBxNaMt_s-OZ9t%3DuXz%2BZbg%40mail.gmail.com\n>\n> Regards, lijie\n>\n>\nHi lijie,\n\nOverall, I think the patch is a good improvement. Some comments from first\nrun through of patch:\n1. The patch no longer applies cleanly, please rebase.\n\n2. While testing the patch, I saw something strange. If I try to truncate a\ntable that is published. I still see the message:\n2024-03-18 22:25:51.243 EDT [29385] LOG: logical filter change by table\npg_class\n\nThis gives the impression that the truncate operation on the published\ntable has been filtered but it hasn't. Also the log message needs to be\nreworded. Maybe, \"Logical filtering change by non-published table\n<relation_name>\"\n\n3. Below code:\n@@ -1201,11 +1343,14 @@ DecodeMultiInsert(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf)\n+\n+ if (FilterByTable(ctx, change))\n+ continue;;\n\nextra semi-colon after continue.\n\n4. I am not sure if this is possible, but is there a way to avoid the\noverhead in the patch if the publication publishes \"ALL TABLES\"?\n\n5. In function: pgoutput_table_filter() - this code appears to be filtering\nout not just unpublished tables but also applying row based filters on\npublished tables as well. Is this really within the scope of the feature?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n\nOf course you can, but this will only convert disk space into memory space.\n For details, please see the case in Email [1].\n\n[1] https://www.postgresql.org/message-id/CAGfChW51P944nM5h0HTV9HistvVfwBxNaMt_s-OZ9t%3DuXz%2BZbg%40mail.gmail.com\n\nRegards, lijie\nHi lijie,Overall, I think the patch is a good improvement. Some comments from first run through of patch:1. The patch no longer applies cleanly, please rebase.2. While testing the patch, I saw something strange. If I try to truncate a table that is published. I still see the message:2024-03-18 22:25:51.243 EDT [29385] LOG:  logical filter change by table pg_classThis gives the impression that the truncate operation on the published table has been filtered but it hasn't. Also the log message needs to be reworded. Maybe, \"Logical filtering change by non-published table <relation_name>\"3. Below code:@@ -1201,11 +1343,14 @@ DecodeMultiInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)++\t\tif (FilterByTable(ctx, change))+\t\t\tcontinue;;extra semi-colon after continue.4. I am not sure if this is possible, but is there a way to avoid the overhead in the patch if the publication publishes \"ALL TABLES\"?5. In function: pgoutput_table_filter() - this code appears to be filtering out not just unpublished tables but also applying row based filters on published tables as well. Is this really within the scope of the feature? 
regards,Ajin CherianFujitsu Australia", "msg_date": "Tue, 19 Mar 2024 13:49:57 +1100", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "Dear Li,\r\n\r\nThanks for proposing and sharing the PoC. Here are my high-level comments.\r\n\r\n1.\r\nWhat if ALTER PUBLICATION ... ADD is executed in parallel?\r\nShould we publish added tables if the altering is done before the transaction is\r\ncommitted? The current patch seems unable to do so because changes for added\r\ntables have not been queued at COMMIT.\r\nIf we should not publish such tables, why?\r\n\r\n2.\r\nThis patch could not apply as-is. Please rebase.\r\n\r\n3. FilterByTable()\r\n\r\n```\r\n+ if (ctx->callbacks.filter_by_origin_cb == NULL)\r\n+ return false;\r\n```\r\n\r\nfilter_by_table_cb should be checked instead of filter_by_origin_cb.\r\nCurrent patch crashes if the filter_by_table_cb() is not implemented.\r\n\r\n4. DecodeSpecConfirm()\r\n\r\n```\r\n+ if (FilterByTable(ctx, change))\r\n+ return;\r\n+\r\n```\r\n\r\nI'm not sure it is needed. Can you explain the reason why you added?\r\n\r\n5. FilterByTable\r\n\r\n```\r\n+ switch (change->action)\r\n+ {\r\n+ /* intentionally fall through */\r\n+ case REORDER_BUFFER_CHANGE_INSERT:\r\n+ case REORDER_BUFFER_CHANGE_UPDATE:\r\n+ case REORDER_BUFFER_CHANGE_DELETE:\r\n+ break;\r\n+ default:\r\n+ return false;\r\n+ }\r\n```\r\n\r\nIIUC, REORDER_BUFFER_CHANGE_TRUNCATE also targes the user table, so I think\r\nit should be accepted. Thought?\r\n\r\n6.\r\n\r\nI got strange errors when I tested the feature. I thought this implied there were\r\nbugs in your patch.\r\n\r\n1. implemented no-op filter atop test_decoding like attached\r\n2. ran `make check` for test_decoding modle\r\n3. some tests failed. 
Note that \"filter\" was a test added by me.\r\n regression.diffs was also attached.\r\n\r\n```\r\nnot ok 1 - ddl 970 ms\r\nok 2 - xact 36 ms\r\nnot ok 3 - rewrite 525 ms\r\nnot ok 4 - toast 736 ms\r\nok 5 - permissions 50 ms\r\nok 6 - decoding_in_xact 39 ms\r\nnot ok 7 - decoding_into_rel 57 ms\r\nok 8 - binary 21 ms\r\nnot ok 9 - prepared 33 ms\r\nok 10 - replorigin 93 ms\r\nok 11 - time 25 ms\r\nok 12 - messages 47 ms\r\nok 13 - spill 8063 ms\r\nok 14 - slot 124 ms\r\nok 15 - truncate 37 ms\r\nnot ok 16 - stream 60 ms\r\nok 17 - stats 157 ms\r\nok 18 - twophase 122 ms\r\nnot ok 19 - twophase_stream 57 ms\r\nnot ok 20 - filter 20 ms\r\n```\r\n\r\nCurrently I'm not 100% sure the reason, but I think it may read the latest system\r\ncatalog even if ALTER SUBSCRIPTION is executed after changes.\r\nIn below example, an attribute is altered text->somenum, after inserting data.\r\nBut get_changes() outputs as somenum.\r\n\r\n```\r\n BEGIN\r\n- table public.replication_example: INSERT: id[integer]:1 somedata[integer]:1 text[character varying]:'1'\r\n- table public.replication_example: INSERT: id[integer]:2 somedata[integer]:1 text[character varying]:'2'\r\n+ table public.replication_example: INSERT: id[integer]:1 somedata[integer]:1 somenum[character varying]:'1'\r\n+ table public.replication_example: INSERT: id[integer]:2 somedata[integer]:1 somenum[character varying]:'2'\r\n COMMIT\r\n```\r\n\r\nAlso, if the relfilenuber of the relation is changed, an ERROR is raised.\r\n\r\n```\r\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\r\n- data \r\n-----------------------------------------------------------------------------\r\n- BEGIN\r\n- table public.tr_pkey: INSERT: id2[integer]:2 data[integer]:1 id[integer]:2\r\n- COMMIT\r\n- BEGIN\r\n- table public.tr_pkey: DELETE: id[integer]:1\r\n- table public.tr_pkey: DELETE: id[integer]:2\r\n- COMMIT\r\n-(7 rows)\r\n-\r\n+ERROR: could not map filenumber \"base/16384/16397\" to relation OID\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Mon, 20 May 2024 04:58:31 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "On Tue, Mar 19, 2024 at 1:49 PM Ajin Cherian <[email protected]> wrote:\n\n>\n>\n>> Of course you can, but this will only convert disk space into memory\n>> space.\n>> For details, please see the case in Email [1].\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/CAGfChW51P944nM5h0HTV9HistvVfwBxNaMt_s-OZ9t%3DuXz%2BZbg%40mail.gmail.com\n>>\n>> Regards, lijie\n>>\n>>\n>\nIn some testing, I see a crash:\n(gdb) bt\n#0 0x00007fa5bcbfd277 in raise () from /lib64/libc.so.6\n#1 0x00007fa5bcbfe968 in abort () from /lib64/libc.so.6\n#2 0x00000000009e0940 in ExceptionalCondition (\n conditionName=conditionName@entry=0x7fa5ab8b9842 \"RelationSyncCache !=\nNULL\",\n fileName=fileName@entry=0x7fa5ab8b9820 \"pgoutput.c\",\nlineNumber=lineNumber@entry=1991)\n at assert.c:66\n#3 0x00007fa5ab8b7804 in get_rel_sync_entry (data=data@entry=0x2492288,\n relation=relation@entry=0x7fa5be30a768) at pgoutput.c:1991\n#4 0x00007fa5ab8b7cda in pgoutput_table_filter (ctx=<optimized out>,\nrelation=0x7fa5be30a768,\n change=0x24c5c20) at pgoutput.c:1671\n#5 0x0000000000813761 in filter_by_table_cb_wrapper (ctx=ctx@entry=0x2491fd0,\n\n 
relation=relation@entry=0x7fa5be30a768, change=change@entry=0x24c5c20)\nat logical.c:1268\n#6 0x000000000080e20f in FilterByTable (ctx=ctx@entry=0x2491fd0,\nchange=change@entry=0x24c5c20)\n at decode.c:690\n#7 0x000000000080e8e3 in DecodeInsert (ctx=ctx@entry=0x2491fd0,\nbuf=buf@entry=0x7fff0db92550)\n at decode.c:1070\n#8 0x000000000080f43d in heap_decode (ctx=ctx@entry=0x2491fd0,\nbuf=buf@entry=0x7fff0db92550)\n at decode.c:485\n#9 0x000000000080eca6 in LogicalDecodingProcessRecord\n(ctx=ctx@entry=0x2491fd0,\nrecord=0x2492368)\n at decode.c:118\n#10 0x000000000081338f in DecodingContextFindStartpoint\n(ctx=ctx@entry=0x2491fd0)\nat logical.c:672\n#11 0x000000000083c650 in CreateReplicationSlot (cmd=cmd@entry=0x2490970)\nat walsender.c:1323\n#12 0x000000000083fd48 in exec_replication_command (\n cmd_string=cmd_string@entry=0x239c880 \"CREATE_REPLICATION_SLOT\n\\\"pg_16387_sync_16384_7371301304766135621\\\" LOGICAL pgoutput (SNAPSHOT\n'use')\") at walsender.c:2116\n\nThe reason for the crash is that the RelationSyncCache was NULL prior to\nreaching a consistent point.\nHi li jie, I see that you created a new thread with an updated version of\nthis patch [1]. I used that patch and addressed the crash seen above,\nrebased the patch and addressed a few other comments.\nI'm happy to help you with this patch and address comments if you are not\navailable.\n\nregards,\nAjin Cherian\nFujitsu Australia\n[1] -\nhttps://www.postgresql.org/message-id/CAGfChW7%2BZMN4_NHPgz24MM42HVO83ecr9TLfpihJ%3DM0s1GkXFw%40mail.gmail.com", "msg_date": "Wed, 22 May 2024 14:17:54 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" }, { "msg_contents": "On Wed, May 22, 2024 at 2:17 PM Ajin Cherian <[email protected]> wrote:\n\n>\n> The reason for the crash is that the RelationSyncCache was NULL prior to\n> reaching a consistent point.\n> Hi li jie, I see that you created a new thread with an updated version of\n> this patch [1]. I used that patch and addressed the crash seen above,\n> rebased the patch and addressed a few other comments.\n> I'm happy to help you with this patch and address comments if you are not\n> available.\n>\n> regards,\n> Ajin Cherian\n> Fujitsu Australia\n> [1] -\n> https://www.postgresql.org/message-id/CAGfChW7%2BZMN4_NHPgz24MM42HVO83ecr9TLfpihJ%3DM0s1GkXFw%40mail.gmail.com\n>\n\nI was discussing this with Kuroda-san who made a patch to add a table\nfilter with test_decoding plugin. The filter does nothing, just returns\nfalse( doesn't filter anything) and I see that 8 test_decoding tests fail.\nIn my analysis, I could see that some of the failures were because the new\nfilter logic was accessing the relation cache using the latest snapshot for\nrelids which was getting incorrect relation information while decoding\nattribute values.\nfor eg:\nCREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text\nvarchar(120));\nBEGIN;\nINSERT INTO replication_example(somedata, text) VALUES (1, 1);\nINSERT INTO replication_example(somedata, text) VALUES (1, 2);\nCOMMIT;\nALTER TABLE replication_example RENAME COLUMN text TO somenum;\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1');\n\nHere, after decoding, the changes for the INSERT, were reflecting the new\ncolumn name (somenum) which was altered in a later transaction. 
This is\nbecause the new Filterby Table() logic was getting relcache with latest\nsnapshot and does not use a historic snapshot like logical decoding should\nbe doing. This is because the changes are at the decode.c level and not the\nreorderbuffer level and does not have access to the txn from the\nreorderbuffer. This problem could be fixed by invalidating the cache in the\nFilterByTable() logic, but this does not solve other problems like the\ntable name itself is changed in a later transaction. I think the patch has\na fundamental problem that the table filter logic does not respect the\nsnapshot of the transaction being decoded. The historic snapshot is\ncurrently only set up when the actual changes are committed or streamed at\nReorderBufferProcessTXN().\n\nIf the purpose of the patch is to filter out unnecessary changes prior to\nactual decode, then it will use an invalid snapshot and have lots of\nproblems. Otherwise this logic has to be moved to the reorderbuffer level\nand there will be a big overhead of extracting reorderbuffer while each\nchange is queued in memory/disk.\n regards,\nAjin Cherian\nFujitsu Australia\n\nOn Wed, May 22, 2024 at 2:17 PM Ajin Cherian <[email protected]> wrote:The reason for the crash is that the RelationSyncCache was NULL prior to reaching a consistent point. Hi li jie, I see that you created a new thread with an updated version of this patch [1]. I used that patch and addressed the crash seen above, rebased the patch and addressed a few other comments. I'm happy to help you with this patch and address comments if you are not available.regards,Ajin CherianFujitsu Australia[1] - https://www.postgresql.org/message-id/CAGfChW7%2BZMN4_NHPgz24MM42HVO83ecr9TLfpihJ%3DM0s1GkXFw%40mail.gmail.com\nI was discussing this with Kuroda-san who made a patch to add a table \nfilter with test_decoding plugin. The filter does nothing, just returns \nfalse( doesn't filter anything) and I see that 8 test_decoding tests \nfail. In my analysis, I could see that some of the failures were because\n the new filter logic was accessing the relation cache using the latest \nsnapshot for relids which was getting incorrect relation information \nwhile decoding attribute values. for eg:CREATE TABLE replication_example(id SERIAL PRIMARY KEY, somedata int, text varchar(120));BEGIN;INSERT INTO replication_example(somedata, text) VALUES (1, 1);INSERT INTO replication_example(somedata, text) VALUES (1, 2);COMMIT;ALTER TABLE replication_example RENAME COLUMN text TO somenum;SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');Here,\n after decoding, the changes for the INSERT, were reflecting the new \ncolumn name (somenum) which was altered in a later transaction. This is \nbecause the new Filterby Table() logic was getting relcache with latest \nsnapshot and does not use a historic snapshot like logical decoding \nshould be doing. This is because the changes are at the decode.c level \nand not the reorderbuffer level and does not have access to the txn from\n the reorderbuffer. This problem could be fixed by invalidating the \ncache in the FilterByTable() logic, but this does not solve other \nproblems like the table name itself is changed in a later transaction. I\n think the patch has a fundamental problem that the table filter logic \ndoes not respect the snapshot of the transaction being decoded. 
The \nhistoric snapshot is currently only set up when the actual changes are \ncommitted or streamed at ReorderBufferProcessTXN().If the \npurpose of the patch is to filter out unnecessary changes prior to \nactual decode, then it will use an invalid snapshot and have lots of \nproblems. Otherwise this logic has to be moved to the reorderbuffer \nlevel and there will be a big overhead of extracting reorderbuffer while\n each change is queued in memory/disk. regards,Ajin CherianFujitsu Australia", "msg_date": "Wed, 5 Jun 2024 11:24:45 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: Filter irrelevant change before reassemble transactions\n during logical decoding" } ]
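As a way to visualize the "filter before queueing" flow debated in that thread, here is a small, compilable toy in C. The types and names are invented (this is not the posted patch and not the pgoutput API); it only models the shape of the idea, an optional plugin predicate consulted before a change is added to the reorder queue, which is exactly where the snapshot problem described above bites, because the predicate runs before any historic snapshot has been set up:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct Change
{
    unsigned    table_oid;
    const char *action;
} Change;

/* Optional plugin predicate: return true if the change should be skipped. */
typedef bool (*filter_by_table_cb) (unsigned table_oid);

typedef struct DecodingContext
{
    filter_by_table_cb filter_cb;   /* NULL means "keep everything" */
} DecodingContext;

/* Pretend only table 1000 is published. */
static bool
filter_unpublished(unsigned table_oid)
{
    return table_oid != 1000;
}

static void
queue_change(const Change *c)
{
    printf("queued %s on table %u\n", c->action, c->table_oid);
}

/* Decode step: a filtered change is never buffered in memory or spilled. */
static void
decode_one_change(DecodingContext *ctx, const Change *c)
{
    if (ctx->filter_cb != NULL && ctx->filter_cb(c->table_oid))
        return;
    queue_change(c);
}

int main(void)
{
    DecodingContext ctx = { filter_unpublished };
    Change ins_pub   = { 1000, "INSERT" };   /* kept */
    Change ins_other = { 2000, "INSERT" };   /* dropped before queueing */

    decode_one_change(&ctx, &ins_pub);
    decode_one_change(&ctx, &ins_other);
    return 0;
}
```

In the real system the predicate would have to answer from catalog state as of the transaction being decoded rather than from the latest snapshot, which is the unresolved part of the proposal as the thread leaves it.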
[ { "msg_contents": "Hi there,\n\nWhile benchmarking a new feature involving tablespace support in\nCloudNativePG (Kubernetes operator), I wanted to try out the partitioning\nfeature of pgbench. I saw it supporting both range and hash partitioning,\nbut limited to pgbench_accounts.\n\nWith the attached patch, I extend the partitioning capability to the\npgbench_history table too.\n\nI have been thinking of adding an option to control this, but I preferred\nto ask in this list whether it really makes sense or not (I struggle indeed\nto see use cases where accounts is partitioned and history is not).\n\nPlease let me know what you think.\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com", "msg_date": "Thu, 30 Nov 2023 11:29:15 +0100", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": true, "msg_subject": "Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "Please discard the previous patch and use this one (it had a leftover\ncomment from an initial attempt to limit this to hash case).\n\nThanks,\nGabriele\n\nOn Thu, 30 Nov 2023 at 11:29, Gabriele Bartolini <\[email protected]> wrote:\n\n> Hi there,\n>\n> While benchmarking a new feature involving tablespace support in\n> CloudNativePG (Kubernetes operator), I wanted to try out the partitioning\n> feature of pgbench. I saw it supporting both range and hash partitioning,\n> but limited to pgbench_accounts.\n>\n> With the attached patch, I extend the partitioning capability to the\n> pgbench_history table too.\n>\n> I have been thinking of adding an option to control this, but I preferred\n> to ask in this list whether it really makes sense or not (I struggle indeed\n> to see use cases where accounts is partitioned and history is not).\n>\n> Please let me know what you think.\n>\n> Thanks,\n> Gabriele\n> --\n> Gabriele Bartolini\n> Vice President, Cloud Native at EDB\n> enterprisedb.com\n>\n\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com", "msg_date": "Thu, 30 Nov 2023 12:01:54 +0100", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "Hi,\n\nThere are some test failures reported by Cfbot at [1]:\n\n[09:15:01.794] 192/276 postgresql:pgbench /\npgbench/001_pgbench_with_server ERROR 7.48s exit status 3\n[09:15:01.794] >>>\nINITDB_TEMPLATE=/tmp/cirrus-ci-build/build/tmp_install/initdb-template\nLD_LIBRARY_PATH=/tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/lib\nREGRESS_SHLIB=/tmp/cirrus-ci-build/build/src/test/regress/regress.so\nPATH=/tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/bin:/tmp/cirrus-ci-build/build/src/bin/pgbench:/tmp/cirrus-ci-build/build/src/bin/pgbench/test:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin\nPG_REGRESS=/tmp/cirrus-ci-build/build/src/test/regress/pg_regress\nMALLOC_PERTURB_=67 /usr/local/bin/python3\n/tmp/cirrus-ci-build/build/../src/tools/testwrap --basedir\n/tmp/cirrus-ci-build/build --srcdir\n/tmp/cirrus-ci-build/src/bin/pgbench --testgroup pgbench --testname\n001_pgbench_with_server -- /usr/local/bin/perl -I\n/tmp/cirrus-ci-build/src/test/perl -I\n/tmp/cirrus-ci-build/src/bin/pgbench\n/tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl\n[09:15:01.794] ――――――――――――――――――――――――――――――――――――― ✀\n―――――――――――――――――――――――――――――――――――――\n[09:15:01.794] stderr:\n[09:15:01.794] # Failed test 'transaction format for 001_pgbench_log_2'\n[09:15:01.794] # 
at\n/tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n1247.\n[09:15:01.794] # Failed test 'transaction count for\n/tmp/cirrus-ci-build/build/testrun/pgbench/001_pgbench_with_server/data/t_001_pgbench_with_server_main_data/001_pgbench_log_3.25193\n(11)'\n[09:15:01.794] # at\n/tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n1257.\n[09:15:01.794] # Failed test 'transaction format for 001_pgbench_log_3'\n[09:15:01.794] # at\n/tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n1257.\n[09:15:01.794] # Looks like you failed 3 tests of 439.\n[09:15:01.794]\n[09:15:01.794] (test program exited with status code 3)\n\n[1] - https://cirrus-ci.com/task/5139049757802496\n\nThanks and regards\nShlok Kyal\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:27:15 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "Sorry, I did not intend to send this message for this email. I by\nmistake sent this mail. Please ignore this mail\n\nOn Wed, 10 Jan 2024 at 15:27, Shlok Kyal <[email protected]> wrote:\n>\n> Hi,\n>\n> There are some test failures reported by Cfbot at [1]:\n>\n> [09:15:01.794] 192/276 postgresql:pgbench /\n> pgbench/001_pgbench_with_server ERROR 7.48s exit status 3\n> [09:15:01.794] >>>\n> INITDB_TEMPLATE=/tmp/cirrus-ci-build/build/tmp_install/initdb-template\n> LD_LIBRARY_PATH=/tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/lib\n> REGRESS_SHLIB=/tmp/cirrus-ci-build/build/src/test/regress/regress.so\n> PATH=/tmp/cirrus-ci-build/build/tmp_install//usr/local/pgsql/bin:/tmp/cirrus-ci-build/build/src/bin/pgbench:/tmp/cirrus-ci-build/build/src/bin/pgbench/test:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin\n> PG_REGRESS=/tmp/cirrus-ci-build/build/src/test/regress/pg_regress\n> MALLOC_PERTURB_=67 /usr/local/bin/python3\n> /tmp/cirrus-ci-build/build/../src/tools/testwrap --basedir\n> /tmp/cirrus-ci-build/build --srcdir\n> /tmp/cirrus-ci-build/src/bin/pgbench --testgroup pgbench --testname\n> 001_pgbench_with_server -- /usr/local/bin/perl -I\n> /tmp/cirrus-ci-build/src/test/perl -I\n> /tmp/cirrus-ci-build/src/bin/pgbench\n> /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl\n> [09:15:01.794] ――――――――――――――――――――――――――――――――――――― ✀\n> ―――――――――――――――――――――――――――――――――――――\n> [09:15:01.794] stderr:\n> [09:15:01.794] # Failed test 'transaction format for 001_pgbench_log_2'\n> [09:15:01.794] # at\n> /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n> 1247.\n> [09:15:01.794] # Failed test 'transaction count for\n> /tmp/cirrus-ci-build/build/testrun/pgbench/001_pgbench_with_server/data/t_001_pgbench_with_server_main_data/001_pgbench_log_3.25193\n> (11)'\n> [09:15:01.794] # at\n> /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n> 1257.\n> [09:15:01.794] # Failed test 'transaction format for 001_pgbench_log_3'\n> [09:15:01.794] # at\n> /tmp/cirrus-ci-build/src/bin/pgbench/t/001_pgbench_with_server.pl line\n> 1257.\n> [09:15:01.794] # Looks like you failed 3 tests of 439.\n> [09:15:01.794]\n> [09:15:01.794] (test program exited with status code 3)\n>\n> [1] - https://cirrus-ci.com/task/5139049757802496\n>\n> Thanks and regards\n> Shlok Kyal\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:32:04 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "At 2023-11-30 11:29:15 +0100, 
[email protected] wrote:\n>\n> With the attached patch, I extend the partitioning capability to the\n> pgbench_history table too.\n> \n> I have been thinking of adding an option to control this, but I preferred\n> to ask in this list whether it really makes sense or not (I struggle indeed\n> to see use cases where accounts is partitioned and history is not).\n\nI don't have a strong opinion about this, but I also can't think of a\nreason to want to create partitions for pgbench_accounts but leave out\npgbench_history.\n\n> From ba8f507b126a9c5bd22dd40bb8ce0c1f0c43ac59 Mon Sep 17 00:00:00 2001\n> From: Gabriele Bartolini <[email protected]>\n> Date: Thu, 30 Nov 2023 11:02:39 +0100\n> Subject: [PATCH] Include pgbench_history in partitioning method for pgbench\n> \n> In case partitioning, make sure that pgbench_history is also partitioned with\n> the same criteria.\n\nI think \"If partitioning\" or \"If we're creating partitions\" would read\nbetter here. Also, same criteria as what? Maybe you could just add \"as\npgbench_accounts\" to the end.\n\n> diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml\n> index 05d3f81619..4c02d2a61d 100644\n> --- a/doc/src/sgml/ref/pgbench.sgml\n> +++ b/doc/src/sgml/ref/pgbench.sgml\n> […]\n> @@ -378,9 +378,9 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d\n> <term><option>--partitions=<replaceable>NUM</replaceable></option></term>\n> <listitem>\n> <para>\n> - Create a partitioned <literal>pgbench_accounts</literal> table with\n> - <replaceable>NUM</replaceable> partitions of nearly equal size for\n> - the scaled number of accounts.\n> + Create partitioned <literal>pgbench_accounts</literal> and <literal>pgbench_history</literal>\n> + tables with <replaceable>NUM</replaceable> partitions of nearly equal size for\n> + the scaled number of accounts - and future history records.\n> Default is <literal>0</literal>, meaning no partitioning.\n> </para>\n\nI would just leave out the \"-\" and write \"number of accounts and future\nhistory records\".\n\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 2e1650d0ad..87adaf4d8f 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> […]\n> @@ -889,8 +889,10 @@ usage(void)\n> \t\t \" --index-tablespace=TABLESPACE\\n\"\n> \t\t \" create indexes in the specified tablespace\\n\"\n> \t\t \" --partition-method=(range|hash)\\n\"\n> -\t\t \" partition pgbench_accounts with this method (default: range)\\n\"\n> -\t\t \" --partitions=NUM partition pgbench_accounts into NUM parts (default: 0)\\n\"\n> +\t\t \" partition pgbench_accounts and pgbench_history with this method\"\n> +\t\t \" (default: range).\"\n> +\t\t \" --partitions=NUM partition pgbench_accounts and pgbench_history into NUM parts\"\n> +\t\t \" (default: 0)\\n\"\n> \t\t \" --tablespace=TABLESPACE create tables in the specified tablespace\\n\"\n> \t\t \" --unlogged-tables create tables as unlogged tables\\n\"\n> \t\t \"\\nOptions to select what to run:\\n\"\n\nThere's a missing newline after \"(default: range).\".\n\nI read through the rest of the patch closely. It looks fine to me. It\napplies, builds, and does create the partitions as intended.\n\n-- Abhijit\n\n\n", "msg_date": "Tue, 16 Jan 2024 17:23:21 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "Hi Abhijit,\n\nThanks for your input. 
Please accept my updated patch.\n\nCiao,\nGabriele\n\nOn Tue, 16 Jan 2024 at 12:53, Abhijit Menon-Sen <[email protected]> wrote:\n\n> At 2023-11-30 11:29:15 +0100, [email protected] wrote:\n> >\n> > With the attached patch, I extend the partitioning capability to the\n> > pgbench_history table too.\n> >\n> > I have been thinking of adding an option to control this, but I preferred\n> > to ask in this list whether it really makes sense or not (I struggle\n> indeed\n> > to see use cases where accounts is partitioned and history is not).\n>\n> I don't have a strong opinion about this, but I also can't think of a\n> reason to want to create partitions for pgbench_accounts but leave out\n> pgbench_history.\n>\n> > From ba8f507b126a9c5bd22dd40bb8ce0c1f0c43ac59 Mon Sep 17 00:00:00 2001\n> > From: Gabriele Bartolini <[email protected]>\n> > Date: Thu, 30 Nov 2023 11:02:39 +0100\n> > Subject: [PATCH] Include pgbench_history in partitioning method for\n> pgbench\n> >\n> > In case partitioning, make sure that pgbench_history is also partitioned\n> with\n> > the same criteria.\n>\n> I think \"If partitioning\" or \"If we're creating partitions\" would read\n> better here. Also, same criteria as what? Maybe you could just add \"as\n> pgbench_accounts\" to the end.\n>\n> > diff --git a/doc/src/sgml/ref/pgbench.sgml\n> b/doc/src/sgml/ref/pgbench.sgml\n> > index 05d3f81619..4c02d2a61d 100644\n> > --- a/doc/src/sgml/ref/pgbench.sgml\n> > +++ b/doc/src/sgml/ref/pgbench.sgml\n> > […]\n> > @@ -378,9 +378,9 @@ pgbench <optional>\n> <replaceable>options</replaceable> </optional> <replaceable>d\n> >\n> <term><option>--partitions=<replaceable>NUM</replaceable></option></term>\n> > <listitem>\n> > <para>\n> > - Create a partitioned <literal>pgbench_accounts</literal> table\n> with\n> > - <replaceable>NUM</replaceable> partitions of nearly equal size\n> for\n> > - the scaled number of accounts.\n> > + Create partitioned <literal>pgbench_accounts</literal> and\n> <literal>pgbench_history</literal>\n> > + tables with <replaceable>NUM</replaceable> partitions of nearly\n> equal size for\n> > + the scaled number of accounts - and future history records.\n> > Default is <literal>0</literal>, meaning no partitioning.\n> > </para>\n>\n> I would just leave out the \"-\" and write \"number of accounts and future\n> history records\".\n>\n> > diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> > index 2e1650d0ad..87adaf4d8f 100644\n> > --- a/src/bin/pgbench/pgbench.c\n> > +++ b/src/bin/pgbench/pgbench.c\n> > […]\n> > @@ -889,8 +889,10 @@ usage(void)\n> > \" --index-tablespace=TABLESPACE\\n\"\n> > \" create indexes in the\n> specified tablespace\\n\"\n> > \" --partition-method=(range|hash)\\n\"\n> > - \" partition pgbench_accounts\n> with this method (default: range)\\n\"\n> > - \" --partitions=NUM partition pgbench_accounts\n> into NUM parts (default: 0)\\n\"\n> > + \" partition pgbench_accounts\n> and pgbench_history with this method\"\n> > + \" (default: range).\"\n> > + \" --partitions=NUM partition pgbench_accounts\n> and pgbench_history into NUM parts\"\n> > + \" (default: 0)\\n\"\n> > \" --tablespace=TABLESPACE create tables in the\n> specified tablespace\\n\"\n> > \" --unlogged-tables create tables as unlogged\n> tables\\n\"\n> > \"\\nOptions to select what to run:\\n\"\n>\n> There's a missing newline after \"(default: range).\".\n>\n> I read through the rest of the patch closely. It looks fine to me. 
It\n> applies, builds, and does create the partitions as intended.\n>\n> -- Abhijit\n>\n\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com", "msg_date": "Tue, 30 Jan 2024 17:41:03 +0100", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "Hello Gabriele,\n\nI think the improvement makes sense (it's indeed a bit strange to not\npartition the history table), and the patch looks good.\n\nI did think about whether this should be optional in some way - that is,\nseparate from partitioning the accounts table, and users would have to\nexplicitly enable (or disable) it. But I don't think we need to do that.\n\nThe vast majority of users simply want to partition everything. And this\nis just one way to partition the database anyway, it's our opinion on\nhow to do that, but there's many other options how we might partition\nthe tables, and we don't (and don't want too) have options for that.\n\nThe only case that I can think of where this might matter is when\nrunning a benchmarks that will be compared to some earlier results\n(executed using an older pgbench version). That will be affected by\nthis, but I don't think we make many promises about compatibility in\nthis regard ... it's probably better to always compare results only from\nthe same pgbench version, I guess.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 16 Feb 2024 18:50:19 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "On Fri, Feb 16, 2024 at 12:50 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hello Gabriele,\n>\n> I think the improvement makes sense (it's indeed a bit strange to not\n> partition the history table), and the patch looks good.\n>\n> I did think about whether this should be optional in some way - that is,\n> separate from partitioning the accounts table, and users would have to\n> explicitly enable (or disable) it. But I don't think we need to do that.\n>\n> The vast majority of users simply want to partition everything. And this\n> is just one way to partition the database anyway, it's our opinion on\n> how to do that, but there's many other options how we might partition\n> the tables, and we don't (and don't want too) have options for that.\n\nI wonder how common it would be to partition a history table by\naccount ID? I sort of imagined the most common kind of partitioning\nfor an audit table is by time (range). 
Anyway, I'm not objecting to\ndoing it by account ID, just asking if there is a reason to do so.\n\nSpeaking of which, Tomas said users might want to \"partition\neverything\" -- so any reason not to also partition tellers and\nbranches?\n\nThis change to the docs seems a bit misleading:\n\n <listitem>\n <para>\n- Create a partitioned <literal>pgbench_accounts</literal> table with\n- <replaceable>NUM</replaceable> partitions of nearly equal size for\n- the scaled number of accounts.\n+ Create partitioned <literal>pgbench_accounts</literal> and\n<literal>pgbench_history</literal>\n+ tables with <replaceable>NUM</replaceable> partitions of\nnearly equal size for\n+ the scaled number of accounts and future history records.\n Default is <literal>0</literal>, meaning no partitioning.\n </para>\n </listitem>\n\nIt says that partitions of \"future history records\" will be equal in\nsize. While it's true that at the end of a pgbench run, if you use a\nrandom distribution for aid, the pgbench_history partitions should be\nroughly equally sized, it is confusing to say it will \"create\npgbench_history with partitions of equal size\". Maybe it would be\nbetter to write a new sentence about partitioning pgbench_history\nwithout worrying about mirroring the sentence structure of the\nexisting sentence.\n\n> The only case that I can think of where this might matter is when\n> running a benchmarks that will be compared to some earlier results\n> (executed using an older pgbench version). That will be affected by\n> this, but I don't think we make many promises about compatibility in\n> this regard ... it's probably better to always compare results only from\n> the same pgbench version, I guess.\n\nAs a frequent pgbench user, I always use the same pgbench version even\nwhen comparing different versions of Postgres. Other changes have made\nit difficult to compare results across pgbench versions without\nproviding it as an option (see 06ba4a63b85e). So, I don't think it is\na problem if it is noted in release notes.\n\n- Melanie\n\n\n", "msg_date": "Fri, 16 Feb 2024 15:14:37 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" }, { "msg_contents": "On Fri, Feb 16, 2024 at 3:14 PM Melanie Plageman\n<[email protected]> wrote:\n> [ review comments ]\n\nSince there has been no response to these review comments for more\nthan 3 months, I have set https://commitfest.postgresql.org/48/4679/\nto Returned with Feedback. Please feel free to update the status when\nthere is a new version of the patch (or at least a response to these\ncomments).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 11:53:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extend pgbench partitioning to pgbench_history" } ]
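To show roughly what the patch under review amounts to at the SQL level, here is a small, self-contained C toy that prints range-partition DDL for pgbench_history over aid in nearly equal slices, mirroring the accounts scheme. The helper, the exact DDL strings, and the boundary arithmetic are illustrative only and are not taken from pgbench.c:

```c
#include <stdint.h>
#include <stdio.h>

/* Print range-partition DDL for a history table keyed on aid. */
static void
print_history_partition_ddl(int64_t naccounts, int partitions)
{
    printf("CREATE TABLE pgbench_history (tid int, bid int, aid int, "
           "delta int, mtime timestamp, filler char(22)) "
           "PARTITION BY RANGE (aid);\n");

    for (int p = 1; p <= partitions; p++)
    {
        /* split [1, naccounts] into nearly equal, non-overlapping ranges */
        int64_t lo = 1 + (naccounts * (p - 1)) / partitions;
        int64_t hi = 1 + (naccounts * p) / partitions;

        printf("CREATE TABLE pgbench_history_%d PARTITION OF pgbench_history"
               " FOR VALUES FROM (%lld) TO (%lld);\n",
               p, (long long) lo, (long long) hi);
    }
}

int main(void)
{
    /* scale factor 10: 10 * 100000 accounts, 4 partitions */
    print_history_partition_ddl((int64_t) 10 * 100000, 4);
    return 0;
}
```

The design question raised in the review remains open: keying on aid keeps the history partitions aligned with the accounts partitions, whereas a production audit table would more often be range-partitioned on mtime.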
[ { "msg_contents": "I noticed that when a column is dropped, RemoveAttributeById() clears \nout certain fields in pg_attribute, but it leaves the variable-length \nfields at the end (attacl, attoptions, and attfdwoptions) unchanged. \nThis is probably harmless, but it seems wasteful and unclean, and leaves \npotentially dangling data lying around (for example, attacl could \ncontain references to users that are later also dropped).\n\nI suggest the attached patch to set those fields to null when a column \nis marked as dropped.", "msg_date": "Thu, 30 Nov 2023 12:23:46 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Set all variable-length fields of pg_attribute to null on column drop" }, { "msg_contents": "On Thu, Nov 30, 2023 at 6:24 AM Peter Eisentraut <[email protected]> wrote:\n> I noticed that when a column is dropped, RemoveAttributeById() clears\n> out certain fields in pg_attribute, but it leaves the variable-length\n> fields at the end (attacl, attoptions, and attfdwoptions) unchanged.\n> This is probably harmless, but it seems wasteful and unclean, and leaves\n> potentially dangling data lying around (for example, attacl could\n> contain references to users that are later also dropped).\n>\n> I suggest the attached patch to set those fields to null when a column\n> is marked as dropped.\n\nI haven't reviewed the patch, but +1 for the idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 11:45:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set all variable-length fields of pg_attribute to null on column\n drop" }, { "msg_contents": "On 2023-Nov-30, Peter Eisentraut wrote:\n\n> I noticed that when a column is dropped, RemoveAttributeById() clears out\n> certain fields in pg_attribute, but it leaves the variable-length fields at\n> the end (attacl, attoptions, and attfdwoptions) unchanged. This is probably\n> harmless, but it seems wasteful and unclean, and leaves potentially dangling\n> data lying around (for example, attacl could contain references to users\n> that are later also dropped).\n\nYeah, this looks like an ancient oversight -- when DROP COLUMN was added\nwe didn't have any varlena fields in this catalog, and when the first\none was added (attacl in commit 3cb5d6580a33) resetting it on DROP\nCOLUMN was overlooked.\n\nLGTM.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n\n\n", "msg_date": "Fri, 22 Dec 2023 10:05:54 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set all variable-length fields of pg_attribute to null on column\n drop" }, { "msg_contents": "On 22.12.23 10:05, Alvaro Herrera wrote:\n> On 2023-Nov-30, Peter Eisentraut wrote:\n> \n>> I noticed that when a column is dropped, RemoveAttributeById() clears out\n>> certain fields in pg_attribute, but it leaves the variable-length fields at\n>> the end (attacl, attoptions, and attfdwoptions) unchanged. 
This is probably\n>> harmless, but it seems wasteful and unclean, and leaves potentially dangling\n>> data lying around (for example, attacl could contain references to users\n>> that are later also dropped).\n> \n> Yeah, this looks like an ancient oversight -- when DROP COLUMN was added\n> we didn't have any varlena fields in this catalog, and when the first\n> one was added (attacl in commit 3cb5d6580a33) resetting it on DROP\n> COLUMN was overlooked.\n> \n> LGTM.\n\ncommitted\n\n\n\n", "msg_date": "Fri, 22 Dec 2023 22:01:01 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set all variable-length fields of pg_attribute to null on column\n drop" } ]
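For readers who do not have the attached patch in front of them, the change discussed in this thread boils down to having RemoveAttributeById() explicitly null out the trailing variable-length pg_attribute columns when a column is marked dropped. The fragment below is only a sketch of that pattern, not the patch that was actually committed; it assumes it runs inside RemoveAttributeById(), where attr_rel is the already-opened pg_attribute relation and tuple is the attribute row being updated.

    /* Sketch only: null out the varlena fields of a dropped column's row. */
    Datum       repl_val[Natts_pg_attribute];
    bool        repl_null[Natts_pg_attribute];
    bool        repl_repl[Natts_pg_attribute];

    memset(repl_val, 0, sizeof(repl_val));
    memset(repl_null, false, sizeof(repl_null));
    memset(repl_repl, false, sizeof(repl_repl));

    repl_null[Anum_pg_attribute_attacl - 1] = true;
    repl_repl[Anum_pg_attribute_attacl - 1] = true;
    repl_null[Anum_pg_attribute_attoptions - 1] = true;
    repl_repl[Anum_pg_attribute_attoptions - 1] = true;
    repl_null[Anum_pg_attribute_attfdwoptions - 1] = true;
    repl_repl[Anum_pg_attribute_attfdwoptions - 1] = true;

    /* Build a modified copy of the row and write it back to the catalog. */
    tuple = heap_modify_tuple(tuple, RelationGetDescr(attr_rel),
                              repl_val, repl_null, repl_repl);
    CatalogTupleUpdate(attr_rel, &tuple->t_self, tuple);

Going through heap_modify_tuple() rather than poking the Form_pg_attribute struct in place is what makes setting these columns to NULL possible at all, since the variable-length fields are not part of the fixed-size C struct.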
[ { "msg_contents": "Hi,\n\nPlease find a patch attached which adds missing sql error code in\nerror reports which are FATAL or PANIC, in xlogrecovery.\nThis will help with deducing patterns when looking at error reports\nfrom multiple postgres instances.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]", "msg_date": "Thu, 30 Nov 2023 10:54:12 -0800", "msg_from": "Krishnakumar R <[email protected]>", "msg_from_op": true, "msg_subject": "Add missing error codes to PANIC/FATAL error reports in xlogrecovery" }, { "msg_contents": "Hi,\n\nOn 2023-11-30 10:54:12 -0800, Krishnakumar R wrote:\n> diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c\n> index c61566666a..2f50928e7e 100644\n> --- a/src/backend/access/transam/xlogrecovery.c\n> +++ b/src/backend/access/transam/xlogrecovery.c\n> @@ -630,7 +630,8 @@ InitWalRecovery(ControlFileData *ControlFile, bool *wasShutdown_ptr,\n> \t\t\t\tif (!ReadRecord(xlogprefetcher, LOG, false,\n> \t\t\t\t\t\t\t\tcheckPoint.ThisTimeLineID))\n> \t\t\t\t\tereport(FATAL,\n> -\t\t\t\t\t\t\t(errmsg(\"could not find redo location referenced by checkpoint record\"),\n> +\t\t\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t\t\t errmsg(\"could not find redo location referenced by checkpoint record\"),\n> \t\t\t\t\t\t\t errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" and add required recovery options.\\n\"\n> \t\t\t\t\t\t\t\t\t \"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\\n\"\n> \t\t\t\t\t\t\t\t\t \"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt cluster if restoring from a backup.\",\n\nWondering if we should add a ERRCODE_CLUSTER_CORRUPTED for cases like this. We\nhave ERRCODE_DATA_CORRUPTED and ERRCODE_INDEX_CORRUPTED, which make\nERRCODE_DATA_CORRUPTED feel a bit too specific in this kind of situation?\n\nOTOH, just having anything other than ERRCODE_INTERNAL_ERROR is better.\n\n\n> @@ -640,7 +641,8 @@ InitWalRecovery(ControlFileData *ControlFile, bool *wasShutdown_ptr,\n> \t\telse\n> \t\t{\n> \t\t\tereport(FATAL,\n> -\t\t\t\t\t(errmsg(\"could not locate required checkpoint record\"),\n> +\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t\t errmsg(\"could not locate required checkpoint record\"),\n> \t\t\t\t\t errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" and add required recovery options.\\n\"\n> \t\t\t\t\t\t\t \"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\\n\"\n> \t\t\t\t\t\t\t \"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt cluster if restoring from a backup.\",\n\nAnother aside: Isn't the hint here obsolete since we've removed exclusive\nbackups? 
I can't think of any scenario now where removing backup_label would\nbe correct in a non-exclusive backup.\n\n\n> @@ -817,7 +820,8 @@ InitWalRecovery(ControlFileData *ControlFile, bool *wasShutdown_ptr,\n> \t\t */\n> \t\tswitchpoint = tliSwitchPoint(ControlFile->checkPointCopy.ThisTimeLineID, expectedTLEs, NULL);\n> \t\tereport(FATAL,\n> -\t\t\t\t(errmsg(\"requested timeline %u is not a child of this server's history\",\n> +\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> +\t\t\t\t errmsg(\"requested timeline %u is not a child of this server's history\",\n> \t\t\t\t\t\trecoveryTargetTLI),\n> \t\t\t\t errdetail(\"Latest checkpoint is at %X/%X on timeline %u, but in the history of the requested timeline, the server forked off from that timeline at %X/%X.\",\n> \t\t\t\t\t\t LSN_FORMAT_ARGS(ControlFile->checkPoint),\n\nHm, this one arguably is not corruption, but we still cannot\ncontinue. ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE or maybe a new error code?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 11:47:25 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in\n xlogrecovery" }, { "msg_contents": "On Thu, Nov 30, 2023 at 2:47 PM Andres Freund <[email protected]> wrote:\n> Another aside: Isn't the hint here obsolete since we've removed exclusive\n> backups? I can't think of any scenario now where removing backup_label would\n> be correct in a non-exclusive backup.\n\nThat's an extremely good point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 15:21:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in\n xlogrecovery" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Wondering if we should add a ERRCODE_CLUSTER_CORRUPTED for cases like this. We\n> have ERRCODE_DATA_CORRUPTED and ERRCODE_INDEX_CORRUPTED, which make\n> ERRCODE_DATA_CORRUPTED feel a bit too specific in this kind of situation?\n\nMaybe. We didn't officially define DATA_CORRUPTED as referring to\ntable data, but given the existence of INDEX_CORRUPTED maybe we\nshould treat it as that. In any case ...\n\n> Hm, this one arguably is not corruption, but we still cannot\n> continue. ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE or maybe a new error code?\n\n... I don't really like turning a whole bunch of error cases into\nthe same error code without some closer analysis. I think you\nare right that these need a bit more case-by-case thought.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:02:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in\n xlogrecovery" }, { "msg_contents": "Hi,\n\nOn 2023-11-30 16:02:55 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Wondering if we should add a ERRCODE_CLUSTER_CORRUPTED for cases like this. We\n> > have ERRCODE_DATA_CORRUPTED and ERRCODE_INDEX_CORRUPTED, which make\n> > ERRCODE_DATA_CORRUPTED feel a bit too specific in this kind of situation?\n> \n> Maybe. We didn't officially define DATA_CORRUPTED as referring to\n> table data, but given the existence of INDEX_CORRUPTED maybe we\n> should treat it as that.\n\nI'm on the fence about it. 
Certainly DATA_CORRUPTED would be more appropriate\nthan INTERNAL_ERROR.\n\n\n> > Hm, this one arguably is not corruption, but we still cannot\n> > continue. ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE or maybe a new error code?\n> \n> ... I don't really like turning a whole bunch of error cases into\n> the same error code without some closer analysis.\n\nOther than this instance, they all indicate that the cluster is toast in some\nway or another. So *_CORRUPTED seems appropriate. And even this instance would\nbe better off as _CORRUPTED than as INTERNAL_ERROR. There's so many of the\nlatter that you can't realistically alert on them occurring.\n\nI don't like my idea of ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE much, that's\nnot something you realistically can alert on, and this error certainly is an\ninstance of \"you're screwed until you manually intervene\".\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 13:10:38 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in\n xlogrecovery" }, { "msg_contents": "Hi,\n\nUpdated the patch with ERRCODE_CLUSTER_CORRUPTED & kept\nERRCODE_DATA_CORRUPTED when recovery is not consistent.\n\n> > > Hm, this one arguably is not corruption, but we still cannot\n> > > continue. ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE or maybe a new error code?\n\nAdded a ERRCODE_TIMELINE_INCONSISTENT to be specific about the\nscenarios with timeline mismatches. Thoughts ?\n\n>> Another aside: Isn't the hint here obsolete since we've removed exclusive\nbackups? I can't think of any scenario now where removing backup_label would\nbe correct in a non-exclusive backup.\n\nAttached another patch which applies on top of the first patch to\nremove the obsolete hint.\n\n- KK", "msg_date": "Mon, 4 Dec 2023 01:07:22 -0800", "msg_from": "Krishnakumar R <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in\n xlogrecovery" } ]
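To make the shape of these suggestions concrete: a new error code means a new line in src/backend/utils/errcodes.txt plus its use at the call sites in xlogrecovery.c. The sketch below is hypothetical -- ERRCODE_CLUSTER_CORRUPTED and ERRCODE_TIMELINE_INCONSISTENT are only names proposed in this thread, and the SQLSTATE values shown are placeholders that would still need to be chosen (or a different class picked) before anything is committed; recoveryTargetTLI is the existing variable from the quoted diff.

    /*
     * Hypothetical additions to src/backend/utils/errcodes.txt (placeholder
     * SQLSTATEs in Class XX -- Internal Error):
     *
     *   XX003    E    ERRCODE_CLUSTER_CORRUPTED        cluster_corrupted
     *   XX004    E    ERRCODE_TIMELINE_INCONSISTENT    timeline_inconsistent
     */
    ereport(FATAL,
            (errcode(ERRCODE_CLUSTER_CORRUPTED),
             errmsg("could not locate required checkpoint record")));

    ereport(FATAL,
            (errcode(ERRCODE_TIMELINE_INCONSISTENT),
             errmsg("requested timeline %u is not a child of this server's history",
                    recoveryTargetTLI)));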
[ { "msg_contents": "I noticed that the postgres_fdw test periodically times out on Windows:\n\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-11-10%2003%3A12%3A58\n\thttps://cirrus-ci.com/task/5504294095421440\n\thttps://cirrus-ci.com/task/4814111003901952\n\thttps://cirrus-ci.com/task/5343037535027200\n\thttps://cirrus-ci.com/task/5893655966253056\n\thttps://cirrus-ci.com/task/4568159320014848\n\thttps://cirrus-ci.com/task/5238067850641408\n\t(and many more from cfbot)\n\n From a quick sampling, the relevant test logs end with either\n\n\tERROR: server \"unknownserver\" does not exist\n\tSTATEMENT: SELECT postgres_fdw_disconnect('unknownserver');\n\nor\n\n\tERROR: relation \"public.non_existent_table\" does not exist\n\tCONTEXT: remote SQL command: SELECT a, b, c FROM public.non_existent_table\n\tSTATEMENT: SELECT * FROM async_pt;\n\nbefore the test seems to hang.\n\nAFAICT the failures began around September 10th, which leads me to wonder\nif this is related to commit 04a09ee. That is little more than a wild\nguess, though. I haven't been able to deduce much else from the logs I can\nfind, and I didn't find any previous reports about this in the archives\nafter lots of searching, so I thought I'd at least park these notes here in\ncase anyone else has ideas.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:38:34 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "postgres_fdw test timeouts" }, { "msg_contents": "On Fri, Dec 1, 2023 at 9:38 AM Nathan Bossart <[email protected]> wrote:\n> AFAICT the failures began around September 10th, which leads me to wonder\n> if this is related to commit 04a09ee. That is little more than a wild\n> guess, though. I haven't been able to deduce much else from the logs I can\n> find, and I didn't find any previous reports about this in the archives\n> after lots of searching, so I thought I'd at least park these notes here in\n> case anyone else has ideas.\n\nThanks for finding this correlation. Yeah, poking around in the cfbot\nhistory database I see about 1 failure like that per day since that\ndate, and there doesn't seem to be anything else as obviously likely\nto be related to wakeups and timeouts. I don't understand what's\nwrong with the logic, and I think it would take someone willing to\ndebug it locally to figure that out. Unless someone has an idea, I'm\nleaning towards reverting that commit and leaving the relatively minor\nproblem that it was intended to fix as a TODO.\n\n\n", "msg_date": "Sun, 3 Dec 2023 12:48:49 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "Hello Thomas,\n\n03.12.2023 02:48, Thomas Munro wrote:\n> Thanks for finding this correlation. Yeah, poking around in the cfbot\n> history database I see about 1 failure like that per day since that\n> date, and there doesn't seem to be anything else as obviously likely\n> to be related to wakeups and timeouts. I don't understand what's\n> wrong with the logic, and I think it would take someone willing to\n> debug it locally to figure that out. Unless someone has an idea, I'm\n> leaning towards reverting that commit and leaving the relatively minor\n> problem that it was intended to fix as a TODO\n\nI've managed to reproduce the failure locally when running postgres_fdw_x/\nregress in parallel (--num-processes 10). 
It reproduced for me on\non 04a09ee94 (iterations 1, 2, 4), but not on 04a09ee94~1 (30 iterations\npassed).\n\nI'm going to investigate this case within days. Maybe we could find a\nbetter fix for the issue.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 3 Dec 2023 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "On Sun, Dec 3, 2023 at 6:00 PM Alexander Lakhin <[email protected]> wrote:\n> I've managed to reproduce the failure locally when running postgres_fdw_x/\n> regress in parallel (--num-processes 10). It reproduced for me on\n> on 04a09ee94 (iterations 1, 2, 4), but not on 04a09ee94~1 (30 iterations\n> passed).\n>\n> I'm going to investigate this case within days. Maybe we could find a\n> better fix for the issue.\n\nThanks. One thing I can recommend to anyone trying to understand the\nchange is that you view it with:\n\ngit show --ignore-all-space 04a09ee\n\n... because it changed a lot of indentation when wrapping a bunch of\nstuff in a new for loop.\n\n\n", "msg_date": "Sun, 3 Dec 2023 19:00:15 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "Hello Thomas,\n\n03.12.2023 09:00, Thomas Munro wrote:\n> On Sun, Dec 3, 2023 at 6:00 PM Alexander Lakhin <[email protected]> wrote:\n>> I've managed to reproduce the failure locally when running postgres_fdw_x/\n>> regress in parallel (--num-processes 10). It reproduced for me on\n>> on 04a09ee94 (iterations 1, 2, 4), but not on 04a09ee94~1 (30 iterations\n>> passed).\n>>\n>> I'm going to investigate this case within days. Maybe we could find a\n>> better fix for the issue.\n> Thanks. One thing I can recommend to anyone trying to understand the\n> change is that you view it with:\n\nI think, I've found out what's going on here.\nThe culprit is WSAEnumNetworkEvents() assisted by non-trivial logic of\nExecAppendAsyncEventWait().\nFor the case noccurred > 1, ExecAppendAsyncEventWait() performs a loop,\nwhere ExecAsyncNotify() is called for the first AsyncRequest, but the\nsecond one also processed inside, through a recursive call to\nExecAppendAsyncEventWait():\n  -> ExecAsyncNotify -> produce_tuple_asynchronously\n-> ExecScan -> ExecInterpExpr -> ExecSetParamPlan -> ExecProcNodeFirst\n-> ExecAgg -> agg_retrieve_direct -> ExecProcNodeInstr -> ExecAppend\n-> ExecAppendAsyncEventWait.\nHere we get into the first loop and call ExecAsyncConfigureWait() for the\nsecond AsyncRequest (because we haven't reset it's callback_pending yet),\nand it leads to creating another WaitEvent for the second socket inside\npostgresForeignAsyncConfigureWait():\n     AddWaitEventToSet(set, WL_SOCKET_READABLE, PQsocket(fsstate->conn), ...\n\nThis WaitEvent seemingly misses an event that we should get for that socket.\nIt's not that important to get noccured > 1 in\nExecAppendAsyncEventWait() to see the failure, it's enough to call\nWSAEnumNetworkEvents() inside WaitEventSetWaitBlock() for the second socket\n(I tried to exit from the WaitEventSetWaitBlock's new loop prematurely,\nwithout touching occurred_events, returned_events on a second iteration of\nthe loop).\n\nSo it looks like we have the same issue with multiple event handles\nassociated with a single socket here.\nAnd v2-0003-Redesign-Windows-socket-event-management.patch from [1]\n\"surprisingly\" helps in this case as well (I could not see a failure for\n100 iterations of 10 tests in 
parallel).\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGL0bikWSC2XW-zUgFWNVEpD_gEWXndi2PE5tWqmApkpZQ%40mail.gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 7 Dec 2023 12:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "On Thu, Dec 7, 2023 at 10:00 PM Alexander Lakhin <[email protected]> wrote:\n> I think, I've found out what's going on here.\n> The culprit is WSAEnumNetworkEvents() assisted by non-trivial logic of\n> ExecAppendAsyncEventWait().\n> For the case noccurred > 1, ExecAppendAsyncEventWait() performs a loop,\n> where ExecAsyncNotify() is called for the first AsyncRequest, but the\n> second one also processed inside, through a recursive call to\n> ExecAppendAsyncEventWait():\n> -> ExecAsyncNotify -> produce_tuple_asynchronously\n> -> ExecScan -> ExecInterpExpr -> ExecSetParamPlan -> ExecProcNodeFirst\n> -> ExecAgg -> agg_retrieve_direct -> ExecProcNodeInstr -> ExecAppend\n> -> ExecAppendAsyncEventWait.\n> Here we get into the first loop and call ExecAsyncConfigureWait() for the\n> second AsyncRequest (because we haven't reset it's callback_pending yet),\n> and it leads to creating another WaitEvent for the second socket inside\n> postgresForeignAsyncConfigureWait():\n> AddWaitEventToSet(set, WL_SOCKET_READABLE, PQsocket(fsstate->conn), ...\n\nOh, wow. Nice detective work! Thank you for figuring that out.\n\n> So it looks like we have the same issue with multiple event handles\n> associated with a single socket here.\n> And v2-0003-Redesign-Windows-socket-event-management.patch from [1]\n> \"surprisingly\" helps in this case as well (I could not see a failure for\n> 100 iterations of 10 tests in parallel).\n\nYeah, this makes perfect sense.\n\nSo, commit 04a09ee is not guilty. But as the saying goes, \"no good\ndeed goes unpunished\", and work on our Windows port seems to be\nespecially prone to backfiring when kludges combine...\n\nNow we have the question of whether to go forwards (commit the \"socket\ntable\" thing), or backwards (revert 04a09ee for now to clear the CI\nfailures). I don't love the hidden complexity of the socket table and\nam not in a hurry to commit it, but I don't currently see another\nway... on the other hand we have other CI flapping due to that problem\ntoo so reverting 04a09ee would be sweeping problems under the carpet.\nI still need to process your feedback/discoveries on that other thread\nand it may take a few weeks for me to get to it.\n\n\n", "msg_date": "Fri, 8 Dec 2023 09:55:58 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "On Fri, Dec 08, 2023 at 09:55:58AM +1300, Thomas Munro wrote:\n> Oh, wow. Nice detective work! Thank you for figuring that out.\n\n+1\n\n> Now we have the question of whether to go forwards (commit the \"socket\n> table\" thing), or backwards (revert 04a09ee for now to clear the CI\n> failures). I don't love the hidden complexity of the socket table and\n> am not in a hurry to commit it, but I don't currently see another\n> way... 
on the other hand we have other CI flapping due to that problem\n> too so reverting 04a09ee would be sweeping problems under the carpet.\n> I still need to process your feedback/discoveries on that other thread\n> and it may take a few weeks for me to get to it.\n\nI don't think we need to revert 04a09ee provided the issue is unrelated and\na fix is in development.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 17:02:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "08.12.2023 02:02, Nathan Bossart wrote:\n> On Fri, Dec 08, 2023 at 09:55:58AM +1300, Thomas Munro wrote:\n>> Now we have the question of whether to go forwards (commit the \"socket\n>> table\" thing), or backwards (revert 04a09ee for now to clear the CI\n>> failures). I don't love the hidden complexity of the socket table and\n>> am not in a hurry to commit it, but I don't currently see another\n>> way... on the other hand we have other CI flapping due to that problem\n>> too so reverting 04a09ee would be sweeping problems under the carpet.\n>> I still need to process your feedback/discoveries on that other thread\n>> and it may take a few weeks for me to get to it.\n> I don't think we need to revert 04a09ee provided the issue is unrelated and\n> a fix is in development.\n\nI've reviewed the links posted upthread and analyzed statistics of such\nfailures:\nyes, it happens rather frequently in Cirrus CI, but there might be dozens\nof successful runs, for example:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F45%2F3686\nhas 1 postgres_fdw failure on Windows per 32 runs.\nAnd there is only one such failure for 90 days in the buildfarm.\n(Perhaps the probability of the failure depend on external factors, such as\nconcurrent activity.)\n\nSo I would not say that it's a dominant failure for now, and given that\n04a09ee lives in master only, maybe we can save two commits (Revert +\nRevert of revert) while moving to a more persistent solution.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 10 Dec 2023 12:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "On Sun, Dec 10, 2023 at 12:00:01PM +0300, Alexander Lakhin wrote:\n> So I would not say that it's a dominant failure for now, and given that\n> 04a09ee lives in master only, maybe we can save two commits (Revert +\n> Revert of revert) while moving to a more persistent solution.\n\nI just checked in on this one to see whether we needed to create an \"open\nitem\" for v17. While I'm not seeing the failures anymore, the patch that\nAlexander claimed should fix it [0] doesn't appear to have been committed,\neither. 
Perhaps this was fixed by something else...\n\n[0] https://postgr.es/m/CA%2BhUKGL0bikWSC2XW-zUgFWNVEpD_gEWXndi2PE5tWqmApkpZQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Mar 2024 16:00:47 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "Hello Nathan,\n\n08.03.2024 01:00, Nathan Bossart wrote:\n> On Sun, Dec 10, 2023 at 12:00:01PM +0300, Alexander Lakhin wrote:\n>> So I would not say that it's a dominant failure for now, and given that\n>> 04a09ee lives in master only, maybe we can save two commits (Revert +\n>> Revert of revert) while moving to a more persistent solution.\n> I just checked in on this one to see whether we needed to create an \"open\n> item\" for v17.  While I'm not seeing the failures anymore, the patch that\n> Alexander claimed should fix it [0] doesn't appear to have been committed,\n> either.  Perhaps this was fixed by something else...\n>\n> [0] https://postgr.es/m/CA%2BhUKGL0bikWSC2XW-zUgFWNVEpD_gEWXndi2PE5tWqmApkpZQ%40mail.gmail.com\n\nI have re-run the tests and found out that the issue was fixed by\nd3c5f37dd. It changed the inner of the loop \"while (PQisBusy(conn))\",\nformerly contained in pgfdw_get_result() as follows:\n                 /* Data available in socket? */\n                 if (wc & WL_SOCKET_READABLE)\n                 {\n                     if (!PQconsumeInput(conn))\n                         pgfdw_report_error(ERROR, NULL, conn, false, query);\n                 }\n->\n         /* Consume whatever data is available from the socket */\n         if (PQconsumeInput(conn) == 0)\n         {\n             /* trouble; expect PQgetResult() to return NULL */\n             break;\n         }\n\nThat is, the unconditional \"if PQconsumeInput() ...\" eliminates the test\ntimeout.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 9 Mar 2024 10:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw test timeouts" }, { "msg_contents": "On Sat, Mar 09, 2024 at 10:00:00AM +0300, Alexander Lakhin wrote:\n> I have re-run the tests and found out that the issue was fixed by\n> d3c5f37dd. It changed the inner of the loop \"while (PQisBusy(conn))\",\n> formerly contained in pgfdw_get_result() as follows:\n>                  /* Data available in socket? */\n>                  if (wc & WL_SOCKET_READABLE)\n>                  {\n>                      if (!PQconsumeInput(conn))\n>                          pgfdw_report_error(ERROR, NULL, conn, false, query);\n>                  }\n> ->\n>          /* Consume whatever data is available from the socket */\n>          if (PQconsumeInput(conn) == 0)\n>          {\n>              /* trouble; expect PQgetResult() to return NULL */\n>              break;\n>          }\n> \n> That is, the unconditional \"if PQconsumeInput() ...\" eliminates the test\n> timeout.\n\nThanks for confirming!  I'm assuming this just masks the underlying\nissue...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 9 Mar 2024 08:24:40 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw test timeouts" } ]
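As background for why the d3c5f37dd shape is more robust on Windows: the rewritten loop no longer decides whether to read based on the readiness bits reported by the wait primitive, it simply tries PQconsumeInput() on every wakeup, so a missed or stale WL_SOCKET_READABLE report costs at most one extra system call instead of a hang. The following is only a simplified sketch of that pattern, not the actual pgfdw_get_result() code (which also deals with query cancellation and uses its own wait events); it assumes a backend or contrib context where MyLatch is available and conn is an established libpq connection.

    /* Simplified sketch: drain a libpq connection without blocking the latch. */
    PGresult   *result;

    while (PQisBusy(conn))
    {
        int         rc;

        rc = WaitLatchOrSocket(MyLatch,
                               WL_LATCH_SET | WL_SOCKET_READABLE |
                               WL_EXIT_ON_PM_DEATH,
                               PQsocket(conn),
                               -1L,        /* no timeout */
                               PG_WAIT_EXTENSION);

        if (rc & WL_LATCH_SET)
        {
            ResetLatch(MyLatch);
            CHECK_FOR_INTERRUPTS();
        }

        /* Consume input unconditionally, regardless of which events fired. */
        if (PQconsumeInput(conn) == 0)
            break;              /* trouble; PQgetResult() will return NULL */
    }

    result = PQgetResult(conn);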
[ { "msg_contents": "Hi,\n\nI recently mentioned to Robert (and also Heikki earlier), that I think I see a\nway to detect an omitted backup_label in a relevant subset of the cases (it'd\napply to the pg_control as well, if we moved to that). Robert encouraged me\nto share the idea, even though it does not provide complete protection.\n\n\nThe subset I think we can address is the following:\n\na) An omitted backup_label would lead to corruption, i.e. without the\n backup_label we won't start recovery at the right position. Obviously it'd\n be better to also catch a wrong procedure when it'd not cause corruption -\n perhaps my idea can be extended to handle that, with a small bit of\n overhead.\n\nb) The backup has been taken from a primary. Unfortunately that probably can't\n be addressed - but the vast majority of backups are taken from a primary,\n so I think it's still a worthwhile protection.\n\n\nHere's my approach\n\n1) We add a XLOG_BACKUP_START WAL record when starting a base backup on a\n primary, emitted just *after* the checkpoint completed\n\n2) When replaying a base backup start record, we create a state file that\n includes the corresponding LSN in the filename\n\n3) On the primary, the state file for XLOG_BACKUP_START is *not* created at\n that time. Instead the state file is created during pg_backup_stop().\n\n4) When replaying a XLOG_BACKUP_END record, we verif that the state file\n created by XLOG_BACKUP_START is present, and error out if not. Backups\n that started before the redo LSN from backup_label are ignored\n (necessitates remembering that LSN, but we've been discussing that anyway).\n\n\nBecause the backup state file on the primary is only created during\npg_backup_stop(), a copy of the data directory taken between pg_backup_start()\nand pg_backup_stop() does *not* contain the corresponding \"backup state\nfile\". Because of this, an omitted backup_label is detected if recovery does\nnot start early enough - recovery won't encounter the XLOG_BACKUP_START record\nand thus would not create the state file, leading to an error in 4).\n\nIt is not a problem that the primary does not create the state file before the\npg_backup_stop() - if the primary crashes before pg_backup_stop(), there is no\nXLOG_BACKUP_END and thus no error will be raised. It's a bit odd that the\nsequence differs between normal processing and recovery, but I think that's\nnothing a good comment couldn't explain.\n\n\nI haven't worked out the details, but I think we might be able extend this to\ncatch errors even if there is no checkpoint during the base backup, by\nemitting the WAL record *before* the RequestCheckpoint(), and creating the\ncorresponding state file during backup_label processing at the start of\nrecovery. That'd probably make the logic for when we can remove the backup\nstate files a bit more complicated, but I think we could deal with that.\n\n\nComments? Swear words?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 12:56:05 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Detecting some cases of missing backup_label" }, { "msg_contents": "Greetings,\n\n* Andres Freund ([email protected]) wrote:\n> I recently mentioned to Robert (and also Heikki earlier), that I think I see a\n> way to detect an omitted backup_label in a relevant subset of the cases (it'd\n> apply to the pg_control as well, if we moved to that). 
Robert encouraged me\n> to share the idea, even though it does not provide complete protection.\n\nThat would certainly be nice.\n\n> The subset I think we can address is the following:\n> \n> a) An omitted backup_label would lead to corruption, i.e. without the\n> backup_label we won't start recovery at the right position. Obviously it'd\n> be better to also catch a wrong procedure when it'd not cause corruption -\n> perhaps my idea can be extended to handle that, with a small bit of\n> overhead.\n> \n> b) The backup has been taken from a primary. Unfortunately that probably can't\n> be addressed - but the vast majority of backups are taken from a primary,\n> so I think it's still a worthwhile protection.\n\nAgreed that this is a worthwhile set to try and address, even if we\ncan't address other cases.\n\n> Here's my approach\n> \n> 1) We add a XLOG_BACKUP_START WAL record when starting a base backup on a\n> primary, emitted just *after* the checkpoint completed\n> \n> 2) When replaying a base backup start record, we create a state file that\n> includes the corresponding LSN in the filename\n> \n> 3) On the primary, the state file for XLOG_BACKUP_START is *not* created at\n> that time. Instead the state file is created during pg_backup_stop().\n> \n> 4) When replaying a XLOG_BACKUP_END record, we verif that the state file\n> created by XLOG_BACKUP_START is present, and error out if not. Backups\n> that started before the redo LSN from backup_label are ignored\n> (necessitates remembering that LSN, but we've been discussing that anyway).\n> \n> \n> Because the backup state file on the primary is only created during\n> pg_backup_stop(), a copy of the data directory taken between pg_backup_start()\n> and pg_backup_stop() does *not* contain the corresponding \"backup state\n> file\". Because of this, an omitted backup_label is detected if recovery does\n> not start early enough - recovery won't encounter the XLOG_BACKUP_START record\n> and thus would not create the state file, leading to an error in 4).\n\nWhile I see the idea here, I think, doesn't it end up being an issue if\nthings happen like this:\n\npg_backup_start -> XLOG_BACKUP_START WAL written -> new checkpoint\nhappens -> pg_backup_stop -> XLOG_BACKUP_STOP WAL written -> crash\n\nIn that scenario, we'd go back to the new checkpoint (the one *after*\nthe checkpoint that happened before we wrote XLOG_BACKUP_START), start\nreplaying, and then hit the XLOG_BACKUP_STOP and then error out, right?\nEven though we're actually doing crash recovery and everything should be\nfine as long as we replay all of the WAL.\n\nPerhaps we can make the pg_backup_stop and(/or?) the writing out of\nXLOG_BACKUP_STOP wait until just before the next checkpoint and\nhopefully minimize that window ... but I'm not sure if we could make\nthat window zero and what happens if someone does end up hitting it?\nDoesn't seem like there's any way around it, which seems like it might\nbe a problem. I suppose it wouldn't be hard to add some option to tell\nPG to ignore the XLOG_BACKUP_STOP ... but then that's akin to removing\nbackup_label which lands us possibly back into the issue of people\nmis-using that.\n\n> It is not a problem that the primary does not create the state file before the\n> pg_backup_stop() - if the primary crashes before pg_backup_stop(), there is no\n> XLOG_BACKUP_END and thus no error will be raised. 
It's a bit odd that the\n> sequence differs between normal processing and recovery, but I think that's\n> nothing a good comment couldn't explain.\n\nRight, crashing before pg_backup_stop() is fine, but crashing *after*\nwould be an issue, I think, as outlined above, until the next checkpoint\ncompletes, so we've moved the window but not eliminated it.\n\n> I haven't worked out the details, but I think we might be able extend this to\n> catch errors even if there is no checkpoint during the base backup, by\n> emitting the WAL record *before* the RequestCheckpoint(), and creating the\n> corresponding state file during backup_label processing at the start of\n> recovery. That'd probably make the logic for when we can remove the backup\n> state files a bit more complicated, but I think we could deal with that.\n\nNot entirely following this- are you meaning that we might be able to\nmake something here work in the case where we don't have\npg_backup_start() wait for a checkpoint to happen (which I have some\nserious doubts about?), or are you saying that the above doesn't work\nunless there's at least one post-pg_backup_start() checkpoint? I don't\nimmediately see why that would be the case though. Also, if we wrote\nout the XLOG_BACKUP_START before the checkpoint that we start replay\nfrom and instead move that logic to backup_label processing ... doesn't\nthat end up not working in the same case as we have today- where someone\ndecides to remove backup_label?\n\nGoing to stop guessing here as I'm clearly not understanding something\nabout this part. Maybe this is the part that's addressing the concern\nraised above though and if so, sorry, but would appreciate some\nadditional explanation.\n\nThanks!\n\nStephen", "msg_date": "Tue, 5 Dec 2023 10:54:58 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Detecting some cases of missing backup_label" }, { "msg_contents": "Greetings,\n\n* Stephen Frost ([email protected]) wrote:\n> * Andres Freund ([email protected]) wrote:\n> > I recently mentioned to Robert (and also Heikki earlier), that I think I see a\n> > way to detect an omitted backup_label in a relevant subset of the cases (it'd\n> > apply to the pg_control as well, if we moved to that). Robert encouraged me\n> > to share the idea, even though it does not provide complete protection.\n> \n> That would certainly be nice.\n> \n> > The subset I think we can address is the following:\n> > \n> > a) An omitted backup_label would lead to corruption, i.e. without the\n> > backup_label we won't start recovery at the right position. Obviously it'd\n> > be better to also catch a wrong procedure when it'd not cause corruption -\n> > perhaps my idea can be extended to handle that, with a small bit of\n> > overhead.\n> > \n> > b) The backup has been taken from a primary. Unfortunately that probably can't\n> > be addressed - but the vast majority of backups are taken from a primary,\n> > so I think it's still a worthwhile protection.\n> \n> Agreed that this is a worthwhile set to try and address, even if we\n> can't address other cases.\n> \n> > Here's my approach\n> > \n> > 1) We add a XLOG_BACKUP_START WAL record when starting a base backup on a\n> > primary, emitted just *after* the checkpoint completed\n> > \n> > 2) When replaying a base backup start record, we create a state file that\n> > includes the corresponding LSN in the filename\n> > \n> > 3) On the primary, the state file for XLOG_BACKUP_START is *not* created at\n> > that time. 
Instead the state file is created during pg_backup_stop().\n> > \n> > 4) When replaying a XLOG_BACKUP_END record, we verif that the state file\n> > created by XLOG_BACKUP_START is present, and error out if not. Backups\n> > that started before the redo LSN from backup_label are ignored\n> > (necessitates remembering that LSN, but we've been discussing that anyway).\n> > \n> > \n> > Because the backup state file on the primary is only created during\n> > pg_backup_stop(), a copy of the data directory taken between pg_backup_start()\n> > and pg_backup_stop() does *not* contain the corresponding \"backup state\n> > file\". Because of this, an omitted backup_label is detected if recovery does\n> > not start early enough - recovery won't encounter the XLOG_BACKUP_START record\n> > and thus would not create the state file, leading to an error in 4).\n> \n> While I see the idea here, I think, doesn't it end up being an issue if\n> things happen like this:\n> \n> pg_backup_start -> XLOG_BACKUP_START WAL written -> new checkpoint\n> happens -> pg_backup_stop -> XLOG_BACKUP_STOP WAL written -> crash\n> \n> In that scenario, we'd go back to the new checkpoint (the one *after*\n> the checkpoint that happened before we wrote XLOG_BACKUP_START), start\n> replaying, and then hit the XLOG_BACKUP_STOP and then error out, right?\n> Even though we're actually doing crash recovery and everything should be\n> fine as long as we replay all of the WAL.\n\nAndres and I discussed this in person at PGConf.eu and the idea is that\nif we find a XLOG_BACKUP_STOP record then we can check if the state file\nwas written out and if so then we can conclude that we are doing crash\nrecovery and not restoring from a backup and therefore we don't error\nout. This also implies that we don't consider PG to be recovered at the\nXLOG_BACKUP_STOP point, if the state file exists, but instead we have to\nbe sure to replay all WAL that's been written. Perhaps we even\nexplicitly refuse to use restore_command in this case?\n\nWe do error out if we hit a XLOG_BACKUP_STOP and the state file\ndoesn't exist, as that implies that we started replaying from a point\nafter a XLOG_BACKUP_START record was written but are working from a copy\nof the data directory which didn't include the state file.\n\nOf course, we need to actually implement and test these different cases\nto make sure it all works but I'm at least feeling better about the idea\nand wanted to share that here.\n\nThanks,\n\nStephen", "msg_date": "Mon, 18 Dec 2023 09:39:49 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Detecting some cases of missing backup_label" }, { "msg_contents": "On 12/18/23 10:39, Stephen Frost wrote:\n> Greetings,\n> \n> * Stephen Frost ([email protected]) wrote:\n>> * Andres Freund ([email protected]) wrote:\n>>> I recently mentioned to Robert (and also Heikki earlier), that I think I see a\n>>> way to detect an omitted backup_label in a relevant subset of the cases (it'd\n>>> apply to the pg_control as well, if we moved to that). Robert encouraged me\n>>> to share the idea, even though it does not provide complete protection.\n>>\n>> That would certainly be nice.\n>>\n>>> The subset I think we can address is the following:\n>>>\n>>> a) An omitted backup_label would lead to corruption, i.e. without the\n>>> backup_label we won't start recovery at the right position. 
Obviously it'd\n>>> be better to also catch a wrong procedure when it'd not cause corruption -\n>>> perhaps my idea can be extended to handle that, with a small bit of\n>>> overhead.\n>>>\n>>> b) The backup has been taken from a primary. Unfortunately that probably can't\n>>> be addressed - but the vast majority of backups are taken from a primary,\n>>> so I think it's still a worthwhile protection.\n>>\n>> Agreed that this is a worthwhile set to try and address, even if we\n>> can't address other cases.\n>>\n>>> Here's my approach\n>>>\n>>> 1) We add a XLOG_BACKUP_START WAL record when starting a base backup on a\n>>> primary, emitted just *after* the checkpoint completed\n>>>\n>>> 2) When replaying a base backup start record, we create a state file that\n>>> includes the corresponding LSN in the filename\n>>>\n>>> 3) On the primary, the state file for XLOG_BACKUP_START is *not* created at\n>>> that time. Instead the state file is created during pg_backup_stop().\n>>>\n>>> 4) When replaying a XLOG_BACKUP_END record, we verif that the state file\n>>> created by XLOG_BACKUP_START is present, and error out if not. Backups\n>>> that started before the redo LSN from backup_label are ignored\n>>> (necessitates remembering that LSN, but we've been discussing that anyway).\n>>>\n>>>\n>>> Because the backup state file on the primary is only created during\n>>> pg_backup_stop(), a copy of the data directory taken between pg_backup_start()\n>>> and pg_backup_stop() does *not* contain the corresponding \"backup state\n>>> file\". Because of this, an omitted backup_label is detected if recovery does\n>>> not start early enough - recovery won't encounter the XLOG_BACKUP_START record\n>>> and thus would not create the state file, leading to an error in 4).\n>>\n>> While I see the idea here, I think, doesn't it end up being an issue if\n>> things happen like this:\n>>\n>> pg_backup_start -> XLOG_BACKUP_START WAL written -> new checkpoint\n>> happens -> pg_backup_stop -> XLOG_BACKUP_STOP WAL written -> crash\n>>\n>> In that scenario, we'd go back to the new checkpoint (the one *after*\n>> the checkpoint that happened before we wrote XLOG_BACKUP_START), start\n>> replaying, and then hit the XLOG_BACKUP_STOP and then error out, right?\n>> Even though we're actually doing crash recovery and everything should be\n>> fine as long as we replay all of the WAL.\n> \n> Andres and I discussed this in person at PGConf.eu and the idea is that\n> if we find a XLOG_BACKUP_STOP record then we can check if the state file\n> was written out and if so then we can conclude that we are doing crash\n> recovery and not restoring from a backup and therefore we don't error\n> out. This also implies that we don't consider PG to be recovered at the\n> XLOG_BACKUP_STOP point, if the state file exists, but instead we have to\n> be sure to replay all WAL that's been written. 
Perhaps we even\n> explicitly refuse to use restore_command in this case?\n> \n> We do error out if we hit a XLOG_BACKUP_STOP and the state file\n> doesn't exist, as that implies that we started replaying from a point\n> after a XLOG_BACKUP_START record was written but are working from a copy\n> of the data directory which didn't include the state file.\n> \n> Of course, we need to actually implement and test these different cases\n> to make sure it all works but I'm at least feeling better about the idea\n> and wanted to share that here.\n\nI've run this through a bunch of scenarios (in my head) with parallel \nbackups and it does seem to hold up.\n\nI think we'd need to write the state file before XLOG_BACKUP_START just \nin case. Seems better to have an extra state file rather than have one \nbe missing.\n\nAs you say, we'll need to store redo for the last recovered backup in \npg_control. I guess it would be OK to remove that when the cluster is \npromoted. As long as recovery is going on seems like it would always be \npossible to hit an XLOG_BACKUP_STOP for an even longer running backup.\n\nI'm a little worried about what happens if a state file goes missing, \nbut I guess that could be true of any file in PGDATA.\n\nProbably we'd want to exclude *all* state files from backups, though. \nSeems like in various PITR scenarios it could be hard to determine when \nto remove them.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 20 Dec 2023 13:11:37 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Detecting some cases of missing backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-12-20 13:11:37 -0400, David Steele wrote:\n> I've run this through a bunch of scenarios (in my head) with parallel\n> backups and it does seem to hold up.\n>\n> I think we'd need to write the state file before XLOG_BACKUP_START just in\n> case. Seems better to have an extra state file rather than have one be\n> missing.\n\nThat'd very significantly weaken the approach, afaict, because \"external\" base\nbase backup could end up copying those files. The whole point is to detect\nbroken procedures, so relying on such files being excluded from the base\nbackup seems like a bad idea.\n\nI also see no need to do so - because we'd only verify that a backup start has\nbeen replayed when replaying XLOG_BACKUP_STOP there's no danger in not\ncreating the files during XLOG_BACKUP_START, but doing so just before logging\nthe XLOG_BACKUP_STOP.\n\n\n\n> I'm a little worried about what happens if a state file goes missing, but I\n> guess that could be true of any file in PGDATA.\n\nYea, that seems like a non-issue to me.\n\n\n> Probably we'd want to exclude *all* state files from backups, though.\n\nI don't think so - I think we want the opposite? As noted above, I think in a\nsafety net like this we shouldn't assume that backup procedures were followed\ncorrectly.\n\n\n> Seems like in various PITR scenarios it could be hard to determine when to\n> remove them.\n\nWhy? 
I think we can basically remove the files when:\n\na) after the checkpoint during which XLOG_BACKUP_STOP was replayed - I think\n we already have the infrastructure to queue file deletions that we can hook\n into\nb) when replaying a shutdown checkpoint / after creation of a shutdown\n checkpoint\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Dec 2023 03:37:46 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Detecting some cases of missing backup_label" }, { "msg_contents": "On 12/21/23 07:37, Andres Freund wrote:\n> \n> On 2023-12-20 13:11:37 -0400, David Steele wrote:\n>> I've run this through a bunch of scenarios (in my head) with parallel\n>> backups and it does seem to hold up.\n>>\n>> I think we'd need to write the state file before XLOG_BACKUP_START just in\n>> case. Seems better to have an extra state file rather than have one be\n>> missing.\n> \n> That'd very significantly weaken the approach, afaict, because \"external\" base\n> base backup could end up copying those files. The whole point is to detect\n> broken procedures, so relying on such files being excluded from the base\n> backup seems like a bad idea.\n> \n> I also see no need to do so - because we'd only verify that a backup start has\n> been replayed when replaying XLOG_BACKUP_STOP there's no danger in not\n> creating the files during XLOG_BACKUP_START, but doing so just before logging\n> the XLOG_BACKUP_STOP.\n\nUgh, I meant XLOG_BACKUP_STOP. So sounds like we are on the same page.\n\n>> Probably we'd want to exclude *all* state files from backups, though.\n> \n> I don't think so - I think we want the opposite? As noted above, I think in a\n> safety net like this we shouldn't assume that backup procedures were followed\n> correctly.\n\nFair enough.\n\n>> Seems like in various PITR scenarios it could be hard to determine when to\n>> remove them.\n> \n> Why? I think we can basically remove the files when:\n> \n> a) after the checkpoint during which XLOG_BACKUP_STOP was replayed - I think\n> we already have the infrastructure to queue file deletions that we can hook\n> into\n> b) when replaying a shutdown checkpoint / after creation of a shutdown\n> checkpoint\n\nI thought about this some more. I *think* any state files a backup can \nsee would have to be for XLOG_BACKUP_STOP records generated during the \nbackup and they would get removed before the cluster had recovered to \nconsistency.\n\nI'd still prefer to exclude state files from the backup, but I agree \nthere is no actual need to do so.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 21 Dec 2023 08:26:29 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Detecting some cases of missing backup_label" } ]
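Pulling the thread's conclusions together, the replay-side checks would look roughly like the sketch below. Every name in it is hypothetical (XLOG_BACKUP_START does not exist today, and CreateBackupStartStateFile(), BackupStartStateFileExists(), backup_start_lsn and backup_label_redo_lsn are invented for illustration); this is a restatement of the design discussed above rather than working code.

    /*
     * Hypothetical redo handling -- a sketch of the discussed protocol only.
     *
     * On the primary, the state file is written just before XLOG_BACKUP_STOP
     * is logged (i.e. in pg_backup_stop()), never at backup start.
     */
    if (info == XLOG_BACKUP_START)
    {
        /* Replay remembers that a backup started at this LSN. */
        CreateBackupStartStateFile(backup_start_lsn);
    }
    else if (info == XLOG_BACKUP_STOP)
    {
        if (backup_start_lsn < backup_label_redo_lsn)
        {
            /* Backup began before our starting checkpoint: not ours, ignore. */
        }
        else if (BackupStartStateFileExists(backup_start_lsn))
        {
            /*
             * Either the start record was replayed (backup_label was used),
             * or this is crash recovery on the node that ran the backup.
             * Keep replaying; do not treat this record as end-of-recovery.
             */
        }
        else
        {
            ereport(FATAL,
                    (errmsg("backup_label appears to have been omitted for a "
                            "backup started at %X/%X",
                            LSN_FORMAT_ARGS(backup_start_lsn))));
        }
    }

State-file cleanup would then happen at the checkpoint that follows replay of XLOG_BACKUP_STOP, or at a shutdown checkpoint, as outlined above.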
[ { "msg_contents": "Hi all,\n\nBack in 2016, a patch set has been proposed to add support for\nsequence access methods:\nhttps://www.postgresql.org/message-id/flat/CA%2BU5nMLV3ccdzbqCvcedd-HfrE4dUmoFmTBPL_uJ9YjsQbR7iQ%40mail.gmail.com\n\nThis included quite a few concepts, somewhat adapted to the point\nwhere this feature was proposed:\n- Addition of USING clause for CREATE/ALTER SEQUENCE.\n- Addition of WITH clause for sequences, with custom reloptions.\n- All sequences rely on heap\n- The user-case focused on was the possibility to have cluster-wide\nsequences, with sequence storage always linked to heap.\n- Dump/restore logic depended on that, with a set of get/set functions\nto be able to retrieve and force a set of properties to sequences.\n\nA bunch of the implementation and design choices back then come down\nto the fact that *all* the sequence properties were located in a\nsingle heap file, including start, restart, cycle, increment, etc.\nPostgres 10 has split this data with the introduction of the catalog\npg_sequence, that has moved all the sequence properties within it.\nAs a result, the sequence \"heap\" metadata got reduced to its\nlast_value, is_called and log_cnt (to count if a metadata tuple should\nbe WAL-logged). Honestly, I think we can do simpler than the original\nproposal, while satisfying more cases than what the original thread\nwanted to address. One thing is that a sequence AM may want storage,\nbut it should be able to plug in to a table AM of its choice.\n\nPlease find attached a patch set that aims at implementing sequence\naccess methods, with callbacks following a model close to table and\nindex AMs, with a few cases in mind:\n- Global sequences (including range-allocation, local caching).\n- Local custom computations (a-la-snowflake).\n\nThe patch set has been reduced to what I consider the minimum\nacceptable for an implementation, with some properties like:\n- Implementation of USING in CREATE SEQUENCE only, no need for WITH\nand reloptions (could be added later).\n- Implementation of dump/restore, with a GUC to force a default\nsequence AM, and a way to dump/restore without a sequence AM set (like\ntable AMs, this relies on SET and a --no-sequence-access-method).\n- Sequence AMs can use a custom table AM to store its meta-data, which\ncould be heap, or something else. A sequence AM is free to choose if\nit should store data or not, and can plug into a custom RMGR to log\ndata.\n- Ensure compatibility with the existing in-core method, called\n\"local\" in this patch set. This uses a heap table, and a local\nsequence AM registers the same pg_class entry as past releases.\nPerhaps this should have a less generic name, like \"seqlocal\",\n\"sequence_local\", but I have a bad tracking history when it comes to\nname things. I've just inherited the name from the patch of 2016.\n- pg_sequence is used to provide hints (or advices) to the sequence\nAM about the way to compute values. A nice side effect of that is\nthat cross-property check for sequences are the same for all sequence\nAMs. 
This gives a clean split between pg_sequence and the metadata\nmanaged by sequence AMs.\n\nOn HEAD, sequence.c holds three different concepts, and decided that\nthis stuff should actually split them for table AMs:\n1) pg_sequence and general sequence properties.\n2) Local sequence cache, for lastval(), depending on the last sequence\nvalue fetched.\n3) In-core sequence metadata, used to grab or set values for all\nthe flavors of setval(), nextval(), etc.\n\nWith a focus on compatibility, the key design choices here are that 1)\nand 2) have the same rules shared across all AMs, and 3) is something\nthat sequence AMs are free to play with as they want. Using this\nconcept, the contents of 3) in sequence.c are now local into the\n\"local\" sequence AM:\n- RMGR for sequences, as of xl_seq_rec and RM_SEQ_ID (renamed to use\n\"local\" as well).\n- FormData_pg_sequence_data for the local sequence metadata, including\nits attributes, SEQ_COL_*, the internal routines managing rewrites of\nits heap, etc.\n- In sequence.c, log_cnt is not a counter, just a way to decide if a\nsequence metadata should be reset or not (note that init_params() only\nresets it to 0 if sequence properties are changed).\nAs a result, 30% of sequence.c is trimmed of its in-core AM concepts,\nall moved to local.c.\n\nWhile working on this patch, I've finished by keeping a focus on\ndump/restore permeability and focus on being able to use nextval(),\nsetval(), lastval() and even pg_sequence_last_value() across all AMs\nso as it makes integration with things like SERIAL or GENERATED\ncolumns natural. Hence, the callbacks are shaped so as these\nfunctions are transparent across all sequence AMs. See sequenceam.h\nfor the details about them, and local.c for the \"local\" sequence AM.\n\nThe attached patch set covers all the ground I wanted to cover with\nthis stuff, including dump/restore, tests, docs, compatibility, etc,\netc. I've done a first split of things to make the review more\nedible, as there are a few independent pieces I've bumped into while\nimplementing the callbacks.\n\nHere are some independent refactoring pieces:\n- 0001 is something to make dump/restore transparent across all\nsequence AMs. Currently, pg_dump queries sequence heap tables, but a\nsequence AM may not have any storage locally, or could grab its values\nfrom elsewhere. pg_sequence_last_value(), a non-documented function\nused for pg_sequence, is extended so as it returns a row made of\n(last_value, is_called), so as it can be used for dump data, across\nall AMs.\n- 0002 introduces sequence_open() and sequence_close(). Like tables\nand indexes, this is coupled with a relkind check, and used as the\nsole way to open sequences in sequence.c.\n- 0003 groups the sequence cache updates of sequence.c closer to each\nother. This stuff was hidden in the middle of unrelated computations.\n- 0004 removes all traces of FormData_pg_sequence_data from\ninit_params(), which is used to guess the start value and is_called \nfor a sequence depending on its properties in the catalog pg_sequence.\n- 0005 is an interesting one. I've noticed that I wanted to attach\ncustom attributes to the pg_class entry of a sequence, or just not\nstore any attributes at *all* within it. One approach I have\nconsidered first is to list for the attributes to send to\nDefineRelation() within each AM, but this requires an early lookup at\nthe sequence AM routines, which was gross. 
Instead, I've chosen the\nmethod similar to views, where attributes are added after the relation\nis defined, using AlterTableInternal(). This simplifies the set of\ncallbacks so as initialization is in charge of creating the sequence\nattributes (if necessary) and add the first batch of metadata tuple\nfor a sequence (again, if necessary). The changes that reflect to\nevent triggers and the commands collected is also something I've\nwanted, as it becomes possible to track what gets internally created\nfor a sequence depending on its AM (see test_ddl_deparse).\n\nThen comes the core of the changes, with a split depending on code\npaths:\n- 0006 includes the backend changes, that caches a set of callback\nroutines for each sequence Relation, with an optional rd_tableam.\nCallbacks are documented in sequenceam.h. Perhaps the sequence RMGR\nrenames should be split into a patch of its own, or just let as-is as\nas it could be shared across more than one AM, but I did not see a\nhuge argument one way or another. The diffs are not that bad\nconsidering that the original patch at +1200 lines for src/backend/,\nwith less documentation for the internal callbacks:\n 45 files changed, 1414 insertions(+), 718 deletions(-)\n- 0007 adds some documentation.\n- 0008 adds support for dump/restore, where I have also incorporated\ntests and docs. The implementation finishes by being really\nstraight-forward, relying on a new option switch to control if\nSET queries for sequence AMs should be dumped and/or restored,\ndepending ona GUC called default_sequence_access_method.\n- 0009 is a short example of sequence AM, which is a kind of in-memory\nsequence reset each time a new connection is made, without any\nphysical storage. I am not clear yet if that's useful as local.c can\nbe used as a point of reference, but I was planning to include that in\none of my own repos on github like my blackhole_am.\n\nI am adding that to the next CF. Thoughts and comments are welcome.\n--\nMichael", "msg_date": "Fri, 1 Dec 2023 14:00:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Sequence Access Methods, round two" }, { "msg_contents": "On Fri, Dec 01, 2023 at 02:00:54PM +0900, Michael Paquier wrote:\n> - 0006 includes the backend changes, that caches a set of callback\n> routines for each sequence Relation, with an optional rd_tableam.\n> Callbacks are documented in sequenceam.h. Perhaps the sequence RMGR\n> renames should be split into a patch of its own, or just let as-is as\n> as it could be shared across more than one AM, but I did not see a\n> huge argument one way or another. 
The diffs are not that bad\n> considering that the original patch at +1200 lines for src/backend/,\n> with less documentation for the internal callbacks:\n> 45 files changed, 1414 insertions(+), 718 deletions(-)\n\nWhile looking at the patch set, I have noticed that the previous patch\n0006 for the backend changes could be split into two patches to make\nthe review much easier, as of\n- A first patch moving the code related to the in-core sequence AM\nfrom commands/sequence.c to access/sequence/local.c, reshaping the\nsequence RMGR:\n 12 files changed, 793 insertions(+), 611 deletions(-) \n- A second patch to introduce the callbacks, the relcache and the\nbackend pieces, renaming the contents moved to local.c by the first\npatch switching it to the handler:\n 38 files changed, 661 insertions(+), 155 deletions(-) \n\nSo please find attached a v2 set, with some typos fixed on top of this\nextra split.\n\nWhile on it, I have been doing some performance tests to see the\neffect of the extra function pointers from the handler, required for\nthe computation of nextval(), using:\n- Postgres on a tmpfs, running on scissors.\n- An unlogged sequence.\n- \"SELECT count(nextval('popo')) FROM generate_series(1,N);\" where N >\n0.\nAt N=5M, one of my perf machines takes 3230ms in average to run the\nquery on HEAD (646ns per value, 20 runs), and 3315ms with the patch\n(663ns, 20 runs), which is.. Err, not noticeable. But perhaps\nsomebody has a better idea of tests, say more micro-benchmarking\naround nextval_internal()?\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 15:53:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On 01.12.23 06:00, Michael Paquier wrote:\n> Please find attached a patch set that aims at implementing sequence\n> access methods, with callbacks following a model close to table and\n> index AMs, with a few cases in mind:\n> - Global sequences (including range-allocation, local caching).\n> - Local custom computations (a-la-snowflake).\n\nThat's a lot of code, but the use cases are summarized in two lines?!?\n\nI would like to see a lot more elaboration what these uses cases are (I \nrecognize the words, but do we have the same interpretation of them?) 
\nand how they would be addressed by what you are proposing, and better \nyet an actual implementation of something useful, rather than just a \ndummy test module.\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:05:58 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, 18 Jan 2024, 16:06 Peter Eisentraut, <[email protected]> wrote:\n>\n> On 01.12.23 06:00, Michael Paquier wrote:\n> > Please find attached a patch set that aims at implementing sequence\n> > access methods, with callbacks following a model close to table and\n> > index AMs, with a few cases in mind:\n> > - Global sequences (including range-allocation, local caching).\n> > - Local custom computations (a-la-snowflake).\n>\n> That's a lot of code, but the use cases are summarized in two lines?!?\n>\n> I would like to see a lot more elaboration what these uses cases are (I\n> recognize the words, but do we have the same interpretation of them?)\n> and how they would be addressed by what you are proposing, and better\n> yet an actual implementation of something useful, rather than just a\n> dummy test module.\n\nAt $prevjob we had a use case for PRNG to generate small,\nnon-sequential \"random\" numbers without the birthday problem occurring\nin sqrt(option space) because that'd increase the printed length of\nthe numbers beyond a set limit. The sequence API proposed here\nwould've been a great alternative to the solution we found, as it\nwould allow a sequence to be backed by an Linear Congruential\nGenerator directly, rather than the implementation of our own\ntransactional random_sequence table.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:54:06 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Jan 18, 2024 at 04:54:06PM +0100, Matthias van de Meent wrote:\n> On Thu, 18 Jan 2024, 16:06 Peter Eisentraut, <[email protected]> wrote:\n>> On 01.12.23 06:00, Michael Paquier wrote:\n>>> Please find attached a patch set that aims at implementing sequence\n>>> access methods, with callbacks following a model close to table and\n>>> index AMs, with a few cases in mind:\n>>> - Global sequences (including range-allocation, local caching).\n>>> - Local custom computations (a-la-snowflake).\n>>\n>> That's a lot of code, but the use cases are summarized in two lines?!?\n>>\n>> I would like to see a lot more elaboration what these uses cases are (I\n>> recognize the words, but do we have the same interpretation of them?)\n>> and how they would be addressed by what you are proposing, and better\n>> yet an actual implementation of something useful, rather than just a\n>> dummy test module.\n> \n> At $prevjob we had a use case for PRNG to generate small,\n> non-sequential \"random\" numbers without the birthday problem occurring\n> in sqrt(option space) because that'd increase the printed length of\n> the numbers beyond a set limit. 
The sequence API proposed here\n> would've been a great alternative to the solution we found, as it\n> would allow a sequence to be backed by an Linear Congruential\n> Generator directly, rather than the implementation of our own\n> transactional random_sequence table.\n\nInteresting.\n\nYes, one of the advantages of this API layer is that all the\ncomputation is hidden behind a sequence object at the PostgreSQL\nlevel, hence applications just need to set a GUC to select a given\ncomputation method *while* still using the same DDLs from their\napplication, or just append USING to their CREATE SEQUENCE but I've\nheard that applications would just do the former and forget about it.\n\nThe reason why this stuff has bumped into my desk is that we have no\ngood solution in-core for globally-distributed transactions for\nactive-active deployments. First, anything we have needs to be\nplugged into default expressions of attributes like with [1] or [2],\nor a tweak is to use sequence values that are computed with different\nincrements to avoid value overlaps across nodes. Both of these\nrequire application changes, which is meh for a bunch of users. The\nsecond approach with integer-based values can be become particularly a\npain if one has to fix value conflicts across nodes as they'd usually\nrequire extra tweaks with the sequence definitions, especially if it\nblocks applications in the middle of the night. Sequence AMs offer\nmore control on that. For example, snowflake IDs can rely on a GUC to\nset a specific machine ID to force some of the bits of a 64-bit\ninteger to be the same for a single node in an active-active\ndeployment, ensuring that any value computed across *all* the nodes of\na cluster are always unique, while being maintained behind a sequence\nobject in-core. (I can post a module to demonstrate that based on the\nsequence AM APIs, just wait a min.. Having more than a test module\nand/or a contrib is a separate discussion.)\n\nBy the way, patches 0001 to 0004 are just refactoring pieces.\nParticularly, 0001 redesigns pg_sequence_last_value() to work across\nthe board for upgrades and dumps, while avoiding a scan of the\nsequence \"heap\" relation in pg_dump. These are improvements for the\ncore code in any case.\n\n[1]: https://github.com/pgEdge/snowflake\n[2]: https://www.postgresql.org/message-id/TY3PR01MB988983D23E4F1DA10567BC5BF5B9A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n--\nMichael", "msg_date": "Fri, 19 Jan 2024 08:27:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4677/\n[2] https://cirrus-ci.com/task/5576959615303680\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:03:16 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Mon, Jan 22, 2024 at 05:03:16PM +1100, Peter Smith wrote:\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nIndeed. 
This is conflicting with the new gist_stratnum_identity on\nOID 8047, so switched to 8048. There was a second one in\nsrc/test/modules/meson.build. Attached is a rebased patch set.\n--\nMichael", "msg_date": "Mon, 22 Jan 2024 15:30:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On 18.01.24 16:54, Matthias van de Meent wrote:\n> At $prevjob we had a use case for PRNG to generate small,\n> non-sequential \"random\" numbers without the birthday problem occurring\n> in sqrt(option space) because that'd increase the printed length of\n> the numbers beyond a set limit. The sequence API proposed here\n> would've been a great alternative to the solution we found, as it\n> would allow a sequence to be backed by an Linear Congruential\n> Generator directly, rather than the implementation of our own\n> transactional random_sequence table.\n\nThis is an interesting use case. I think what you'd need for that is \njust the specification of a different \"nextval\" function and some \nadditional parameters (modulus, multiplier, and increment).\n\nThe proposed sequence AM patch would support a different nextval \nfunction, but does it support additional parameters? I haven't found that.\n\nAnother use case I have wished for from time to time is creating \nsequences using different data types, for example uuids. You'd just \nneed to provide a data-type-specific \"next\" function. However, in this \npatch, all the values and state are hardcoded to int64.\n\nWhile distributed systems can certainly use global int64 identifiers, \nI'd expect that there would also be demand for uuids, so designing this \nmore flexibly would be useful.\n\nI think the proposed patch covers too broad a range of abstraction \nlevels. The use cases described above are very high level and are just \nconcerned with how you get the next value. The current internal \nsequence state would be stored in whatever way it is stored now. But \nthis patch also includes callbacks for very low-level-seeming concepts \nlike table AMs and persistence. Those seem like different things. And \nthe two levels should be combinable. Maybe I want a local sequence of \nuuids or a global sequence of uuids, or a local sequence of integers or \na global sequence of integers. I mean, I haven't thought this through, \nbut I get the feeling that there should be more than one level of API \naround this.\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:58:50 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Tue, Jan 23, 2024 at 10:58:50AM +0100, Peter Eisentraut wrote:\n> On 18.01.24 16:54, Matthias van de Meent wrote:\n> The proposed sequence AM patch would support a different nextval function,\n> but does it support additional parameters? I haven't found that.\n\nYes and no. Yes as in \"the patch set can support it\" and no as in\n\"the patch does not implement that yet\". You could do two things\nhere:\n- Add support for reloptions for sequences. The patch does not\ninclude that on purpose because it already covers a lot of ground, and\nthat did not look like a strict requirement to me as a first shot. It\ncan be implemented on top of the patch set. 
That's not technically\ncomplicated, actually, but there are some shenanigans to discuss with\nthe heap relation used under the hood by a sequence for the in-core\nmethod or any other sequence AM that would need a sequence.\n- Control that with GUCs defined in the AM, which may be weird, still\nenough at relation level. And enough with the current patch set.\n\nreloptions would make the most sense to me here, I assume, to ease we\nhandle use nextval().\n\n> Another use case I have wished for from time to time is creating sequences\n> using different data types, for example uuids. You'd just need to provide a\n> data-type-specific \"next\" function. However, in this patch, all the values\n> and state are hardcoded to int64.\n\nYeah, because all the cases I've seen would be happy with being able\nto map a result to 8 bytes with a controlled computation method. The\nsize of the output generated, the set of data types that can be\nsupported by a table AM and the OID/name of the SQL function in charge\nof retrieving the value could be controlled in the callbacks\nthemselves, and this would require a design of the callbacks. The\nthing is that you *will* need callbacks and an AM layer to be able to\nachieve that. I agree this can be useful. Now this is a separate\nclause in the SEQUENCE DDLs, so it sounds to me like an entirely\ndifferent feature.\n\nFWIW, MSSQL has a concept of custom data types for one, though these\nneed to map to integers (see user-defined_integer_type).\n\nAnother thing is the SQL specification. You or Vik will very likely\ncorrect me here, but the spec mentions that sequences need to work on\ninteger values. A USING clause means that we already diverge from it,\nperhaps it is OK to diverge more. How about DDL properties like\nMin/Max or increment, then?\n\n> I think the proposed patch covers too broad a range of abstraction levels.\n> The use cases described above are very high level and are just concerned\n> with how you get the next value. The current internal sequence state would\n> be stored in whatever way it is stored now. But this patch also includes\n> callbacks for very low-level-seeming concepts like table AMs and\n> persistence. Those seem like different things. And the two levels should\n> be combinable. Maybe I want a local sequence of uuids or a global sequence\n> of uuids, or a local sequence of integers or a global sequence of integers.\n> I mean, I haven't thought this through, but I get the feeling that there\n> should be more than one level of API around this.\n\nThat's a tricky question, and I don't really know how far this needs\nto go. FWIW, like table AMs I don't want the callbacks to be set in\nstone across major releases. Now I am worrying a bit about designing\ncallbacks that are generic, still impact performance because they\nrequire more catalog lookups and/or function point manipulations for\nthe default cases. Separating the computation and the in-core SQL\nfunctions in a cleaner way is a step that helps in any case, IMO,\nthough I agree that the design of the callbacks influences how much is\nexposed to users and AM developers. Having only a USING clause that\ngives support to integer-based results while providing a way to force\nthe computation is useful. 
Custom data types that can be plugged into\nthe callbacks are also useful, still they are doing to require an AM\ncallback layer so as an AM can decide what it needs to do with the\ndata type given by the user in input of CREATE SEQUENCE.\n--\nMichael", "msg_date": "Thu, 25 Jan 2024 09:38:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On 19.01.24 00:27, Michael Paquier wrote:\n> The reason why this stuff has bumped into my desk is that we have no\n> good solution in-core for globally-distributed transactions for\n> active-active deployments. First, anything we have needs to be\n> plugged into default expressions of attributes like with [1] or [2],\n> or a tweak is to use sequence values that are computed with different\n> increments to avoid value overlaps across nodes. Both of these\n> require application changes, which is meh for a bunch of users.\n\nI don't follow how these require \"application changes\". I guess it \ndepends on where you define the boundary of the \"application\". The \ncited solutions require that you specify a different default expression \nfor \"id\" columns. Is that part of the application side? How would your \nsolution work on that level? AFAICT, you'd still need to specify the \nsequence AM when you create the sequence or identity column. So you'd \nneed to modify the DDL code in any case.\n\n\n", "msg_date": "Thu, 8 Feb 2024 16:06:36 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Feb 08, 2024 at 04:06:36PM +0100, Peter Eisentraut wrote:\n> On 19.01.24 00:27, Michael Paquier wrote:\n>> The reason why this stuff has bumped into my desk is that we have no\n>> good solution in-core for globally-distributed transactions for\n>> active-active deployments. First, anything we have needs to be\n>> plugged into default expressions of attributes like with [1] or [2],\n>> or a tweak is to use sequence values that are computed with different\n>> increments to avoid value overlaps across nodes. Both of these\n>> require application changes, which is meh for a bunch of users.\n> \n> I don't follow how these require \"application changes\". I guess it depends\n> on where you define the boundary of the \"application\".\n\nYep. There's a dependency to that.\n\n> The cited solutions\n> require that you specify a different default expression for \"id\" columns.\n> Is that part of the application side? How would your solution work on that\n> level? AFAICT, you'd still need to specify the sequence AM when you create\n> the sequence or identity column. So you'd need to modify the DDL code in\n> any case.\n\nOne idea is to rely on a GUC to control what is the default sequence\nAM when taking the DefineRelation() path, so as the sequence AM\nattached to a sequence is known for any DDL operation that may create\none internally, including generated columns. 
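\n\nAs a sketch of the kind of user experience this is aiming for, using a\nhypothetical \"snowflake\" AM name as an example:\n  SET default_sequence_access_method = 'snowflake';\n  CREATE TABLE t (id bigint GENERATED ALWAYS AS IDENTITY, val text);\n  -- Or, for a standalone sequence:\n  CREATE SEQUENCE t_seq USING snowflake;\nThe schema itself does not need to change, only the way the values\nbehind it get computed.\n\n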
The patch set does that\nwith default_sequence_access_method, including support for\npg_dump[all] and pg_restore to give the possibility to one to force a\nnew default or just dump data without a specific AM (this uses SET\ncommands in-between the CREATE/ALTER commands).\n--\nMichael", "msg_date": "Sun, 11 Feb 2024 09:03:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "Hi Michael,\n\nI took a quick look at this patch series, mostly to understand how it\nworks and how it might interact with the logical decoding patches\ndiscussed in a nearby thread.\n\nFirst, some general review comments:\n\n0001\n------\n\nI think this bit in pg_proc.dat is not quite right:\n\n proallargtypes => '{regclass,bool,int8}', proargmodes => '{i,o,o}',\n proargnames => '{seqname,is_called,last_value}',\n\nthe first argument should not be \"seqname\" but rather \"seqid\".\n\n\n0002, 0003\n------------\nseems fine, cosmetic changes\n\n\n0004\n------\n\nI don't understand this bit in AlterSequence:\n\n last_value = newdataform->last_value;\n is_called = newdataform->is_called;\n\n UnlockReleaseBuffer(buf);\n\n /* Check and set new values */\n init_params(pstate, stmt->options, stmt->for_identity, false,\n seqform, &last_value, &reset_state, &is_called,\n &need_seq_rewrite, &owned_by);\n\nWhy set the values only to pass them to init_params(), which will just\noverwrite them anyway? Or do I get this wrong?\n\nAlso, isn't \"reset_state\" just a different name for (original) log_cnt?\n\n\n0005\n------\n\nI don't quite understand what \"elt\" stands for :-(\n\n\tstmt->tableElts = NIL;\n\nDo we need AT_AddColumnToSequence? It seems to work exactly like\nAT_AddColumn. OTOH we do have AT_AddColumnToView too ...\n\nThinking about this code:\n\n case T_CreateSeqStmt:\n EventTriggerAlterTableStart(parsetree);\n address = DefineSequence(pstate, (CreateSeqStmt *) parsetree);\n /* stashed internally */\n commandCollected = true;\n EventTriggerAlterTableEnd();\n break;\n\nDoes this actually make sense? I mean, are sequences really relations?\nOr was that just a side effect of storing the state in a heap table\n(which is more of an implementation detail)?\n\n\n0006\n------\nno comment, just moving code\n\n\n0007\n------\nI wonder why heap_create_with_catalog needs to do this (check that it's\na sequence):\n\nif ((RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE) ||\n relkind == RELKIND_SEQUENCE)\n\nPresumably this is to handle sequences that use heap to store the state?\nMaybe the comment should explain that. Also, will the other table AMs\nneed to do something similar, just in case some sequence happens to use\nthat table AM (which seems out of control of the table AM)?\n\nI don't understand why DefineSequence need to copy the string:\n\n stmt->accessMethod = seq->accessMethod ? pstrdup(seq->accessMethod)\n: NULL;\n\nRelationInitTableAccessMethod now does not need to handle sequences, or\nrather should not be asked to handle sequences. Is there a risk we'd\npass a sequence to the function anyway? 
Maybe an assert / error would be\nappropriate?\n\nThis bit in RelationBuildLocalRelation looks a bit weird ...\n\n if (RELKIND_HAS_TABLE_AM(relkind))\n RelationInitTableAccessMethod(rel);\n else if (relkind == RELKIND_SEQUENCE)\n RelationInitSequenceAccessMethod(rel);\n\nIt's not a fault of this patch, but shouldn't we now have something like\nRELKIND_HAS_SEQUENCE_AM()?\n\n\n0008-0010\n-----------\nno comment\n\n\nlogical decoding / replication\n--------------------------------\nNow, regarding the logical decoding / replication, would introducing the\nsequence AM interfere with that in some way? Either in general, or with\nrespect to the nearby patch.\n\nThat is, what would it take to support logical replication of sequences\nwith some custom sequence AM? I believe that requires (a) synchronizing\nthe initial value, and (b) decoding the sequence WAL and (c) apply the\ndecoded changes. I don't think the sequence AM breaks any of this, as\nlong as it allows selecting \"current value\", decoding the values from\nWAL, sending them to the subscriber, etc.\n\nI guess the decoding would be up to the RMGR, and this patch maintains\nthe 1:1 mapping of sequences to relfilenodes, right? (That is, CREATE\nand ALTER SEQUENCE would still create a new relfilenode, which is rather\nimportant to decide if a sequence change is transactional.)\n\nIt seems to me this does not change the non-transactional behavior of\nsequences, right?\n\n\nalternative sequence AMs\n--------------------------\nI understand one of the reasons for adding sequence AMs is to allow\nstuff like global/distributed sequences, etc. But will people actually\nuse that?\n\nFor example, I believe Simon originally proposed this in 2016 because\nthe plan was to implement distributed sequences in BDR on top of it. But\nI believe BDR ultimately went with a very different approach, not with\ncustom sequence AMs. So I'm a bit skeptical this being suitable for\nother active-active systems ...\n\nEspecially when the general consensus seems to be that for active-active\nsystems it's much better to use e.g. UUID, because that does not require\nany coordination between the nodes, etc.\n\nI'm not claiming there are no use cases for sequence AMs, of course. For\nexample the PRNG-based sequences mentioned by Mattias seems interesting.\nI don't know how widely useful that is, though, and if it's worth it\n(considering they managed to implement it in a different way).\n\nBut I think it might be a good idea to implement a PoC of such sequence\nAM, if only to verify it can be implemented using the proposed code.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Feb 2024 17:36:00 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Feb 22, 2024 at 05:36:00PM +0100, Tomas Vondra wrote:\n> 0002, 0003\n> ------------\n> seems fine, cosmetic changes\n\nThanks, I've applied these two for now. I'll reply to the rest\ntomorrow or so.\n\nBy the way, I am really wondering if the update of elm->increment in\nnextval_internal() should be treated as a bug? 
In the \"fetch\" cache\nif a sequence does not use cycle, we may fail when reaching the upper\nor lower bound for respectively an ascending or descending sequence,\nwhile still keeping what could be an incorrect value if values are\ncached on a follow-up nextval_internal call?\n--\nMichael", "msg_date": "Mon, 26 Feb 2024 17:10:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Mon, 26 Feb 2024 at 09:11, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 22, 2024 at 05:36:00PM +0100, Tomas Vondra wrote:\n> > 0002, 0003\n> > ------------\n> > seems fine, cosmetic changes\n>\n> Thanks, I've applied these two for now. I'll reply to the rest\n> tomorrow or so.\n\nHuh, that's surprising to me. I'd expected this to get at least a\nfinal set of patches before they'd get committed. After a quick check\n6e951bf seems fine, but I do have some nits on 449e798c:\n\n> +/* ----------------\n> + * validate_relation_kind - check the relation's kind\n> + *\n> + * Make sure relkind is from an index\n\nShouldn't this be \"... from a sequence\"?\n\n> + * ----------------\n> + */\n> +static inline void\n> +validate_relation_kind(Relation r)\n\nShouldn't this be a bit more descriptive than just\n\"validate_relation_kind\"? I notice this is no different from how this\nis handled in index.c and table.c, but I'm not a huge fan of shadowing\nnames, even with static inlines functions.\n\n> -ERROR: \"serialtest1\" is not a sequence\n> +ERROR: cannot open relation \"serialtest1\"\n> +DETAIL: This operation is not supported for tables.\n\nWe seem to lose some details here: We can most definitely open tables.\nWe just can't open them while treating them as sequences, which is not\nmentioned in the error message.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 26 Feb 2024 09:38:06 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Mon, Feb 26, 2024 at 09:38:06AM +0100, Matthias van de Meent wrote:\n> On Mon, 26 Feb 2024 at 09:11, Michael Paquier <[email protected]> wrote:\n>> Thanks, I've applied these two for now. I'll reply to the rest\n>> tomorrow or so.\n> \n> Huh, that's surprising to me. I'd expected this to get at least a\n> final set of patches before they'd get committed.\n\nFWIW, these refactoring pieces just make sense taken independently,\nIMHO. I don't think that the rest of the patch set is going to make\nit into v17, because there's no agreement about the layer we want,\nwhich depend on the use cases we want to solve. Perhaps 0001 or 0004\ncould be salvaged. 0005~ had no real design discussion, so it's good\nfor 18~ as far as I am concerned. That's something that would be fit\nfor an unconference session at the next pgconf in Vancouver, in\ncombination with what we should do to support sequences across logical\nreplication setups.\n\n> After a quick check\n> 6e951bf seems fine, but I do have some nits on 449e798c:\n\nThanks.\n\n>> +/* ----------------\n>> + * validate_relation_kind - check the relation's kind\n>> + *\n>> + * Make sure relkind is from an index\n> \n> Shouldn't this be \"... from a sequence\"?\n\nRight, will fix.\n\n>> + * ----------------\n>> + */\n>> +static inline void\n>> +validate_relation_kind(Relation r)\n> \n> Shouldn't this be a bit more descriptive than just\n> \"validate_relation_kind\"? 
I notice this is no different from how this\n> is handled in index.c and table.c, but I'm not a huge fan of shadowing\n> names, even with static inlines functions.\n\nNot sure that it matters much, TBH. This is local to sequence.c.\n\n>> -ERROR: \"serialtest1\" is not a sequence\n>> +ERROR: cannot open relation \"serialtest1\"\n>> +DETAIL: This operation is not supported for tables.\n> \n> We seem to lose some details here: We can most definitely open tables.\n> We just can't open them while treating them as sequences, which is not\n> mentioned in the error message.\n\nI am not sure to agree with that. The user already knows that he\nshould be dealing with a sequence based on the DDL used, and we gain\ninformation about the relkind getting manipulated here.\n--\nMichael", "msg_date": "Tue, 27 Feb 2024 08:19:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Feb 22, 2024 at 05:36:00PM +0100, Tomas Vondra wrote:\n> I took a quick look at this patch series, mostly to understand how it\n> works and how it might interact with the logical decoding patches\n> discussed in a nearby thread.\n\nThanks. Both discussions are linked.\n\n> 0001\n> ------\n> \n> I think this bit in pg_proc.dat is not quite right:\n> \n> proallargtypes => '{regclass,bool,int8}', proargmodes => '{i,o,o}',\n> proargnames => '{seqname,is_called,last_value}',\n> \n> the first argument should not be \"seqname\" but rather \"seqid\".\n\nAh, right. There are not many system functions that use regclass as\narguments, but the existing ones refer more to IDs, not names.\n\n> 0002, 0003\n> ------------\n> seems fine, cosmetic changes\n\nApplied these ones as 449e798c77ed and 6e951bf98e2e.\n\n> 0004\n> ------\n> \n> I don't understand this bit in AlterSequence:\n> \n> last_value = newdataform->last_value;\n> is_called = newdataform->is_called;\n> \n> UnlockReleaseBuffer(buf);\n> \n> /* Check and set new values */\n> init_params(pstate, stmt->options, stmt->for_identity, false,\n> seqform, &last_value, &reset_state, &is_called,\n> &need_seq_rewrite, &owned_by);\n> \n> Why set the values only to pass them to init_params(), which will just\n> overwrite them anyway? Or do I get this wrong?\n\nThe values of \"last_value\" and is_called may not get updated depending\non the options given in the ALTER SEQUENCE query, and they need to use\nas initial state what's been returned from their last heap lookup. \n\n> Also, isn't \"reset_state\" just a different name for (original) log_cnt?\n\nYep. That's quite the point. That's an implementation detail\ndepending on the interface a sequence AM should use, but the main\nargument behind this change is that log_cnt is a counter to decide\nwhen to WAL-log the changes of a relation, but I have noticed that all\nthe paths of init_params() don't care about log_cnt as being a counter\nat all: we just want to know if the state of a sequence should be\nreset. Form_pg_sequence_data is a piece that only the in-core \"local\"\nsequence AM cares about in this proposal.\n\n> 0005\n> ------\n> \n> I don't quite understand what \"elt\" stands for :-(\n> \n> \tstmt->tableElts = NIL;\n>\n> Do we need AT_AddColumnToSequence? It seems to work exactly like\n> AT_AddColumn. OTOH we do have AT_AddColumnToView too ...\n\nYeah, that's just cleaner to use a separate one, to be able to detect\nthe attributes in the DDL deparsing pieces when gathering these pieces\nwith event triggers. 
At least that's my take once you extract the\npiece that a sequence AM may need a table AM to store its data with\nits own set of attributes (a sequence AM may as well not need a local\ntable for its data).\n\n> Thinking about this code:\n> \n> case T_CreateSeqStmt:\n> EventTriggerAlterTableStart(parsetree);\n> address = DefineSequence(pstate, (CreateSeqStmt *) parsetree);\n> /* stashed internally */\n> commandCollected = true;\n> EventTriggerAlterTableEnd();\n> break;\n> \n> Does this actually make sense? I mean, are sequences really relations?\n> Or was that just a side effect of storing the state in a heap table\n> (which is more of an implementation detail)?\n\nThis was becoming handy when creating custom attributes for the\nunderlying table used by a sequence.\n\nSequences are already relations (views are also relations), we store\nthem in pg_class. Now sequences can also use tables internally to\nstore their data, like the in-core \"local\" sequence AM defined in the\npatch. At least that's the split done in this patch set.\n\n> 0007\n> ------\n> I wonder why heap_create_with_catalog needs to do this (check that it's\n> a sequence):\n> \n> if ((RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE) ||\n> relkind == RELKIND_SEQUENCE)\n> \n> Presumably this is to handle sequences that use heap to store the state?\n> Maybe the comment should explain that. Also, will the other table AMs\n> need to do something similar, just in case some sequence happens to use\n> that table AM (which seems out of control of the table AM)?\n\nOkay, I can see why this part can be confusing with the state of\nthings in v2. In DefineRelation(), heap_create_with_catalog() passes\ndown the OID of the sequence access method when creating a sequence,\nnot the OID of the table AM it may rely on. There's coverage for that\nin the regression tests if you remove the check, see the \"Try to drop\nand fail on dependency\" in create_am.sql.\n\nYou have a good point here: there could be a dependency between a\ntable AM and a sequence AM that may depend on it. The best way to\ntackle that would be to add a DEPENDENCY_NORMAL on the amhandler of\nthe table AM when dealing with a sequence amtype in\nCreateAccessMethod() in this design. Does that make sense?\n\n(This may or may not make sense depending on how the design problem\nrelated to the relationship between a sequence AM and its optional\ntable AM is tackled, of course, but at least it makes sense to me in\nthe scope of the design of this patch set.)\n\n> I don't understand why DefineSequence need to copy the string:\n> \n> stmt->accessMethod = seq->accessMethod ? pstrdup(seq->accessMethod)\n> : NULL;\n\nThat's required to pass down the correct sequence AM for\nDefineRelation() when creating the pg_class entry of a sequence.\n\n> RelationInitTableAccessMethod now does not need to handle sequences, or\n> rather should not be asked to handle sequences. Is there a risk we'd\n> pass a sequence to the function anyway? Maybe an assert / error would be\n> appropriate?\n\nHmm. The risk sounds legit. This is something where an assertion\nbased on RELKIND_HAS_TABLE_AM() would be useful. Same argument for\nRelationInitSequenceAccessMethod() with RELKIND_HAS_SEQUENCE_AM()\nsuggested below. 
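Something like the following near the top of the routine, as a simple\nsketch of the idea:\n    Assert(RELKIND_HAS_TABLE_AM(relation->rd_rel->relkind));\nwith the equivalent relkind check for the sequence counterpart. 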
I've added these, for now.\n\n> This bit in RelationBuildLocalRelation looks a bit weird ...\n> \n> if (RELKIND_HAS_TABLE_AM(relkind))\n> RelationInitTableAccessMethod(rel);\n> else if (relkind == RELKIND_SEQUENCE)\n> RelationInitSequenceAccessMethod(rel);\n> \n> It's not a fault of this patch, but shouldn't we now have something like\n> RELKIND_HAS_SEQUENCE_AM()?\n\nPerhaps, I was not sure. This would just be a check on\nRELKIND_SEQUENCE, but perhaps that's worth having at the end, and this\nmakes the code more symmetric in the relcache, for one. The comment\nat the top of RELKIND_HAS_TABLE_AM is wrong with 0007 in place anyway.\n\n> logical decoding / replication\n> --------------------------------\n> Now, regarding the logical decoding / replication, would introducing the\n> sequence AM interfere with that in some way? Either in general, or with\n> respect to the nearby patch.\n\nI think it does not. The semantics of the existing in-core \"local\"\nsequence AM are not changed. So what's here is just a large\nrefactoring shaped around the current semantics of the existing\ncomputation method. Perhaps it should be smarter about some aspects,\nbut that's not something we'll know about until folks start\nimplementing their own custom methods. On my side, being able to plug\nin a custom callback into nextval_internal() is the main taker.\n\n> That is, what would it take to support logical replication of sequences\n> with some custom sequence AM? I believe that requires (a) synchronizing\n> the initial value, and (b) decoding the sequence WAL and (c) apply the\n> decoded changes. I don't think the sequence AM breaks any of this, as\n> long as it allows selecting \"current value\", decoding the values from\n> WAL, sending them to the subscriber, etc.\n\nSure, that may make sense to support, particularly if one uses a\nsequence AM that uses a computation method that may not be unique\nacross nodes, and where you may want to copy them. I don't think that\nthis is problem for something like the proposal of this thread or\nwhat the other thread does, they can tackle separate areas (the\nlogirep patch has a lot of value for rolling upgrades where one uses\nlogical replication to create the new node and somebody does not want\nto bother with a custom computation).\n\n> I guess the decoding would be up to the RMGR, and this patch maintains\n> the 1:1 mapping of sequences to relfilenodes, right? (That is, CREATE\n> and ALTER SEQUENCE would still create a new relfilenode, which is rather\n> important to decide if a sequence change is transactional.)\n\nYeah, one \"local\" sequence would have one relfilenode. A sequence AM\nmay want something different, like not using shared buffers, or just\nnot use a relfilenode at all.\n\n> It seems to me this does not change the non-transactional behavior of\n> sequences, right?\n\nThis patch set does nothing about the non-transactional behavior of\nsequences. That seems out of scope to me from the start of what I\nhave sent here.\n\n> alternative sequence AMs\n> --------------------------\n> I understand one of the reasons for adding sequence AMs is to allow\n> stuff like global/distributed sequences, etc. But will people actually\n> use that?\n\nGood question. I have users who would be happy with that, hiding\nbehind sequences custom computations rather than plug in a bunch of\ndefault expressions to various attributes. 
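(By that I mean something like the following, with a hypothetical\nfunction in charge of the custom computation:\n  ALTER TABLE t ALTER COLUMN id SET DEFAULT my_cluster_next_id();\nrepeated for each attribute that needs it.) 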
You can do that today, but\nthis has this limitations depending on how much control one has over\ntheir applications (for example this cannot be easily achieved with\ngenerated columns in a schema).\n\n> For example, I believe Simon originally proposed this in 2016 because\n> the plan was to implement distributed sequences in BDR on top of it. But\n> I believe BDR ultimately went with a very different approach, not with\n> custom sequence AMs. So I'm a bit skeptical this being suitable for\n> other active-active systems ...\n\nSnowflake IDs are popular AFAIK, thanks to the unicity of the values\nacross nodes.\n\n> Especially when the general consensus seems to be that for active-active\n> systems it's much better to use e.g. UUID, because that does not require\n> any coordination between the nodes, etc.\n\nThat means being able to support something larger than 64b values as\nthese are 128b. \n\n> I'm not claiming there are no use cases for sequence AMs, of course. For\n> example the PRNG-based sequences mentioned by Mattias seems interesting.\n> I don't know how widely useful that is, though, and if it's worth it\n> (considering they managed to implement it in a different way).\n\nRight. I bet that they just plugged a default expression to the\nattributes involved. When it comes to users at a large scale, a\nsequence AM makes the change more transparent, especially if DDL\nqueries are replication across multiple logical nodes.\n\n> But I think it might be a good idea to implement a PoC of such sequence\n> AM, if only to verify it can be implemented using the proposed code.\n\nYou mean the PRNG idea or something else? I have a half-baked\nimplementation for snowflake, actually. Would that be enough? I\nstill need to spend more hours on it to polish it. One part I found\nmore difficult than necessary with the patch set of this thread is the\nAPIs used in commands/sequence.c for the buffer manipulations,\nrequiring more duplications. Not impossible to do outside core, but\nI've wanted more refactoring of the routines used by the \"local\"\nsequence AM of this patch.\n\nPlugging in a custom data type on top of the existing sequence objects\nis something entirely different, where we will need a callback\nseparation anyway at the end, IMHO. This seems like a separate topic\nto me at the end, as custom computations with 64b to store them is\nenough based on what I've heard even for hundreds of nodes. I may be\nwrong and may not think big enough, of course.\n\nAttaching a v3 set, fixing one conflict, while on it.\n--\nMichael", "msg_date": "Tue, 27 Feb 2024 10:27:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Tue, Feb 27, 2024 at 10:27:13AM +0900, Michael Paquier wrote:\n> On Thu, Feb 22, 2024 at 05:36:00PM +0100, Tomas Vondra wrote:\n>> 0001\n>> ------\n>> \n>> I think this bit in pg_proc.dat is not quite right:\n>> \n>> proallargtypes => '{regclass,bool,int8}', proargmodes => '{i,o,o}',\n>> proargnames => '{seqname,is_called,last_value}',\n>> \n>> the first argument should not be \"seqname\" but rather \"seqid\".\n> \n> Ah, right. 
There are not many system functions that use regclass as\n> arguments, but the existing ones refer more to IDs, not names.\n\nThis patch set is not going to be merged for this release, so I am\ngoing to move it to the next commit fest to continue the discussion in\nv18~.\n\nAnyway, there is one piece of this patch set that I think has a lot of\nvalue outside of the discussion with access methods, which is to\nredesign pg_sequence_last_value so as it returns a (last_value,\nis_called) tuple rather than a (last_value). This has the benefit of\nswitching pg_dump to use this function rather than relying on a scan\nof the heap table used by a sequence to retrieve the state of a\nsequence dumped. This is the main diff:\n- appendPQExpBuffer(query,\n- \"SELECT last_value, is_called FROM %s\",\n- fmtQualifiedDumpable(tbinfo));\n+ /*\n+ * In versions 17 and up, pg_sequence_last_value() has been switched to\n+ * return a tuple with last_value and is_called.\n+ */\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBuffer(query,\n+ \"SELECT last_value, is_called \"\n+ \"FROM pg_sequence_last_value('%s')\",\n+ fmtQualifiedDumpable(tbinfo));\n+ else\n+ appendPQExpBuffer(query,\n+ \"SELECT last_value, is_called FROM %s\",\n+ fmtQualifiedDumpable(tbinfo));\n\nAre there any objections to that? pg_sequence_last_value() is\nsomething that we've only been relying on internally for the catalog \npg_sequences.\n--\nMichael", "msg_date": "Tue, 12 Mar 2024 08:44:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On 12.03.24 00:44, Michael Paquier wrote:\n> Anyway, there is one piece of this patch set that I think has a lot of\n> value outside of the discussion with access methods, which is to\n> redesign pg_sequence_last_value so as it returns a (last_value,\n> is_called) tuple rather than a (last_value). This has the benefit of\n> switching pg_dump to use this function rather than relying on a scan\n> of the heap table used by a sequence to retrieve the state of a\n> sequence dumped. This is the main diff:\n> - appendPQExpBuffer(query,\n> - \"SELECT last_value, is_called FROM %s\",\n> - fmtQualifiedDumpable(tbinfo));\n> + /*\n> + * In versions 17 and up, pg_sequence_last_value() has been switched to\n> + * return a tuple with last_value and is_called.\n> + */\n> + if (fout->remoteVersion >= 170000)\n> + appendPQExpBuffer(query,\n> + \"SELECT last_value, is_called \"\n> + \"FROM pg_sequence_last_value('%s')\",\n> + fmtQualifiedDumpable(tbinfo));\n> + else\n> + appendPQExpBuffer(query,\n> + \"SELECT last_value, is_called FROM %s\",\n> + fmtQualifiedDumpable(tbinfo));\n> \n> Are there any objections to that? pg_sequence_last_value() is\n> something that we've only been relying on internally for the catalog\n> pg_sequences.\n\nI don't understand what the overall benefit of this change is supposed \nto be.\n\nIf this route were to be pursued, it should be a different function \nname. 
We shouldn't change the signature of an existing function.\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 07:00:37 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Wed, Mar 13, 2024 at 07:00:37AM +0100, Peter Eisentraut wrote:\n> I don't understand what the overall benefit of this change is supposed to\n> be.\n\nIn the context of this thread, this removes the dependency of sequence\nvalue lookup to heap.\n\n> If this route were to be pursued, it should be a different function name.\n> We shouldn't change the signature of an existing function.\n\nI'm not so sure about that. The existing pg_sequence_last_value is\nundocumented and only used in a system view.\n--\nMichael", "msg_date": "Thu, 14 Mar 2024 09:40:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Mar 14, 2024 at 09:40:29AM +0900, Michael Paquier wrote:\n> In the context of this thread, this removes the dependency of sequence\n> value lookup to heap.\n\nI am not sure where this is leading in combination with the sequence\nstuff for logical decoding, so for now I am moving this patch to the\nnext commit fest to discuss things in 18~.\n--\nMichael", "msg_date": "Tue, 19 Mar 2024 10:54:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Tue, Mar 19, 2024 at 10:54:41AM +0900, Michael Paquier wrote:\n> I am not sure where this is leading in combination with the sequence\n> stuff for logical decoding, so for now I am moving this patch to the\n> next commit fest to discuss things in 18~.\n\nI have plans to rework this patch set for the next commit fest,\nand this includes some investigation about custom data types that\ncould be plugged into these AMs. For now, please find a rebase as\nthere were a couple of conflicts.\n--\nMichael", "msg_date": "Fri, 19 Apr 2024 16:00:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Fri, Apr 19, 2024 at 04:00:28PM +0900, Michael Paquier wrote:\n> I have plans to rework this patch set for the next commit fest,\n> and this includes some investigation about custom data types that\n> could be plugged into these AMs.\n\nSo, I have worked more on this patch set, and finished by reorganizing\nit more, with more things:\n- The in-core sequence access method is split into more files:\n-- One for its callbacks, called seqlocalam.c.\n-- The WAL replay routines are moved into their own file.\n- As asked, Implementation of a contrib module that introduces a\nsequence access method for snowflake IDs, to demonstrate what can be\ndone using the APIs of the patch. The data of such sequences is\nstored in an unlogged table, based on the assumption that the\ntimestamps and the machine IDs ensure the unicity of the IDs for the\nsequences. The advantage of what's presented here is that support for\nlastval(), nextval() and currval() is straight-forward. Identity\ncolumns are able to feed on that. cache is handled by sequence.c, not\nthe AM. WAL-logging is needed for the init fork, it goes through the\ngeneric WAL APIs like bloom to log full pages. Some docs are\nincluded. 
This is still a rough WIP, though, and the buffer handling\nis not optimal, and could be made transactional this time (assuming\nautovacuum is able to process them at some point, or perhaps the\nsequence AMs should offer a way for autovacuum to know if such\nsequences should be cleaned up or not).\n\nAfter having done that, I've also found out about a module developed by\npgEdge, which copies a bunch of the code from sequence.c, though it is\nnot able to handle the sequence cache:\nhttps://github.com/pgEdge/snowflake\n\nThe approach this module uses is quite similar to what I have here,\nbut it is smarter regarding clock ticking, where the internal sequence\ncounter is bumped only when we fetch the same timestamp as a previous\nattempt. The module presented could be switched to do something\nsimilar by storing into the heap table used by the sequence a bit more\ndata than just the sequence counter. Well, the point I want to make\nat this stage is what can be done with sequence AMs, so let's discuss\nthat later.\n\nFinally, custom types, where I have come up with a list of open\nquestions:\n- Catalog representation. pg_sequence and pg_sequences switch to\nsomething other than int64.\n- The existing functions are also interesting to consider here.\nnextval() & co would not be usable as they are for sequence AMs that\nuse more than int64. Note that the current design taken in the patch\nhas a strong dependency on int64 (see sequenceam.h). So the types\nwould need to reflect that. With identity columns, the change would\nnot be that hard as the executor has NextValueExpr. Perhaps each\nsequence AM should just have callback equivalents for currval(),\nnextval() and lastval(). This clashes with the fact that this makes\nsequence AMs less transparent to applications because custom data\ntypes mean different functions than the native ones.\n- Option representation.\n- I have polled Twitter and Fosstodon with four choices:\n-- No need for that, 64b representation is enough.\n-- Possibility to have integer-like types (MSSQL does something like\nthat).\n-- Support for 128b or larger (UUIDs, etc, with text-like\nrepresentation or varlenas).\n-- Support for binary representation, which comes down to the\npossibility of having sequence values even larger than 128b.\n\nBased on the first estimations, 50%-ish of people mentioned that 64b\nis more than enough, while Jelte mentioned that Citus has tackled this\nproblem with an equivalent of 128b (64b for the sequence values, some\nmore for machine states). Then there's a trend of 25%-ish in favor of\n128b and 25%-ish for more than that. The results are far from being\nfinal, but that's something.\n\nMy own take after pondering about it is that 64b is still more than\nenough for the clustering cases I've seen in the past 15 years or so,\nwhile offering room for implementations even if it comes to thousands\nof nodes. 
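\n\nAs an illustration of the kind of bit budget involved, here is one\npossible split, only an example and close to the classic snowflake\nlayout:\n  1 bit    reserved (sign)\n  41 bits  millisecond timestamp  (~69 years)\n  12 bits  machine ID             (4096 nodes)\n  10 bits  per-millisecond counter (1024 values per node and per ms)\n\n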
So there's some margin depending on the number of bits\nreserved for the \"machine\" part of the sequence IDs when used in\nclusters.\n\nThe next plan is to hopefully be able to trigger a discussion at the\nnext pgconf.dev at the end of May, but let's see how it goes.\n\nThanks,\n--\nMichael", "msg_date": "Fri, 26 Apr 2024 15:21:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Fri, Apr 26, 2024 at 03:21:29PM +0900, Michael Paquier wrote:\n> The next plan is to hopefully be able to trigger a discussion at the\n> next pgconf.dev at the end of May, but let's see how it goes.\n\nI am a bit behind an update of this thread, but there has been an\nunconference on the topic at the last pgconf.dev. This is based on my\nown notes written down after the session, so if there are gaps, feel\nfree to correct me. The session was called \"Sequences & Clusters\",\nand was in two parts, with the first part covering this thread, and\nthe second part covering the problem of sequences with logical\nreplication for upgrade cases. I've taken a lot of time with the 1st \npart (sorry about that Amit K.!) still the second part has reached an\nagreement about what to do next there, and this is covered by this\nthread these days:\nhttps://www.postgresql.org/message-id/CAA4eK1LC%2BKJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ%40mail.gmail.com\n\nMy overall feeling before this session was that I did not feel that\nfolks grabbed the problem I was trying to solve, and, while it did not\nfeel that the end of the session completely filled the gaps, and least\nfolks finished with some idea of the reason why I've been trying\nsomething here.\n\nFirst, I have spoken for a few minutes about the use-cases I've been\ntrying to solve, where parts of it involve Postgres-XC, an\nauto-proclaimed multi-master solution fork of Postgres, where\nsequences are handled by patching src/backend/commands/sequence.c to\nretrieve values from a GTM (global transaction manager, source of\ntruth for value uniqueness shared by all the nodes), something I got\nmy hands on between 2009~2012 (spoiler: people tend to like more\nscaling out clusters 12 years later). Then explained why Postgres is\nnot good in this area. The original idea is that we want to be able\nfor some applications to scale out Postgres across many hosts while\nmaking it transparent to the user's applications. By that, imagine a\nhuge big box where users can connect to a single point, but\nunderground any connection could involve a connection to a cluster of\nN PostgreSQL nodes, N being rather large (say N > 10k?).\n\nWhy would we want that? One problem behind such configurations is\nthat there is no way to make the values transparent for the\napplication without applying schema changes (attribute defaults, UUIDs\nbut these are large, for example), meaning that schemas cannot really\nbe migrated as-they-are from one space (be it a Postgres cluster of 1\nor more nodes) to a second space (with less more or more nodes), and\nhaving to manipulate clusters with ALTER SEQUENCE commands to ensure\nthat there is no overlap in value does not help much to avoid support\nat 3AM in case of sudden value conflicts because an application has\ngone wild, especially if the node fleet needs to be elastic and\nflexible (yep, there's also that). 
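\n\nThe increment tweak mentioned above typically looks like this on, say,\nthe second node of a three-node cluster:\n  ALTER SEQUENCE t_id_seq INCREMENT BY 3 RESTART WITH 2;\nrepeated for every sequence and every node, and redone each time the\nnumber of nodes changes.\n\n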
Note that there are also limits\nwith generated columns that feed from the in-core sequence computation\nof Postgres where all the sequence data is stored in a pseudo-heap\ntable, relying on buffer locks to make sure that in-place updates are\nconcurrency-safe. So this thread is about extending the set of\npossibilities in this area for application developers to control how\nsequences are computed.\n\nFirst here is a summary of the use cases that have been mentioned\nwhere a custom computation is handy, based on properties that I've\nunderstood from the conversation:\n- Control of the computation of values on a node and cluster basis,\nusually coming with three properties (think of a snowflake ID here):\n-- Global component, usually put in the first bits to force an\nordering of the values across all the nodes. For snowflakes, this is\ncovered by a timestamp, to which an offset can be applied.\n-- Local component, where a portion of the value bits is decided\ndepending on the node where the value is computed.\n-- Local incrementation, where the last bits in the value are used to\nloop if the first two components happen to be equal, to ensure\nuniqueness.\n- Caching a range of values at node level or session level, retrieved\nfrom a unique source shared by multiple nodes. The range of values is\nretrieved from a single source (a PostgreSQL node itself), cached in a\nshared pool in a node or just a backend context for consumption by a\nsession.\n- Transactional behavior to minimize value gaps, which is something I\nhave mentioned but I'm a bit meh on this property as value uniqueness\nis key, while users have learnt to live with value gaps. Still the\nAPIs can make that possible if autovacuum is able to understand that\nsome clean up needs to happen.\n\nAnother set of things that have been mentioned:\n- Is it even correct to call this concept an access method? Should a\ndifferent keyword be used? This depends on the stack layer where the\ncallbacks associated with a sequence are added, I assume. Still, based\non the infrastructure that we already have in place for tables and\nindexes (commands, GUCs), this is still kind of the correct concept to\nme because we can rely on a lot of existing infrastructure, but I also\nget that depending on one's view the opinion diverges.\n- More pluggable layers. The final picture will most likely involve\nmultiple layers of APIs, and not only what's proposed here, with a\ncouple of points mentioned:\n-- Custom data types. My answer on this one is that this will need to\nbe controlled by a different clause. I think that this is a different\nfeature than the \"access method\" approach proposed here, one that\nwould need to happen on top of what's here, where the point is to\ncontrol the computation (and anything I've seen lately would unlock up\nto 64b of computation space hidden behind integer-like data types).\nOther cluster products out there also have a concept of user-defined\ndata types, which have to be integer-like.\n-- Custom nextval() functions. Here we are going to need a split\nbetween the in-core portion of sequences related to system catalogs\nand the facilities that can be accessed once a sequence OID is known.\nThe patch proposed plugs into nextval_internal() for two reasons:\nbeing able to let CACHE be handled by the core code and not the AM,\nand easier support for generated columns with the existing types where\nnextval_internal() is called from the executor. This part, also,\nis going to require a new SQL clause. 
Perhaps something will happen\nat some point in the SQL specification itself to put some guidelines,\nwho knows.\n\nWhile on it, I have noticed a couple of conflicts while rebasing, so\nattached is a refreshed patch set.\n\nThanks,\n--\nMichael", "msg_date": "Thu, 20 Jun 2024 15:12:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Thu, Jun 20, 2024 at 03:12:32PM +0900, Michael Paquier wrote:\n> While on it, I have noticed a couple of conflicts while rebasing, so\n> attached is a refreshed patch set.\n\nPlease find attached a new patch set for the next commit fest. The\npatch has required a bit of work to be able to work on HEAD,\nparticularly around the fact that pg_sequence_read_tuple() is able to\ndo the same work as the modifications done for pg_sequence_last_value() \nin the previous patch sets. I have modified the patch set to depend\non that, and adapted pg_dump/restore to it. The dump/restore part has\nalso required some tweaks to make sure that the AM is dumped depending\non if --schema-only and if we care about the values.\n\nFinally, I have been rather annoyed by the addition of log_cnt in the\nnew function pg_sequence_read_tuple(). This patch set could also\nimplement a new system function, but it looks like a waste as we don't\ncare about log_cnt in pg_dump and pg_upgrade on HEAD, so I'm proposing\nto remove it on a different thread:\nhttps://www.postgresql.org/message-id/Zsvka3r-y2ZoXAdH%40paquier.xyz\n--\nMichael", "msg_date": "Mon, 26 Aug 2024 13:45:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" }, { "msg_contents": "On Mon, Aug 26, 2024 at 01:45:12PM +0900, Michael Paquier wrote:\n> Finally, I have been rather annoyed by the addition of log_cnt in the\n> new function pg_sequence_read_tuple(). This patch set could also\n> implement a new system function, but it looks like a waste as we don't\n> care about log_cnt in pg_dump and pg_upgrade on HEAD, so I'm proposing\n> to remove it on a different thread:\n> https://www.postgresql.org/message-id/Zsvka3r-y2ZoXAdH%40paquier.xyz\n\nFollowing a83a944e9fdd, rebased as v8.\n--\nMichael", "msg_date": "Fri, 30 Aug 2024 17:24:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequence Access Methods, round two" } ]
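The snowflake-style layout sketched in the thread above (a leading timestamp component for global ordering, a reserved "machine" part, and a per-node counter for collisions within the same tick) can be illustrated with a few lines of C. This is only a sketch and is not taken from the posted patch set: the 41/12/10 bit split, the epoch offset and every identifier here are illustrative assumptions, and a real implementation would need locking plus a wait for the next tick when the counter wraps within one millisecond.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Illustrative split of a 64-bit value: 41 bits of milliseconds since an
 * arbitrary epoch, 12 bits of node ("machine") id, 10 bits of counter. */
#define NODE_BITS       12
#define SEQ_BITS        10
#define EPOCH_OFFSET_MS 1700000000000ULL

static uint64_t last_ms = 0;
static uint64_t counter = 0;

static uint64_t
current_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_REALTIME, &ts);
    return (uint64_t) ts.tv_sec * 1000 + (uint64_t) ts.tv_nsec / 1000000;
}

/* Not thread-safe and no handling of counter wrap-around: sketch only. */
static uint64_t
snowflake_nextval(uint64_t node_id)
{
    uint64_t    ms = current_ms();

    if (ms == last_ms)
        counter = (counter + 1) & ((1ULL << SEQ_BITS) - 1);
    else
        counter = 0;
    last_ms = ms;

    return ((ms - EPOCH_OFFSET_MS) << (NODE_BITS + SEQ_BITS)) |
        ((node_id & ((1ULL << NODE_BITS) - 1)) << SEQ_BITS) |
        counter;
}

int
main(void)
{
    for (int i = 0; i < 3; i++)
        printf("%llu\n", (unsigned long long) snowflake_nextval(42));
    return 0;
}

The point of the patch set above is precisely that a computation of this shape could live behind a sequence access method, rather than being wired into application schemas or a patched sequence.c.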
[ { "msg_contents": "If postgres starts, and one of its children is immediately killed, and\nthe cluster is also told to stop, then, instead, the whole system gets\nwedged.\n\n$ initdb -D ./pgdev.dat1\n$ pg_ctl -D ./pgdev.dat1 start -o '-c port=5678'\n$ kill -9 2524495; sleep 0.05; pg_ctl -D ./pgdev.dat1 stop -m fast # 2524495 is a child's pid\n.......................................................... failed\npg_ctl: server does not shut down\n\n$ ps -wwwf --ppid 2524494\nUID PID PPID C STIME TTY TIME CMD\npryzbyj 2524552 2524494 0 20:47 ? 00:00:00 postgres: checkpointer \n\n(gdb) bt\n#0 0x00007f0ce2d08c03 in epoll_wait (epfd=10, events=0x55cb4cbaac28, maxevents=1, timeout=timeout@entry=156481) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 0x000055cb4c219208 in WaitEventSetWaitBlock (set=set@entry=0x55cb4cbaabc0, cur_timeout=cur_timeout@entry=156481, occurred_events=occurred_events@entry=0x7ffd80130410, \n nevents=nevents@entry=1) at ../src/backend/storage/ipc/latch.c:1583\n#2 0x000055cb4c219e02 in WaitEventSetWait (set=0x55cb4cbaabc0, timeout=timeout@entry=300000, occurred_events=occurred_events@entry=0x7ffd80130410, nevents=nevents@entry=1, \n wait_event_info=wait_event_info@entry=83886084) at ../src/backend/storage/ipc/latch.c:1529\n#3 0x000055cb4c219f87 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=timeout@entry=300000, wait_event_info=wait_event_info@entry=83886084)\n at ../src/backend/storage/ipc/latch.c:539\n#4 0x000055cb4c1aabc2 in CheckpointerMain () at ../src/backend/postmaster/checkpointer.c:523\n#5 0x000055cb4c1a8207 in AuxiliaryProcessMain (auxtype=auxtype@entry=CheckpointerProcess) at ../src/backend/postmaster/auxprocess.c:153\n#6 0x000055cb4c1ae63d in StartChildProcess (type=type@entry=CheckpointerProcess) at ../src/backend/postmaster/postmaster.c:5331\n#7 0x000055cb4c1b07f3 in ServerLoop () at ../src/backend/postmaster/postmaster.c:1792\n#8 0x000055cb4c1b1c56 in PostmasterMain (argc=argc@entry=5, argv=argv@entry=0x55cb4cbaa380) at ../src/backend/postmaster/postmaster.c:1466\n#9 0x000055cb4c0f4c1b in main (argc=5, argv=0x55cb4cbaa380) at ../src/backend/main/main.c:198\n\nI noticed this because of the counter-effective behavior of systemd+PGDG\nunit files to run \"pg_ctl stop\" whenever a backend is killed for OOM:\nhttps://www.postgresql.org/message-id/ZVI112aVNCHOQgfF@pryzbyj2023\n\nThis affects v15, and fails at 7ff23c6d27 but not its parent.\n\ncommit 7ff23c6d277d1d90478a51f0dd81414d343f3850 (HEAD)\nAuthor: Thomas Munro <[email protected]>\nDate: Mon Aug 2 17:32:20 2021 +1200\n\n Run checkpointer and bgwriter in crash recovery.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 30 Nov 2023 23:13:25 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "On Fri, Dec 1, 2023 at 6:13 PM Justin Pryzby <[email protected]> wrote:\n> $ kill -9 2524495; sleep 0.05; pg_ctl -D ./pgdev.dat1 stop -m fast # 2524495 is a child's pid\n\n> This affects v15, and fails at 7ff23c6d27 but not its parent.\n\nRepro'd here. I had to make the sleep shorter on my system. 
Looking...\n\n\n", "msg_date": "Sat, 2 Dec 2023 14:18:59 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "On Sat, Dec 2, 2023 at 2:18 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Dec 1, 2023 at 6:13 PM Justin Pryzby <[email protected]> wrote:\n> > $ kill -9 2524495; sleep 0.05; pg_ctl -D ./pgdev.dat1 stop -m fast # 2524495 is a child's pid\n>\n> > This affects v15, and fails at ) but not its parent.\n>\n> Repro'd here. I had to make the sleep shorter on my system. Looking...\n\nThe PostmasterStateMachine() case for PM_WAIT_BACKENDS doesn't tell\nthe checkpointer to shut down in this race case. We have\nCheckpointerPID != 0 (because 7ff23c6d27 starts it earlier than\nbefore), and FatalError is true because a child recently crashed and\nwe haven't yet received the PMSIGNAL_RECOVERY_STARTED handler that\nwould clear it. Hmm.\n\n\n", "msg_date": "Sat, 2 Dec 2023 15:30:09 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "On Sat, Dec 2, 2023 at 3:30 PM Thomas Munro <[email protected]> wrote:\n> On Sat, Dec 2, 2023 at 2:18 PM Thomas Munro <[email protected]> wrote:\n> > On Fri, Dec 1, 2023 at 6:13 PM Justin Pryzby <[email protected]> wrote:\n> > > $ kill -9 2524495; sleep 0.05; pg_ctl -D ./pgdev.dat1 stop -m fast # 2524495 is a child's pid\n> >\n> > > This affects v15, and fails at ) but not its parent.\n> >\n> > Repro'd here. I had to make the sleep shorter on my system. Looking...\n>\n> The PostmasterStateMachine() case for PM_WAIT_BACKENDS doesn't tell\n> the checkpointer to shut down in this race case. We have\n> CheckpointerPID != 0 (because 7ff23c6d27 starts it earlier than\n> before), and FatalError is true because a child recently crashed and\n> we haven't yet received the PMSIGNAL_RECOVERY_STARTED handler that\n> would clear it. Hmm.\n\nHere is a first attempt at fixing this. I am not yet 100% sure if it\nis right, and there may be a nicer/simpler way to express the\nconditions. It passes the test suite, and it fixes the repro that\nJustin posted. 
FYI on my machine I had to use sleep 0.005 where he\nhad 0.05, as an FYI if someone else is trying to reproduce the issue.", "msg_date": "Wed, 6 Mar 2024 10:22:09 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, I somehow fail to be able to mark all checkboxes on this review page...\r\nHowever, build and tested with all passed successfully on Rocky Linux release 8.9 (Green Obsidian).\r\nNot sure of more reviewing is needed on other Operating Systems since this is only my second review.\r\n\r\nCheers, Martijn.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 22 May 2024 21:29:31 +0000", "msg_from": "Martijn Wallet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, I somehow fail to be able to mark all checkboxes on this review page...\r\nHowever, build and tested with all passed successfully on Rocky Linux release 8.9 (Green Obsidian).\r\nNot sure of more reviewing is needed on other Operating Systems since this is only my second review.\r\n\r\nCheers, Martijn.\r\n\r\nnb: second mail to see spf is fixed and Thomas receives this message.", "msg_date": "Wed, 22 May 2024 21:57:33 +0000", "msg_from": "Martijn Wallet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "On Thu, May 23, 2024 at 9:58 AM Martijn Wallet <[email protected]> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hi, I somehow fail to be able to mark all checkboxes on this review page...\n> However, build and tested with all passed successfully on Rocky Linux release 8.9 (Green Obsidian).\n> Not sure of more reviewing is needed on other Operating Systems since this is only my second review.\n\nThanks!\n\nI'm also hoping to get review of the rather finickity state machine\nlogic involved from people familiar with that; I think it's right, but\nI'd hate to break some other edge case...\n\n> nb: second mail to see spf is fixed and Thomas receives this message.\n\nFTR 171641337152.1103.7326466732639994038.pgcf@coridan.postgresql.org\nand 171641505305.1105.9868637944637520353.pgcf@coridan.postgresql.org\nboth showed up in my inbox, and they both have headers \"Received-SPF:\npass ...\".\n\n\n", "msg_date": "Thu, 23 May 2024 10:29:13 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "Great, thanks for the feedback. It was probably the DKIM.", "msg_date": "Wed, 22 May 2024 23:16:35 +0000", "msg_from": "Martijn Wallet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" }, { "msg_contents": "Last test to have a verified mail, added lists.postgresql.org to spf record. 
Cheers.", "msg_date": "Wed, 22 May 2024 23:29:35 +0000", "msg_from": "Martijn Wallet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: processes stuck in shutdown following OOM/recovery" } ]
[ { "msg_contents": "I have a stored procedure in Postgres. I have generated some variables in\nthat procedure. These variables are generated inside a loop of query\nresult. Suppose if i have 10 rows from my query then for 10 times I am\ngenerating those variables.\n\nNow I want to create a materialized view where these variables will be the\ncolumns and every row will have values of those variables generated in\nevery iteration of that loop.\n\nWhat can be the better approach for this?\n\nI can create a temporary table with those variables and insert values in\nevery iteration of that loop but that will not serve my purpose. Because I\nwant to drop my existing table where all the values are available and\ncolumns are those variables. My target is to make a materialized view with\nthose variables so that I can get rid of that parent table.\n\nBest Regards\n\n*Rafi*\n\n\n\n\n\n\n\nI have a stored procedure in Postgres. I have generated some \nvariables in that procedure. These variables are generated inside a loop\n of query result. Suppose if i have 10 rows from my query then for 10 \ntimes I am generating those variables.\nNow I want to create a materialized view where these variables will \nbe the columns and every row will have values of those variables \ngenerated in every iteration of that loop.\nWhat can be the better approach for this?\nI can create a temporary table with those variables and insert values\n in every iteration of that loop but that will not serve my purpose. \nBecause I want to drop my existing table where all the values are \navailable and columns are those variables. My target is to make a \nmaterialized view with those variables so that I can get rid of that \nparent table.\n\nBest RegardsRafi", "msg_date": "Fri, 1 Dec 2023 00:18:41 -0600", "msg_from": "Nurul Karim Rafi <[email protected]>", "msg_from_op": true, "msg_subject": "Materialized view in Postgres from the variables rather than SQL\n query results" }, { "msg_contents": "This mailing list is for discussing the development of patches to the\nPostgreSQL code base. Please send your request for help to a more\nappropriate list - specifically the -general list.\n\nDavid J.\n\n\nOn Thursday, November 30, 2023, Nurul Karim Rafi <[email protected]>\nwrote:\n\n> I have a stored procedure in Postgres. I have generated some variables in\n> that procedure. These variables are generated inside a loop of query\n> result. Suppose if i have 10 rows from my query then for 10 times I am\n> generating those variables.\n>\n> Now I want to create a materialized view where these variables will be the\n> columns and every row will have values of those variables generated in\n> every iteration of that loop.\n>\n> What can be the better approach for this?\n>\n> I can create a temporary table with those variables and insert values in\n> every iteration of that loop but that will not serve my purpose. Because I\n> want to drop my existing table where all the values are available and\n> columns are those variables. My target is to make a materialized view with\n> those variables so that I can get rid of that parent table.\n>\n> Best Regards\n>\n> *Rafi*\n>\n\nThis mailing list is for discussing the development of patches to the PostgreSQL code base.  Please send your request for help to a more appropriate list - specifically the -general list.David J.On Thursday, November 30, 2023, Nurul Karim Rafi <[email protected]> wrote:\n\n\n\n\n\nI have a stored procedure in Postgres. I have generated some \nvariables in that procedure. 
These variables are generated inside a loop\n of query result. Suppose if i have 10 rows from my query then for 10 \ntimes I am generating those variables.\nNow I want to create a materialized view where these variables will \nbe the columns and every row will have values of those variables \ngenerated in every iteration of that loop.\nWhat can be the better approach for this?\nI can create a temporary table with those variables and insert values\n in every iteration of that loop but that will not serve my purpose. \nBecause I want to drop my existing table where all the values are \navailable and columns are those variables. My target is to make a \nmaterialized view with those variables so that I can get rid of that \nparent table.\n\nBest RegardsRafi", "msg_date": "Fri, 1 Dec 2023 06:40:16 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialized view in Postgres from the variables rather than SQL\n query results" }, { "msg_contents": "Hi David,\nThanks for replying back.\nAlready did that but haven’t received anything yet.\n\nOn Fri, Dec 1, 2023 at 7:40 PM David G. Johnston <[email protected]>\nwrote:\n\n> This mailing list is for discussing the development of patches to the\n> PostgreSQL code base. Please send your request for help to a more\n> appropriate list - specifically the -general list.\n>\n> David J.\n>\n>\n> On Thursday, November 30, 2023, Nurul Karim Rafi <[email protected]>\n> wrote:\n>\n>> I have a stored procedure in Postgres. I have generated some variables in\n>> that procedure. These variables are generated inside a loop of query\n>> result. Suppose if i have 10 rows from my query then for 10 times I am\n>> generating those variables.\n>>\n>> Now I want to create a materialized view where these variables will be\n>> the columns and every row will have values of those variables generated in\n>> every iteration of that loop.\n>>\n>> What can be the better approach for this?\n>>\n>> I can create a temporary table with those variables and insert values in\n>> every iteration of that loop but that will not serve my purpose. Because I\n>> want to drop my existing table where all the values are available and\n>> columns are those variables. My target is to make a materialized view with\n>> those variables so that I can get rid of that parent table.\n>>\n>> Best Regards\n>>\n>> *Rafi*\n>>\n>\n\nHi David,Thanks for replying back.Already did that but haven’t received anything yet.On Fri, Dec 1, 2023 at 7:40 PM David G. Johnston <[email protected]> wrote:This mailing list is for discussing the development of patches to the PostgreSQL code base.  Please send your request for help to a more appropriate list - specifically the -general list.David J.On Thursday, November 30, 2023, Nurul Karim Rafi <[email protected]> wrote:\n\n\n\n\n\nI have a stored procedure in Postgres. I have generated some \nvariables in that procedure. These variables are generated inside a loop\n of query result. Suppose if i have 10 rows from my query then for 10 \ntimes I am generating those variables.\nNow I want to create a materialized view where these variables will \nbe the columns and every row will have values of those variables \ngenerated in every iteration of that loop.\nWhat can be the better approach for this?\nI can create a temporary table with those variables and insert values\n in every iteration of that loop but that will not serve my purpose. 
\nBecause I want to drop my existing table where all the values are \navailable and columns are those variables. My target is to make a \nmaterialized view with those variables so that I can get rid of that \nparent table.\n\nBest RegardsRafi", "msg_date": "Fri, 1 Dec 2023 21:15:17 +0600", "msg_from": "Nurul Karim Rafi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Materialized view in Postgres from the variables rather than SQL\n query results" } ]
[ { "msg_contents": "Hi hackers,\n\nI attach a small patch improving document of ECPG host variable.\nPlease back patch to supported version, if possible.\n\nRange of 'bool' as type of ECPG is not defined explicitly. Our customer was confused.\nAdditionally, I could not understand clearly what the existing sentence mentions to user.\n\nMy idea is as follows:\n\n- [b] declared in ecpglib.h if not native\n+ [b] Range of bool is true/false only.\n There is no need to include any header like stdbool.h for type and literals\n because they are defined by ECPG and\n ECPG internally includes a appropriate C language standard header or\n deifnes them if there is no such header.\n\nBest Regards\nRyo Matsumura", "msg_date": "Fri, 1 Dec 2023 06:32:08 +0000", "msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "doc: improve document of ECPG host variable" } ]
[ { "msg_contents": "I noticed that some header files included system header files for no \napparent reason, so I did some digging and found out that in a few cases \nthe original reason has disappeared. So I propose the attached patches \nto remove the unnecessary includes.", "msg_date": "Fri, 1 Dec 2023 08:53:44 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Remove unnecessary includes of system headers in header files" }, { "msg_contents": "Hi Peter,\r\n\r\nI have reviewed the patches and also RUN the command, 'make check-world'. It is working fine. All test cases are passed successfully.\r\n\r\nThanks and Regards,\r\nShubham Khanna.\r\n\r\n-----Original Message-----\r\nFrom: Peter Eisentraut <[email protected]> \r\nSent: Friday, December 1, 2023 1:24 PM\r\nTo: pgsql-hackers <[email protected]>\r\nSubject: Remove unnecessary includes of system headers in header files\r\n\r\nI noticed that some header files included system header files for no apparent reason, so I did some digging and found out that in a few cases the original reason has disappeared. So I propose the attached patches to remove the unnecessary includes.\r\n", "msg_date": "Fri, 1 Dec 2023 10:51:39 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Remove unnecessary includes of system headers in header files" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I noticed that some header files included system header files for no \n> apparent reason, so I did some digging and found out that in a few cases \n> the original reason has disappeared. So I propose the attached patches \n> to remove the unnecessary includes.\n\nSeems generally reasonable. Have you checked that headerscheck and\ncpluspluscheck are happy?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 11:41:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary includes of system headers in header files" }, { "msg_contents": "On 01.12.23 17:41, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> I noticed that some header files included system header files for no\n>> apparent reason, so I did some digging and found out that in a few cases\n>> the original reason has disappeared. So I propose the attached patches\n>> to remove the unnecessary includes.\n> \n> Seems generally reasonable. Have you checked that headerscheck and\n> cpluspluscheck are happy?\n\nYes, I ran it through Cirrus, which includes those checks.\n\n\n\n", "msg_date": "Sat, 2 Dec 2023 09:39:05 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary includes of system headers in header files" }, { "msg_contents": "On 01.12.23 11:51, [email protected] wrote:\n> Hi Peter,\n> \n> I have reviewed the patches and also RUN the command, 'make check-world'. It is working fine. All test cases are passed successfully.\n\ncommitted\n\n> \n> Thanks and Regards,\n> Shubham Khanna.\n> \n> -----Original Message-----\n> From: Peter Eisentraut <[email protected]>\n> Sent: Friday, December 1, 2023 1:24 PM\n> To: pgsql-hackers <[email protected]>\n> Subject: Remove unnecessary includes of system headers in header files\n> \n> I noticed that some header files included system header files for no apparent reason, so I did some digging and found out that in a few cases the original reason has disappeared. 
So I propose the attached patches to remove the unnecessary includes.\n\n\n\n", "msg_date": "Mon, 4 Dec 2023 06:42:17 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary includes of system headers in header files" } ]
[ { "msg_contents": "[ this thread separated from [1] as the discussion focus shifted ]\n\nH Andres,\n\n29.11.2023 22:39, Andres Freund wrote:\n>> I use the following:\n>> ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\n>> disable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\n>> strict_init_order=1:detect_stack_use_after_return=0\n> I wonder if we should add some of these options by default ourselves. We could\n> e.g. add something like the __ubsan_default_options() in\n> src/backend/main/main.c to src/port/... instead, and return a combination of\n> \"our\" options (like detect_leaks=0) and the ones from the environment.\n\nI think that such explicit expression of the project policy regarding\nsanitizer checks is for good, but I see some obstacles on this way.\n\nFirst, I'm not sure what to do with new useful options/maybe new option\nvalues, that will appear in sanitizers eventually. Should the only options,\nthat are supported by all sanitizers' versions, be specified, or we may\nexpect that unsupported options/values would be ignored by old versions?\n\nSecond, what to do with other binaries, that need detect_leaks=0, for\nexample, that same ecpg?\n\n> ISTM that, if it actually works as I theorize it should, using\n> __attribute__((no_sanitize(\"address\"))) would be the easiest approach\n> here. Something like\n>\n> #if defined(__has_feature) && __has_feature(address_sanitizer)\n> #define pg_attribute_no_asan __attribute__((no_sanitize(\"address\")))\n> #else\n> #define pg_attribute_no_asan\n> #endif\n>\n> or such should work.\n\nI've tried adding:\n  bool\n+__attribute__((no_sanitize(\"address\")))\n  stack_is_too_deep(void)\n\nand it does work got me with clang 15, 18: `make check-world` passes with\nASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\ndisable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\nstrict_init_order=1:detect_stack_use_after_return=1\nUBSAN_OPTIONS=abort_on_error=1:print_stacktrace=1\n\n(with a fix for pg_bsd_indent applied [2])\n\nBut with gcc 11, 12, 13 I get an assertion failure during `make check`:\n#4  0x00007fabadcd67f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x0000557f35260382 in ExceptionalCondition (conditionName=0x557f35ca51a0 \"(uintptr_t) buffer == \nTYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\", fileName=0x557f35ca4fc0 \"md.c\", lineNumber=471) at assert.c:66\n#6  0x0000557f34a3b2bc in mdextend (reln=0x6250000375c8, forknum=MAIN_FORKNUM, blocknum=18, buffer=0x7fabaa800020, \nskipFsync=true) at md.c:471\n#7  0x0000557f34a45a6f in smgrextend (reln=0x6250000375c8, forknum=MAIN_FORKNUM, blocknum=18, buffer=0x7fabaa800020, \nskipFsync=true) at smgr.c:501\n#8  0x0000557f349139ed in RelationCopyStorageUsingBuffer (srclocator=..., dstlocator=..., forkNum=MAIN_FORKNUM, \npermanent=true) at bufmgr.c:4386\n\nThe buffer (buf) declared as follows:\nstatic void\nRelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n                                RelFileLocator dstlocator,\n                                ForkNumber forkNum, bool permanent)\n{\n...\n     PGIOAlignedBlock buf;\n...\n\nBut as we can see, the buffer address is really not 4k-aligned, and that\noffset 0x20 added in run-time only when the server started with\ndetect_stack_use_after_return=1.\nSo it looks like the asan feature detect_stack_use_after_return implemented\nin gcc allows itself to add some data on stack, that breaks our alignment\nexpectations. 
With all three such Asserts in md.c removed,\n`make check-world` passes for me.\n\n> One thing that's been holding me back on trying to do something around this is\n> the basically non-existing documentation around all of this. I haven't even\n> found documentation referencing the fact that there are headers like\n> sanitizer/asan_interface.h, you just have to figure that out yourself. Compare\n> that to something like valgrind, which has documented this at least somewhat.\n\nYes, so maybe it's reasonable to support only basic/common features (such\nas detect_leaks), leaving advanced ones for ad-hoc usage till they prove\ntheir worthiness.\n\nBest regards,\nAlexander\n\n[1] https://www.postgresql.org/message-id/flat/CWTLB2WWVJJ2.2YV6ERNOL1WVF%40neon.tech\n[2] https://www.postgresql.org/message-id/591971ce-25c1-90f3-0526-5f54e3ebb32e%40gmail.com\n\n\n", "msg_date": "Fri, 1 Dec 2023 12:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Improving asan/ubsan support" }, { "msg_contents": "On Fri Dec 1, 2023 at 3:00 AM CST, Alexander Lakhin wrote:\n> [ this thread separated from [1] as the discussion focus shifted ]\n>\n> H Andres,\n>\n> 29.11.2023 22:39, Andres Freund wrote:\n> >> I use the following:\n> >> ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\n> >> disable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\n> >> strict_init_order=1:detect_stack_use_after_return=0\n> > I wonder if we should add some of these options by default ourselves. We could\n> > e.g. add something like the __ubsan_default_options() in\n> > src/backend/main/main.c to src/port/... instead, and return a combination of\n> > \"our\" options (like detect_leaks=0) and the ones from the environment.\n>\n> I think that such explicit expression of the project policy regarding\n> sanitizer checks is for good, but I see some obstacles on this way.\n>\n> First, I'm not sure what to do with new useful options/maybe new option\n> values, that will appear in sanitizers eventually. Should the only options,\n> that are supported by all sanitizers' versions, be specified, or we may\n> expect that unsupported options/values would be ignored by old versions?\n>\n> Second, what to do with other binaries, that need detect_leaks=0, for\n> example, that same ecpg?\n>\n> > ISTM that, if it actually works as I theorize it should, using\n> > __attribute__((no_sanitize(\"address\"))) would be the easiest approach\n> > here. 
Something like\n> >\n> > #if defined(__has_feature) && __has_feature(address_sanitizer)\n> > #define pg_attribute_no_asan __attribute__((no_sanitize(\"address\")))\n> > #else\n> > #define pg_attribute_no_asan\n> > #endif\n> >\n> > or such should work.\n>\n> I've tried adding:\n>  bool\n> +__attribute__((no_sanitize(\"address\")))\n>  stack_is_too_deep(void)\n>\n> and it does work got me with clang 15, 18: `make check-world` passes with\n> ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:print_stacktrace=1:\\\n> disable_coredump=0:strict_string_checks=1:check_initialization_order=1:\\\n> strict_init_order=1:detect_stack_use_after_return=1\n> UBSAN_OPTIONS=abort_on_error=1:print_stacktrace=1\n>\n> (with a fix for pg_bsd_indent applied [2])\n>\n> But with gcc 11, 12, 13 I get an assertion failure during `make check`:\n> #4  0x00007fabadcd67f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5  0x0000557f35260382 in ExceptionalCondition (conditionName=0x557f35ca51a0 \"(uintptr_t) buffer == \n> TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\", fileName=0x557f35ca4fc0 \"md.c\", lineNumber=471) at assert.c:66\n> #6  0x0000557f34a3b2bc in mdextend (reln=0x6250000375c8, forknum=MAIN_FORKNUM, blocknum=18, buffer=0x7fabaa800020, \n> skipFsync=true) at md.c:471\n> #7  0x0000557f34a45a6f in smgrextend (reln=0x6250000375c8, forknum=MAIN_FORKNUM, blocknum=18, buffer=0x7fabaa800020, \n> skipFsync=true) at smgr.c:501\n> #8  0x0000557f349139ed in RelationCopyStorageUsingBuffer (srclocator=..., dstlocator=..., forkNum=MAIN_FORKNUM, \n> permanent=true) at bufmgr.c:4386\n>\n> The buffer (buf) declared as follows:\n> static void\n> RelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n>                                RelFileLocator dstlocator,\n>                                ForkNumber forkNum, bool permanent)\n> {\n> ...\n>     PGIOAlignedBlock buf;\n> ...\n>\n> But as we can see, the buffer address is really not 4k-aligned, and that\n> offset 0x20 added in run-time only when the server started with\n> detect_stack_use_after_return=1.\n> So it looks like the asan feature detect_stack_use_after_return implemented\n> in gcc allows itself to add some data on stack, that breaks our alignment\n> expectations. With all three such Asserts in md.c removed,\n> `make check-world` passes for me.\n\nDecided to do some digging into this, and Google actually documents[0] \nhow it works. After reading the algorithm, it is obvious why this fails. \nWhat happens if you throw an __attribute__((no_sanitize(\"address\")) on \nthe function? I assume the Asserts would then pass. The commit[1] which \nadded pg_attribute_aligned() provides insight as to why the Asserts \nexist.\n\n> /* If this build supports direct I/O, the buffer must be I/O aligned. */\n\nDisabling instrumentation in functions which use this specific type when \nthe build supports direct IO seems like the best solution.\n\n> > One thing that's been holding me back on trying to do something around this is\n> > the basically non-existing documentation around all of this. I haven't even\n> > found documentation referencing the fact that there are headers like\n> > sanitizer/asan_interface.h, you just have to figure that out yourself. 
Compare\n> > that to something like valgrind, which has documented this at least somewhat.\n>\n> Yes, so maybe it's reasonable to support only basic/common features (such\n> as detect_leaks), leaving advanced ones for ad-hoc usage till they prove\n> their worthiness.\n\nPossibly, but I think I would rather see upstream support running with \nall features with instrumentation turned off in various sections of \ncode. Even some assistance from AddressSanitizer is better than none. \nHere[1][2] are all the AddressSanitizer flags for those curious.\n\n> Best regards,\n> Alexander\n>\n> [1] https://www.postgresql.org/message-id/flat/CWTLB2WWVJJ2.2YV6ERNOL1WVF%40neon.tech\n> [2] https://www.postgresql.org/message-id/591971ce-25c1-90f3-0526-5f54e3ebb32e%40gmail.com\n\nI personally would like to see Postgres have support for \nAddressSanitizer. I think it already supports UndefinedBehaviorSanitizer \nif I am remembering the buildfarm properly. AddressSanitizer has been so \nhelpful in past experiences writing C.\n\n[0]: https://github.com/google/sanitizers/wiki/AddressSanitizerUseAfterReturn#algorithm\n[1]: https://github.com/postgres/postgres/commit/faeedbcefd40bfdf314e048c425b6d9208896d90\n[2]: https://github.com/google/sanitizers/wiki/AddressSanitizerFlags\n[3]: https://github.com/google/sanitizers/wiki/SanitizerCommonFlags\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 01 Dec 2023 15:48:54 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving asan/ubsan support" }, { "msg_contents": "Hello Tristan,\n\n02.12.2023 00:48, Tristan Partin wrote:\n>\n>> So it looks like the asan feature detect_stack_use_after_return implemented\n>> in gcc allows itself to add some data on stack, that breaks our alignment\n>> expectations. With all three such Asserts in md.c removed,\n>> `make check-world` passes for me.\n>\n> Decided to do some digging into this, and Google actually documents[0] how it works. After reading the algorithm, it \n> is obvious why this fails. What happens if you throw an __attribute__((no_sanitize(\"address\")) on the function? I \n> assume the Asserts would then pass. The commit[1] which added pg_attribute_aligned() provides insight as to why the \n> Asserts exist.\n\nThank you for spending your time on this!\n\nYes, I understand what those Asserts were added for, I removed them just\nto check what else is on the way.\nAnd I can confirm that marking that function with the no_sanitize attribute\nfixes that exact failure. 
Then the same attribute has to be added to\n_hash_alloc_buckets(), to prevent:\nTRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 471, PID: 1766976\n\n#5  0x00005594a6a0f0d0 in ExceptionalCondition (conditionName=0x5594a7454a60 \"(uintptr_t) buffer == \nTYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\", fileName=0x5594a7454880 \"md.c\", lineNumber=471) at assert.c:66\n#6  0x00005594a61ce133 in mdextend (reln=0x625000037e48, forknum=MAIN_FORKNUM, blocknum=9, buffer=0x7fc3b3947020, \nskipFsync=false) at md.c:471\n#7  0x00005594a61d89ab in smgrextend (reln=0x625000037e48, forknum=MAIN_FORKNUM, blocknum=9, buffer=0x7fc3b3947020, \nskipFsync=false) at smgr.c:501\n#8  0x00005594a4a0c43d in _hash_alloc_buckets (rel=0x7fc3a89714f8, firstblock=6, nblocks=4) at hashpage.c:1033\n\nAnd to RelationCopyStorage(), to prevent:\nTRAP: failed Assert(\"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\"), File: \"md.c\", Line: 752, PID: 1787855\n\n#5  0x000056081d5688bc in ExceptionalCondition (conditionName=0x56081dfaea40 \"(uintptr_t) buffer == \nTYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\", fileName=0x56081dfae860 \"md.c\", lineNumber=752) at assert.c:66\n#6  0x000056081cd29415 in mdread (reln=0x629000043158, forknum=MAIN_FORKNUM, blocknum=0, buffer=0x7fe480633020) at md.c:752\n#7  0x000056081cd32cb3 in smgrread (reln=0x629000043158, forknum=MAIN_FORKNUM, blocknum=0, buffer=0x7fe480633020) at \nsmgr.c:565\n#8  0x000056081b9ed5f2 in RelationCopyStorage (src=0x629000043158, dst=0x629000041248, forkNum=MAIN_FORKNUM, \nrelpersistence=112 'p') at storage.c:487\n\nProbably, it has to be added for all the functions where PGIOAlignedBlock\nlocated on stack...\n\nBut I still wonder, how it works with clang, why that extra attribute is\nnot required?\nIn other words, such implementation specifics discourage me...\n\n>\n> Possibly, but I think I would rather see upstream support running with all features with instrumentation turned off in \n> various sections of code. Even some assistance from AddressSanitizer is better than none. Here[1][2] are all the \n> AddressSanitizer flags for those curious.\n\nYeah, and you might also need to specify extra flags to successfully run\npostgres with newer sanitizers' versions. Say, for clang-18 you need to\nspecify -fno-sanitize=function (which is not recognized by gcc 13.2), to\navoid errors like this:\nrunning bootstrap script ... dynahash.c:1120:4: runtime error: call to function strlcpy through pointer to incorrect \nfunction type 'void *(*)(void *, const void *, unsigned long)'\n.../src/port/strlcpy.c:46: note: strlcpy defined here\n     #0 0x556af5e0b0a9 in hash_search_with_hash_value .../src/backend/utils/hash/dynahash.c:1120:4\n     #1 0x556af5e08f4f in hash_search .../src/backend/utils/hash/dynahash.c:958:9\n\n> I personally would like to see Postgres have support for  AddressSanitizer. I think it already supports \n> UndefinedBehaviorSanitizer if I am remembering the buildfarm properly. AddressSanitizer has been so helpful in past \n> experiences writing C.\n\nMe too. I find it very valuable for my personal usage but I'm afraid it's\nstill not very stable/mature.\nOne more example. 
Just adding -fsanitize=undefined for gcc 12, 13 (I tried\n12.1, 13.0, 13.2) produces new warnings like:\nIn function 'PageGetItemId',\n     inlined from 'heap_xlog_update' at heapam.c:9569:9:\n../../../../src/include/storage/bufpage.h:243:16: warning: array subscript -1 is below array bounds of 'ItemIdData[]' \n[-Warray-bounds=]\n   243 |         return &((PageHeader) page)->pd_linp[offsetNumber - 1];\n       | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBut I don't get such warnings when I use gcc 11.3 (though it generates\nother ones) or clang (15, 18). They also aren't produced with -O0, -O1...\nMaybe it's another gcc bug, I'm not sure how to deal with it.\n(I can research this issue, if it makes any sense.)\n\nSo I would say that cost of providing/maintaining full support for asan\n(hwasan), ubsan is not near zero, unfortunately. I would estimate it to\n10-20 discussions/commits on start/5-10 per year later (not including fixes\nfor bugs that would be found). If it's affordable for the project, I'd like\nto have such support out-of-the-box.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 2 Dec 2023 09:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving asan/ubsan support" } ]
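A compilable sketch of the attribute-based approach discussed in this thread, combining the __has_feature() test quoted above with the __SANITIZE_ADDRESS__ macro that gcc defines under -fsanitize=address. The macro name pg_attribute_no_asan follows the suggestion in the thread but is not an existing macro in the tree, and FakeAlignedBlock merely stands in for PGIOAlignedBlock; as reported above, marking functions this way was enough to keep clang's detect_stack_use_after_return from breaking the alignment Asserts, while gcc still misplaced the buffer.

#include <stdio.h>
#include <stdint.h>

#if defined(__has_feature)
#if __has_feature(address_sanitizer)
#define pg_attribute_no_asan __attribute__((no_sanitize("address")))
#endif
#endif
#if !defined(pg_attribute_no_asan) && defined(__SANITIZE_ADDRESS__)
#define pg_attribute_no_asan __attribute__((no_sanitize("address")))
#endif
#ifndef pg_attribute_no_asan
#define pg_attribute_no_asan
#endif

/* Stand-in for PGIOAlignedBlock: a 4kB-aligned stack buffer. */
typedef struct
{
    _Alignas(4096) char data[8192];
} FakeAlignedBlock;

/* Without the attribute, detect_stack_use_after_return may move this
 * local onto the sanitizer's "fake stack", which is how an offset like
 * the 0x20 seen in the failed Asserts can appear. */
static void pg_attribute_no_asan
check_alignment(void)
{
    FakeAlignedBlock buf;

    printf("buffer %p is %saligned\n", (void *) buf.data,
           ((uintptr_t) buf.data % 4096 == 0) ? "" : "NOT ");
}

int
main(void)
{
    check_alignment();
    return 0;
}

Building this with -fsanitize=address and running it with ASAN_OPTIONS=detect_stack_use_after_return=1, with and without the attribute, is a quick way to see which compiler keeps the buffer where the alignment assumption expects it.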
[ { "msg_contents": "The comment of search_indexed_tlist_for_var says:\n\n * In debugging builds, we cross-check the varnullingrels of the subplan\n * output Var based on nrm_match.\n\nHowever, this cross-check will also be performed in non-debug builds\never since commit 867be9c07, which converts this check from Asserts to\ntest-and-elog. The commit message there also says:\n\n Committed separately with the idea that eventually we'll revert\n this. It might be awhile though.\n\nI wonder if now is the time to revert it, since there have been no\nrelated bugs reported for quite a while. Otherwise I think we may need\nto revise the comment of search_indexed_tlist_for_var to clarify that\nthe cross-check is not limited to debugging builds.\n\nPlease note that if we intend to revert commit 867be9c07, we need to\nrevert 69c430626 too.\n\nThanks\nRichard\n\nThe comment of search_indexed_tlist_for_var says: * In debugging builds, we cross-check the varnullingrels of the subplan * output Var based on nrm_match.However, this cross-check will also be performed in non-debug buildsever since commit 867be9c07, which converts this check from Asserts totest-and-elog.  The commit message there also says:    Committed separately with the idea that eventually we'll revert    this.  It might be awhile though.I wonder if now is the time to revert it, since there have been norelated bugs reported for quite a while.  Otherwise I think we may needto revise the comment of search_indexed_tlist_for_var to clarify thatthe cross-check is not limited to debugging builds.Please note that if we intend to revert commit 867be9c07, we need torevert 69c430626 too.ThanksRichard", "msg_date": "Fri, 1 Dec 2023 19:12:34 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On 2023-Dec-01, Richard Guo wrote:\n\n> However, this cross-check will also be performed in non-debug builds\n> ever since commit 867be9c07, which converts this check from Asserts to\n> test-and-elog. The commit message there also says:\n> \n> Committed separately with the idea that eventually we'll revert\n> this. It might be awhile though.\n> \n> I wonder if now is the time to revert it, since there have been no\n> related bugs reported for quite a while.\n\nI don't know anything about this, but maybe it would be better to let\nthese elogs there for longer, so that users have time to upgrade and\ntest. This new code has proven quite tricky, and if I understand\ncorrectly, if we do run some query with wrong varnullingrels in\nproduction code without elog and where Assert() does nothing, that might\nsilently lead to wrong results.\n\nOTOH keeping the elog there might impact performance. Would that be\nsignificant?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Debido a que la velocidad de la luz es mucho mayor que la del sonido,\n algunas personas nos parecen brillantes un minuto antes\n de escuchar las pelotudeces que dicen.\" (Roberto Fontanarrosa)\n\n\n", "msg_date": "Fri, 1 Dec 2023 13:25:31 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Dec-01, Richard Guo wrote:\n>> However, this cross-check will also be performed in non-debug builds\n>> ever since commit 867be9c07, which converts this check from Asserts to\n>> test-and-elog. 
The commit message there also says:\n>> Committed separately with the idea that eventually we'll revert\n>> this. It might be awhile though.\n>> I wonder if now is the time to revert it, since there have been no\n>> related bugs reported for quite a while.\n\n> I don't know anything about this, but maybe it would be better to let\n> these elogs there for longer, so that users have time to upgrade and\n> test.\n\nYeah. It's good that we've not had field reports against 16.0 or 16.1,\nbut we can't really expect that 16.x has seen widespread adoption yet.\nI do think we should revert this eventually, but I'd wait perhaps\nanother year.\n\n> OTOH keeping the elog there might impact performance. Would that be\n> significant?\n\nDoubt it'd be anything measurable, in comparison to all the other\nstuff the planner does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Dec 2023 13:27:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On Sat, Dec 2, 2023 at 2:27 AM Tom Lane <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > On 2023-Dec-01, Richard Guo wrote:\n> >> However, this cross-check will also be performed in non-debug builds\n> >> ever since commit 867be9c07, which converts this check from Asserts to\n> >> test-and-elog. The commit message there also says:\n> >> Committed separately with the idea that eventually we'll revert\n> >> this. It might be awhile though.\n> >> I wonder if now is the time to revert it, since there have been no\n> >> related bugs reported for quite a while.\n>\n> > I don't know anything about this, but maybe it would be better to let\n> > these elogs there for longer, so that users have time to upgrade and\n> > test.\n>\n> Yeah. It's good that we've not had field reports against 16.0 or 16.1,\n> but we can't really expect that 16.x has seen widespread adoption yet.\n> I do think we should revert this eventually, but I'd wait perhaps\n> another year.\n\n\nThen here is a trivial patch to adjust the comment, which should get\nreverted along with 867be9c07.\n\nThanks\nRichard", "msg_date": "Mon, 4 Dec 2023 16:42:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nComment is updated correctly.", "msg_date": "Mon, 18 Dec 2023 15:26:40 +0000", "msg_from": "Matt Skelley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On Mon, Dec 4, 2023 at 3:42 AM Richard Guo <[email protected]> wrote:\n> Then here is a trivial patch to adjust the comment, which should get\n> reverted along with 867be9c07.\n\nRichard, since you're a committer now, maybe you'd like to commit\nthis. 
I don't really understand the portion of your commit message\ninside the parentheses and would suggest that you just delete that,\nbut the rest seems fine.\n\nIf you do commit it, also update the status at\nhttps://commitfest.postgresql.org/48/4683/\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 13:07:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On Wed, May 15, 2024 at 1:07 AM Robert Haas <[email protected]> wrote:\n\n> On Mon, Dec 4, 2023 at 3:42 AM Richard Guo <[email protected]> wrote:\n> > Then here is a trivial patch to adjust the comment, which should get\n> > reverted along with 867be9c07.\n>\n> Richard, since you're a committer now, maybe you'd like to commit\n> this. I don't really understand the portion of your commit message\n> inside the parentheses and would suggest that you just delete that,\n> but the rest seems fine.\n>\n> If you do commit it, also update the status at\n> https://commitfest.postgresql.org/48/4683/\n\n\nThank you for the suggestion. Yeah, this is a good candidate for my\nfirst commit. :-) I will aim to do it during the next commitfest.\n\nThanks\nRichard\n\nOn Wed, May 15, 2024 at 1:07 AM Robert Haas <[email protected]> wrote:On Mon, Dec 4, 2023 at 3:42 AM Richard Guo <[email protected]> wrote:\n> Then here is a trivial patch to adjust the comment, which should get\n> reverted along with 867be9c07.\n\nRichard, since you're a committer now, maybe you'd like to commit\nthis. I don't really understand the portion of your commit message\ninside the parentheses and would suggest that you just delete that,\nbut the rest seems fine.\n\nIf you do commit it, also update the status at\nhttps://commitfest.postgresql.org/48/4683/Thank you for the suggestion.  Yeah, this is a good candidate for myfirst commit. :-)  I will aim to do it during the next commitfest.ThanksRichard", "msg_date": "Thu, 16 May 2024 17:58:32 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On Thu, May 16, 2024 at 5:58 AM Richard Guo <[email protected]> wrote:\n> Thank you for the suggestion. Yeah, this is a good candidate for my\n> first commit. :-) I will aim to do it during the next commitfest.\n\nYou don't need to wait for the next CommitFest to fix a comment (or a\nbug). And, indeed, it's better if you do this before we branch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 08:42:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" }, { "msg_contents": "On Thu, May 16, 2024 at 8:42 PM Robert Haas <[email protected]> wrote:\n> You don't need to wait for the next CommitFest to fix a comment (or a\n> bug). And, indeed, it's better if you do this before we branch.\n\nPatch pushed and the CF entry closed. Thank you for the suggestion.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 10 Jun 2024 15:04:10 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A wrong comment about search_indexed_tlist_for_var" } ]
[ { "msg_contents": "Hi all\n\nCompiling PostgreSQL 13.13 with option –with-llvm fails with Developer Studio 12.6 as well as with gcc 13.2.0.\nI have installed the developer/llvm/clang\" + \"developer/llvm/clang-build pkgs (13.0.1).\n\n- It works without the llvm option\n- I have also tried it with 16.1 – no success either\n\no With Developer Studio (psql 13.13):\n\n# ./configure CC='/opt/developerstudio12.6/bin/cc -m64 -xarch=native' --enable-dtrace DTRACEFLAGS='-64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n\n# gmake all\n...\n/opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -I../../../src/include -c -o pg_shmem.o pg_shmem.c\ngmake[3]: *** No rule to make target 'tas.bc', needed by 'objfiles.txt'. Stop.\ngmake[3]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src/backend/port'\ngmake[2]: *** [common.mk:39: port-recursive] Error 2\ngmake[2]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src/backend'\ngmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\ngmake[1]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\no With gcc (psql 13.13):\n\n#./configure CC='/usr/bin/gcc -m64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n\n# time gmake all\n...\n-Wl,--as-needed -Wl,-R'/usr/local/pgsql/lib' -lLLVM-13\nUndefined first referenced\nsymbol in file\nTTSOpsHeapTuple llvmjit_deform.o\npfree llvmjit.o\n…\nMemoryContextAllocZero llvmjit.o\npkglib_path llvmjit.o\nExecEvalStepOp llvmjit_expr.o\nerrhidestmt llvmjit.o\nld: warning: symbol referencing errors\n/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -O2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\ngmake[2]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/src/backend/jit/llvm'\ngmake[1]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/src'\ngmake -C config all\ngmake[1]: Entering directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/config'\ngmake[1]: Nothing to be done for 'all'.\ngmake[1]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/config'\n\n\nKind regards\nSasha\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager.\n\n\n\n\n\n\n\n\n\n\nHi all\n \nCompiling PostgreSQL 13.13 with option –with-llvm fails with Developer Studio 12.6 as well as with gcc 13.2.0.\nI have installed the developer/llvm/clang\" + \"developer/llvm/clang-build pkgs (13.0.1).\n \n- It works without the llvm option\n\n- I have also tried it with 16.1 – no success either\n \no With Developer Studio (psql 13.13):\n \n#\n./configure CC='/opt/developerstudio12.6/bin/cc -m64 -xarch=native' --enable-dtrace DTRACEFLAGS='-64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n \n# gmake all\n...\n/opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -I../../../src/include    -c -o pg_shmem.o pg_shmem.c\ngmake[3]: *** No rule to make target 'tas.bc', needed by 'objfiles.txt'.  
Stop.\ngmake[3]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src/backend/port'\ngmake[2]: *** [common.mk:39: port-recursive] Error 2\ngmake[2]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src/backend'\ngmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\ngmake[1]: Leaving directory '/opt/cnd/opt24_13.13_gmake_all_llvm/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n \n\n \no With gcc\n(psql 13.13):\n \n#./configure CC='/usr/bin/gcc -m64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n \n# time gmake all\n...\n-Wl,--as-needed -Wl,-R'/usr/local/pgsql/lib'  -lLLVM-13\nUndefined                       first referenced\nsymbol                             in file\nTTSOpsHeapTuple                     llvmjit_deform.o\npfree                               llvmjit.o\n…\nMemoryContextAllocZero              llvmjit.o\npkglib_path                         llvmjit.o\nExecEvalStepOp                      llvmjit_expr.o\nerrhidestmt                         llvmjit.o\nld: warning: symbol referencing errors\n/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -O2  -D__STDC_LIMIT_MACROS\n -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include  -I../../../../src/include   -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\ngmake[2]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/src/backend/jit/llvm'\ngmake[1]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/src'\ngmake -C config all\ngmake[1]: Entering directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/config'\ngmake[1]: Nothing to be done for 'all'.\ngmake[1]: Leaving directory '/opt/cnd/opt25_13.13_gcc_gmak_all_llvm/config'\n \n \nKind regards\n\nSasha\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received\n this email in error please notify the system manager.", "msg_date": "Fri, 1 Dec 2023 17:02:25 +0000", "msg_from": "Sacha Hottinger <[email protected]>", "msg_from_op": true, "msg_subject": "Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi,\n\nOn 2023-12-01 17:02:25 +0000, Sacha Hottinger wrote:\n> Compiling PostgreSQL 13.13 with option –with-llvm fails with Developer Studio 12.6 as well as with gcc 13.2.0.\n> I have installed the developer/llvm/clang\" + \"developer/llvm/clang-build pkgs (13.0.1).\n\nUh, huh. I did not expect that anybody would ever really do that on\nsolaris. Not that the breakage was intentional, that's a separate issue.\n\nIs this on x86-64 or sparc?\n\n\nI'm somewhat confused that you report this to happen with gcc as well. We\ndon't use .s files there. Oh, I guess you see a different error\nthere:\n\n> o With gcc (psql 13.13):\n>\n> #./configure CC='/usr/bin/gcc -m64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n>\n> # time gmake all\n> ...\n> -Wl,--as-needed -Wl,-R'/usr/local/pgsql/lib' -lLLVM-13\n> Undefined first referenced\n> symbol in file\n> TTSOpsHeapTuple llvmjit_deform.o\n> pfree llvmjit.o\n> …\n> MemoryContextAllocZero llvmjit.o\n> pkglib_path llvmjit.o\n> ExecEvalStepOp llvmjit_expr.o\n> errhidestmt llvmjit.o\n> ld: warning: symbol referencing errors\n\nThis is odd. 
I think this is when building llvmjit.so - unfortunately there's\nnot enough details to figure out what's wrong here.\n\nOh, one thing that might be going wrong is that you just set the C compiler to\nbe gcc, but not C++ - what happens if you addtionally set CXX to g++?\n\n\n\nI did not think about .o files generated from .s when writing the make\ninfrastructure for JITing. At first I thought the easiest solution would be\nto just add a rule to build .bc from .s - but that doesn't work in the\nsunstudio case, because it relies on preprocessor logic that's specific to sun\nstudio - which clang can't parse. Gah.\n\nThus the attached hack - I think that should work. It'd mostly be interesting\nto see if this is the only roadblock or if there's more.\n\n\nTo be honest, the only case where .s files matter today is building with sun\nstudio, and that's a compiler we're planning to remove support for. So I'm not\nsure it's worth fixing, if it adds complexity.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 1 Dec 2023 11:49:04 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi Andres\n\nMany thanks for your help and the fix.\n\n> Is this on x86-64 or sparc?\nIt is SPARC\n\n> Oh, one thing that might be going wrong is that you just set the C compiler to\n> be gcc, but not C++ - what happens if you addtionally set CXX to g++?\n\n// That seems to get set correctly:\n# grep ^'CXX=' config.log\nCXX='g++'\n\n// I used the patch command to patch the src/backend/port/Makefile with your attached file and tried again with the Sun Studio compiler. There is now a different error at this stage:\n…\n/opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -I../../../src/include -c -o pg_shmem.o pg_shmem.c\necho | /usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -I../../../src/include -flto=thin -emit-llvm -c -xc -o tas.bc tas.s\ntas.s:1:1: error: expected identifier or '('\n!-------------------------------------------------------------------------\n^\n1 error generated.\ngmake[3]: *** [Makefile:42: tas.bc] Error 1\ngmake[3]: Leaving directory '/opt/cnd/opt28_13.3_gmake_all_llvm_fix/src/backend/port'\ngmake[2]: *** [common.mk:39: port-recursive] Error 2\ngmake[2]: Leaving directory '/opt/cnd/opt28_13.3_gmake_all_llvm_fix/src/backend'\ngmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\ngmake[1]: Leaving directory '/opt/cnd/opt28_13.3_gmake_all_llvm_fix/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\n// Have attached the config.log, gmake all full log, and patched Makefile.\n\n\nBest regards\nSasha\n\nVon: Andres Freund <[email protected]>\nDatum: Freitag, 1. Dezember 2023 um 20:49\nAn: Sacha Hottinger <[email protected]>\nCc: [email protected] <[email protected]>\nBetreff: Re: Building PosgresSQL with LLVM fails on Solaris 11.4\nHi,\n\nOn 2023-12-01 17:02:25 +0000, Sacha Hottinger wrote:\n> Compiling PostgreSQL 13.13 with option –with-llvm fails with Developer Studio 12.6 as well as with gcc 13.2.0.\n> I have installed the developer/llvm/clang\" + \"developer/llvm/clang-build pkgs (13.0.1).\n\nUh, huh. I did not expect that anybody would ever really do that on\nsolaris. Not that the breakage was intentional, that's a separate issue.\n\nIs this on x86-64 or sparc?\n\n\nI'm somewhat confused that you report this to happen with gcc as well. We\ndon't use .s files there. 
Oh, I guess you see a different error\nthere:\n\n> o With gcc (psql 13.13):\n>\n> #./configure CC='/usr/bin/gcc -m64' --with-system-tzdata=/usr/share/lib/zoneinfo --with-llvm\n>\n> # time gmake all\n> ...\n> -Wl,--as-needed -Wl,-R'/usr/local/pgsql/lib' -lLLVM-13\n> Undefined first referenced\n> symbol in file\n> TTSOpsHeapTuple llvmjit_deform.o\n> pfree llvmjit.o\n> …\n> MemoryContextAllocZero llvmjit.o\n> pkglib_path llvmjit.o\n> ExecEvalStepOp llvmjit_expr.o\n> errhidestmt llvmjit.o\n> ld: warning: symbol referencing errors\n\nThis is odd. I think this is when building llvmjit.so - unfortunately there's\nnot enough details to figure out what's wrong here.\n\nOh, one thing that might be going wrong is that you just set the C compiler to\nbe gcc, but not C++ - what happens if you addtionally set CXX to g++?\n\n\n\nI did not think about .o files generated from .s when writing the make\ninfrastructure for JITing. At first I thought the easiest solution would be\nto just add a rule to build .bc from .s - but that doesn't work in the\nsunstudio case, because it relies on preprocessor logic that's specific to sun\nstudio - which clang can't parse. Gah.\n\nThus the attached hack - I think that should work. It'd mostly be interesting\nto see if this is the only roadblock or if there's more.\n\n\nTo be honest, the only case where .s files matter today is building with sun\nstudio, and that's a compiler we're planning to remove support for. So I'm not\nsure it's worth fixing, if it adds complexity.\n\nGreetings,\n\nAndres Freund\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager.", "msg_date": "Fri, 1 Dec 2023 23:06:59 +0000", "msg_from": "Sacha Hottinger <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi,\n\nOn 2023-12-01 23:06:59 +0000, Sacha Hottinger wrote:\n> // I used the patch command to patch the src/backend/port/Makefile with your attached file and tried again with the Sun Studio compiler. There is now a different error at this stage:\n> …\n> /opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -I../../../src/include -c -o pg_shmem.o pg_shmem.c\n> echo | /usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -I../../../src/include -flto=thin -emit-llvm -c -xc -o tas.bc tas.s\n> tas.s:1:1: error: expected identifier or '('\n> !-------------------------------------------------------------------------\n> ^\n> 1 error generated.\n\nThat's me making a silly mistake... I've attached at an updated, but still\nblindly written, diff.\n\n\n> // Have attached the config.log, gmake all full log, and patched Makefile.\n\nCould you attach config.log and gmake for the gcc based build? 
Because so far\nI have no idea what causes the linker issue there.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 6 Dec 2023 10:01:33 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi Andres\n\nThanks a lot.\nIt now got much further but failed here with Sun Studio:\n…\ngmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/test/perl'\ngmake -C backend/jit/llvm all\ngmake[2]: Entering directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\n/opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit.o llvmjit.c\n\"llvmjit.c\", line 493: warning: argument #1 is incompatible with prototype:\n prototype: pointer to void : \"../../../../src/include/jit/llvmjit_emit.h\", line 27\n argument : pointer to function(pointer to struct FunctionCallInfoBaseData {pointer to struct FmgrInfo {..} flinfo, pointer to struct Node {..} context, pointer to struct Node {..} resultinfo, unsigned int fncollation, _Bool isnull, short nargs, array[-1] of struct NullableDatum {..} args}) returning unsigned long\ng++ -O -std=c++14 -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit_error.o llvmjit_error.cpp\ng++: error: unrecognized command-line option ‘-KPIC’; did you mean ‘-fPIC’?\ngmake[2]: *** [<builtin>: llvmjit_error.o] Error 1\ngmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\ngmake[1]: *** [Makefile:42: all-backend/jit/llvm-recurse] Error 2\ngmake[1]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src'\ngmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\nWith ggc it fails at the same step as before.\nI have attached the log files of the SunStudio and gcc runs to the email.\n\nMany thanks for your help.\n\nBest regards\nSacha\n\nVon: Andres Freund <[email protected]>\nDatum: Mittwoch, 6. Dezember 2023 um 19:01\nAn: Sacha Hottinger <[email protected]>\nCc: [email protected] <[email protected]>\nBetreff: Re: Building PosgresSQL with LLVM fails on Solaris 11.4\nHi,\n\nOn 2023-12-01 23:06:59 +0000, Sacha Hottinger wrote:\n> // I used the patch command to patch the src/backend/port/Makefile with your attached file and tried again with the Sun Studio compiler. There is now a different error at this stage:\n> …\n> /opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -I../../../src/include -c -o pg_shmem.o pg_shmem.c\n> echo | /usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -I../../../src/include -flto=thin -emit-llvm -c -xc -o tas.bc tas.s\n> tas.s:1:1: error: expected identifier or '('\n> !-------------------------------------------------------------------------\n> ^\n> 1 error generated.\n\nThat's me making a silly mistake... I've attached at an updated, but still\nblindly written, diff.\n\n\n> // Have attached the config.log, gmake all full log, and patched Makefile.\n\nCould you attach config.log and gmake for the gcc based build? Because so far\nI have no idea what causes the linker issue there.\n\nGreetings,\n\nAndres Freund\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. 
If you have received this email in error please notify the system manager.", "msg_date": "Thu, 7 Dec 2023 13:43:55 +0000", "msg_from": "Sacha Hottinger <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi,\n\nOn 2023-12-07 13:43:55 +0000, Sacha Hottinger wrote:\n> Thanks a lot.\n> It now got much further but failed here with Sun Studio:\n> …\n> gmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/test/perl'\n> gmake -C backend/jit/llvm all\n> gmake[2]: Entering directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\n> /opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit.o llvmjit.c\n> \"llvmjit.c\", line 493: warning: argument #1 is incompatible with prototype:\n> prototype: pointer to void : \"../../../../src/include/jit/llvmjit_emit.h\", line 27\n> argument : pointer to function(pointer to struct FunctionCallInfoBaseData {pointer to struct FmgrInfo {..} flinfo, pointer to struct Node {..} context, pointer to struct Node {..} resultinfo, unsigned int fncollation, _Bool isnull, short nargs, array[-1] of struct NullableDatum {..} args}) returning unsigned long\n> g++ -O -std=c++14 -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit_error.o llvmjit_error.cpp\n> g++: error: unrecognized command-line option ‘-KPIC’; did you mean ‘-fPIC’?\n> gmake[2]: *** [<builtin>: llvmjit_error.o] Error 1\n> gmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\n> gmake[1]: *** [Makefile:42: all-backend/jit/llvm-recurse] Error 2\n> gmake[1]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src'\n> gmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\nI don't know where the -KPIC is coming from. And TBH, I don't see much point\ntrying to fix a scenario involving matching sun studio C with g++.\n\n\n> With ggc it fails at the same step as before.\n> I have attached the log files of the SunStudio and gcc runs to the email.\n\nI don't see a failure with gcc.\n\nThe warnings are emitted for every extension and compilation succeeds.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Dec 2023 08:50:26 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi Andres\n\nThanks for your reply.\nThe reason I was suspicious with the warnings of the gcc build was, because gmake check reported 138 out of 202 tests to have failed. I have attached the output of gmake check.\n\nAfter you mentioned that gcc did not report any errors, just warnings, we installed the build.\nFirst, it seeemed to work and SELECT pg_jit_available(); showed \"pg_jit_available\" as \"t\" but the DB showed strange behaviour. I.e. not always, but sometimes running \"show parallel_tuple_cost\" caused postmaster to restart a server process.\nWe had to back to the previous installation.\n\nIt seems there is definitievly something wrong with the result gcc created.\n\nBest regards\nSacha\n\nVon: Andres Freund <[email protected]>\nDatum: Donnerstag, 7. 
Dezember 2023 um 17:50\nAn: Sacha Hottinger <[email protected]>\nCc: [email protected] <[email protected]>\nBetreff: Re: Building PosgresSQL with LLVM fails on Solaris 11.4\nHi,\n\nOn 2023-12-07 13:43:55 +0000, Sacha Hottinger wrote:\n> Thanks a lot.\n> It now got much further but failed here with Sun Studio:\n> …\n> gmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/test/perl'\n> gmake -C backend/jit/llvm all\n> gmake[2]: Entering directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\n> /opt/developerstudio12.6/bin/cc -m64 -xarch=native -Xa -v -O -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit.o llvmjit.c\n> \"llvmjit.c\", line 493: warning: argument #1 is incompatible with prototype:\n> prototype: pointer to void : \"../../../../src/include/jit/llvmjit_emit.h\", line 27\n> argument : pointer to function(pointer to struct FunctionCallInfoBaseData {pointer to struct FmgrInfo {..} flinfo, pointer to struct Node {..} context, pointer to struct Node {..} resultinfo, unsigned int fncollation, _Bool isnull, short nargs, array[-1] of struct NullableDatum {..} args}) returning unsigned long\n> g++ -O -std=c++14 -KPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/include -I../../../../src/include -c -o llvmjit_error.o llvmjit_error.cpp\n> g++: error: unrecognized command-line option ‘-KPIC’; did you mean ‘-fPIC’?\n> gmake[2]: *** [<builtin>: llvmjit_error.o] Error 1\n> gmake[2]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src/backend/jit/llvm'\n> gmake[1]: *** [Makefile:42: all-backend/jit/llvm-recurse] Error 2\n> gmake[1]: Leaving directory '/opt/cnd/opt28-2_13.3_gmake_all_llvm_fixV2/src'\n> gmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\nI don't know where the -KPIC is coming from. And TBH, I don't see much point\ntrying to fix a scenario involving matching sun studio C with g++.\n\n\n> With ggc it fails at the same step as before.\n> I have attached the log files of the SunStudio and gcc runs to the email.\n\nI don't see a failure with gcc.\n\nThe warnings are emitted for every extension and compilation succeeds.\n\nGreetings,\n\nAndres Freund\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager.", "msg_date": "Wed, 13 Dec 2023 15:18:02 +0000", "msg_from": "Sacha Hottinger <[email protected]>", "msg_from_op": true, "msg_subject": "AW: Building PosgresSQL with LLVM fails on Solaris 11.4" }, { "msg_contents": "Hi,\n\nOn 2023-12-13 15:18:02 +0000, Sacha Hottinger wrote:\n> Thanks for your reply.\n> The reason I was suspicious with the warnings of the gcc build was, because gmake check reported 138 out of 202 tests to have failed. I have attached the output of gmake check.\n\nThat'll likely be due to assertion / segmentation failures.\n\nYou'd need to enable core dumps and show a backtrace.\n\nI assume that if you run tests without JIT support (e.g. by export\nPGOPTIONS='-c jit=0'; gmake check), no such problem occurs?\n\n\n> After you mentioned that gcc did not report any errors, just warnings, we installed the build.\n> First, it seeemed to work and SELECT pg_jit_available(); showed \"pg_jit_available\" as \"t\" but the DB showed strange behaviour. I.e. 
not always, but sometimes running \"show parallel_tuple_cost\" caused postmaster to restart a server process.\n> We had to go back to the previous installation.\n> \n> It seems there is definitely something wrong with the result gcc created.\n\nI suspect that the LLVM version you used does something wrong on sparc. Which\nversion of LLVM is it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 21 Dec 2023 04:27:38 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Building PosgresSQL with LLVM fails on Solaris 11.4" } ]
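The exchange above hinges on how --with-llvm builds work: every backend .o is supposed to get a companion LLVM bitcode (.bc) file that the JIT provider can later inline from, but src/backend/port builds its spinlock object from a Sun Studio .s file, which clang cannot parse (it relies on Sun-Studio-specific preprocessor logic, as Andres notes). Andres's attached diffs are not reproduced in the archive, so the rule below is only a hedged sketch of the kind of hack the build log suggests (the "echo | clang ... -xc" command visible above): emit an empty bitcode module for the assembly-only object instead of feeding the .s to clang. It assumes the with_llvm flag and the COMPILE.c.bc helper from src/Makefile.global; the rule name and placement are guesses, not the committed fix, and the recipe line is tab-indented as make requires.

# src/backend/port/Makefile, hypothetical sketch, not Andres's actual attachment
ifeq ($(with_llvm), yes)
# tas.o comes from Sun Studio assembly that clang cannot parse, so emit an
# empty bitcode module from stdin just to keep the one-.bc-per-.o invariant
# that the JIT install rules expect.
tas.bc: tas.s
	echo | $(COMPILE.c.bc) -xc -o $@ -
endif

With a rule of this shape the gcc build is unaffected (no .s-derived objects are used there), and the Sun Studio build gets past bitcode generation, which matches the point in the thread where the failure moved on to the C++ side (the -KPIC flag being handed to g++).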
[ { "msg_contents": "I'm currently spending some time looking for \"Needs Review\" threads\nthat have some combination of 1) stalled for a long time and 2) lack\nconsensus, with an aim to clearing them out of CF.\n\nSince we got a late start, I will begin moving entries over to January\non Monday. I thought I remembered discussion on a \"bulk move\" button,\nbut if it's there, I can't find it...\n\n--\nJohn Naylor\n\n\n", "msg_date": "Sat, 2 Dec 2023 14:23:28 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2023-11 is almost over" } ]
[ { "msg_contents": "Hi,\n\nI was recently looking at the code around the WAL_DEBUG macro and GUC.\nWhen enabled, the code does the following:\n\n1. Creates a memory context that allows pallocs within critical sections.\n2. Decodes (not logical decoding but DecodeXLogRecord()) every WAL\nrecord using the above memory context that's generated in the server\nand emits a LOG message.\n3. Emits messages at DEBUG level in AdvanceXLInsertBuffer(), at LOG\nlevel in XLogFlush(), at LOG level in XLogBackgroundFlush().\n4. Emits messages at LOG level for every record that the server\nreplays/applies in the main redo loop.\n\nI enabled this code by compiling with the WAL_DEBUG macro and setting\nwal_debug GUC to on. Firstly, the compilation on Windows failed\nbecause XL_ROUTINE was passed inappropriately for XLogReaderAllocate()\nused. After fixing the compilation issue [1], the TAP tests started to\nfail [2] which I'm sure we can fix.\n\nI started to think if this code is needed at all in production. How\nabout we do either of the following?\n\na) Remove the WAL_DEBUG macro and move all the code under the\nwal_debug GUC? Since the GUC is already marked as DEVELOPER_OPTION,\nthe users will know the consequences of enabling it in production.\nb) Remove both the WAL_DEBUG macro and the wal_debug GUC. I don't\nthink (2) is needed to be in core especially when tools like\npg_walinspect and pg_waldump can do the same job. And, the messages in\n(3) and (4) can be turned to some DEBUGX level without being under the\nWAL_DEBUG macro.\n\nI have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\nproduction, if we have somebody using it, I think we need to fix the\ncompilation and test failure issues, and start testing this code\n(perhaps I can think of setting up a buildfarm member to help here).\n\nI'm in favour of option (b), but I'd like to hear more thoughts on this.\n\n[1]\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex ca7100d4db..52633793d4 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -1023,8 +1023,12 @@ XLogInsertRecord(XLogRecData *rdata,\n\npalloc(DecodeXLogRecordRequiredSpace(record->xl_tot_len));\n\n if (!debug_reader)\n- debug_reader =\nXLogReaderAllocate(wal_segment_size, NULL,\n-\n XL_ROUTINE(), NULL);\n+ debug_reader = XLogReaderAllocate(wal_segment_size,\n+\n NULL,\n+\n XL_ROUTINE(.page_read = NULL,\n+\n .segment_open = NULL,\n+\n .segment_close = NULL),\n+\n NULL);\n\n[2]\nsrc/test/subscription/t/029_on_error.pl because the test gets LSN from\nan error context message emitted to server logs which the new\nWAL_DEBUG LOG messages flood the server logs with.\nsrc/bin/initdb/t/001_initdb.pl because the WAL_DEBUG LOG messages are\nemitted to the console while initdb.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 2 Dec 2023 19:36:29 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n> I started to think if this code is needed at all in production. How\n> about we do either of the following?\n\nWell, the fact that this code is hidden behind an off-by-default macro\nseems like a pretty strong indicator that it is not intended for\nproduction. But that doesn't mean we should remove it. 
\n\n> a) Remove the WAL_DEBUG macro and move all the code under the\n> wal_debug GUC? Since the GUC is already marked as DEVELOPER_OPTION,\n> the users will know the consequences of enabling it in production.\n\nI think the key to this option is verifying there's no measurable\nperformance impact.\n\n> b) Remove both the WAL_DEBUG macro and the wal_debug GUC. I don't\n> think (2) is needed to be in core especially when tools like\n> pg_walinspect and pg_waldump can do the same job. And, the messages in\n> (3) and (4) can be turned to some DEBUGX level without being under the\n> WAL_DEBUG macro.\n\nIs there anything provided by wal_debug that can't be found via\npg_walinspect/pg_waldump?\n\n> I have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\n> production, if we have somebody using it, I think we need to fix the\n> compilation and test failure issues, and start testing this code\n> (perhaps I can think of setting up a buildfarm member to help here).\n\n+1 for at least fixing the code and tests, provided we decide to keep it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 2 Dec 2023 16:30:43 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n>> I started to think if this code is needed at all in production. How\n>> about we do either of the following?\n\n> Well, the fact that this code is hidden behind an off-by-default macro\n> seems like a pretty strong indicator that it is not intended for\n> production. But that doesn't mean we should remove it. \n\nAgreed, production is not the question here. The question is whether\nit's of any use to developers either. It looks to me that the code's\nbeen broken since v13, if not before, which very strongly suggests\nthat nobody is using it. Think I'd vote for nuking it rather than\nputting effort into fixing it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Dec 2023 17:46:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Sun, Dec 3, 2023 at 4:16 AM Tom Lane <[email protected]> wrote:\n>\n> Nathan Bossart <[email protected]> writes:\n> > On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n> >> I started to think if this code is needed at all in production. How\n> >> about we do either of the following?\n>\n> > Well, the fact that this code is hidden behind an off-by-default macro\n> > seems like a pretty strong indicator that it is not intended for\n> > production. But that doesn't mean we should remove it.\n>\n> Agreed, production is not the question here. The question is whether\n> it's of any use to developers either. It looks to me that the code's\n> been broken since v13, if not before, which very strongly suggests\n> that nobody is using it. Think I'd vote for nuking it rather than\n> putting effort into fixing it.\n\nHow about something like the attached? 
Please see the commit message\nfor more detailed information.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 3 Dec 2023 20:23:56 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Sun, Dec 3, 2023 at 4:00 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n> > I started to think if this code is needed at all in production. How\n> > about we do either of the following?\n>\n> Well, the fact that this code is hidden behind an off-by-default macro\n> seems like a pretty strong indicator that it is not intended for\n> production. But that doesn't mean we should remove it.\n\nI think all that the WAL_DEBUG code offers can be achieved with other\nmeans after adjusting a few pieces. Please see the commit message in\nthe v1 patch posted in this thread at\nhttps://www.postgresql.org/message-id/CALj2ACW5zPMT09eqXvh2Ve7Kz02HShTwyjG%2BxTzkrzeKtQMnQQ%40mail.gmail.com.\n\n> > a) Remove the WAL_DEBUG macro and move all the code under the\n> > wal_debug GUC? Since the GUC is already marked as DEVELOPER_OPTION,\n> > the users will know the consequences of enabling it in production.\n>\n> I think the key to this option is verifying there's no measurable\n> performance impact.\n\nFWIW, enabling this has a huge impact in production. For instance,\nrecovery TAP tests are ~10% slower with the WAL_DEBUG macro enabled. I\ndon't think we go the route of keeping this code.\n\nWAL_DEBUG macro enabled:\nAll tests successful.\nFiles=38, Tests=531, 157 wallclock secs ( 0.18 usr 0.05 sys + 14.96\ncusr 16.11 csys = 31.30 CPU)\nResult: PASS\n\nHEAD:\nAll tests successful.\nFiles=38, Tests=531, 143 wallclock secs ( 0.15 usr 0.06 sys + 14.24\ncusr 15.62 csys = 30.07 CPU)\nResult: PASS\n\n> > b) Remove both the WAL_DEBUG macro and the wal_debug GUC. I don't\n> > think (2) is needed to be in core especially when tools like\n> > pg_walinspect and pg_waldump can do the same job. And, the messages in\n> > (3) and (4) can be turned to some DEBUGX level without being under the\n> > WAL_DEBUG macro.\n>\n> Is there anything provided by wal_debug that can't be found via\n> pg_walinspect/pg_waldump?\n\nI don't think so. The WAL record decoding can be achieved with\npg_walinspect or pg_waldump. The page comparison check in\ngeneric_xlog.c can be moved under USE_ASSERT_CHECKING macro. PSA v1\npatch posted in this thread.\n\n> > I have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\n> > production, if we have somebody using it, I think we need to fix the\n> > compilation and test failure issues, and start testing this code\n> > (perhaps I can think of setting up a buildfarm member to help here).\n>\n> +1 for at least fixing the code and tests, provided we decide to keep it.\n\nWith no real use of this code in production, instead of fixing\ncompilation issues and TAP test failures to maintain the code, I think\nit's better to adjust a few pieces and remove the other stuff like in\nthe attached v1 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 3 Dec 2023 20:30:24 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" 
}, { "msg_contents": "On Sun, Dec 03, 2023 at 08:30:24PM +0530, Bharath Rupireddy wrote:\n> On Sun, Dec 3, 2023 at 4:00 AM Nathan Bossart <[email protected]> wrote:\n> > On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n> > > b) Remove both the WAL_DEBUG macro and the wal_debug GUC. I don't\n> > > think (2) is needed to be in core especially when tools like\n> > > pg_walinspect and pg_waldump can do the same job. And, the messages in\n> > > (3) and (4) can be turned to some DEBUGX level without being under the\n> > > WAL_DEBUG macro.\n> >\n> > Is there anything provided by wal_debug that can't be found via\n> > pg_walinspect/pg_waldump?\n> \n> I don't think so. The WAL record decoding can be achieved with\n> pg_walinspect or pg_waldump.\n\nCan be, but the WAL_DEBUG model is mighty convenient:\n- Cooperates with backtrace_functions\n- Change log_line_prefix to correlate any log_line_prefix fact with WAL records\n- See WAL records interleaved with non-WAL log messages\n\n> > > I have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\n> > > production, if we have somebody using it, I think we need to fix the\n\nI don't use it in production, but I use it more than any other of our many\nDEBUG macros.\n\n> > > compilation and test failure issues, and start testing this code\n> > > (perhaps I can think of setting up a buildfarm member to help here).\n\nWAL_DEBUG compiles and works just fine on GNU/Linux. I'm not surprised the\nfailure to compile on Windows has escaped notice, because Windows-specific WAL\nbehaviors are so rare. We consistently do our WAL-related development on\nnon-Windows. Needless to say, I wouldn't object to fixing WAL_DEBUG for\nWindows.\n\nFixing tests is less valuable, especially since it's clear when a test fails\nthrough extra messages the test didn't expect. I bet other DEBUG macros make\nsome tests fail that way, which doesn't devalue those macros. A test patch\nmight be okay nonetheless, but a buildfarm member is more likely to have\nnegative value. It would create urgent work. In the hypothetical buildfarm\nmember's absence, the project would be just fine if that work never happens.\nA buildfarm member that compiles but doesn't test could be okay.\n\n\n", "msg_date": "Sun, 3 Dec 2023 11:07:05 -0800", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Sun, Dec 03, 2023 at 11:07:05AM -0800, Noah Misch wrote:\n> Can be, but the WAL_DEBUG model is mighty convenient:\n> - Cooperates with backtrace_functions\n> - Change log_line_prefix to correlate any log_line_prefix fact with WAL records\n> - See WAL records interleaved with non-WAL log messages\n>\n> I don't use it in production, but I use it more than any other of our many\n> DEBUG macros.\n\nSo do I as a quick workaround to check the validity of records\ngenerated without having to spawn a standby replaying the records.\nSince 027_stream_regress.pl exists, I agree that its value has\ndecreased and that all patches should have queries to check their\nrecords anyway, but it does not make it useless for developers.\n\n> Fixing tests is less valuable, especially since it's clear when a test fails\n> through extra messages the test didn't expect. I bet other DEBUG macros make\n> some tests fail that way, which doesn't devalue those macros. A test patch\n> might be okay nonetheless, but a buildfarm member is more likely to have\n> negative value. It would create urgent work. 
In the hypothetical buildfarm\n> member's absence, the project would be just fine if that work never happens.\n> A buildfarm member that compiles but doesn't test could be okay.\n\nI can add the flag in one of my nix animals if we don't have any to\nprovide minimal coverage, that's not an issue for me. I'd suggest to\njust fix the build on Windows, this flag is a low maintenance burden.\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 10:14:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Mon, Dec 04, 2023 at 10:14:36AM +0900, Michael Paquier wrote:\n> I can add the flag in one of my nix animals if we don't have any to\n> provide minimal coverage, that's not an issue for me. I'd suggest to\n> just fix the build on Windows, this flag is a low maintenance burden.\n\nHearing nothing about that, I've reproduced the failure, checked that\nthe proposed fix is OK, and applied it down to 13 where this was\nintroduced.\n\nRegarding the tests, like Noah, I am not really sure that it is worth\nspending resources on fixing as they'd require wal_debug = on to\nbreak.\n--\nMichael", "msg_date": "Wed, 6 Dec 2023 15:00:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On 02.12.23 15:06, Bharath Rupireddy wrote:\n> I enabled this code by compiling with the WAL_DEBUG macro and setting\n> wal_debug GUC to on. Firstly, the compilation on Windows failed\n> because XL_ROUTINE was passed inappropriately for XLogReaderAllocate()\n> used.\n\nThis kind of thing could be mostly avoided if we didn't hide all the \nWAL_DEBUG behind #ifdefs. For example, in the attached patch, I instead \nchanged it so that\n\n if (XLOG_DEBUG)\n\nresolves to\n\n if (false)\n\nin the normal case. That way, we don't need to wrap that in #ifdef \nWAL_DEBUG, and the compiler can see the disabled code and make sure it \ncontinues to build.", "msg_date": "Wed, 6 Dec 2023 12:27:17 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Wed, Dec 6, 2023, at 8:27 AM, Peter Eisentraut wrote:\n> On 02.12.23 15:06, Bharath Rupireddy wrote:\n> > I enabled this code by compiling with the WAL_DEBUG macro and setting\n> > wal_debug GUC to on. Firstly, the compilation on Windows failed\n> > because XL_ROUTINE was passed inappropriately for XLogReaderAllocate()\n> > used.\n> \n> This kind of thing could be mostly avoided if we didn't hide all the \n> WAL_DEBUG behind #ifdefs.\n\nAFAICS LOCK_DEBUG also hides its GUCs behind #ifdefs. The fact that XLOG_DEBUG\nis a variable but seems like a constant surprises me. I would rename it to\nXLogDebug or xlog_debug.\n\n> in the normal case. That way, we don't need to wrap that in #ifdef \n> WAL_DEBUG, and the compiler can see the disabled code and make sure it \n> continues to build.\n\nI didn't check the LOCK_DEBUG code path to make sure it fits in the same\ncategory as WAL_DEBUG. If it does, maybe it is worth to apply the same logic\nthere.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Dec 6, 2023, at 8:27 AM, Peter Eisentraut wrote:On 02.12.23 15:06, Bharath Rupireddy wrote:> I enabled this code by compiling with the WAL_DEBUG macro and setting> wal_debug GUC to on. 
Firstly, the compilation on Windows failed> because XL_ROUTINE was passed inappropriately for XLogReaderAllocate()> used.This kind of thing could be mostly avoided if we didn't hide all the WAL_DEBUG behind #ifdefs.AFAICS LOCK_DEBUG also hides its GUCs behind #ifdefs. The fact that XLOG_DEBUGis a variable but seems like a constant surprises me. I would rename it toXLogDebug or xlog_debug.in the normal case.  That way, we don't need to wrap that in #ifdef WAL_DEBUG, and the compiler can see the disabled code and make sure it continues to build.I didn't check the LOCK_DEBUG code path to make sure it fits in the samecategory as WAL_DEBUG. If it does, maybe it is worth to apply the same logicthere.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 06 Dec 2023 09:46:09 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This kind of thing could be mostly avoided if we didn't hide all the \n> WAL_DEBUG behind #ifdefs. For example, in the attached patch, I instead \n> changed it so that\n> if (XLOG_DEBUG)\n> resolves to\n> if (false)\n> in the normal case. That way, we don't need to wrap that in #ifdef \n> WAL_DEBUG, and the compiler can see the disabled code and make sure it \n> continues to build.\n\nHmm, maybe, but I'm not sure this would be an unalloyed good.\nThe main concern I have is compilers and static analyzers starting\nto bleat about unreachable code (warnings like \"variable set but\nnever used\", or the like, seem plausible). The dead code would\nalso decrease our code coverage statistics, not that those are\nwonderful now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 10:06:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Wed, Dec 06, 2023 at 09:46:09AM -0300, Euler Taveira wrote:\n> On Wed, Dec 6, 2023, at 8:27 AM, Peter Eisentraut wrote:\n>> This kind of thing could be mostly avoided if we didn't hide all the \n>> WAL_DEBUG behind #ifdefs.\n> \n> AFAICS LOCK_DEBUG also hides its GUCs behind #ifdefs. The fact that XLOG_DEBUG\n> is a variable but seems like a constant surprises me. I would rename it to\n> XLogDebug or xlog_debug.\n\n+1. Or just wal_debug for greppability.\n\n>> in the normal case. That way, we don't need to wrap that in #ifdef \n>> WAL_DEBUG, and the compiler can see the disabled code and make sure it \n>> continues to build.\n> \n> I didn't check the LOCK_DEBUG code path to make sure it fits in the same\n> category as WAL_DEBUG. If it does, maybe it is worth to apply the same logic\n> there.\n\nPerformWalRecovery() with its log for RM_XACT_ID is something that\nstresses me a bit though because this is in the main redo loop which\nis never free. The same can be said about GenericXLogFinish() because\nthe extra computation happens while holding a buffer and marking it\ndirty. The ones in xlog.c are free of charge as they are called\noutside any critical portions.\n\nThis makes me wonder how much we need to care about\ntrace_recovery_messages, actually, and I've never used it.\n--\nMichael", "msg_date": "Thu, 7 Dec 2023 09:51:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" 
}, { "msg_contents": "On Wed, Dec 6, 2023, at 9:51 PM, Michael Paquier wrote:\n> PerformWalRecovery() with its log for RM_XACT_ID is something that\n> stresses me a bit though because this is in the main redo loop which\n> is never free. The same can be said about GenericXLogFinish() because\n> the extra computation happens while holding a buffer and marking it\n> dirty. The ones in xlog.c are free of charge as they are called\n> outside any critical portions.\n> \n> This makes me wonder how much we need to care about\n> trace_recovery_messages, actually, and I've never used it.\n\nIIUC trace_recovery_messages was a debugging aid in the 9.0 era when the HS was\nintroduced. I'm also wondering if anyone used it in the past years.\n\nelog.c:\n\n* Intention is to keep this for at least the whole of the 9.0 production\n* release, so we can more easily diagnose production problems in the field.\n* It should go away eventually, though, because it's an ugly and\n* hard-to-explain kluge.\n*/\nint\ntrace_recovery(int trace_level)\n{\n if (trace_level < LOG &&\n trace_level >= trace_recovery_messages)\n return LOG; \n\n return trace_level;\n}\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Dec 6, 2023, at 9:51 PM, Michael Paquier wrote:PerformWalRecovery() with its log for RM_XACT_ID is something thatstresses me a bit though because this is in the main redo loop whichis never free.  The same can be said about GenericXLogFinish() becausethe extra computation happens while holding a buffer and marking itdirty.  The ones in xlog.c are free of charge as they are calledoutside any critical portions.This makes me wonder how much we need to care abouttrace_recovery_messages, actually, and I've never used it.IIUC trace_recovery_messages was a debugging aid in the 9.0 era when the HS wasintroduced. I'm also wondering if anyone used it in the past years.elog.c:* Intention is to keep this for at least the whole of the 9.0 production* release, so we can more easily diagnose production problems in the field.* It should go away eventually, though, because it's an ugly and* hard-to-explain kluge.*/inttrace_recovery(int trace_level){    if (trace_level < LOG &&        trace_level >= trace_recovery_messages)        return LOG;     return trace_level;}--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 06 Dec 2023 23:32:19 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Wed, Dec 06, 2023 at 11:32:19PM -0300, Euler Taveira wrote:\n> IIUC trace_recovery_messages was a debugging aid in the 9.0 era when the HS was\n> introduced. I'm also wondering if anyone used it in the past years.\n\nFWIW, I'd be +1 for getting rid of entirely, with its conditional\nblock in PerformWalRecovery(), as it does not bring any additional\nvalue now that it is possible to achieve much more with pg_waldump\n(pg_xlogdump before that) introduced a couple of years later in 9.3.\n--\nMichael", "msg_date": "Thu, 7 Dec 2023 11:40:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" 
}, { "msg_contents": "On Mon, Dec 4, 2023 at 12:37 AM Noah Misch <[email protected]> wrote:\n>\n> On Sun, Dec 03, 2023 at 08:30:24PM +0530, Bharath Rupireddy wrote:\n> > On Sun, Dec 3, 2023 at 4:00 AM Nathan Bossart <[email protected]> wrote:\n> > > On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n> > > > b) Remove both the WAL_DEBUG macro and the wal_debug GUC. I don't\n> > > > think (2) is needed to be in core especially when tools like\n> > > > pg_walinspect and pg_waldump can do the same job. And, the messages in\n> > > > (3) and (4) can be turned to some DEBUGX level without being under the\n> > > > WAL_DEBUG macro.\n> > >\n> > > Is there anything provided by wal_debug that can't be found via\n> > > pg_walinspect/pg_waldump?\n> >\n> > I don't think so. The WAL record decoding can be achieved with\n> > pg_walinspect or pg_waldump.\n>\n> Can be, but the WAL_DEBUG model is mighty convenient:\n> - Cooperates with backtrace_functions\n> - Change log_line_prefix to correlate any log_line_prefix fact with WAL records\n> - See WAL records interleaved with non-WAL log messages\n\nAgree it helps in all of the above situations, but I'm curious to know\nwhat sorts of problems it helps debug with.\n\nThe interesting pieces that WAL_DEBUG code does are the following:\n\n1. Decodes the WAL record right after it's written to WAL buffers in\nXLogInsertRecord. What problem does it help to detect?\n2. Emits a log message for every WAL record applied in the main redo\napply loop. Enabling this isn't cheap for sure even for developer\nenvironments; I've observed a 10% increase in recovery test time)\n3. Emits log messages for WAL writes/flushes and WAL buffer page\ninitializations. These messages don't have to be hiding under a macro,\nbut a DEBUGX level is sufficient.\n\n> > > > I have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\n> > > > production, if we have somebody using it, I think we need to fix the\n>\n> I don't use it in production, but I use it more than any other of our many\n> DEBUG macros.\n\nI'm just curious to know what sorts of problems WAL_DEBUG code helps\ndebug with. Is the WAL_DEBUG code (1) or (2) or (3) that helped you\nthe most? Is converting the LOG messages (3) to DEBUGX level going to\nhelp in your case? Can you please throw some light on this?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 16:50:30 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Thu, Dec 7, 2023 at 8:10 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Dec 06, 2023 at 11:32:19PM -0300, Euler Taveira wrote:\n> > IIUC trace_recovery_messages was a debugging aid in the 9.0 era when the HS was\n> > introduced. I'm also wondering if anyone used it in the past years.\n>\n> FWIW, I'd be +1 for getting rid of entirely, with\n\n+1 for removing trace_recovery_messages. Firstly, it doesn't cover all\nthe recovery related messages as it promises, so it's an incomplete\nfeature. 
Secondly, it needs a bit of understanding as to how it gels\nwith client_min_messages and log_min_messages.\n\n> its conditional\n> block in PerformWalRecovery(), as it does not bring any additional\n> value now that it is possible to achieve much more with pg_waldump\n> (pg_xlogdump before that) introduced a couple of years later in 9.3.\n\nAnd, I agree that the functionality (description of the WAL record\nbeing applied) of conditional trace_recovery_messages code under\nWAL_DEBUG macro in PerformWalRecovery's main redo apply loop can more\neasily be achieved with either pg_walinspect or pg_waldump. That's my\npoint as well for getting rid of WAL_DEBUG macro related code after\nconverting a few messages to DEBUGX level.\n\nThe comment atop trace_recovery [1] function says it should go away\neventually and seems to have served the purpose when the recovery\nrelated code was introduced in PG 9.0.\n\nFWIW, the attached patch is what I've left with after removing\ntrace_recovery_messages related code, 9 files changed, 19\ninsertions(+), 97 deletions(-).\n\n[1]\n * Intention is to keep this for at least the whole of the 9.0 production\n * release, so we can more easily diagnose production problems in the field.\n * It should go away eventually, though, because it's an ugly and\n * hard-to-explain kluge.\n */\nint\ntrace_recovery(int trace_level)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Dec 2023 17:29:55 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Thu, Dec 7, 2023 at 6:20 AM Bharath Rupireddy\n<[email protected]> wrote:\n> I'm just curious to know what sorts of problems WAL_DEBUG code helps\n> debug with. Is the WAL_DEBUG code (1) or (2) or (3) that helped you\n> the most? Is converting the LOG messages (3) to DEBUGX level going to\n> help in your case? Can you please throw some light on this?\n\nI don't like the idea of removing WAL_DEBUG. I haven't used it in a\nwhile, but I have used it, and it's been very helpful. You can for\nexample run a psql session and easily see what each command you type\ngenerates in terms of WAL, without having to run pg_waldump over and\nover and figure out which output is new. That's not something that I\nneed to do super-commonly, but I have wanted to do it for certain\nprojects, and I don't think that maintaining the WAL_DEBUG code in\ntree is really causing us very much hassle. In fact, I'd argue that of\nall of the various debugging bits that are part of the tree, WAL_DEBUG\nis the most useful by a good margin. Things like OPTIMIZER_DEBUG and\nLOCK_DEBUG seem to me to have much less utility. LOCK_DEBUG for\nexample produces a completely unworkable volume of output even for\nvery simple operations.\n\nI've never been much of a believer in trace_recovery_messages, either,\nbut I'm somewhat sympathetic to the problem it's trying to solve. My\nincremental backup patch set adds a new background process, and what\nare you supposed to do if you have a problem with that process? You\ncan crank up the server debugging level overall, but what if you just\nwant the walsummarizer process, or say the walsummarizer and any\nprocesses trying to take an incremental backup, to do super-detailed\nlogging? We don't have a model for that. 
I thought about adding\ntrace_walsummarizer_messages or debug_walsummarizer or something for\nthis exact reason, but I didn't because the existing\ntrace_recovery_messages setting seems like a kludge I don't want to\npropagate. But that leaves me with nothing other than boosting up the\ndebug level for the whole server, which is an annoying thing to have\nto do if you really only care about one subsystem.\n\nI don't know what the right answer is exactly, but there should be\nsome way of telling the system something more fine-grained than \"do\nmore logging\" or \"do a lot more logging\" or \"do really a lot more\nlogging\" ...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 10:42:45 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Thu, Dec 07, 2023 at 04:50:30PM +0530, Bharath Rupireddy wrote:\n> On Mon, Dec 4, 2023 at 12:37 AM Noah Misch <[email protected]> wrote:\n> > On Sun, Dec 03, 2023 at 08:30:24PM +0530, Bharath Rupireddy wrote:\n> > > On Sun, Dec 3, 2023 at 4:00 AM Nathan Bossart <[email protected]> wrote:\n> > > > On Sat, Dec 02, 2023 at 07:36:29PM +0530, Bharath Rupireddy wrote:\n\n> The interesting pieces that WAL_DEBUG code does are the following:\n> \n> 1. Decodes the WAL record right after it's written to WAL buffers in\n> XLogInsertRecord. What problem does it help to detect?\n\nI think it helped me understand why a test case I was writing didn't reach the\nbug I expected it to reach.\n\n> 2. Emits a log message for every WAL record applied in the main redo\n> apply loop. Enabling this isn't cheap for sure even for developer\n> environments; I've observed a 10% increase in recovery test time)\n> 3. Emits log messages for WAL writes/flushes and WAL buffer page\n> initializations. These messages don't have to be hiding under a macro,\n> but a DEBUGX level is sufficient.\n> \n> > > > > I have no idea if anyone uses WAL_DEBUG macro and wal_debug GUCs in\n> > > > > production, if we have somebody using it, I think we need to fix the\n> >\n> > I don't use it in production, but I use it more than any other of our many\n> > DEBUG macros.\n> \n> I'm just curious to know what sorts of problems WAL_DEBUG code helps\n> debug with. Is the WAL_DEBUG code (1) or (2) or (3) that helped you\n> the most?\n\nFor me, (1) and (2) came up several times, and (3) came up once. I don't\nremember which of (1) or (2) helped more.\n\n> Is converting the LOG messages (3) to DEBUGX level going to\n> help in your case?\n\nNot in my case.\n\n\n", "msg_date": "Thu, 7 Dec 2023 08:35:13 -0800", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" }, { "msg_contents": "On Thu, Dec 07, 2023 at 05:29:55PM +0530, Bharath Rupireddy wrote:\n> The comment atop trace_recovery [1] function says it should go away\n> eventually and seems to have served the purpose when the recovery\n> related code was introduced in PG 9.0.\n> \n> FWIW, the attached patch is what I've left with after removing\n> trace_recovery_messages related code, 9 files changed, 19\n> insertions(+), 97 deletions(-).\n\nLooks acceptable to me. Does somebody object to this removal?\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 13:45:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" 
}, { "msg_contents": "On Fri, Dec 08, 2023 at 01:45:18PM +0900, Michael Paquier wrote:\n> Looks acceptable to me. Does somebody object to this removal?\n\nHearing nothing, done that.\n--\nMichael", "msg_date": "Mon, 11 Dec 2023 11:52:00 +0100", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is WAL_DEBUG related code still relevant today?" } ]
[ { "msg_contents": "Hello hackers,\n\n# MOTIVATION\n\nMy recent experiences with problematic queries in customers motivated\nme to write this patch proposing a new feature to enhance visibility\non what active queries are doing.\n\nPostgreSQL already offers 2 very powerful tools for query troubleshooting:\n\n- EXPLAIN: gives us hints on potential bottlenecks in an execution plan.\n\n- EXPLAIN ANALYZE: shows precisely where bottlenecks are, but the query\nmust finish.\n\nIn my humble opinion we are missing something in the middle. Having\nvisibility\nover in-flight queries would provide more insights than a plain EXPLAIN\nand would allow us to analyze super problematic queries that never finish\na EXPLAIN ANALYZE execution.\n\nConsidering that every active query has an execution plan, the new feature\ncan target not only controlled EXPLAIN statements but also any query in\nprogress. This allows us to identify if a slow active query is using a\ndifferent plan and why (for example, custom settings set a session level\nthat are currently only visible to the backend).\n\n# PROPOSAL\n\nThe feature works similarly to the recently introduced\npg_log_backend_memory_contexts().\n\nThe patch adds function pg_log_backend_explain_plan(PID) to be executed as\nsuperuser in a second backend to signal the target backend to print\nexecution\nplan details in the log.\n\nFor regular queries (called without instrumentation) PG will log the plain\nexplain along with useful details like custom settings.\n\nWhen targeting a query with instrumentation enabled PG will log the complete\nEXPLAIN ANALYZE plan with current row count and, if enabled, timing for each\nnode. Considering that the query is in progress the output will include the\nfollowing per node:\n\n- (never executed) for nodes that weren't touched yet (or\n may never be).\n- (in progress) for nodes currently being executed, ie,\n InstrStartNode was called and clock is ticking there.\n\nParallel workers can be targeted too, where PG will log only the relevant\npart\nof the complete execution plan.\n\n# DEMONSTRATION\n\na) Targeting a not valid PG process:\n\npostgres=# select pg_log_backend_explain_plan(1);\nWARNING: PID 1 is not a PostgreSQL server process\n pg_log_backend_explain_plan\n-----------------------------\n f\n(1 row)\n\nb) Targeting a PG process not running a query:\n\npostgres=# select pg_log_backend_explain_plan(24103);\n pg_log_backend_explain_plan\n-----------------------------\n t\n(1 row)\n\n2023-12-02 16:30:19.979 UTC [24103] LOG: PID 24103 not executing a\nstatement with in-flight explain logging enabled\n\nc) Targeting an active query without any instrumentation:\n\npostgres=# select pg_log_backend_explain_plan(24103);\n pg_log_backend_explain_plan\n-----------------------------\n t\n(1 row)\n\n2023-12-02 16:33:10.968 UTC [24103] LOG: logging explain plan of PID 24103\nQuery Text: select *\nfrom t2 a\ninner join t1 b on a.c1=b.c1\ninner join t1 c on a.c1=c.c1\ninner join t1 d on a.c1=d.c1\ninner join t1 e on a.c1=e.c1;\nGather (cost=70894.63..202643.27 rows=1000000 width=20)\n Workers Planned: 2\n -> Parallel Hash Join (cost=69894.63..101643.27 rows=416667 width=20)\n Hash Cond: (a.c1 = e.c1)\n -> Parallel Hash Join (cost=54466.62..77218.65 rows=416667\nwidth=16)\n Hash Cond: (a.c1 = c.c1)\n -> Parallel Hash Join (cost=15428.00..29997.42 rows=416667\nwidth=8)\n Hash Cond: (b.c1 = a.c1)\n -> Parallel Seq Scan on t1 b (cost=0.00..8591.67\nrows=416667 width=4)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667\nwidth=4)\n -> 
Parallel Seq Scan on t2 a (cost=0.00..8591.67\nrows=416667 width=4)\n -> Parallel Hash (cost=32202.28..32202.28 rows=416667\nwidth=8)\n -> Parallel Hash Join (cost=15428.00..32202.28\nrows=416667 width=8)\n Hash Cond: (c.c1 = d.c1)\n -> Parallel Seq Scan on t1 c (cost=0.00..8591.67\nrows=416667 width=4)\n -> Parallel Hash (cost=8591.67..8591.67\nrows=416667 width=4)\n -> Parallel Seq Scan on t1 d\n (cost=0.00..8591.67 rows=416667 width=4)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4)\n -> Parallel Seq Scan on t1 e (cost=0.00..8591.67 rows=416667\nwidth=4)\nSettings: max_parallel_workers_per_gather = '4'\n\nd) Targeting a parallel query (and its parallel workers) with\ninstrumentation:\n\npostgres=# select pid, backend_type,pg_log_backend_explain_plan(pid)\npostgres=# from pg_stat_activity\npostgres=# where (backend_type = 'client backend' and pid !=\npg_backend_pid())\npostgres=# or backend_type = 'parallel worker';\n pid | backend_type | pg_log_backend_explain_plan\n-------+-----------------+-----------------------------\n 24103 | client backend | t\n 24389 | parallel worker | t\n 24390 | parallel worker | t\n(3 rows)\n\n2023-12-02 16:36:34.840 UTC [24103] LOG: logging explain plan of PID 24103\nQuery Text: explain (analyze, buffers)\nselect *\nfrom t2 a\ninner join t1 b on a.c1=b.c1\ninner join t1 c on a.c1=c.c1\ninner join t1 d on a.c1=d.c1\ninner join t1 e on a.c1=e.c1;\nGather (cost=70894.63..202643.27 rows=1000000 width=20) (never executed)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Hash Join (cost=69894.63..101643.27 rows=416667 width=20)\n(never executed)\n Hash Cond: (a.c1 = e.c1)\n -> Parallel Hash Join (cost=54466.62..77218.65 rows=416667\nwidth=16) (never executed)\n Hash Cond: (a.c1 = c.c1)\n -> Parallel Hash Join (cost=15428.00..29997.42 rows=416667\nwidth=8) (never executed)\n Hash Cond: (b.c1 = a.c1)\n -> Parallel Seq Scan on t1 b (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667\nwidth=4) (never executed)\n -> Parallel Seq Scan on t2 a (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=32202.28..32202.28 rows=416667\nwidth=8) (never executed)\n -> Parallel Hash Join (cost=15428.00..32202.28\nrows=416667 width=8) (never executed)\n Hash Cond: (c.c1 = d.c1)\n -> Parallel Seq Scan on t1 c (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Seq Scan on t1 d\n (cost=0.00..8591.67 rows=416667 width=4) (actual time=0.023..8.688\nrows=107903 loops=1) (in progress)\n Buffers: shared hit=46 read=432\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4)\n(actual time=607.171..607.171 rows=341486 loops=1)\n Buffers: shared hit=717 read=794, temp written=896\n -> Parallel Seq Scan on t1 e (cost=0.00..8591.67 rows=416667\nwidth=4) (actual time=0.009..20.413 rows=341486 loops=1)\n Buffers: shared hit=717 read=794\nSettings: max_parallel_workers_per_gather = '4'\n\n2023-12-02 16:36:34.841 UTC [24389] LOG: logging explain plan of PID 24389\nQuery Text: explain (analyze, buffers)\nselect *\nfrom t2 a\ninner join t1 b on a.c1=b.c1\ninner join t1 c on a.c1=c.c1\ninner join t1 d on a.c1=d.c1\ninner join t1 e on a.c1=e.c1;\nParallel Hash Join (cost=69894.63..101643.27 rows=416667 width=20) (never\nexecuted)\n Hash Cond: (a.c1 = e.c1)\n -> Parallel Hash Join (cost=54466.62..77218.65 rows=416667 width=16)\n(never executed)\n Hash Cond: (a.c1 = c.c1)\n -> Parallel 
Hash Join (cost=15428.00..29997.42 rows=416667\nwidth=8) (never executed)\n Hash Cond: (b.c1 = a.c1)\n -> Parallel Seq Scan on t1 b (cost=0.00..8591.67 rows=416667\nwidth=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4)\n(never executed)\n -> Parallel Seq Scan on t2 a (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=32202.28..32202.28 rows=416667 width=8)\n(never executed)\n -> Parallel Hash Join (cost=15428.00..32202.28 rows=416667\nwidth=8) (never executed)\n Hash Cond: (c.c1 = d.c1)\n -> Parallel Seq Scan on t1 c (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667\nwidth=4) (never executed)\n -> Parallel Seq Scan on t1 d (cost=0.00..8591.67\nrows=416667 width=4) (actual time=0.024..7.486 rows=99146 loops=1) (in\nprogress)\n Buffers: shared hit=43 read=396\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4) (actual\ntime=595.768..595.768 rows=329056 loops=1)\n Buffers: shared hit=752 read=704, temp written=868\n -> Parallel Seq Scan on t1 e (cost=0.00..8591.67 rows=416667\nwidth=4) (actual time=0.003..20.849 rows=329056 loops=1)\n Buffers: shared hit=752 read=704\nSettings: max_parallel_workers_per_gather = '4'\n\n2023-12-02 16:36:34.844 UTC [24390] LOG: logging explain plan of PID 24390\nQuery Text: explain (analyze, buffers)\nselect *\nfrom t2 a\ninner join t1 b on a.c1=b.c1\ninner join t1 c on a.c1=c.c1\ninner join t1 d on a.c1=d.c1\ninner join t1 e on a.c1=e.c1;\nParallel Hash Join (cost=69894.63..101643.27 rows=416667 width=20) (never\nexecuted)\n Hash Cond: (a.c1 = e.c1)\n -> Parallel Hash Join (cost=54466.62..77218.65 rows=416667 width=16)\n(never executed)\n Hash Cond: (a.c1 = c.c1)\n -> Parallel Hash Join (cost=15428.00..29997.42 rows=416667\nwidth=8) (never executed)\n Hash Cond: (b.c1 = a.c1)\n -> Parallel Seq Scan on t1 b (cost=0.00..8591.67 rows=416667\nwidth=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4)\n(never executed)\n -> Parallel Seq Scan on t2 a (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=32202.28..32202.28 rows=416667 width=8)\n(never executed)\n -> Parallel Hash Join (cost=15428.00..32202.28 rows=416667\nwidth=8) (never executed)\n Hash Cond: (c.c1 = d.c1)\n -> Parallel Seq Scan on t1 c (cost=0.00..8591.67\nrows=416667 width=4) (never executed)\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667\nwidth=4) (never executed)\n -> Parallel Seq Scan on t1 d (cost=0.00..8591.67\nrows=416667 width=4) (actual time=0.005..7.186 rows=98901 loops=1) (in\nprogress)\n Buffers: shared hit=11 read=427\n -> Parallel Hash (cost=8591.67..8591.67 rows=416667 width=4) (actual\ntime=594.224..594.224 rows=329458 loops=1)\n Buffers: shared hit=708 read=750, temp written=864\n -> Parallel Seq Scan on t1 e (cost=0.00..8591.67 rows=416667\nwidth=4) (actual time=0.955..21.233 rows=329458 loops=1)\n Buffers: shared hit=708 read=750\nSettings: max_parallel_workers_per_gather = '4'\n\n# IMPLEMENTATION DETAILS\n\n- Process signaling\n\nThe whole process signaling implementation is identical to the logic done\nfor pg_log_backend_memory_contexts(). 
After signaling a process, the\nultimate\nfunction called to perform the plan logging is\nProcessLogExplainPlanInterrupt()\nin explain.c.\n\n- How to track a currently running query?\n\nExplain plans are printed via a QueryDesc structure so we need to be able\nto access that object for the currently running query.\n\nFor a simple select query, where the QueryDesc is created here (\nhttps://github.com/postgres/postgres/blob/REL_16_STABLE/src/backend/tcop/pquery.c#L495\n)\nthe QueryDesc is accessible via the global ActivePortal pointer as the\nobjects is stored here (\nhttps://github.com/postgres/postgres/blob/REL_16_STABLE/src/backend/tcop/pquery.c#L522\n).\n\nThe problem is that for EXPLAIN commands the QueryDesc created here (\nhttps://github.com/postgres/postgres/blob/REL_16_STABLE/src/backend/commands/explain.c#L575\n)\nisn't accessible externally. It exists only in that code context.\n\nSo my solution was to have a global pointer in explain.c that is either\nNULL or is pointed to the currently active QueryDesc. At the end of\nstandard_ExecutorStart() in execMain.c I call new function\nExplainTrackQuery(QueryDesc)\nin explain.c that will take care of pointing the global pointer to the\nQueryDesc instance.\n\nThis is an important part of the code. The overhead of the implementation\nis that every query will do the new logic of assigning the global pointer\nand making sure pointer is always valid (see next section).\n\n- How to make sure the new global pointer is always valid?\n\nThe global pointer starts as NULL, gets assigned via\nExplainTrackQuery(QueryDesc)\nand gets cleared with the help of a MemoryContextCallback.\n\nThe strategy there is that a MemoryContextCallback will be assigned\nin the same MemoryContext where the tracked QueryDesc was created. When\nthe MemoryContext is gone (executor is complete) the QueryDesc instance\nwill be destroyed and function QueryDescReleaseFunc() in explain.c will\nbe called to clear the global pointer. With that we can make sure that\nthe pointer always get cleared, even if the query gets cancelled.\n\n- Safely printing in-flight execution plans\n\nA plan string is built in function ExplainNode here (\nhttps://github.com/postgres/postgres/blob/REL_16_STABLE/src/backend/commands/explain.c#L1178\n)\nwhich is called at the end of a query execution when EXPLAIN is used.\nThat function performs logic using a PlanState (part of QueryDesc) of\nthe running query and a ExplainState.\n\nThe main challenge there is that ExplainNode calls InstrEndLoop which\nchanges values in Instrumentation. This is ok for a regular EXPLAIN\nwhere the query is already complete but not ok for the new feature with\nin-flight explains.\n\nSo the new code has custom logic to clone Instrumentation instance of\nthe current node. The cloned object can be safely written.\n\nFunction InstrEndLoop has a safety rule here (\nhttps://github.com/postgres/postgres/blob/REL_16_STABLE/src/backend/executor/instrument.c#L148\n)\nthat prevents adjusting instrumentation details in a running node. 
This\nnever happens in the current code logic but with the new in-flight\nexplain it will happen very often.\n\nI didn't want to remove this safety rule as InstrEndLoop gets called in\nother places too (even in auto_explain) so the solution was to keep\nInstrEndLoop and have a new InstrEndLoopForce for the in-flight explain.\nBoth InstrEndLoop and InstrEndLoopForce call a new internal\nInstrEndLoopInternal to avoid duplicating the code.\n\n- Memory management\n\nConsidering that pg_log_backend_explain_plan() can be called indefinite\ntimes in the same query execution, all allocated objects in the new\nimplementation (via palloc) are manually deallocated. This avoids private\nmemory to keep growing until MemoryContext is released.\n\n- ExplainState customization\n\nA ExplainState is allocated and customized for the in-flight logging.\nInstrumentation related settings are enabled based on how the target\nquery started, which is usually via EXPLAIN ANALYZE or with auto_explain.\n\nes = NewExplainState();\nes->in_flight = true;\nes->analyze = currentQueryDesc->instrument_options;\nes->buffers = (currentQueryDesc->instrument_options &\n INSTRUMENT_BUFFERS) != 0;\nes->wal = (currentQueryDesc->instrument_options &\n INSTRUMENT_WAL) != 0;\nes->timing = (currentQueryDesc->instrument_options &\n INSTRUMENT_TIMER) != 0;\n\nThere are other settings that I currently selected some static values\nfor testing:\n\nes->summary = (es->analyze);\nes->format = EXPLAIN_FORMAT_TEXT;\nes->verbose = false;\nes->settings = true;\n\nFor those we can think about customizations like global settings or\npassing through attributes in pg_log_backend_explain_plan(). There\nis definitely room for improvement here.\n\n- Implementation overhead\n\nAs mentioned earlier, the new feature adds overhead by having\nto adjust the query desc global pointer in every QueryDesc that\npasses through standard_ExecutorStart(). If we think this is not\na good idea we can think about moving ExplainTrackQuery(QueryDesc)\nto specific QueryDesc allocations, like having the feature just\nfor EXPLAIN commands. But that would limit what we can inspect.\n\n# FINAL CONSIDERATIONS\n\nThis should be enough for an initial proposal. Apologies for the huge\nmail. This is my first patch so I am probably missing a lot of standards\nand good practices the community is already familiar with.\n\nPlus, I haven't implemented any tests yet. If you think it is worth\nconsidering this new feature I can work on them.\n\nKind Regards,\n\nRafael Castro.", "msg_date": "Sat, 2 Dec 2023 16:30:59 -0300", "msg_from": "Rafael Thofehrn Castro <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal: In-flight explain logging" }, { "msg_contents": "Have you seen the other recent patch regarding this? [1] The mailing\nlist thread was active pretty recently. The submission is marked as\nNeeds Review. I haven't looked at either patch, but the proposals are\nvery similar as I understand it.\n\n[1]: https://commitfest.postgresql.org/45/4345/\n\n\n", "msg_date": "Sat, 2 Dec 2023 12:20:40 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: In-flight explain logging" }, { "msg_contents": "Hello Maciek,\n\nThanks for pointing that out. They are indeed super similar. Before I wrote\nthe patch I searched for\n\"explain\" related ones. 
I guess I should have performed a better search.\n\nComparing the patches, there is one main difference: the existing patch\nprints only the plan without\nany instrumentation details of the current execution state at that\nparticular time. That is precisely what\nI am looking for with this new feature.\n\nI guess the way to proceed here would be to use the already existing patch\nas a lot of work was done\nthere already.\n\nHello Maciek,Thanks for pointing that out. They are indeed super similar. Before I wrote the patch I searched for\"explain\" related ones. I guess I should have performed a better search.Comparing the patches, there is one main difference: the existing patch prints only the plan withoutany instrumentation details of the current execution state at that particular time. That is precisely what I am looking for with this new feature.I guess the way to proceed here would be to use the already existing patch as a lot of work was donethere already.", "msg_date": "Sat, 2 Dec 2023 17:56:56 -0300", "msg_from": "Rafael Thofehrn Castro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal: In-flight explain logging" }, { "msg_contents": "Hi Rafael,\n\nOn Sun, Dec 3, 2023 at 2:27 AM Rafael Thofehrn Castro\n<[email protected]> wrote:\n>\n> Hello Maciek,\n>\n> Thanks for pointing that out. They are indeed super similar. Before I wrote the patch I searched for\n> \"explain\" related ones. I guess I should have performed a better search.\n>\n> Comparing the patches, there is one main difference: the existing patch prints only the plan without\n> any instrumentation details of the current execution state at that particular time. That is precisely what\n> I am looking for with this new feature.\n>\n> I guess the way to proceed here would be to use the already existing patch as a lot of work was done\n> there already.\n\nIt might help if you add an incremental patch for reporting\ninstrumentation details on top of already existing patches on that\nthread.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 4 Dec 2023 21:48:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal: In-flight explain logging" } ]
[ { "msg_contents": "hey guys,\n\nWe notice Postgres logs, pg_stat_statements and pg_stat_activity will\nrecord passwords when using \"CREATE\" statement to create user with\npassword. Can we provide users with an option to obfuscate those passwords?\n\nYours,\nGuanqun\n\nhey guys,We notice Postgres logs, pg_stat_statements and pg_stat_activity will record passwords when using \"CREATE\" statement to create user with password. Can we provide users with an option to obfuscate those passwords?Yours,Guanqun", "msg_date": "Sat, 2 Dec 2023 15:36:44 -0500", "msg_from": "Guanqun Yang <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal obfuscate password in pg logs" }, { "msg_contents": "Guanqun Yang <[email protected]> writes:\n> We notice Postgres logs, pg_stat_statements and pg_stat_activity will\n> record passwords when using \"CREATE\" statement to create user with\n> password. Can we provide users with an option to obfuscate those passwords?\n\nSee the many, many prior discussions of this idea.\nThe short answer is that you're better off securing your logs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Dec 2023 16:04:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal obfuscate password in pg logs" } ]
[ { "msg_contents": "Hi hackers,\n\nThere is a breaking change of API since the v2.12.0 of libxml2[1][2]. My\ncompiler complains about incompatible function signatures:\n\n\n/usr/bin/clang -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla\n-Werror=unguarded-availability-new -Wendif-labels -Wmissing\n-format-attribute -Wcast-function-type -Wformat-security\n-fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-unused-command-line-argument -Wno-compou\nnd-token-split-by-macro -Wno-cast-function-type-strict -g -Og -g3 -I. -I.\n-I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o\nxml.o xml.c\nxml.c:1199:45: error: incompatible function pointer types passing 'void\n(void *, xmlErrorPtr)' (aka 'void (void *, struct _xmlError *)') to\nparameter of type '\nxmlStructuredErrorFunc' (aka 'void (*)(void *, const struct _xmlError *)')\n[-Wincompatible-function-pointer-types]\n xmlSetStructuredErrorFunc((void *) errcxt, xml_errorHandler);\n ^~~~~~~~~~~~~~~~\n/usr/include/libxml2/libxml/xmlerror.h:898:29: note: passing argument to\nparameter 'handler' here\n xmlStructuredErrorFunc handler);\n ^\nxml.c:4806:55: error: incompatible function pointer types passing 'void\n(void *, xmlErrorPtr)' (aka 'void (void *, struct _xmlError *)') to\nparameter of type '\nxmlStructuredErrorFunc' (aka 'void (*)(void *, const struct _xmlError *)')\n[-Wincompatible-function-pointer-types]\n xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt,\nxml_errorHandler);\n\n ^~~~~~~~~~~~~~~~\n/usr/include/libxml2/libxml/xmlerror.h:898:29: note: passing argument to\nparameter 'handler' here\n xmlStructuredErrorFunc handler);\n ^\nxml.c:4860:55: error: incompatible function pointer types passing 'void\n(void *, xmlErrorPtr)' (aka 'void (void *, struct _xmlError *)') to\nparameter of type '\nxmlStructuredErrorFunc' (aka 'void (*)(void *, const struct _xmlError *)')\n[-Wincompatible-function-pointer-types]\n xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt,\nxml_errorHandler);\n\n ^~~~~~~~~~~~~~~~\n/usr/include/libxml2/libxml/xmlerror.h:898:29: note: passing argument to\nparameter 'handler' here\n xmlStructuredErrorFunc handler);\n ^\nxml.c:5003:55: error: incompatible function pointer types passing 'void\n(void *, xmlErrorPtr)' (aka 'void (void *, struct _xmlError *)') to\nparameter of type '\nxmlStructuredErrorFunc' (aka 'void (*)(void *, const struct _xmlError *)')\n[-Wincompatible-function-pointer-types]\n xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt,\nxml_errorHandler);\n\n ^~~~~~~~~~~~~~~~\n/usr/include/libxml2/libxml/xmlerror.h:898:29: note: passing argument to\nparameter 'handler' here\n xmlStructuredErrorFunc handler);\n ^\n4 errors generated.\nmake[4]: *** [<builtin>: xml.o] Error 1\n\n\nHere is a quick workaround for it.\n\n[1]\nhttps://github.com/GNOME/libxml2/commit/61034116d0a3c8b295c6137956adc3ae55720711\n[2]\nhttps://github.com/GNOME/libxml2/commit/45470611b047db78106dcb2fdbd4164163c15ab7\n\nBest Regards,\nXing", "msg_date": "Sun, 3 Dec 2023 23:17:55 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Make PostgreSQL work with newer libxml2." } ]
[ { "msg_contents": "Hi,\n\nThe commit 44cac934 replaced \"char buf[BLCKSZ]\" with PGAlignedBlock to\navoid issues on alignment-picky hardware. While it replaced most of the\ninstances, there are still some more left. How about we use PGAlignedBlock\nthere too, something like the attached patch? A note [2] in the commit\n44cac934 says that ensuring proper alignment makes kernel data transfers\nfasters and the left-over \"char buf[BLCKSZ]\" either do read() or write()\nsystem calls, so it might be worth to align them with PGAlignedBlock.\n\nThoughts?\n\nPS: FWIW, I verified what difference actually char buf[BLCKSZ] and the\nunion PGAlignedBlock does make with alignment with a sample code like [3]\nwhich gives a different alignment requirement, see below:\n\nsize of data 8192, alignment of data 1\nsize of data_aligned 8192, alignment of data_aligned 8\n\n[1]\ncommit 44cac9346479d4b0cc9195b0267fd13eb4e7442c\nAuthor: Tom Lane <[email protected]>\nDate: Sat Sep 1 15:27:12 2018 -0400\n\n Avoid using potentially-under-aligned page buffers.\n\n[2]\n I used these types even for variables where there's no risk of a\n misaligned access, since ensuring proper alignment should make\n kernel data transfers faster. I also changed some places where\n we had been palloc'ing short-lived buffers, for coding style\n uniformity and to save palloc/pfree overhead.\n\n[3]\n#include <stdio.h>\n\n#define BLCKSZ 8192\n\ntypedef union PGAlignedBlock\n{\n char data[BLCKSZ];\n double force_align_d;\n long long int force_align_i64;\n} PGAlignedBlock;\n\nint main(int argc, char **argv)\n{\n char data[BLCKSZ];\n PGAlignedBlock data_aligned;\n\n printf(\"size of data %ld, alignment of data %ld\\n\", sizeof(data),\n_Alignof(data));\n printf(\"size of data_aligned %ld, alignment of data_aligned %ld\\n\",\nsizeof(data_aligned), _Alignof(data_aligned));\n\n return 0;\n}\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 06:59:13 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Use PGAlignedBlock instead of \"char buf[BLCKSZ]\" in more places" }, { "msg_contents": "On Mon, Dec 04, 2023 at 06:59:13AM +0530, Bharath Rupireddy wrote:\n> The commit 44cac934 replaced \"char buf[BLCKSZ]\" with PGAlignedBlock to\n> avoid issues on alignment-picky hardware. While it replaced most of the\n> instances, there are still some more left. How about we use PGAlignedBlock\n> there too, something like the attached patch? A note [2] in the commit\n> 44cac934 says that ensuring proper alignment makes kernel data transfers\n> fasters and the left-over \"char buf[BLCKSZ]\" either do read() or write()\n> system calls, so it might be worth to align them with PGAlignedBlock.\n> \n> Thoughts?\n\nThe buffers used to write the lock file and the TLI history file are\nnot page buffers, and this could make code readers think that these\nare pages. So I am honestly not sure if there's a point in changing\nthem because the current code is not incorrect, isn't it? 
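The compile errors reported above come from libxml2 2.12 changing the structured error callback type from taking xmlErrorPtr to taking const xmlError *. One common way to keep a single handler building against both header generations is to pick the handler signature with a version guard. The following is only a sketch of that idea (it assumes LIBXML_VERSION 21200 corresponds to release 2.12.0), not necessarily how the committed fix is written:

#include <libxml/xmlerror.h>

/*
 * libxml2 >= 2.12.0 declares xmlStructuredErrorFunc as
 *     void (*)(void *userData, const xmlError *error);
 * older releases use the non-const xmlErrorPtr.  Declaring the handler
 * to match the headers being built against keeps the pointer passed to
 * xmlSetStructuredErrorFunc() type-correct in both cases.
 */
#if LIBXML_VERSION >= 21200
static void xml_errorHandler(void *data, const xmlError *error);
#else
static void xml_errorHandler(void *data, xmlErrorPtr error);
#endif

As long as the handler body only reads from the error struct, the same body compiles unchanged under either declaration.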
It looks\nlike 2042b3428d39 for the TLI history file and 52948169bcdd for the\nlock file began using BLCKSZ because that was just a handy thing to\ndo, and because we know they would never get beyond that.\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 14:46:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use PGAlignedBlock instead of \"char buf[BLCKSZ]\" in more places" }, { "msg_contents": "On 04.12.23 06:46, Michael Paquier wrote:\n> On Mon, Dec 04, 2023 at 06:59:13AM +0530, Bharath Rupireddy wrote:\n>> The commit 44cac934 replaced \"char buf[BLCKSZ]\" with PGAlignedBlock to\n>> avoid issues on alignment-picky hardware. While it replaced most of the\n>> instances, there are still some more left. How about we use PGAlignedBlock\n>> there too, something like the attached patch? A note [2] in the commit\n>> 44cac934 says that ensuring proper alignment makes kernel data transfers\n>> fasters and the left-over \"char buf[BLCKSZ]\" either do read() or write()\n>> system calls, so it might be worth to align them with PGAlignedBlock.\n>>\n>> Thoughts?\n> \n> The buffers used to write the lock file and the TLI history file are\n> not page buffers, and this could make code readers think that these\n> are pages.\n\nThe type is called \"aligned block\", not \"aligned buffer\" or \"aligned \npage\", so I don't think it's incorrect to try to use it.\n\nSo I am honestly not sure if there's a point in changing\n> them because the current code is not incorrect, isn't it? It looks\n> like 2042b3428d39 for the TLI history file and 52948169bcdd for the\n> lock file began using BLCKSZ because that was just a handy thing to\n> do, and because we know they would never get beyond that.\n\nYeah, it's not clear why these need to be block-sized. We shouldn't \nperpetuate this without more clarity about this.\n\n\n\n", "msg_date": "Mon, 4 Dec 2023 15:53:44 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use PGAlignedBlock instead of \"char buf[BLCKSZ]\" in more places" }, { "msg_contents": "Hi,\n\nOn 2023-12-04 15:53:44 +0100, Peter Eisentraut wrote:\n> On 04.12.23 06:46, Michael Paquier wrote:\n> > On Mon, Dec 04, 2023 at 06:59:13AM +0530, Bharath Rupireddy wrote:\n> > > The commit 44cac934 replaced \"char buf[BLCKSZ]\" with PGAlignedBlock to\n> > > avoid issues on alignment-picky hardware. While it replaced most of the\n> > > instances, there are still some more left. How about we use PGAlignedBlock\n> > > there too, something like the attached patch? A note [2] in the commit\n> > > 44cac934 says that ensuring proper alignment makes kernel data transfers\n> > > fasters and the left-over \"char buf[BLCKSZ]\" either do read() or write()\n> > > system calls, so it might be worth to align them with PGAlignedBlock.\n> > > \n> > > Thoughts?\n> > \n> > The buffers used to write the lock file and the TLI history file are\n> > not page buffers, and this could make code readers think that these\n> > are pages.\n> \n> The type is called \"aligned block\", not \"aligned buffer\" or \"aligned page\",\n> so I don't think it's incorrect to try to use it.\n\nBlock is a type defined in bufmgr.h...\n\n\n> So I am honestly not sure if there's a point in changing\n> > them because the current code is not incorrect, isn't it? 
It looks\n> > like 2042b3428d39 for the TLI history file and 52948169bcdd for the\n> > lock file began using BLCKSZ because that was just a handy thing to\n> > do, and because we know they would never get beyond that.\n> \n> Yeah, it's not clear why these need to be block-sized. We shouldn't\n> perpetuate this without more clarity about this.\n\nIf we change something, we should consider making buffers like these aligned\nto page sizes, rather than just MAXALIGNED.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Dec 2023 09:47:24 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use PGAlignedBlock instead of \"char buf[BLCKSZ]\" in more places" }, { "msg_contents": "On Mon, Dec 04, 2023 at 09:47:24AM -0800, Andres Freund wrote:\n> If we change something, we should consider making buffers like these aligned\n> to page sizes, rather than just MAXALIGNED.\n\nYou mean 4k kernel pages, right? That makes sense to me.\n--\nMichael", "msg_date": "Tue, 5 Dec 2023 12:47:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use PGAlignedBlock instead of \"char buf[BLCKSZ]\" in more places" } ]
[ { "msg_contents": "When one tries to connect to a server and port which is protected by a\nfirewall, ones get messages like this:\n\nUnix:\npsql: error: connection to server at \"192.168.0.26\", port 5432 failed:\nConnection timed out\n Is the server running on that host and accepting TCP/IP connections?\n\nWindows:\npsql: error: connection to server at \"192.168.0.26\", port 5432 failed:\nConnection timed out (0x0000274C/10060)\n Is the server running on that host and accepting TCP/IP connections?\n\nBut the hint given is unhelpful, and even positively misleading. If the\nport is blocked by a firewall, it doesn't imply the database server is\nnot listening (if one could just get to it), and it doesn't matter if the\ndatabase server is listening. If for some reason it weren't listening as\nwell as being blocked, making it listen wouldn't help as long it remains\nblocked at the firewall.\n\nIs there some portable way to detect this cause of the connection problem\n(connection timeout) and issue a more suitable hint\nexplicitly mentioning firewalls and routers, or perhaps just no hint at all?\n\nAs far as I know, only a firewall causes this problem, at least on a\npersistent basis. Maybe you could see it sporadically on a vastly\noverloaded server or a server caught in the process of rebooting. It would\nbe better to give a hint that is correct the vast majority of the time than\none that is wrong the vast majority of the time.\n\nThere are a lot of questions about this on, for example, stackoverflow. I\nthink people might be better able to figure it out for themselves if the\nhint were not actively leading them astray.\n\nCheers,\n\nJeff\n\nWhen one tries to connect to a server and port which is protected by a firewall, ones get messages like this:Unix:psql: error: connection to server at \"192.168.0.26\", port 5432 failed: Connection timed out        Is the server running on that host and accepting TCP/IP connections?Windows:psql: error: connection to server at \"192.168.0.26\", port 5432 failed: Connection timed out (0x0000274C/10060)        Is the server running on that host and accepting TCP/IP connections?But the hint given is unhelpful, and even positively misleading.  If the port is blocked by a firewall, it doesn't imply the database server is not listening (if one could just get to it), and it doesn't matter if the database server is listening.  If for some reason it weren't listening as well as being blocked, making it listen wouldn't help as long it remains blocked at the firewall. Is there some portable way to detect this cause of the connection problem (connection timeout) and issue a more suitable hint explicitly mentioning firewalls and routers, or perhaps just no hint at all?As far as I know, only a firewall causes this problem, at least on a persistent basis.  Maybe you could see it sporadically on a vastly overloaded server or a server caught in the process of rebooting.  It would be better to give a hint that is correct the vast majority of the time than one that is wrong the vast majority of the time.There are a lot of questions about this on, for example, stackoverflow.  
I think people might be better able to figure it out for themselves if the hint were not actively leading them astray.Cheers,Jeff", "msg_date": "Sun, 3 Dec 2023 21:46:48 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "connection timeout hint" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> When one tries to connect to a server and port which is protected by a\n> firewall, ones get messages like this:\n\n> Unix:\n> psql: error: connection to server at \"192.168.0.26\", port 5432 failed:\n> Connection timed out\n> Is the server running on that host and accepting TCP/IP connections?\n\n> Windows:\n> psql: error: connection to server at \"192.168.0.26\", port 5432 failed:\n> Connection timed out (0x0000274C/10060)\n> Is the server running on that host and accepting TCP/IP connections?\n\n> But the hint given is unhelpful, and even positively misleading.\n\nWell, maybe. I think you're right that it would be useful to give\ndifferent hints for ETIMEDOUT and ECONNREFUSED, but sadly it seems\nnot uncommon for systems to just drop connection requests that\nthere's no listening process for.\n\nCan we break down the possible cases any further?\n\n* Target host doesn't exist or is down: could give ETIMEDOUT,\nEHOSTDOWN, or EHOSTUNREACH.\n\n* Host is up, but PG server is not running or not listening on that\nsocket: ideally ECONNREFUSED, but could be ETIMEDOUT if the local\nfirewall blocks it.\n\n* Server is running, but firewall blocks reaching it: almost\ncertainly ETIMEDOUT.\n\nAre there more cases?\n\nThe current hint is already reasonably on-point for EHOSTDOWN,\nEHOSTUNREACH, ECONNREFUSED. I agree it's too specific for\nETIMEDOUT, but what exactly would be a better message in view\nof the multitude of possible causes?\n\n> Is there some portable way to detect this cause of the connection problem\n> (connection timeout)\n\nNo, I don't think so. We have the kernel errno to work with\nand little more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Dec 2023 22:10:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: connection timeout hint" } ]
[ { "msg_contents": "Hi,\n\nCommit 5579388d removed a bunch of dead code, formerly needed for old\nsystems that lacked getaddrinfo() in the early days of IPv6. We\nalready used the system getaddrinfo() via either configure-time tests\n(Unix) or runtime tests (Windows using attempt-to-find-with-dlsym that\nalways succeeded on modern systems), so no modern system needed the\nfallback code, except for one small detail:\n\ngetaddrinfo() has a companion function to spit out human readable\nerror messages, and although Windows has that too, it's not thread\nsafe[1]. libpq shouldn't call it, or else an unlucky multi-threaded\nprogram might see an error message messed up by another thread.\n\nHere's a patch to put that bit back. It's simpler than before: the\noriginal replacement had a bunch of #ifdefs for various historical\nreasons, but now we can just handle the 8 documented EAI errors on\nWindows.\n\nNoticed while wondering why the list of symbols reported in bug #18219\ndidn't include gai_strerrorA. That turned out to be because it is\nstatic inline in ws2tcpip.h, and its definition set alarm bells\nringing. Avoid.\n\n[1] https://learn.microsoft.com/en-us/windows/win32/api/ws2tcpip/nf-ws2tcpip-getaddrinfo", "msg_date": "Mon, 4 Dec 2023 16:21:24 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "On second thoughts, I guess it would make more sense to use the exact\nmessages Windows' own implementation would return instead of whatever\nwe had in the past (probably cribbed from some other OS or just made\nup?). I asked CI to spit those out[1]. Updated patch attached. Will\nadd to CF.\n\n[1] https://cirrus-ci.com/task/5816802207334400?logs=main#L15", "msg_date": "Tue, 5 Dec 2023 08:26:54 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "At Tue, 5 Dec 2023 08:26:54 +1300, Thomas Munro <[email protected]> wrote in \n> On second thoughts, I guess it would make more sense to use the exact\n> messages Windows' own implementation would return instead of whatever\n> we had in the past (probably cribbed from some other OS or just made\n> up?). I asked CI to spit those out[1]. Updated patch attached. Will\n> add to CF.\n> \n> [1] https://cirrus-ci.com/task/5816802207334400?logs=main#L15\n\nWindows' gai_strerror outputs messages that correspond to the language\nenvironment. Similarly, I think that the messages that the messages\nreturned by our version should be translatable.\n\nThese messages may add extra line-end periods to the parent (or\ncotaining) messages when appended. This looks as follows.\n\n(auth.c:517 : errdetail_log() : sub (detail) message)\n> Could not translate client host name \"hoge\" to IP address: An address incompatible with the requested protocol was used..\n\n(hba.c:1562 : errmsg() : main message)\n> invalid IP address \"192.0.2.1\": This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server.\n\nWhen I first saw the first version, I thought it would be better to\nuse Windows' own messages, just like you did. 
However, considering the\ncontent of the message above, wouldn't it be better to adhere to\nLinux-style messages overall?\n\nA slightly subtler point is that the second example seems to have a\nmisalignment between the descriptions before and after the colon, but\ndo you think it's not something to be concerned about to this extent?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 05 Dec 2023 11:43:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "On Tue, Dec 5, 2023 at 3:43 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n> At Tue, 5 Dec 2023 08:26:54 +1300, Thomas Munro <[email protected]> wrote in\n> > On second thoughts, I guess it would make more sense to use the exact\n> > messages Windows' own implementation would return instead of whatever\n> > we had in the past (probably cribbed from some other OS or just made\n> > up?). I asked CI to spit those out[1]. Updated patch attached. Will\n> > add to CF.\n> >\n> > [1] https://cirrus-ci.com/task/5816802207334400?logs=main#L15\n>\n> Windows' gai_strerror outputs messages that correspond to the language\n> environment. Similarly, I think that the messages that the messages\n> returned by our version should be translatable.\n\nHmm, that is a good point. Wow, POSIX has given us a terrible\ninterface here, in terms of resource management. Let's see what glibc\ndoes:\n\nhttps://github.com/lattera/glibc/blob/master/sysdeps/posix/gai_strerror.c\nhttps://github.com/lattera/glibc/blob/master/sysdeps/posix/gai_strerror-strs.h\n\nIt doesn't look like it knows about locales at all. And a test\nprogram seems to confirm:\n\n#include <locale.h>\n#include <netdb.h>\n#include <stdio.h>\nint main()\n{\n setlocale(LC_MESSAGES, \"ja_JP.UTF-8\");\n printf(\"%s\\n\", gai_strerror(EAI_MEMORY));\n}\n\nThat prints:\n\nMemory allocation failure\n\nFreeBSD tries harder, and prints:\n\nメモリ割り当て失敗\n\nWe can see that it has a thread-local variable that holds a copy of\nthat localised string until the next call to gai_strerror() in the\nsame thread:\n\nhttps://github.com/freebsd/freebsd-src/blob/main/lib/libc/net/gai_strerror.c\nhttps://github.com/freebsd/freebsd-src/blob/main/lib/libc/nls/ja_JP.UTF-8.msg\n\nFreeBSD's message catalogues would provide a read-made source of\ntranslations, bu... hmm, if glibc doesn't bother and the POSIX\ninterface is unhelpful and Windows' own implementation is so willfully\nunusable, I don't really feel inclined to build a whole thread-local\ncache thing on our side just to support this mess.\n\nSo I think we should just hard-code the error messages in English and\nmove on. However, English is my language so perhaps I should abstain\nand leave it to others to decide how important that is.\n\n> These messages may add extra line-end periods to the parent (or\n> cotaining) messages when appended. This looks as follows.\n>\n> (auth.c:517 : errdetail_log() : sub (detail) message)\n> > Could not translate client host name \"hoge\" to IP address: An address incompatible with the requested protocol was used..\n>\n> (hba.c:1562 : errmsg() : main message)\n> > invalid IP address \"192.0.2.1\": This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server.\n>\n> When I first saw the first version, I thought it would be better to\n> use Windows' own messages, just like you did. 
However, considering the\n> content of the message above, wouldn't it be better to adhere to\n> Linux-style messages overall?\n\nYeah, I agree that either the glibc or the FreeBSD messages would be\nbetter than those now that I've seen them. They are short and sweet.\n\n> A slightly subtler point is that the second example seems to have a\n> misalignment between the descriptions before and after the colon, but\n> do you think it's not something to be concerned about to this extent?\n\nI didn't understand what you meant here.\n\n\n", "msg_date": "Thu, 7 Dec 2023 09:43:37 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "At Thu, 7 Dec 2023 09:43:37 +1300, Thomas Munro <[email protected]> wrote in \r\n> On Tue, Dec 5, 2023 at 3:43 PM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> > Windows' gai_strerror outputs messages that correspond to the language\r\n> > environment. Similarly, I think that the messages that the messages\r\n> > returned by our version should be translatable.\r\n> \r\n> Hmm, that is a good point. Wow, POSIX has given us a terrible\r\n> interface here, in terms of resource management. Let's see what glibc\r\n> does:\r\n>\r\n> https://github.com/lattera/glibc/blob/master/sysdeps/posix/gai_strerror.c\r\n> https://github.com/lattera/glibc/blob/master/sysdeps/posix/gai_strerror-strs.h\r\n\r\nIt is quite a sight for sore eyes...\r\n\r\n> It doesn't look like it knows about locales at all. And a test\r\n> program seems to confirm:\r\n..\r\n> setlocale(LC_MESSAGES, \"ja_JP.UTF-8\");\r\n> printf(\"%s\\n\", gai_strerror(EAI_MEMORY));\r\n> \r\n> That prints:\r\n> \r\n> Memory allocation failure\r\n> \r\n> FreeBSD tries harder, and prints:\r\n> \r\n> メモリ割り当て失敗\r\n> \r\n> We can see that it has a thread-local variable that holds a copy of\r\n> that localised string until the next call to gai_strerror() in the\r\n> same thread:\r\n> \r\n> https://github.com/freebsd/freebsd-src/blob/main/lib/libc/net/gai_strerror.c\r\n> https://github.com/freebsd/freebsd-src/blob/main/lib/libc/nls/ja_JP.UTF-8.msg\r\n> \r\n> FreeBSD's message catalogues would provide a read-made source of\r\n> translations, bu... hmm, if glibc doesn't bother and the POSIX\r\n> interface is unhelpful and Windows' own implementation is so willfully\r\n> unusable, I don't really feel inclined to build a whole thread-local\r\n> cache thing on our side just to support this mess.\r\n\r\nI agree, I wouldn't want to do it either.\r\n\r\n> So I think we should just hard-code the error messages in English and\r\n> move on. However, English is my language so perhaps I should abstain\r\n> and leave it to others to decide how important that is.\r\n\r\nI also think that would be a good way.\r\n\r\n> > These messages may add extra line-end periods to the parent (or\r\n> > cotaining) messages when appended. This looks as follows.\r\n> >\r\n> > (auth.c:517 : errdetail_log() : sub (detail) message)\r\n> > > Could not translate client host name \"hoge\" to IP address: An address incompatible with the requested protocol was used..\r\n> >\r\n> > (hba.c:1562 : errmsg() : main message)\r\n> > > invalid IP address \"192.0.2.1\": This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server.\r\n> >\r\n> > When I first saw the first version, I thought it would be better to\r\n> > use Windows' own messages, just like you did. 
However, considering the\r\n> > content of the message above, wouldn't it be better to adhere to\r\n> > Linux-style messages overall?\r\n> \r\n> Yeah, I agree that either the glibc or the FreeBSD messages would be\r\n> better than those now that I've seen them. They are short and sweet.\r\n> \r\n> > A slightly subtler point is that the second example seems to have a\r\n> > misalignment between the descriptions before and after the colon, but\r\n> > do you think it's not something to be concerned about to this extent?\r\n> \r\n> I didn't understand what you meant here.\r\n\r\nIf it was just a temporary error that couldn't be resolved, it doesn't\r\nmean that the IP address is invalid. If such a cause is possible, then\r\nprobabyly an error message saying \"failed to resolve\" would be more\r\nappropriate. However, I wrote it meaning that there is no need to go\r\nto great length to ensure consistency with this message.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 07 Dec 2023 10:44:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "On Wed, Dec 6, 2023 at 8:45 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n> > So I think we should just hard-code the error messages in English and\n> > move on. However, English is my language so perhaps I should abstain\n> > and leave it to others to decide how important that is.\n>\n> I also think that would be a good way.\n\nConsidering this remark from Kyotaro Horiguchi, I think the\npreviously-posted patch could be committed.\n\nThomas, do you plan to do that, or are there outstanding issues here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:52:11 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "On Tue, 5 Dec 2023 at 00:57, Thomas Munro <[email protected]> wrote:\n>\n> On second thoughts, I guess it would make more sense to use the exact\n> messages Windows' own implementation would return instead of whatever\n> we had in the past (probably cribbed from some other OS or just made\n> up?). I asked CI to spit those out[1]. Updated patch attached. Will\n> add to CF.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n\n=== Applying patches on top of PostgreSQL commit ID\n376c216138c75e161d39767650ea30536f23b482 ===\n=== applying patch ./v2-0001-Fix-gai_strerror-thread-safety-on-Windows.patch\npatching file configure\nHunk #1 succeeded at 16388 (offset 34 lines).\npatching file configure.ac\nHunk #1 succeeded at 1885 (offset 7 lines).\npatching file src/include/port/win32/sys/socket.h\npatching file src/port/meson.build\npatching file src/port/win32gai_strerror.c\ncan't find file to patch at input line 134\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n|index 46df01cc8d..c51296bdb6 100644\n|--- a/src/tools/msvc/Mkvcbuild.pm\n|+++ b/src/tools/msvc/Mkvcbuild.pm\n--------------------------\nNo file to patch. 
Skipping patch.\n1 out of 1 hunk ignored\n\nPlease have a look and post an updated version.\n\n[1] - http://cfbot.cputube.org/patch_46_4682.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 08:25:29 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" }, { "msg_contents": "On Tue, Jan 16, 2024 at 8:52 AM Robert Haas <[email protected]> wrote:\n> On Wed, Dec 6, 2023 at 8:45 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> > > So I think we should just hard-code the error messages in English and\n> > > move on. However, English is my language so perhaps I should abstain\n> > > and leave it to others to decide how important that is.\n> >\n> > I also think that would be a good way.\n>\n> Considering this remark from Kyotaro Horiguchi, I think the\n> previously-posted patch could be committed.\n>\n> Thomas, do you plan to do that, or are there outstanding issues here?\n\nPushed. I went with FreeBSD's error messages (I assume it'd be OK to\ntake glibc's too under fair use but I didn't want to think about\nthat).\n\n\n", "msg_date": "Mon, 12 Feb 2024 11:25:53 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gai_strerror() is not thread-safe on Windows" } ]
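For reference, the approach settled on above — hard-coded English strings for the documented EAI_* codes, so no locale machinery or shared static buffer is involved — boils down to something like the following. This is a simplified sketch (the exact wording committed may differ, the function name here is illustrative, and real code also has to keep the gai_strerror macro from ws2tcpip.h out of the way):

#include <ws2tcpip.h>

/*
 * Thread-safe replacement for gai_strerror() on Windows: return
 * pointers to constant strings instead of formatting into the static
 * buffer used by the inline version in ws2tcpip.h.
 */
const char *
pg_gai_strerror(int errcode)
{
    switch (errcode)
    {
        case EAI_AGAIN:
            return "Temporary failure in name resolution";
        case EAI_BADFLAGS:
            return "Invalid value for ai_flags";
        case EAI_FAIL:
            return "Non-recoverable failure in name resolution";
        case EAI_FAMILY:
            return "ai_family not supported";
        case EAI_MEMORY:
            return "Memory allocation failure";
        case EAI_NONAME:
            return "Name or service not known";
        case EAI_SERVICE:
            return "Servname not supported for ai_socktype";
        case EAI_SOCKTYPE:
            return "ai_socktype not supported";
        default:
            return "Unknown server error";
    }
}

Because every return value is a string literal with static storage duration, concurrent callers can never see each other's messages, which is exactly the property the Windows-provided version lacks.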
[ { "msg_contents": "Hi all,\n\nOn a recent thread about adding support for event triggers with\nREINDEX, a change has been proposed to make REINDEX queries reflect in\nthe logs under the DDL category:\nhttps://www.postgresql.org/message-id/ZW0ltJXJ2Aigvizl%40paquier.xyz\n\nREINDEX being classified as LOGSTMT_ALL comes from 893632be4e17 back\nin 2006, and the code does not know what to do about it. Doing the\nchange would be as simple as that:\n case T_ReindexStmt:\n- lev = LOGSTMT_ALL; /* should this be DDL? */\n+ lev = LOGSTMT_DDL;\n\nREINDEX is philosophically a maintenance command and a Postgres\nextension not in the SQL standard, so it does not really qualify as a\nDDL because it does not do in object definitions, so we could just\ndelete this comment. Or could it be more useful to consider that as a\nspecial case and report it as a DDL, impacting log_statements?\n\nAny thoughts?\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 14:26:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Should REINDEX be listed under DDL?" }, { "msg_contents": "On Mon, 2023-12-04 at 14:26 +0900, Michael Paquier wrote:\n> On a recent thread about adding support for event triggers with\n> REINDEX, a change has been proposed to make REINDEX queries reflect in\n> the logs under the DDL category:\n> https://www.postgresql.org/message-id/ZW0ltJXJ2Aigvizl%40paquier.xyz\n> \n> REINDEX being classified as LOGSTMT_ALL comes from 893632be4e17 back\n> in 2006, and the code does not know what to do about it. Doing the\n> change would be as simple as that:\n> case T_ReindexStmt:\n> - lev = LOGSTMT_ALL; /* should this be DDL? */\n> + lev = LOGSTMT_DDL;\n> \n> REINDEX is philosophically a maintenance command and a Postgres\n> extension not in the SQL standard, so it does not really qualify as a\n> DDL because it does not do in object definitions, so we could just\n> delete this comment. Or could it be more useful to consider that as a\n> special case and report it as a DDL, impacting log_statements?\n\nIt should be qualified just like CREATE INDEX.\nBoth are not covered by the standard, which does not mention indexes,\nsince they are an \"implementation detail\".\n\nI think that it is pretty clear that CREATE INDEX should be considered\nDDL, since it defines (creates) and object. The same should apply to\nREINDEX.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 04 Dec 2023 08:53:56 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should REINDEX be listed under DDL?" }, { "msg_contents": "On Mon, 4 Dec 2023 at 02:54, Laurenz Albe <[email protected]> wrote:\n\n> REINDEX is philosophically a maintenance command and a Postgres\n> > extension not in the SQL standard, so it does not really qualify as a\n> > DDL because it does not do in object definitions, so we could just\n> > delete this comment. Or could it be more useful to consider that as a\n> > special case and report it as a DDL, impacting log_statements?\n>\n> It should be qualified just like CREATE INDEX.\n> Both are not covered by the standard, which does not mention indexes,\n> since they are an \"implementation detail\".\n>\n> I think that it is pretty clear that CREATE INDEX should be considered\n> DDL, since it defines (creates) and object. 
The same should apply to\n> REINDEX.\n>\n\nIsn't REINDEX more like REFRESH MATERIALIZED VIEW and CLUSTER (especially\nwithout USING)?\n\nCREATE INDEX (really, CREATE anything) is clearly DDL as it creates a new\nobject, and DROP and ALTER are the same. But REINDEX just reaches below the\nabstraction and maintains the existing object without changing its\ndefinition.\n\nI don't think whether it's in the standard is the controlling fact. It's\nnot just DDL vs. not; there are naturally at least 3 categories: DDL,\nmaintenance, and data modification.\n\nGetting back to the question at hand, I think REINDEX should be treated the\nsame as VACUUM and CLUSTER (without USING). So if and only if they are\nconsidered DDL for this purpose then REINDEX should be too.\n\nOn Mon, 4 Dec 2023 at 02:54, Laurenz Albe <[email protected]> wrote:\n> REINDEX is philosophically a maintenance command and a Postgres\n> extension not in the SQL standard, so it does not really qualify as a\n> DDL because it does not do in object definitions, so we could just\n> delete this comment.  Or could it be more useful to consider that as a\n> special case and report it as a DDL, impacting log_statements?\n\nIt should be qualified just like CREATE INDEX.\nBoth are not covered by the standard, which does not mention indexes,\nsince they are an \"implementation detail\".\n\nI think that it is pretty clear that CREATE INDEX should be considered\nDDL, since it defines (creates) and object.  The same should apply to\nREINDEX.Isn't REINDEX more like REFRESH MATERIALIZED VIEW and CLUSTER (especially without USING)?CREATE INDEX (really, CREATE anything) is clearly DDL as it creates a new object, and DROP and ALTER are the same. But REINDEX just reaches below the abstraction and maintains the existing object without changing its definition.I don't think whether it's in the standard is the controlling fact. It's not just DDL vs. not; there are naturally at least 3 categories: DDL, maintenance, and data modification.Getting back to the question at hand, I think REINDEX should be treated the same as VACUUM and CLUSTER (without USING). So if and only if they are considered DDL for this purpose then REINDEX should be too.", "msg_date": "Mon, 4 Dec 2023 07:50:10 -0500", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should REINDEX be listed under DDL?" } ]
[ { "msg_contents": "src/backend/nodes/print.c contains a number of functions that print node \ntypes, mostly to stdout. Most of these are not actually used anywhere \nin the code. Are they meant to be inserted into the code ad hoc for \ndebugging? Is anyone using these?\n\nThis file has clearly not been updated substantially in a long time, and \nfunctions like print_expr() are clearly outdated.\n\nelog_node_display() and its callees are used, but I suppose these could \nbe kept locally in postgres.c.\n\nOther than that, is this file still needed?\n\n\n", "msg_date": "Mon, 4 Dec 2023 07:01:42 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Which parts of src/backend/nodes/print.c are used?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> src/backend/nodes/print.c contains a number of functions that print node \n> types, mostly to stdout. Most of these are not actually used anywhere \n> in the code. Are they meant to be inserted into the code ad hoc for \n> debugging? Is anyone using these?\n\nPersonally, I use pprint() a lot. (I invoke it manually from gdb and\nthen look into the postmaster log for results.) Its cousins such as\nformat_node_dump look like they were added by people with slightly\ndifferent tastes in output format, so they probably have a\nconstituency somewhere.\n\nI tend to agree that print_rt() and the other tree-printing routines\nbelow it (down to, but not including, print_slot) are not as useful\nas invoking the outfuncs.c code; but others might think differently.\nSometimes you don't want all the gory detail.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 04 Dec 2023 08:50:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which parts of src/backend/nodes/print.c are used?" }, { "msg_contents": "On 12/4/23 05:50, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> src/backend/nodes/print.c contains a number of functions that print node\n>> types, mostly to stdout. Most of these are not actually used anywhere\n>> in the code. Are they meant to be inserted into the code ad hoc for\n>> debugging? Is anyone using these?\n> \n> Personally, I use pprint() a lot. (I invoke it manually from gdb and\n> then look into the postmaster log for results.) Its cousins such as\n> format_node_dump look like they were added by people with slightly\n> different tastes in output format, so they probably have a\n> constituency somewhere.\n> \n> I tend to agree that print_rt() and the other tree-printing routines\n> below it (down to, but not including, print_slot) are not as useful\n> as invoking the outfuncs.c code; but others might think differently.\n> Sometimes you don't want all the gory detail.\n\nI've wondered about these functions for years. I use pprint a lot, and I've wanted to use \nprint_slot/print_rt/print_tl (especially print_slot), but they never seemed to do anything. For \ninstance debugging `SELECT 1;`:\n\n(gdb) b ExecResult\nBreakpoint 1 at 0x5fcc25f1ffcb: file nodeResult.c, line 68.\n(gdb) c\nContinuing.\n\nBreakpoint 1, ExecResult (pstate=0x5fcc285272f8) at nodeResult.c:68\n68 {\n(gdb) call print_rt(((PlannedStmt *)pstate->plan)->rtable)\n(gdb) call print_slot(pstate->ps_ResultTupleSlot)\n(gdb)\n\nEven with log_min_messages and client_min_messages set to DEBUG5, nothing appears in psql or the log \nor gdb. 
How are you supposed to use these functions?\n\nOr if you want a real table, I still see no output after `ExecScanFetch` with:\n\ncreate table s(i) as select generate_series(1,10);\nselect i from s;\n\nI even tried dup'ing the backend's stdout to a file, but still got nothing:\n\n(gdb) call creat(\"/tmp/pgout\", 0600)\n$1 = 103\n(gdb) call dup2(103, 1)\n'dup2' has unknown return type; cast the call to its declared return type\n(gdb) call (int)dup2(103, 1)\n$2 = 1\n(gdb) b ExecScanFetch\nBreakpoint 1 at 0x5fcc25ef026e: file execScan.c, line 37.\n(gdb) c\nContinuing.\n\nBreakpoint 1, ExecScanFetch (node=node@entry=0x5fcc2852d348, \naccessMtd=accessMtd@entry=0x5fcc25f20b74 <SeqNext>, recheckMtd=recheckMtd@entry=0x5fcc25f20b28 \n<SeqRecheck>) at execScan.c:37\n37 {\n(gdb) fin\nRun till exit from #0 ExecScanFetch (node=node@entry=0x5fcc2852d348, \naccessMtd=accessMtd@entry=0x5fcc25f20b74 <SeqNext>, recheckMtd=recheckMtd@entry=0x5fcc25f20b28 \n<SeqRecheck>) at execScan.c:37\n0x00005fcc25ef044a in ExecScan (node=0x5fcc2852d348, accessMtd=accessMtd@entry=0x5fcc25f20b74 \n<SeqNext>, recheckMtd=recheckMtd@entry=0x5fcc25f20b28 <SeqRecheck>) at execScan.c:180\n180 return ExecScanFetch(node, accessMtd, recheckMtd);\nValue returned is $3 = (TupleTableSlot *) 0x5fcc2852d538\n(gdb) call print_slot($3)\n(gdb)\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Thu, 8 Aug 2024 11:50:57 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which parts of src/backend/nodes/print.c are used?" } ]
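Tying the two answers in that thread together: pprint() is normally invoked from a debugger, and its output shows up wherever the backend's stdout points (typically the postmaster's log or the file stdout was redirected to), not on the psql connection. A minimal session, reusing the breakpoint and variable names from the message quoted above, would look something like this:

(gdb) b ExecResult
(gdb) c
Breakpoint 1, ExecResult (pstate=...) at nodeResult.c:68
(gdb) call pprint(pstate->plan)

The node dump produced from the plan tree is then written to the backend's stdout, so it should be looked for in the server log or redirected output file rather than in the psql session.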
[ { "msg_contents": "Hi,\n\nI want to work on making COPY format extendable. I attach\nthe first patch for it. I'll send more patches after this is\nmerged.\n\n\nBackground:\n\nCurrently, COPY TO/FROM supports only \"text\", \"csv\" and\n\"binary\" formats. There are some requests to support more\nCOPY formats. For example:\n\n* 2023-11: JSON and JSON lines [1]\n* 2022-04: Apache Arrow [2]\n* 2018-02: Apache Avro, Apache Parquet and Apache ORC [3]\n\n(FYI: I want to add support for Apache Arrow.)\n\nThere were discussions how to add support for more formats. [3][4]\nIn these discussions, we got a consensus about making COPY\nformat extendable.\n\nBut it seems that nobody works on this yet. So I want to\nwork on this. (If there is anyone who wants to work on this\ntogether, I'm happy.)\n\n\nSummary:\n\nThe attached patch introduces CopyToFormatOps struct that is\nsimilar to TupleTableSlotOps for TupleTableSlot but\nCopyToFormatOps is for COPY TO format. CopyToFormatOps has\nroutines to implement a COPY TO format.\n\nThe attached patch doesn't change:\n\n* the current behavior (all existing tests are still passed\n without changing them)\n* the existing \"text\", \"csv\" and \"binary\" format output\n implementations including local variable names (the\n attached patch just move them and adjust indent)\n* performance (no significant loss of performance)\n\nIn other words, this is just a refactoring for further\nchanges to make COPY format extendable. If I use \"complete\nthe task and then request reviews for it\" approach, it will\nbe difficult to review because changes for it will be\nlarge. So I want to work on this step by step. Is it\nacceptable?\n\nTODOs that should be done in subsequent patches:\n\n* Add some CopyToState readers such as CopyToStateGetDest(),\n CopyToStateGetAttnums() and CopyToStateGetOpts()\n (We will need to consider which APIs should be exported.)\n (This is for implemeing COPY TO format by extension.)\n* Export CopySend*() in src/backend/commands/copyto.c\n (This is for implemeing COPY TO format by extension.)\n* Add API to register a new COPY TO format implementation\n* Add \"CREATE XXX\" to register a new COPY TO format (or COPY\n TO/FROM format) implementation\n (\"CREATE COPY HANDLER\" was suggested in [5].)\n* Same for COPY FROM\n\n\nPerformance:\n\nWe got a consensus about making COPY format extendable but\nwe should care about performance. [6]\n\n> I think that step 1 ought to be to convert the existing\n> formats into plug-ins, and demonstrate that there's no\n> significant loss of performance.\n\nSo I measured COPY TO time with/without this change. 
You can\nsee there is no significant loss of performance.\n\nData: Random 32 bit integers:\n\n CREATE TABLE data (int32 integer);\n INSERT INTO data\n SELECT random() * 10000\n FROM generate_series(1, ${n_records});\n\nThe number of records: 100K, 1M and 10M\n\n100K without this change:\n\n format,elapsed time (ms)\n text,22.527\n csv,23.822\n binary,24.806\n\n100K with this change:\n\n format,elapsed time (ms)\n text,22.919\n csv,24.643\n binary,24.705\n\n1M without this change:\n\n format,elapsed time (ms)\n text,223.457\n csv,233.583\n binary,242.687\n\n1M with this change:\n\n format,elapsed time (ms)\n text,224.591\n csv,233.964\n binary,247.164\n\n10M without this change:\n\n format,elapsed time (ms)\n text,2330.383\n csv,2411.394\n binary,2590.817\n\n10M with this change:\n\n format,elapsed time (ms)\n text,2231.307\n csv,2408.067\n binary,2473.617\n\n\n[1]: https://www.postgresql.org/message-id/flat/24e3ee88-ec1e-421b-89ae-8a47ee0d2df1%40joeconway.com#a5e6b8829f9a74dfc835f6f29f2e44c5\n[2]: https://www.postgresql.org/message-id/flat/CAGrfaBVyfm0wPzXVqm0%3Dh5uArYh9N_ij%2BsVpUtDHqkB%3DVyB3jw%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/flat/20180210151304.fonjztsynewldfba%40gmail.com\n[4]: https://www.postgresql.org/message-id/flat/3741749.1655952719%40sss.pgh.pa.us#2bb7af4a3d2c7669f9a49808d777a20d\n[5]: https://www.postgresql.org/message-id/20180211211235.5x3jywe5z3lkgcsr%40alap3.anarazel.de\n[6]: https://www.postgresql.org/message-id/3741749.1655952719%40sss.pgh.pa.us\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 04 Dec 2023 15:35:48 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Make COPY format extendable: Extract COPY TO format implementations" }, { "msg_contents": "On Mon, Dec 04, 2023 at 03:35:48PM +0900, Sutou Kouhei wrote:\n> I want to work on making COPY format extendable. I attach\n> the first patch for it. I'll send more patches after this is\n> merged.\n\nGiven the current discussion about adding JSON, I think this could be a\nnice bit of refactoring that could ultimately open the door to providing\nother COPY formats via shared libraries.\n\n> In other words, this is just a refactoring for further\n> changes to make COPY format extendable. If I use \"complete\n> the task and then request reviews for it\" approach, it will\n> be difficult to review because changes for it will be\n> large. So I want to work on this step by step. Is it\n> acceptable?\n\nI think it makes sense to do this part independently, but we should be\ncareful to design this with the follow-up tasks in mind.\n\n> So I measured COPY TO time with/without this change. You can\n> see there is no significant loss of performance.\n> \n> Data: Random 32 bit integers:\n> \n> CREATE TABLE data (int32 integer);\n> INSERT INTO data\n> SELECT random() * 10000\n> FROM generate_series(1, ${n_records});\n\nSeems encouraging. I assume the performance concerns stem from the use of\nfunction pointers. 
Or was there something else?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 12:24:58 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nThanks for replying to this proposal!\n\nIn <20231205182458.GC2757816@nathanxps13>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 5 Dec 2023 12:24:58 -0600,\n Nathan Bossart <[email protected]> wrote:\n\n> I think it makes sense to do this part independently, but we should be\n> careful to design this with the follow-up tasks in mind.\n\nOK. I'll keep updating the \"TODOs\" section in the original\ne-mail. It also includes design in the follow-up tasks. We\ncan discuss the design separately from the patches\nsubmitting. (The current submitted patch just focuses on\nrefactoring but we can discuss the final design.)\n\n> I assume the performance concerns stem from the use of\n> function pointers. Or was there something else?\n\nI think so too.\n\nThe original e-mail that mentioned the performance concern\n[1] didn't say about the reason but the use of function\npointers might be concerned.\n\nIf the currently supported formats (\"text\", \"csv\" and\n\"binary\") are implemented as an extension, it may have more\nconcerns but we will keep them as built-in formats for\ncompatibility. So I think that no more concerns exist for\nthese formats.\n\n\n[1]: https://www.postgresql.org/message-id/flat/3741749.1655952719%40sss.pgh.pa.us#2bb7af4a3d2c7669f9a49808d777a20d\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 06 Dec 2023 11:44:47 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Dec 6, 2023 at 10:45 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for replying to this proposal!\n>\n> In <20231205182458.GC2757816@nathanxps13>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 5 Dec 2023 12:24:58 -0600,\n> Nathan Bossart <[email protected]> wrote:\n>\n> > I think it makes sense to do this part independently, but we should be\n> > careful to design this with the follow-up tasks in mind.\n>\n> OK. I'll keep updating the \"TODOs\" section in the original\n> e-mail. It also includes design in the follow-up tasks. We\n> can discuss the design separately from the patches\n> submitting. (The current submitted patch just focuses on\n> refactoring but we can discuss the final design.)\n>\n> > I assume the performance concerns stem from the use of\n> > function pointers. Or was there something else?\n>\n> I think so too.\n>\n> The original e-mail that mentioned the performance concern\n> [1] didn't say about the reason but the use of function\n> pointers might be concerned.\n>\n> If the currently supported formats (\"text\", \"csv\" and\n> \"binary\") are implemented as an extension, it may have more\n> concerns but we will keep them as built-in formats for\n> compatibility. 
So I think that no more concerns exist for\n> these formats.\n>\n\nFor the modern formats(parquet, orc, avro, etc.), will they be\nimplemented as extensions or in core?\n\nThe patch looks good except for a pair of extra curly braces.\n\n>\n> [1]: https://www.postgresql.org/message-id/flat/3741749.1655952719%40sss.pgh.pa.us#2bb7af4a3d2c7669f9a49808d777a20d\n>\n>\n> Thanks,\n> --\n> kou\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:18:35 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3Jf7kPV3ez5OHu-pFGscKfVyd9KkubMF199etkfz=EPRg@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 11:18:35 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> For the modern formats(parquet, orc, avro, etc.), will they be\n> implemented as extensions or in core?\n\nI think that they should be implemented as extensions\nbecause they will depend of external libraries and may not\nuse C. For example, C++ will be used for Apache Parquet\nbecause the official Apache Parquet C++ implementation\nexists but the C implementation doesn't.\n\n(I can implement an extension for Apache Parquet after we\ncomplete this feature. I'll implement an extension for\nApache Arrow with the official Apache Arrow C++\nimplementation. And it's easy that we convert Apache Arrow\ndata to Apache Parquet with the official Apache Parquet\nimplementation.)\n\n> The patch looks good except for a pair of extra curly braces.\n\nThanks for the review! I attach the v2 patch that removes\nextra curly braces for \"if (isnull)\".\n\n\nThanks,\n-- \nkou", "msg_date": "Wed, 06 Dec 2023 15:19:08 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Dec 6, 2023 at 2:19 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3Jf7kPV3ez5OHu-pFGscKfVyd9KkubMF199etkfz=EPRg@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 11:18:35 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > For the modern formats(parquet, orc, avro, etc.), will they be\n> > implemented as extensions or in core?\n>\n> I think that they should be implemented as extensions\n> because they will depend of external libraries and may not\n> use C. For example, C++ will be used for Apache Parquet\n> because the official Apache Parquet C++ implementation\n> exists but the C implementation doesn't.\n>\n> (I can implement an extension for Apache Parquet after we\n> complete this feature. I'll implement an extension for\n> Apache Arrow with the official Apache Arrow C++\n> implementation. And it's easy that we convert Apache Arrow\n> data to Apache Parquet with the official Apache Parquet\n> implementation.)\n>\n> > The patch looks good except for a pair of extra curly braces.\n>\n> Thanks for the review! 
I attach the v2 patch that removes\n> extra curly braces for \"if (isnull)\".\n>\nFor the extra curly braces, I mean the following code block in\nCopyToFormatBinaryStart:\n\n+ { <-- I thought this is useless?\n+ /* Generate header for a binary copy */\n+ int32 tmp;\n+\n+ /* Signature */\n+ CopySendData(cstate, BinarySignature, 11);\n+ /* Flags field */\n+ tmp = 0;\n+ CopySendInt32(cstate, tmp);\n+ /* No header extension */\n+ tmp = 0;\n+ CopySendInt32(cstate, tmp);\n+ }\n\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 6 Dec 2023 15:11:34 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3K9dE2gt3+K+h=DwTqMenR84aeYuYS+cty3SR3LAeDBAQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 15:11:34 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> For the extra curly braces, I mean the following code block in\n> CopyToFormatBinaryStart:\n> \n> + { <-- I thought this is useless?\n> + /* Generate header for a binary copy */\n> + int32 tmp;\n> +\n> + /* Signature */\n> + CopySendData(cstate, BinarySignature, 11);\n> + /* Flags field */\n> + tmp = 0;\n> + CopySendInt32(cstate, tmp);\n> + /* No header extension */\n> + tmp = 0;\n> + CopySendInt32(cstate, tmp);\n> + }\n\nOh, I see. I've removed and attach the v3 patch. In general,\nI don't change variable name and so on in this patch. I just\nmove codes in this patch. But I also removed the \"tmp\"\nvariable for this case because I think that the name isn't\nsuitable for larger scope. (I think that \"tmp\" is acceptable\nin a small scope like the above code.)\n\nNew code:\n\n/* Generate header for a binary copy */\n/* Signature */\nCopySendData(cstate, BinarySignature, 11);\n/* Flags field */\nCopySendInt32(cstate, 0);\n/* No header extension */\nCopySendInt32(cstate, 0);\n\n\nThanks,\n-- \nkou", "msg_date": "Wed, 06 Dec 2023 16:28:34 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "\tSutou Kouhei wrote:\n\n> * 2022-04: Apache Arrow [2]\n> * 2018-02: Apache Avro, Apache Parquet and Apache ORC [3]\n> \n> (FYI: I want to add support for Apache Arrow.)\n> \n> There were discussions how to add support for more formats. 
[3][4]\n> In these discussions, we got a consensus about making COPY\n> format extendable.\n\n\nThese formats seem all column-oriented whereas COPY is row-oriented\nat the protocol level [1].\nWith regard to the procotol, how would it work to support these formats?\n\n\n[1] https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-COPY\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 06 Dec 2023 13:31:59 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Dec 6, 2023 at 3:28 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3K9dE2gt3+K+h=DwTqMenR84aeYuYS+cty3SR3LAeDBAQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 15:11:34 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > For the extra curly braces, I mean the following code block in\n> > CopyToFormatBinaryStart:\n> >\n> > + { <-- I thought this is useless?\n> > + /* Generate header for a binary copy */\n> > + int32 tmp;\n> > +\n> > + /* Signature */\n> > + CopySendData(cstate, BinarySignature, 11);\n> > + /* Flags field */\n> > + tmp = 0;\n> > + CopySendInt32(cstate, tmp);\n> > + /* No header extension */\n> > + tmp = 0;\n> > + CopySendInt32(cstate, tmp);\n> > + }\n>\n> Oh, I see. I've removed and attach the v3 patch. In general,\n> I don't change variable name and so on in this patch. I just\n> move codes in this patch. But I also removed the \"tmp\"\n> variable for this case because I think that the name isn't\n> suitable for larger scope. (I think that \"tmp\" is acceptable\n> in a small scope like the above code.)\n>\n> New code:\n>\n> /* Generate header for a binary copy */\n> /* Signature */\n> CopySendData(cstate, BinarySignature, 11);\n> /* Flags field */\n> CopySendInt32(cstate, 0);\n> /* No header extension */\n> CopySendInt32(cstate, 0);\n>\n>\n> Thanks,\n> --\n> kou\n\nHi Kou,\n\nI read the thread[1] you posted and I think Andres's suggestion sounds great.\n\nShould we extract both *copy to* and *copy from* for the first step, in that\ncase we can add the pg_copy_handler catalog smoothly later.\n\nAttached V4 adds 'extract copy from' and it passed the cirrus ci,\nplease take a look.\n\nI added a hook *copy_from_end* but this might be removed later if not used.\n\n[1]: https://www.postgresql.org/message-id/20180211211235.5x3jywe5z3lkgcsr%40alap3.anarazel.de\n-- \nRegards\nJunwang Zhao", "msg_date": "Wed, 6 Dec 2023 22:07:51 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Dec 6, 2023 at 8:32 PM Daniel Verite <[email protected]> wrote:\n>\n> Sutou Kouhei wrote:\n>\n> > * 2022-04: Apache Arrow [2]\n> > * 2018-02: Apache Avro, Apache Parquet and Apache ORC [3]\n> >\n> > (FYI: I want to add support for Apache Arrow.)\n> >\n> > There were discussions how to add support for more formats. 
[3][4]\n> > In these discussions, we got a consensus about making COPY\n> > format extendable.\n>\n>\n> These formats seem all column-oriented whereas COPY is row-oriented\n> at the protocol level [1].\n> With regard to the procotol, how would it work to support these formats?\n>\n\nThey have kind of *RowGroup* concepts, a bunch of rows goes to a RowBatch\nand the data of the same column goes together.\n\nI think they should fit the COPY semantics and there are some FDW out there for\nthese modern formats, like [1]. If we support COPY to deal with the\nformat, it will\nbe easier to interact with them(without creating\nserver/usermapping/foreign table).\n\n[1]: https://github.com/adjust/parquet_fdw\n\n>\n> [1] https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-COPY\n>\n>\n> Best regards,\n> --\n> Daniel Vérité\n> https://postgresql.verite.pro/\n> Twitter: @DanielVerite\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 6 Dec 2023 22:32:14 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Dec 06, 2023 at 10:07:51PM +0800, Junwang Zhao wrote:\n> I read the thread[1] you posted and I think Andres's suggestion sounds great.\n> \n> Should we extract both *copy to* and *copy from* for the first step, in that\n> case we can add the pg_copy_handler catalog smoothly later.\n> \n> Attached V4 adds 'extract copy from' and it passed the cirrus ci,\n> please take a look.\n> \n> I added a hook *copy_from_end* but this might be removed later if not used.\n> \n> [1]: https://www.postgresql.org/message-id/20180211211235.5x3jywe5z3lkgcsr%40alap3.anarazel.de\n\nI was looking at the differences between v3 posted by Sutou-san and\nv4 from you, seeing that:\n\n+/* Routines for a COPY HANDLER implementation. */\n+typedef struct CopyHandlerOps\n {\n /* Called when COPY TO is started. This will send a header. */\n- void (*start) (CopyToState cstate, TupleDesc tupDesc);\n+ void (*copy_to_start) (CopyToState cstate, TupleDesc tupDesc);\n \n /* Copy one row for COPY TO. */\n- void (*one_row) (CopyToState cstate, TupleTableSlot *slot);\n+ void (*copy_to_one_row) (CopyToState cstate, TupleTableSlot *slot);\n \n /* Called when COPY TO is ended. This will send a trailer. */\n- void (*end) (CopyToState cstate);\n-} CopyToFormatOps;\n+ void (*copy_to_end) (CopyToState cstate);\n+\n+ void (*copy_from_start) (CopyFromState cstate, TupleDesc tupDesc);\n+ bool (*copy_from_next) (CopyFromState cstate, ExprContext *econtext,\n+ Datum *values, bool *nulls);\n+ void (*copy_from_error_callback) (CopyFromState cstate);\n+ void (*copy_from_end) (CopyFromState cstate);\n+} CopyHandlerOps;\n\nAnd we've spent a good deal of time refactoring the copy code so as\nthe logic behind TO and FROM is split. Having a set of routines that\ngroups both does not look like a step in the right direction to me,\nand v4 is an attempt at solving two problems, while v3 aims to improve\none case. 
It seems to me that each callback portion should be focused\non staying in its own area of the code, aka copyfrom*.c or copyto*.c.\n--\nMichael", "msg_date": "Thu, 7 Dec 2023 09:38:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3LSRhK601Bn50u71BgfNWm4q3kv-o-KEq=hrbyLbY_EsA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 22:07:51 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> Should we extract both *copy to* and *copy from* for the first step, in that\n> case we can add the pg_copy_handler catalog smoothly later.\n\nI don't object it (mixing TO/FROM changes to one patch) but\nit may make review difficult. Is it acceptable?\n\nFYI: I planed that I implement TO part, and then FROM part,\nand then unify TO/FROM parts if needed. [1]\n\n> Attached V4 adds 'extract copy from' and it passed the cirrus ci,\n> please take a look.\n\nThanks. Here are my comments:\n\n> +\t\t/*\n> +\t\t\t* Error is relevant to a particular line.\n> +\t\t\t*\n> +\t\t\t* If line_buf still contains the correct line, print it.\n> +\t\t\t*/\n> +\t\tif (cstate->line_buf_valid)\n\nWe need to fix the indentation.\n\n> +CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc)\n> +{\n> +\tFmgrInfo *in_functions;\n> +\tOid\t\t *typioparams;\n> +\tOid\t\t\tin_func_oid;\n> +\tAttrNumber\tnum_phys_attrs;\n> +\n> +\t/*\n> +\t * Pick up the required catalog information for each attribute in the\n> +\t * relation, including the input function, the element type (to pass to\n> +\t * the input function), and info about defaults and constraints. (Which\n> +\t * input function we use depends on text/binary format choice.)\n> +\t */\n> +\tnum_phys_attrs = tupDesc->natts;\n> +\tin_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));\n> +\ttypioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));\n\nWe need to update the comment because defaults and\nconstraints aren't picked up here.\n\n> +CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc)\n...\n> +\t/*\n> +\t * Pick up the required catalog information for each attribute in the\n> +\t * relation, including the input function, the element type (to pass to\n> +\t * the input function), and info about defaults and constraints. (Which\n> +\t * input function we use depends on text/binary format choice.)\n> +\t */\n> +\tin_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));\n> +\ttypioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));\n\nditto.\n\n\n> @@ -1716,15 +1776,6 @@ BeginCopyFrom(ParseState *pstate,\n> \t\tReceiveCopyBinaryHeader(cstate);\n> \t}\n\nI think that this block should be moved to\nCopyFromFormatBinaryStart() too. But we need to run it after\nwe setup inputs such as data_source_cb, pipe and filename...\n\n+/* Routines for a COPY HANDLER implementation. */\n+typedef struct CopyHandlerOps\n+{\n+\t/* Called when COPY TO is started. This will send a header. */\n+\tvoid\t\t(*copy_to_start) (CopyToState cstate, TupleDesc tupDesc);\n+\n+\t/* Copy one row for COPY TO. */\n+\tvoid\t\t(*copy_to_one_row) (CopyToState cstate, TupleTableSlot *slot);\n+\n+\t/* Called when COPY TO is ended. This will send a trailer. 
*/\n+\tvoid\t\t(*copy_to_end) (CopyToState cstate);\n+\n+\tvoid\t\t(*copy_from_start) (CopyFromState cstate, TupleDesc tupDesc);\n+\tbool\t\t(*copy_from_next) (CopyFromState cstate, ExprContext *econtext,\n+\t\t\t \t\t\t\t\t Datum *values, bool *nulls);\n+\tvoid\t\t(*copy_from_error_callback) (CopyFromState cstate);\n+\tvoid\t\t(*copy_from_end) (CopyFromState cstate);\n+} CopyHandlerOps;\n\nIt seems that \"copy_\" prefix is redundant. Should we use\n\"to_start\" instead of \"copy_to_start\" and so on?\n\nBTW, it seems that \"COPY FROM (FORMAT json)\" may not be implemented. [2]\nWe may need to care about NULL copy_from_* cases.\n\n\n> I added a hook *copy_from_end* but this might be removed later if not used.\n\nIt may be useful to clean up resources for COPY FROM but the\npatch doesn't call the copy_from_end. How about removing it\nfor now? We can add it and call it from EndCopyFrom() later?\nBecause it's not needed for now.\n\nI think that we should focus on refactoring instead of\nadding a new feature in this patch.\n\n\n[1]: https://www.postgresql.org/message-id/20231204.153548.2126325458835528809.kou%40clear-code.com\n[2]: https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 07 Dec 2023 14:04:58 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 7, 2023 at 8:39 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Dec 06, 2023 at 10:07:51PM +0800, Junwang Zhao wrote:\n> > I read the thread[1] you posted and I think Andres's suggestion sounds great.\n> >\n> > Should we extract both *copy to* and *copy from* for the first step, in that\n> > case we can add the pg_copy_handler catalog smoothly later.\n> >\n> > Attached V4 adds 'extract copy from' and it passed the cirrus ci,\n> > please take a look.\n> >\n> > I added a hook *copy_from_end* but this might be removed later if not used.\n> >\n> > [1]: https://www.postgresql.org/message-id/20180211211235.5x3jywe5z3lkgcsr%40alap3.anarazel.de\n>\n> I was looking at the differences between v3 posted by Sutou-san and\n> v4 from you, seeing that:\n>\n> +/* Routines for a COPY HANDLER implementation. */\n> +typedef struct CopyHandlerOps\n> {\n> /* Called when COPY TO is started. This will send a header. */\n> - void (*start) (CopyToState cstate, TupleDesc tupDesc);\n> + void (*copy_to_start) (CopyToState cstate, TupleDesc tupDesc);\n>\n> /* Copy one row for COPY TO. */\n> - void (*one_row) (CopyToState cstate, TupleTableSlot *slot);\n> + void (*copy_to_one_row) (CopyToState cstate, TupleTableSlot *slot);\n>\n> /* Called when COPY TO is ended. This will send a trailer. */\n> - void (*end) (CopyToState cstate);\n> -} CopyToFormatOps;\n> + void (*copy_to_end) (CopyToState cstate);\n> +\n> + void (*copy_from_start) (CopyFromState cstate, TupleDesc tupDesc);\n> + bool (*copy_from_next) (CopyFromState cstate, ExprContext *econtext,\n> + Datum *values, bool *nulls);\n> + void (*copy_from_error_callback) (CopyFromState cstate);\n> + void (*copy_from_end) (CopyFromState cstate);\n> +} CopyHandlerOps;\n>\n> And we've spent a good deal of time refactoring the copy code so as\n> the logic behind TO and FROM is split. 
Having a set of routines that\n> groups both does not look like a step in the right direction to me,\n\nThe point of this refactor (from my view) is to make it possible to add new\ncopy handlers in extensions, just like access method. As Andres suggested,\na system catalog like *pg_copy_handler*, if we split TO and FROM into two\nsets of routines, does that mean we have to create two catalog(\npg_copy_from_handler and pg_copy_to_handler)?\n\n> and v4 is an attempt at solving two problems, while v3 aims to improve\n> one case. It seems to me that each callback portion should be focused\n> on staying in its own area of the code, aka copyfrom*.c or copyto*.c.\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 7 Dec 2023 16:37:36 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 7, 2023 at 1:05 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3LSRhK601Bn50u71BgfNWm4q3kv-o-KEq=hrbyLbY_EsA@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 6 Dec 2023 22:07:51 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > Should we extract both *copy to* and *copy from* for the first step, in that\n> > case we can add the pg_copy_handler catalog smoothly later.\n>\n> I don't object it (mixing TO/FROM changes to one patch) but\n> it may make review difficult. Is it acceptable?\n>\n> FYI: I planed that I implement TO part, and then FROM part,\n> and then unify TO/FROM parts if needed. [1]\n\nI'm fine with step by step refactoring, let's just wait for more\nsuggestions.\n\n>\n> > Attached V4 adds 'extract copy from' and it passed the cirrus ci,\n> > please take a look.\n>\n> Thanks. Here are my comments:\n>\n> > + /*\n> > + * Error is relevant to a particular line.\n> > + *\n> > + * If line_buf still contains the correct line, print it.\n> > + */\n> > + if (cstate->line_buf_valid)\n>\n> We need to fix the indentation.\n>\n> > +CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc)\n> > +{\n> > + FmgrInfo *in_functions;\n> > + Oid *typioparams;\n> > + Oid in_func_oid;\n> > + AttrNumber num_phys_attrs;\n> > +\n> > + /*\n> > + * Pick up the required catalog information for each attribute in the\n> > + * relation, including the input function, the element type (to pass to\n> > + * the input function), and info about defaults and constraints. (Which\n> > + * input function we use depends on text/binary format choice.)\n> > + */\n> > + num_phys_attrs = tupDesc->natts;\n> > + in_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));\n> > + typioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));\n>\n> We need to update the comment because defaults and\n> constraints aren't picked up here.\n>\n> > +CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc)\n> ...\n> > + /*\n> > + * Pick up the required catalog information for each attribute in the\n> > + * relation, including the input function, the element type (to pass to\n> > + * the input function), and info about defaults and constraints. 
(Which\n> > + * input function we use depends on text/binary format choice.)\n> > + */\n> > + in_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));\n> > + typioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));\n>\n> ditto.\n>\n>\n> > @@ -1716,15 +1776,6 @@ BeginCopyFrom(ParseState *pstate,\n> > ReceiveCopyBinaryHeader(cstate);\n> > }\n>\n> I think that this block should be moved to\n> CopyFromFormatBinaryStart() too. But we need to run it after\n> we setup inputs such as data_source_cb, pipe and filename...\n>\n> +/* Routines for a COPY HANDLER implementation. */\n> +typedef struct CopyHandlerOps\n> +{\n> + /* Called when COPY TO is started. This will send a header. */\n> + void (*copy_to_start) (CopyToState cstate, TupleDesc tupDesc);\n> +\n> + /* Copy one row for COPY TO. */\n> + void (*copy_to_one_row) (CopyToState cstate, TupleTableSlot *slot);\n> +\n> + /* Called when COPY TO is ended. This will send a trailer. */\n> + void (*copy_to_end) (CopyToState cstate);\n> +\n> + void (*copy_from_start) (CopyFromState cstate, TupleDesc tupDesc);\n> + bool (*copy_from_next) (CopyFromState cstate, ExprContext *econtext,\n> + Datum *values, bool *nulls);\n> + void (*copy_from_error_callback) (CopyFromState cstate);\n> + void (*copy_from_end) (CopyFromState cstate);\n> +} CopyHandlerOps;\n>\n> It seems that \"copy_\" prefix is redundant. Should we use\n> \"to_start\" instead of \"copy_to_start\" and so on?\n>\n> BTW, it seems that \"COPY FROM (FORMAT json)\" may not be implemented. [2]\n> We may need to care about NULL copy_from_* cases.\n>\n>\n> > I added a hook *copy_from_end* but this might be removed later if not used.\n>\n> It may be useful to clean up resources for COPY FROM but the\n> patch doesn't call the copy_from_end. How about removing it\n> for now? We can add it and call it from EndCopyFrom() later?\n> Because it's not needed for now.\n>\n> I think that we should focus on refactoring instead of\n> adding a new feature in this patch.\n>\n>\n> [1]: https://www.postgresql.org/message-id/20231204.153548.2126325458835528809.kou%40clear-code.com\n> [2]: https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 7 Dec 2023 16:46:53 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "\nOn 2023-12-07 Th 03:37, Junwang Zhao wrote:\n>\n> The point of this refactor (from my view) is to make it possible to add new\n> copy handlers in extensions, just like access method. As Andres suggested,\n> a system catalog like *pg_copy_handler*, if we split TO and FROM into two\n> sets of routines, does that mean we have to create two catalog(\n> pg_copy_from_handler and pg_copy_to_handler)?\n\n\n\nSurely not. 
Either have two fields, one for the TO handler and one for \nthe FROM handler, or a flag on each row indicating if it's a FROM or TO \nhandler.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 11:38:47 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 8, 2023 at 1:39 AM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-12-07 Th 03:37, Junwang Zhao wrote:\n> >\n> > The point of this refactor (from my view) is to make it possible to add new\n> > copy handlers in extensions, just like access method. As Andres suggested,\n> > a system catalog like *pg_copy_handler*, if we split TO and FROM into two\n> > sets of routines, does that mean we have to create two catalog(\n> > pg_copy_from_handler and pg_copy_to_handler)?\n>\n>\n>\n> Surely not. Either have two fields, one for the TO handler and one for\n> the FROM handler, or a flag on each row indicating if it's a FROM or TO\n> handler.\n\nTrue.\n\nBut why do we need a system catalog like pg_copy_handler in the first\nplace? I imagined that an extension can define a handler function\nreturning a set of callbacks and the parser can lookup the handler\nfunction by name, like FDW and TABLESAMPLE.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Dec 2023 04:27:14 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 8, 2023 at 3:27 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Dec 8, 2023 at 1:39 AM Andrew Dunstan <[email protected]> wrote:\n> >\n> >\n> > On 2023-12-07 Th 03:37, Junwang Zhao wrote:\n> > >\n> > > The point of this refactor (from my view) is to make it possible to add new\n> > > copy handlers in extensions, just like access method. As Andres suggested,\n> > > a system catalog like *pg_copy_handler*, if we split TO and FROM into two\n> > > sets of routines, does that mean we have to create two catalog(\n> > > pg_copy_from_handler and pg_copy_to_handler)?\n> >\n> >\n> >\n> > Surely not. Either have two fields, one for the TO handler and one for\n> > the FROM handler, or a flag on each row indicating if it's a FROM or TO\n> > handler.\n\nIf we wrap the two fields into a single structure, that will still be in\ncopy.h, which I think is not necessary. A single routing wrapper should\nbe enough, the actual implementation still stays separate\ncopy_[to/from].c files.\n\n>\n> True.\n>\n> But why do we need a system catalog like pg_copy_handler in the first\n> place? 
I imagined that an extension can define a handler function\n> returning a set of callbacks and the parser can lookup the handler\n> function by name, like FDW and TABLESAMPLE.\n>\nI can see FDW related utility commands but no TABLESAMPLE related,\nand there is a pg_foreign_data_wrapper system catalog which has\na *fdwhandler* field.\n\nIf we want extensions to create a new copy handler, I think\nsomething like pg_copy_hander should be necessary.\n\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\nI go one step further to implement the pg_copy_handler, attached V5 is\nthe implementation with some changes suggested by Kou.\n\nYou can also review this on this github pull request [1].\n\n[1]: https://github.com/zhjwpku/postgres/pull/1/files\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 8 Dec 2023 10:32:27 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 08, 2023 at 10:32:27AM +0800, Junwang Zhao wrote:\n> I can see FDW related utility commands but no TABLESAMPLE related,\n> and there is a pg_foreign_data_wrapper system catalog which has\n> a *fdwhandler* field.\n\n+ */ +CATALOG(pg_copy_handler,4551,CopyHandlerRelationId)\n\nUsing a catalog is an over-engineered design. Others have provided\nhints about that upthread, but it would be enough to have one or two\nhandler types that are wrapped around one or two SQL *functions*, like\ntablesamples. It seems like you've missed it, but feel free to read\nabout tablesample-method.sgml, that explains how this is achieved for\ntablesamples.\n\n> If we want extensions to create a new copy handler, I think\n> something like pg_copy_hander should be necessary.\n\nA catalog is not necessary, that's the point, because it can be\nreplaced by a scan of pg_proc with the function name defined in a COPY\nquery (be it through a FORMAT, or different option in a DefElem).\nAn example of extension with tablesamples is contrib/tsm_system_rows/,\nthat just uses a function returning a tsm_handler: \nCREATE FUNCTION system_rows(internal)\nRETURNS tsm_handler\nAS 'MODULE_PATHNAME', 'tsm_system_rows_handler'\nLANGUAGE C STRICT;\n\nThen SELECT queries rely on the contents of the TABLESAMPLE clause to\nfind the set of callbacks it should use by calling the function.\n\n+/* Routines for a COPY HANDLER implementation. */\n+typedef struct CopyRoutine\n+{\n\nFWIW, I find weird the concept of having one handler for both COPY\nFROM and COPY TO as each one of them has callbacks that are mutually\nexclusive to the other, but I'm OK if there is a consensus of only\none. So I'd suggest to use *two* NodeTags instead for a cleaner\nsplit, meaning that we'd need two functions for each method. 
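As a concrete sketch of what those two functions could look like for a hypothetical \"testfmt\" format (the function names, return type names and library symbol names below are made up purely for illustration and are not from any posted patch), following the tablesample handler pattern quoted above:\n\nCREATE FUNCTION testfmt_to(internal)\nRETURNS copy_to_handler\nAS 'MODULE_PATHNAME', 'testfmt_to_handler'\nLANGUAGE C STRICT;\n\nCREATE FUNCTION testfmt_from(internal)\nRETURNS copy_from_handler\nAS 'MODULE_PATHNAME', 'testfmt_from_handler'\nLANGUAGE C STRICT;\n\n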
My point\nis that a custom COPY handler could just define a COPY TO handler or a\nCOPY FROM handler, though it mostly comes down to a matter of taste\nregarding how clean the error handling becomes if one tries to use a\nset of callbacks with a COPY type (TO or FROM) not matching it.\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 14:17:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 8, 2023 at 2:17 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Dec 08, 2023 at 10:32:27AM +0800, Junwang Zhao wrote:\n> > I can see FDW related utility commands but no TABLESAMPLE related,\n> > and there is a pg_foreign_data_wrapper system catalog which has\n> > a *fdwhandler* field.\n>\n> + */ +CATALOG(pg_copy_handler,4551,CopyHandlerRelationId)\n>\n> Using a catalog is an over-engineered design. Others have provided\n> hints about that upthread, but it would be enough to have one or two\n> handler types that are wrapped around one or two SQL *functions*, like\n> tablesamples. It seems like you've missed it, but feel free to read\n> about tablesample-method.sgml, that explains how this is achieved for\n> tablesamples.\n\nAgreed. My previous example of FDW was not a good one, I missed something.\n\n>\n> > If we want extensions to create a new copy handler, I think\n> > something like pg_copy_hander should be necessary.\n>\n> A catalog is not necessary, that's the point, because it can be\n> replaced by a scan of pg_proc with the function name defined in a COPY\n> query (be it through a FORMAT, or different option in a DefElem).\n> An example of extension with tablesamples is contrib/tsm_system_rows/,\n> that just uses a function returning a tsm_handler:\n> CREATE FUNCTION system_rows(internal)\n> RETURNS tsm_handler\n> AS 'MODULE_PATHNAME', 'tsm_system_rows_handler'\n> LANGUAGE C STRICT;\n>\n> Then SELECT queries rely on the contents of the TABLESAMPLE clause to\n> find the set of callbacks it should use by calling the function.\n>\n> +/* Routines for a COPY HANDLER implementation. */\n> +typedef struct CopyRoutine\n> +{\n>\n> FWIW, I find weird the concept of having one handler for both COPY\n> FROM and COPY TO as each one of them has callbacks that are mutually\n> exclusive to the other, but I'm OK if there is a consensus of only\n> one. So I'd suggest to use *two* NodeTags instead for a cleaner\n> split, meaning that we'd need two functions for each method. My point\n> is that a custom COPY handler could just define a COPY TO handler or a\n> COPY FROM handler, though it mostly comes down to a matter of taste\n> regarding how clean the error handling becomes if one tries to use a\n> set of callbacks with a COPY type (TO or FROM) not matching it.\n\nI tend to agree to have separate two functions for each method. But\ngiven we implement it in tablesample-way, I think we need to make it\nclear how to call one of the two functions depending on COPY TO and\nFROM.\n\nIIUC in tablesamples cases, we scan pg_proc to find the handler\nfunction like system_rows(internal) by the method name specified in\nthe query. On the other hand, in COPY cases, the queries would be\ngoing to be like:\n\nCOPY tab TO stdout WITH (format = 'arrow');\nand\nCOPY tab FROM stdin WITH (format = 'arrow');\n\nSo a custom COPY extension would not be able to define SQL functions\njust like arrow(internal) for example. 
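(To make that naming conflict concrete, consider two hypothetical declarations, shown only for illustration: CREATE FUNCTION arrow(internal) RETURNS copy_out_handler ... and CREATE FUNCTION arrow(internal) RETURNS copy_in_handler ... could not coexist, because a function is identified by its name and argument types only, not by its return type.)\n\n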
We might need to define a rule\nlike the function returning copy_in/out_handler must be defined as\n<method name>_to(internal) and <method_name>_from(internal).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Dec 2023 15:42:06 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 08, 2023 at 03:42:06PM +0900, Masahiko Sawada wrote:\n> So a custom COPY extension would not be able to define SQL functions\n> just like arrow(internal) for example. We might need to define a rule\n> like the function returning copy_in/out_handler must be defined as\n> <method name>_to(internal) and <method_name>_from(internal).\n\nYeah, I was wondering if there was a trick to avoid the input internal\nargument conflict, but cannot recall something elegant on the top of\nmy mind. Anyway, I'd be OK with any approach as long as it plays\nnicely with the query integration, and that's FORMAT's DefElem with\nits string value to do the function lookups.\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 16:02:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Dear Junagn, Sutou-san,\r\n\r\nBasically I agree your point - improving a extendibility is good.\r\n(I remember that this theme was talked at Japan PostgreSQL conference)\r\nBelow are my comments for your patch.\r\n\r\n01. General\r\n\r\nJust to confirm - is it OK to partially implement APIs? E.g., only COPY TO is\r\navailable. Currently it seems not to consider a case which is not implemented.\r\n\r\n02. General\r\n\r\nIt might be trivial, but could you please clarify how users can extend? Is it OK\r\nto do below steps?\r\n\r\n1. Create a handler function, via CREATE FUNCTION,\r\n2. Register a handler, via new SQL (CREATE COPY HANDLER),\r\n3. Specify the added handler as COPY ... FORMAT clause.\r\n\r\n03. General\r\n\r\nCould you please add document-related tasks to your TODO? I imagined like\r\nfdwhandler.sgml.\r\n\r\n04. General - copyright\r\n\r\nFor newly added files, the below copyright seems sufficient. See applyparallelworker.c.\r\n\r\n```\r\n * Copyright (c) 2023, PostgreSQL Global Development Group\r\n```\r\n\r\n05. src/include/catalog/* files\r\n\r\nIIUC, 8000 or higher OIDs should be used while developing a patch. src/include/catalog/unused_oids\r\nwould suggest a candidate which you can use.\r\n\r\n06. copy.c\r\n\r\nI felt that we can create files per copying methods, like copy_{text|csv|binary}.c,\r\nlike indexes.\r\nHow do other think?\r\n\r\n07. fmt_to_name()\r\n\r\nI'm not sure the function is really needed. Can we follow like get_foreign_data_wrapper_oid()\r\nand remove the funciton?\r\n\r\n08. GetCopyRoutineByName()\r\n\r\nShould we use syscache for searching a catalog?\r\n\r\n09. CopyToFormatTextSendEndOfRow(), CopyToFormatBinaryStart()\r\n\r\nComments still refer CopyHandlerOps, whereas it was renamed.\r\n\r\n10. copy.h\r\n\r\nPer foreign.h and fdwapi.h, should we add a new header file and move some APIs?\r\n\r\n11. copy.h\r\n\r\n```\r\n-/* These are private in commands/copy[from|to].c */\r\n-typedef struct CopyFromStateData *CopyFromState;\r\n-typedef struct CopyToStateData *CopyToState;\r\n```\r\n\r\nAre above changes really needed?\r\n\r\n12. 
CopyFormatOptions\r\n\r\nCan we remove `bool binary` in future?\r\n\r\n13. external functions\r\n\r\n```\r\n+extern void CopyToFormatTextStart(CopyToState cstate, TupleDesc tupDesc);\r\n+extern void CopyToFormatTextOneRow(CopyToState cstate, TupleTableSlot *slot);\r\n+extern void CopyToFormatTextEnd(CopyToState cstate);\r\n+extern void CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc);\r\n+extern bool CopyFromFormatTextNext(CopyFromState cstate, ExprContext *econtext,\r\n+\r\nDatum *values, bool *nulls);\r\n+extern void CopyFromFormatTextErrorCallback(CopyFromState cstate);\r\n+\r\n+extern void CopyToFormatBinaryStart(CopyToState cstate, TupleDesc tupDesc);\r\n+extern void CopyToFormatBinaryOneRow(CopyToState cstate, TupleTableSlot *slot);\r\n+extern void CopyToFormatBinaryEnd(CopyToState cstate);\r\n+extern void CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc);\r\n+extern bool CopyFromFormatBinaryNext(CopyFromState cstate,\r\nExprContext *econtext,\r\n+\r\n Datum *values, bool *nulls);\r\n+extern void CopyFromFormatBinaryErrorCallback(CopyFromState cstate);\r\n```\r\n\r\nFYI - If you add files for {text|csv|binary}, these declarations can be removed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Sat, 9 Dec 2023 02:43:49 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Sat, Dec 9, 2023 at 10:43 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Junagn, Sutou-san,\n>\n> Basically I agree your point - improving a extendibility is good.\n> (I remember that this theme was talked at Japan PostgreSQL conference)\n> Below are my comments for your patch.\n>\n> 01. General\n>\n> Just to confirm - is it OK to partially implement APIs? E.g., only COPY TO is\n> available. Currently it seems not to consider a case which is not implemented.\n>\nFor partially implements, we can leave the hook as NULL, and check the NULL\nat *ProcessCopyOptions* and report error if not supported.\n\n> 02. General\n>\n> It might be trivial, but could you please clarify how users can extend? Is it OK\n> to do below steps?\n>\n> 1. Create a handler function, via CREATE FUNCTION,\n> 2. Register a handler, via new SQL (CREATE COPY HANDLER),\n> 3. Specify the added handler as COPY ... FORMAT clause.\n>\nMy original thought was option 2, but as Michael point, option 1 is\nthe right way\nto go.\n\n> 03. General\n>\n> Could you please add document-related tasks to your TODO? I imagined like\n> fdwhandler.sgml.\n>\n> 04. General - copyright\n>\n> For newly added files, the below copyright seems sufficient. See applyparallelworker.c.\n>\n> ```\n> * Copyright (c) 2023, PostgreSQL Global Development Group\n> ```\n>\n> 05. src/include/catalog/* files\n>\n> IIUC, 8000 or higher OIDs should be used while developing a patch. src/include/catalog/unused_oids\n> would suggest a candidate which you can use.\n\nYeah, I will run renumber_oids.pl at last.\n\n>\n> 06. copy.c\n>\n> I felt that we can create files per copying methods, like copy_{text|csv|binary}.c,\n> like indexes.\n> How do other think?\n\nNot sure about this, it seems others have put a lot of effort into\nsplitting TO and From.\nAlso like to hear from others.\n\n>\n> 07. fmt_to_name()\n>\n> I'm not sure the function is really needed. 
Can we follow like get_foreign_data_wrapper_oid()\n> and remove the funciton?\n\nI have referenced some code from greenplum, will remove this.\n\n>\n> 08. GetCopyRoutineByName()\n>\n> Should we use syscache for searching a catalog?\n>\n> 09. CopyToFormatTextSendEndOfRow(), CopyToFormatBinaryStart()\n>\n> Comments still refer CopyHandlerOps, whereas it was renamed.\n>\n> 10. copy.h\n>\n> Per foreign.h and fdwapi.h, should we add a new header file and move some APIs?\n>\n> 11. copy.h\n>\n> ```\n> -/* These are private in commands/copy[from|to].c */\n> -typedef struct CopyFromStateData *CopyFromState;\n> -typedef struct CopyToStateData *CopyToState;\n> ```\n>\n> Are above changes really needed?\n>\n> 12. CopyFormatOptions\n>\n> Can we remove `bool binary` in future?\n>\n> 13. external functions\n>\n> ```\n> +extern void CopyToFormatTextStart(CopyToState cstate, TupleDesc tupDesc);\n> +extern void CopyToFormatTextOneRow(CopyToState cstate, TupleTableSlot *slot);\n> +extern void CopyToFormatTextEnd(CopyToState cstate);\n> +extern void CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc);\n> +extern bool CopyFromFormatTextNext(CopyFromState cstate, ExprContext *econtext,\n> +\n> Datum *values, bool *nulls);\n> +extern void CopyFromFormatTextErrorCallback(CopyFromState cstate);\n> +\n> +extern void CopyToFormatBinaryStart(CopyToState cstate, TupleDesc tupDesc);\n> +extern void CopyToFormatBinaryOneRow(CopyToState cstate, TupleTableSlot *slot);\n> +extern void CopyToFormatBinaryEnd(CopyToState cstate);\n> +extern void CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc);\n> +extern bool CopyFromFormatBinaryNext(CopyFromState cstate,\n> ExprContext *econtext,\n> +\n> Datum *values, bool *nulls);\n> +extern void CopyFromFormatBinaryErrorCallback(CopyFromState cstate);\n> ```\n>\n> FYI - If you add files for {text|csv|binary}, these declarations can be removed.\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n>\n\nThanks for all the valuable suggestions.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sat, 9 Dec 2023 16:39:11 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Junwang\n\nPlease also see my presentation slides from last years PostgreSQL\nConference in Berlin (attached)\n\nThe main Idea is to make not just \"format\", but also \"transport\" and\n\"stream processing\" extendable via virtual function tables.\n\nBtw, will any of you here be in Prague next week ?\nWould be a good opportunity to discuss this in person.\n\n\nBest Regards\nHannu\n\nOn Sat, Dec 9, 2023 at 9:39 AM Junwang Zhao <[email protected]> wrote:\n>\n> On Sat, Dec 9, 2023 at 10:43 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Junagn, Sutou-san,\n> >\n> > Basically I agree your point - improving a extendibility is good.\n> > (I remember that this theme was talked at Japan PostgreSQL conference)\n> > Below are my comments for your patch.\n> >\n> > 01. General\n> >\n> > Just to confirm - is it OK to partially implement APIs? E.g., only COPY TO is\n> > available. Currently it seems not to consider a case which is not implemented.\n> >\n> For partially implements, we can leave the hook as NULL, and check the NULL\n> at *ProcessCopyOptions* and report error if not supported.\n>\n> > 02. General\n> >\n> > It might be trivial, but could you please clarify how users can extend? Is it OK\n> > to do below steps?\n> >\n> > 1. 
Create a handler function, via CREATE FUNCTION,\n> > 2. Register a handler, via new SQL (CREATE COPY HANDLER),\n> > 3. Specify the added handler as COPY ... FORMAT clause.\n> >\n> My original thought was option 2, but as Michael point, option 1 is\n> the right way\n> to go.\n>\n> > 03. General\n> >\n> > Could you please add document-related tasks to your TODO? I imagined like\n> > fdwhandler.sgml.\n> >\n> > 04. General - copyright\n> >\n> > For newly added files, the below copyright seems sufficient. See applyparallelworker.c.\n> >\n> > ```\n> > * Copyright (c) 2023, PostgreSQL Global Development Group\n> > ```\n> >\n> > 05. src/include/catalog/* files\n> >\n> > IIUC, 8000 or higher OIDs should be used while developing a patch. src/include/catalog/unused_oids\n> > would suggest a candidate which you can use.\n>\n> Yeah, I will run renumber_oids.pl at last.\n>\n> >\n> > 06. copy.c\n> >\n> > I felt that we can create files per copying methods, like copy_{text|csv|binary}.c,\n> > like indexes.\n> > How do other think?\n>\n> Not sure about this, it seems others have put a lot of effort into\n> splitting TO and From.\n> Also like to hear from others.\n>\n> >\n> > 07. fmt_to_name()\n> >\n> > I'm not sure the function is really needed. Can we follow like get_foreign_data_wrapper_oid()\n> > and remove the funciton?\n>\n> I have referenced some code from greenplum, will remove this.\n>\n> >\n> > 08. GetCopyRoutineByName()\n> >\n> > Should we use syscache for searching a catalog?\n> >\n> > 09. CopyToFormatTextSendEndOfRow(), CopyToFormatBinaryStart()\n> >\n> > Comments still refer CopyHandlerOps, whereas it was renamed.\n> >\n> > 10. copy.h\n> >\n> > Per foreign.h and fdwapi.h, should we add a new header file and move some APIs?\n> >\n> > 11. copy.h\n> >\n> > ```\n> > -/* These are private in commands/copy[from|to].c */\n> > -typedef struct CopyFromStateData *CopyFromState;\n> > -typedef struct CopyToStateData *CopyToState;\n> > ```\n> >\n> > Are above changes really needed?\n> >\n> > 12. CopyFormatOptions\n> >\n> > Can we remove `bool binary` in future?\n> >\n> > 13. 
external functions\n> >\n> > ```\n> > +extern void CopyToFormatTextStart(CopyToState cstate, TupleDesc tupDesc);\n> > +extern void CopyToFormatTextOneRow(CopyToState cstate, TupleTableSlot *slot);\n> > +extern void CopyToFormatTextEnd(CopyToState cstate);\n> > +extern void CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc);\n> > +extern bool CopyFromFormatTextNext(CopyFromState cstate, ExprContext *econtext,\n> > +\n> > Datum *values, bool *nulls);\n> > +extern void CopyFromFormatTextErrorCallback(CopyFromState cstate);\n> > +\n> > +extern void CopyToFormatBinaryStart(CopyToState cstate, TupleDesc tupDesc);\n> > +extern void CopyToFormatBinaryOneRow(CopyToState cstate, TupleTableSlot *slot);\n> > +extern void CopyToFormatBinaryEnd(CopyToState cstate);\n> > +extern void CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc);\n> > +extern bool CopyFromFormatBinaryNext(CopyFromState cstate,\n> > ExprContext *econtext,\n> > +\n> > Datum *values, bool *nulls);\n> > +extern void CopyFromFormatBinaryErrorCallback(CopyFromState cstate);\n> > ```\n> >\n> > FYI - If you add files for {text|csv|binary}, these declarations can be removed.\n> >\n> > Best Regards,\n> > Hayato Kuroda\n> > FUJITSU LIMITED\n> >\n>\n> Thanks for all the valuable suggestions.\n>\n> --\n> Regards\n> Junwang Zhao\n>\n>", "msg_date": "Sat, 9 Dec 2023 12:38:46 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nThanks for reviewing our latest patch!\n\nIn \n <TY3PR01MB9889C9234CD220A3A7075F0DF589A@TY3PR01MB9889.jpnprd01.prod.outlook.com>\n \"RE: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 9 Dec 2023 02:43:49 +0000,\n \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n\n> (I remember that this theme was talked at Japan PostgreSQL conference)\n\nYes. I should have talked to you more at the conference...\nI will do it next time!\n\n\nCan we discuss how to proceed this improvement?\n\nThere are 2 approaches for it:\n\n1. Do the followings concurrently:\n a. Implementing small changes that got a consensus and\n merge them step-by-step\n (e.g. We got a consensus that we need to extract the\n current format related routines.)\n b. Discuss design\n\n (v1-v3 patches use this approach.)\n\n2. Implement one (large) complete patch set with design\n discussion and merge it\n\n (v4- patches use this approach.)\n\nWhich approach is preferred? (Or should we choose another\napproach?)\n\nI thought that 1. is preferred because it will reduce review\ncost. So I chose 1.\n\nIf 2. is preferred, I'll use 2. (I'll add more changes to\nthe latest patch.)\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Sun, 10 Dec 2023 05:44:07 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoDkoGL6yJ_HjNOg9cU=aAdW8uQ3rSQOeRS0SX85LPPNwQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 8 Dec 2023 15:42:06 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> So a custom COPY extension would not be able to define SQL functions\n> just like arrow(internal) for example. 
We might need to define a rule\n> like the function returning copy_in/out_handler must be defined as\n> <method name>_to(internal) and <method_name>_from(internal).\n\nWe may not need to add \"_to\"/\"_from\" suffix by checking both\nof argument type and return type. Because we use different\nreturn type for copy_in/out_handler.\n\nBut the current LookupFuncName() family doesn't check return\ntype. If we use this approach, we need to improve the\ncurrent LookupFuncName() family too.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Sun, 10 Dec 2023 05:54:56 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAMT0RQRqVo4fGDWHqOn+wr_eoiXQVfyC=8-c=H=y6VcNxi6BvQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 9 Dec 2023 12:38:46 +0100,\n Hannu Krosing <[email protected]> wrote:\n\n> Please also see my presentation slides from last years PostgreSQL\n> Conference in Berlin (attached)\n\nThanks for sharing your idea here.\n\n> The main Idea is to make not just \"format\", but also \"transport\" and\n> \"stream processing\" extendable via virtual function tables.\n\n\"Transport\" and \"stream processing\" are out of scope in this\nthread. How about starting new threads for them and discuss\nthem there?\n\n> Btw, will any of you here be in Prague next week ?\n\nSorry. No.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Sun, 10 Dec 2023 06:01:36 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Sun, Dec 10, 2023 at 4:44 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for reviewing our latest patch!\n>\n> In\n> <TY3PR01MB9889C9234CD220A3A7075F0DF589A@TY3PR01MB9889.jpnprd01.prod.outlook.com>\n> \"RE: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 9 Dec 2023 02:43:49 +0000,\n> \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n>\n> > (I remember that this theme was talked at Japan PostgreSQL conference)\n>\n> Yes. I should have talked to you more at the conference...\n> I will do it next time!\n>\n>\n> Can we discuss how to proceed this improvement?\n>\n> There are 2 approaches for it:\n>\n> 1. Do the followings concurrently:\n> a. Implementing small changes that got a consensus and\n> merge them step-by-step\n> (e.g. We got a consensus that we need to extract the\n> current format related routines.)\n> b. Discuss design\n>\n> (v1-v3 patches use this approach.)\n>\n> 2. Implement one (large) complete patch set with design\n> discussion and merge it\n>\n> (v4- patches use this approach.)\n>\n> Which approach is preferred? (Or should we choose another\n> approach?)\n>\n> I thought that 1. is preferred because it will reduce review\n> cost. So I chose 1.\n>\n> If 2. is preferred, I'll use 2. 
(I'll add more changes to\n> the latest patch.)\n>\nI'm ok with both, and I'd like to work with you for the parquet\nextension, excited about this new feature, thanks for bringing\nthis up.\n\nForgive me for making so much noise about approach 2, I\njust want to hear about more suggestions of the final shape\nof this feature.\n\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 10 Dec 2023 08:24:51 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Sun, Dec 10, 2023 at 5:44 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for reviewing our latest patch!\n>\n> In\n> <TY3PR01MB9889C9234CD220A3A7075F0DF589A@TY3PR01MB9889.jpnprd01.prod.outlook.com>\n> \"RE: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 9 Dec 2023 02:43:49 +0000,\n> \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n>\n> > (I remember that this theme was talked at Japan PostgreSQL conference)\n>\n> Yes. I should have talked to you more at the conference...\n> I will do it next time!\n>\n>\n> Can we discuss how to proceed this improvement?\n>\n> There are 2 approaches for it:\n>\n> 1. Do the followings concurrently:\n> a. Implementing small changes that got a consensus and\n> merge them step-by-step\n> (e.g. We got a consensus that we need to extract the\n> current format related routines.)\n> b. Discuss design\n\nIt's preferable to make patches small for easy review. We can merge\nthem anytime before commit if necessary.\n\nI think we need to discuss overall design about callbacks and how\nextensions define a custom copy handler etc. It may require some PoC\npatches. Once we have a consensus on overall design we polish patches\nincluding the documentation changes and regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Dec 2023 09:36:38 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Sun, Dec 10, 2023 at 5:55 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoDkoGL6yJ_HjNOg9cU=aAdW8uQ3rSQOeRS0SX85LPPNwQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 8 Dec 2023 15:42:06 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > So a custom COPY extension would not be able to define SQL functions\n> > just like arrow(internal) for example. We might need to define a rule\n> > like the function returning copy_in/out_handler must be defined as\n> > <method name>_to(internal) and <method_name>_from(internal).\n>\n> We may not need to add \"_to\"/\"_from\" suffix by checking both\n> of argument type and return type. Because we use different\n> return type for copy_in/out_handler.\n>\n> But the current LookupFuncName() family doesn't check return\n> type. If we use this approach, we need to improve the\n> current LookupFuncName() family too.\n\nIIUC we cannot create two same name functions with the same arguments\nbut a different return value type in the first place. It seems to me\nto be an overkill to change such a design.\n\nAnother idea is to encapsulate copy_to/from_handler by a super class\nlike copy_handler. 
The handler function is called with an argument,\nsay copyto, and returns copy_handler encapsulating either\ncopy_to/from_handler depending on the argument.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Dec 2023 10:57:15 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Dec 11, 2023 at 10:57:15AM +0900, Masahiko Sawada wrote:\n> IIUC we cannot create two same name functions with the same arguments\n> but a different return value type in the first place. It seems to me\n> to be an overkill to change such a design.\n\nAgreed to not touch the logictics of LookupFuncName() for the sake of\nthis thread. I have not checked the SQL specification, but I recall\nthat there are a few assumptions from the spec embedded in the lookup\nlogic particularly when it comes to specify a procedure name without\narguments.\n\n> Another idea is to encapsulate copy_to/from_handler by a super class\n> like copy_handler. The handler function is called with an argument,\n> say copyto, and returns copy_handler encapsulating either\n> copy_to/from_handler depending on the argument.\n\nYep, that's possible as well and can work as a cross-check between the\nargument and the NodeTag assigned to the handler structure returned by\nthe function.\n\nAt the end, the final result of the patch should IMO include:\n- Documentation about how one can register a custom copy_handler.\n- Something in src/test/modules/, minimalistic still useful that can\nbe used as a template when one wants to implement their own handler.\nThe documentation should mention about this module.\n- No need for SQL functions for all the in-core handlers: let's just\nreturn pointers to them based on the options given.\n\nIt would be probably cleaner to split the patch so as the code is\nrefactored and evaluated with the in-core handlers first, and then\nextended with the pluggable facilities and the function lookups.\n--\nMichael", "msg_date": "Mon, 11 Dec 2023 11:19:40 +0100", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Sat, Dec 9, 2023 at 7:38 PM Hannu Krosing <[email protected]> wrote:\n>\n> Hi Junwang\n>\n> Please also see my presentation slides from last years PostgreSQL\n> Conference in Berlin (attached)\n\nI read through the slides, really promising ideas, it's will be great\nif we can get there at last.\n\n>\n> The main Idea is to make not just \"format\", but also \"transport\" and\n> \"stream processing\" extendable via virtual function tables.\nThe code is really coupled, it is not easy to do all of these in one round,\nit will be great if you have a POC patch.\n\n>\n> Btw, will any of you here be in Prague next week ?\n> Would be a good opportunity to discuss this in person.\nSorry, no.\n\n>\n>\n> Best Regards\n> Hannu\n>\n> On Sat, Dec 9, 2023 at 9:39 AM Junwang Zhao <[email protected]> wrote:\n> >\n> > On Sat, Dec 9, 2023 at 10:43 AM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Dear Junagn, Sutou-san,\n> > >\n> > > Basically I agree your point - improving a extendibility is good.\n> > > (I remember that this theme was talked at Japan PostgreSQL conference)\n> > > Below are my comments for your patch.\n> > >\n> > > 01. 
General\n> > >\n> > > Just to confirm - is it OK to partially implement APIs? E.g., only COPY TO is\n> > > available. Currently it seems not to consider a case which is not implemented.\n> > >\n> > For partially implements, we can leave the hook as NULL, and check the NULL\n> > at *ProcessCopyOptions* and report error if not supported.\n> >\n> > > 02. General\n> > >\n> > > It might be trivial, but could you please clarify how users can extend? Is it OK\n> > > to do below steps?\n> > >\n> > > 1. Create a handler function, via CREATE FUNCTION,\n> > > 2. Register a handler, via new SQL (CREATE COPY HANDLER),\n> > > 3. Specify the added handler as COPY ... FORMAT clause.\n> > >\n> > My original thought was option 2, but as Michael point, option 1 is\n> > the right way\n> > to go.\n> >\n> > > 03. General\n> > >\n> > > Could you please add document-related tasks to your TODO? I imagined like\n> > > fdwhandler.sgml.\n> > >\n> > > 04. General - copyright\n> > >\n> > > For newly added files, the below copyright seems sufficient. See applyparallelworker.c.\n> > >\n> > > ```\n> > > * Copyright (c) 2023, PostgreSQL Global Development Group\n> > > ```\n> > >\n> > > 05. src/include/catalog/* files\n> > >\n> > > IIUC, 8000 or higher OIDs should be used while developing a patch. src/include/catalog/unused_oids\n> > > would suggest a candidate which you can use.\n> >\n> > Yeah, I will run renumber_oids.pl at last.\n> >\n> > >\n> > > 06. copy.c\n> > >\n> > > I felt that we can create files per copying methods, like copy_{text|csv|binary}.c,\n> > > like indexes.\n> > > How do other think?\n> >\n> > Not sure about this, it seems others have put a lot of effort into\n> > splitting TO and From.\n> > Also like to hear from others.\n> >\n> > >\n> > > 07. fmt_to_name()\n> > >\n> > > I'm not sure the function is really needed. Can we follow like get_foreign_data_wrapper_oid()\n> > > and remove the funciton?\n> >\n> > I have referenced some code from greenplum, will remove this.\n> >\n> > >\n> > > 08. GetCopyRoutineByName()\n> > >\n> > > Should we use syscache for searching a catalog?\n> > >\n> > > 09. CopyToFormatTextSendEndOfRow(), CopyToFormatBinaryStart()\n> > >\n> > > Comments still refer CopyHandlerOps, whereas it was renamed.\n> > >\n> > > 10. copy.h\n> > >\n> > > Per foreign.h and fdwapi.h, should we add a new header file and move some APIs?\n> > >\n> > > 11. copy.h\n> > >\n> > > ```\n> > > -/* These are private in commands/copy[from|to].c */\n> > > -typedef struct CopyFromStateData *CopyFromState;\n> > > -typedef struct CopyToStateData *CopyToState;\n> > > ```\n> > >\n> > > Are above changes really needed?\n> > >\n> > > 12. CopyFormatOptions\n> > >\n> > > Can we remove `bool binary` in future?\n> > >\n> > > 13. 
external functions\n> > >\n> > > ```\n> > > +extern void CopyToFormatTextStart(CopyToState cstate, TupleDesc tupDesc);\n> > > +extern void CopyToFormatTextOneRow(CopyToState cstate, TupleTableSlot *slot);\n> > > +extern void CopyToFormatTextEnd(CopyToState cstate);\n> > > +extern void CopyFromFormatTextStart(CopyFromState cstate, TupleDesc tupDesc);\n> > > +extern bool CopyFromFormatTextNext(CopyFromState cstate, ExprContext *econtext,\n> > > +\n> > > Datum *values, bool *nulls);\n> > > +extern void CopyFromFormatTextErrorCallback(CopyFromState cstate);\n> > > +\n> > > +extern void CopyToFormatBinaryStart(CopyToState cstate, TupleDesc tupDesc);\n> > > +extern void CopyToFormatBinaryOneRow(CopyToState cstate, TupleTableSlot *slot);\n> > > +extern void CopyToFormatBinaryEnd(CopyToState cstate);\n> > > +extern void CopyFromFormatBinaryStart(CopyFromState cstate, TupleDesc tupDesc);\n> > > +extern bool CopyFromFormatBinaryNext(CopyFromState cstate,\n> > > ExprContext *econtext,\n> > > +\n> > > Datum *values, bool *nulls);\n> > > +extern void CopyFromFormatBinaryErrorCallback(CopyFromState cstate);\n> > > ```\n> > >\n> > > FYI - If you add files for {text|csv|binary}, these declarations can be removed.\n> > >\n> > > Best Regards,\n> > > Hayato Kuroda\n> > > FUJITSU LIMITED\n> > >\n> >\n> > Thanks for all the valuable suggestions.\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n> >\n> >\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 11 Dec 2023 18:44:39 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Dec 11, 2023 at 7:19 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 10:57:15AM +0900, Masahiko Sawada wrote:\n> > IIUC we cannot create two same name functions with the same arguments\n> > but a different return value type in the first place. It seems to me\n> > to be an overkill to change such a design.\n>\n> Agreed to not touch the logictics of LookupFuncName() for the sake of\n> this thread. I have not checked the SQL specification, but I recall\n> that there are a few assumptions from the spec embedded in the lookup\n> logic particularly when it comes to specify a procedure name without\n> arguments.\n>\n> > Another idea is to encapsulate copy_to/from_handler by a super class\n> > like copy_handler. 
The handler function is called with an argument,\n> > say copyto, and returns copy_handler encapsulating either\n> > copy_to/from_handler depending on the argument.\n>\n> Yep, that's possible as well and can work as a cross-check between the\n> argument and the NodeTag assigned to the handler structure returned by\n> the function.\n>\n> At the end, the final result of the patch should IMO include:\n> - Documentation about how one can register a custom copy_handler.\n> - Something in src/test/modules/, minimalistic still useful that can\n> be used as a template when one wants to implement their own handler.\n> The documentation should mention about this module.\n> - No need for SQL functions for all the in-core handlers: let's just\n> return pointers to them based on the options given.\n\nAgreed.\n\n> It would be probably cleaner to split the patch so as the code is\n> refactored and evaluated with the in-core handlers first, and then\n> extended with the pluggable facilities and the function lookups.\n\nAgreed.\n\nI've sketched the above idea including a test module in\nsrc/test/module/test_copy_format, based on v2 patch. It's not splitted\nand is dirty so just for discussion.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 11 Dec 2023 23:31:29 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Dec 11, 2023 at 10:32 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 7:19 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Mon, Dec 11, 2023 at 10:57:15AM +0900, Masahiko Sawada wrote:\n> > > IIUC we cannot create two same name functions with the same arguments\n> > > but a different return value type in the first place. It seems to me\n> > > to be an overkill to change such a design.\n> >\n> > Agreed to not touch the logictics of LookupFuncName() for the sake of\n> > this thread. I have not checked the SQL specification, but I recall\n> > that there are a few assumptions from the spec embedded in the lookup\n> > logic particularly when it comes to specify a procedure name without\n> > arguments.\n> >\n> > > Another idea is to encapsulate copy_to/from_handler by a super class\n> > > like copy_handler. 
The handler function is called with an argument,\n> > > say copyto, and returns copy_handler encapsulating either\n> > > copy_to/from_handler depending on the argument.\n> >\n> > Yep, that's possible as well and can work as a cross-check between the\n> > argument and the NodeTag assigned to the handler structure returned by\n> > the function.\n> >\n> > At the end, the final result of the patch should IMO include:\n> > - Documentation about how one can register a custom copy_handler.\n> > - Something in src/test/modules/, minimalistic still useful that can\n> > be used as a template when one wants to implement their own handler.\n> > The documentation should mention about this module.\n> > - No need for SQL functions for all the in-core handlers: let's just\n> > return pointers to them based on the options given.\n>\n> Agreed.\n>\n> > It would be probably cleaner to split the patch so as the code is\n> > refactored and evaluated with the in-core handlers first, and then\n> > extended with the pluggable facilities and the function lookups.\n>\n> Agreed.\n>\n> I've sketched the above idea including a test module in\n> src/test/module/test_copy_format, based on v2 patch. It's not splitted\n> and is dirty so just for discussion.\n>\nThe test_copy_format extension doesn't use the fields of CopyToState and\nCopyFromState in this patch, I think we should move CopyFromStateData\nand CopyToStateData to commands/copy.h, what do you think?\n\nThe framework in the patch LGTM.\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 12 Dec 2023 10:09:03 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Dear Sutou-san, Junwang,\n\nSorry for the delay reply.\n\n> \n> Can we discuss how to proceed this improvement?\n> \n> There are 2 approaches for it:\n> \n> 1. Do the followings concurrently:\n> a. Implementing small changes that got a consensus and\n> merge them step-by-step\n> (e.g. We got a consensus that we need to extract the\n> current format related routines.)\n> b. Discuss design\n> \n> (v1-v3 patches use this approach.)\n> \n> 2. Implement one (large) complete patch set with design\n> discussion and merge it\n> \n> (v4- patches use this approach.)\n> \n> Which approach is preferred? (Or should we choose another\n> approach?)\n> \n> I thought that 1. is preferred because it will reduce review\n> cost. So I chose 1.\n\nI'm ok to use approach 1, but could you please divide a large patch? E.g.,\n\n0001. defines an infrastructure for copy-API\n0002. adjusts current codes to use APIs\n0003. adds a test module in src/test/modules or contrib.\n...\n\nThis approach helps reviewers to see patches deeper. 
Separated patches can be\ncombined when they are close to committable.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 12 Dec 2023 02:31:53 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Dec 12, 2023 at 11:09 AM Junwang Zhao <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 10:32 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Dec 11, 2023 at 7:19 PM Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Mon, Dec 11, 2023 at 10:57:15AM +0900, Masahiko Sawada wrote:\n> > > > IIUC we cannot create two same name functions with the same arguments\n> > > > but a different return value type in the first place. It seems to me\n> > > > to be an overkill to change such a design.\n> > >\n> > > Agreed to not touch the logictics of LookupFuncName() for the sake of\n> > > this thread. I have not checked the SQL specification, but I recall\n> > > that there are a few assumptions from the spec embedded in the lookup\n> > > logic particularly when it comes to specify a procedure name without\n> > > arguments.\n> > >\n> > > > Another idea is to encapsulate copy_to/from_handler by a super class\n> > > > like copy_handler. The handler function is called with an argument,\n> > > > say copyto, and returns copy_handler encapsulating either\n> > > > copy_to/from_handler depending on the argument.\n> > >\n> > > Yep, that's possible as well and can work as a cross-check between the\n> > > argument and the NodeTag assigned to the handler structure returned by\n> > > the function.\n> > >\n> > > At the end, the final result of the patch should IMO include:\n> > > - Documentation about how one can register a custom copy_handler.\n> > > - Something in src/test/modules/, minimalistic still useful that can\n> > > be used as a template when one wants to implement their own handler.\n> > > The documentation should mention about this module.\n> > > - No need for SQL functions for all the in-core handlers: let's just\n> > > return pointers to them based on the options given.\n> >\n> > Agreed.\n> >\n> > > It would be probably cleaner to split the patch so as the code is\n> > > refactored and evaluated with the in-core handlers first, and then\n> > > extended with the pluggable facilities and the function lookups.\n> >\n> > Agreed.\n> >\n> > I've sketched the above idea including a test module in\n> > src/test/module/test_copy_format, based on v2 patch. It's not splitted\n> > and is dirty so just for discussion.\n> >\n> The test_copy_format extension doesn't use the fields of CopyToState and\n> CopyFromState in this patch, I think we should move CopyFromStateData\n> and CopyToStateData to commands/copy.h, what do you think?\n\nYes, I basically agree with that, where we move CopyFromStateData to\nmight depend on how we define COPY FROM APIs though. 
I think we can\nmove CopyToStateData to copy.h at least.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Dec 2023 20:48:18 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoCvjGserrtEU=UcA3Mfyfe6ftf9OXPHv9fiJ9DmXMJ2nQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Dec 2023 10:57:15 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> IIUC we cannot create two same name functions with the same arguments\n> but a different return value type in the first place. It seems to me\n> to be an overkill to change such a design.\n\nOh, sorry. I didn't notice it.\n\n> Another idea is to encapsulate copy_to/from_handler by a super class\n> like copy_handler. The handler function is called with an argument,\n> say copyto, and returns copy_handler encapsulating either\n> copy_to/from_handler depending on the argument.\n\nIt's for using \"${copy_format_name}\" such as \"json\" and\n\"parquet\" as a function name, right? If we use the\n\"${copy_format_name}\" approach, we can't use function names\nthat are already used by tablesample method handler such as\n\"system\" and \"bernoulli\" for COPY FORMAT name. Because both\nof tablesample method handler function and COPY FORMAT\nhandler function use \"(internal)\" as arguments.\n\nI think that tablesample method names and COPY FORMAT names\nwill not be conflicted but the limitation (using the same\nnamespace for tablesample method and COPY FORMAT) is\nunnecessary limitation.\n\nHow about using prefix (\"copy_to_${copy_format_name}\" or\nsomething) or suffix (\"${copy_format_name}_copy_to\" or\nsomething) for function names? For example,\n\"copy_to_json\"/\"copy_from_json\" for \"json\" COPY FORMAT.\n\n(\"copy_${copy_format_name}\" that returns copy_handler\nencapsulating either copy_to/from_handler depending on the\nargument may be an option.)\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 14 Dec 2023 18:44:14 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 14, 2023 at 6:44 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoCvjGserrtEU=UcA3Mfyfe6ftf9OXPHv9fiJ9DmXMJ2nQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Dec 2023 10:57:15 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > IIUC we cannot create two same name functions with the same arguments\n> > but a different return value type in the first place. It seems to me\n> > to be an overkill to change such a design.\n>\n> Oh, sorry. I didn't notice it.\n>\n> > Another idea is to encapsulate copy_to/from_handler by a super class\n> > like copy_handler. The handler function is called with an argument,\n> > say copyto, and returns copy_handler encapsulating either\n> > copy_to/from_handler depending on the argument.\n>\n> It's for using \"${copy_format_name}\" such as \"json\" and\n> \"parquet\" as a function name, right?\n\nRight.\n\n> If we use the\n> \"${copy_format_name}\" approach, we can't use function names\n> that are already used by tablesample method handler such as\n> \"system\" and \"bernoulli\" for COPY FORMAT name. 
Because both\n> of tablesample method handler function and COPY FORMAT\n> handler function use \"(internal)\" as arguments.\n>\n> I think that tablesample method names and COPY FORMAT names\n> will not be conflicted but the limitation (using the same\n> namespace for tablesample method and COPY FORMAT) is\n> unnecessary limitation.\n\nPresumably, such function name collisions are not limited to\ntablesample and copy, but apply to all functions that have an\n\"internal\" argument. To avoid collisions, extensions can be created in\na different schema than public. And note that built-in format copy\nhandler doesn't need to declare its handler function.\n\n>\n> How about using prefix (\"copy_to_${copy_format_name}\" or\n> something) or suffix (\"${copy_format_name}_copy_to\" or\n> something) for function names? For example,\n> \"copy_to_json\"/\"copy_from_json\" for \"json\" COPY FORMAT.\n>\n> (\"copy_${copy_format_name}\" that returns copy_handler\n> encapsulating either copy_to/from_handler depending on the\n> argument may be an option.)\n\nWhile there is a way to avoid collision as I mentioned above, I can\nsee the point that we might want to avoid using a generic function\nname such as \"arrow\" and \"parquet\" as custom copy handler functions.\nAdding a prefix or suffix would be one option but to give extensions\nmore flexibility, another option would be to support format = 'custom'\nand add the \"handler\" option to specify a copy handler function name\nto call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\nHANDLER = 'arrow_copy_handler').\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 15 Dec 2023 05:19:43 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoCZv3cVU+NxR2s9J_dWvjrS350GFFr2vMgCH8wWxQ5hTQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 15 Dec 2023 05:19:43 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> To avoid collisions, extensions can be created in a\n> different schema than public.\n\nThanks. I didn't notice it.\n\n> And note that built-in format copy handler doesn't need to\n> declare its handler function.\n\nRight. I know it.\n\n> Adding a prefix or suffix would be one option but to give extensions\n> more flexibility, another option would be to support format = 'custom'\n> and add the \"handler\" option to specify a copy handler function name\n> to call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\n> HANDLER = 'arrow_copy_handler').\n\nInteresting. If we use this option, users can choose an COPY\nFORMAT implementation they like from multiple\nimplementations. For example, a developer may implement a\nCOPY FROM FORMAT = 'json' handler with PostgreSQL's JSON\nrelated API and another developer may implement a handler\nwith simdjson[1] which is a fast JSON parser. Users can\nchoose whichever they like.\n\nBut specifying HANDLER = '...' explicitly is a bit\ninconvenient. Because only one handler will be installed in\nmost use cases. In the case, users don't need to choose one\nhandler.\n\nIf we choose this option, it may be better that we also\nprovide a mechanism that can work without HANDLER. 
Searching\na function by name like tablesample method does is an option.\n\n\n[1]: https://github.com/simdjson/simdjson\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 15 Dec 2023 09:53:05 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn \n <OS3PR01MB9882F023300EDC5AFD8A8339F58EA@OS3PR01MB9882.jpnprd01.prod.outlook.com>\n \"RE: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 12 Dec 2023 02:31:53 +0000,\n \"Hayato Kuroda (Fujitsu)\" <[email protected]> wrote:\n\n>> Can we discuss how to proceed this improvement?\n>> \n>> There are 2 approaches for it:\n>> \n>> 1. Do the followings concurrently:\n>> a. Implementing small changes that got a consensus and\n>> merge them step-by-step\n>> (e.g. We got a consensus that we need to extract the\n>> current format related routines.)\n>> b. Discuss design\n>> \n>> (v1-v3 patches use this approach.)\n>> \n>> 2. Implement one (large) complete patch set with design\n>> discussion and merge it\n>> \n>> (v4- patches use this approach.)\n>> \n>> Which approach is preferred? (Or should we choose another\n>> approach?)\n>> \n>> I thought that 1. is preferred because it will reduce review\n>> cost. So I chose 1.\n> \n> I'm ok to use approach 1, but could you please divide a large patch? E.g.,\n> \n> 0001. defines an infrastructure for copy-API\n> 0002. adjusts current codes to use APIs\n> 0003. adds a test module in src/test/modules or contrib.\n> ...\n> \n> This approach helps reviewers to see patches deeper. Separated patches can be\n> combined when they are close to committable.\n\nIt seems that I should have chosen another approach based on\ncomments so far:\n\n3. Do the followings in order:\n a. Implement a workable (but maybe dirty and/or incomplete)\n implementation to discuss design like [1], discuss\n design with it and get a consensus on design\n b. Implement small patches based on the design\n\n[1]: https://www.postgresql.org/message-id/CAD21AoCunywHird3GaPzWe6s9JG1wzxj3Cr6vGN36DDheGjOjA%40mail.gmail.com \n\nI'll implement a custom COPY FORMAT handler with [1] and\nprovide a feedback with the experience. (It's for a.)\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 15 Dec 2023 11:55:18 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 15, 2023 at 8:53 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoCZv3cVU+NxR2s9J_dWvjrS350GFFr2vMgCH8wWxQ5hTQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 15 Dec 2023 05:19:43 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > To avoid collisions, extensions can be created in a\n> > different schema than public.\n>\n> Thanks. I didn't notice it.\n>\n> > And note that built-in format copy handler doesn't need to\n> > declare its handler function.\n>\n> Right. I know it.\n>\n> > Adding a prefix or suffix would be one option but to give extensions\n> > more flexibility, another option would be to support format = 'custom'\n> > and add the \"handler\" option to specify a copy handler function name\n> > to call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\n> > HANDLER = 'arrow_copy_handler').\n>\nI like the prefix/suffix idea, easy to implement. 
*custom* is not a FORMAT,\nand user has to know the name of the specific handler names, not\nintuitive.\n\n> Interesting. If we use this option, users can choose an COPY\n> FORMAT implementation they like from multiple\n> implementations. For example, a developer may implement a\n> COPY FROM FORMAT = 'json' handler with PostgreSQL's JSON\n> related API and another developer may implement a handler\n> with simdjson[1] which is a fast JSON parser. Users can\n> choose whichever they like.\nNot sure about this, why not move Json copy handler to contrib\nas an example for others, any extensions share the same format\nfunction name and just install one? No bound would implement\nanother CSV or TEXT copy handler IMHO.\n>\n> But specifying HANDLER = '...' explicitly is a bit\n> inconvenient. Because only one handler will be installed in\n> most use cases. In the case, users don't need to choose one\n> handler.\n>\n> If we choose this option, it may be better that we also\n> provide a mechanism that can work without HANDLER. Searching\n> a function by name like tablesample method does is an option.\n>\n>\n> [1]: https://github.com/simdjson/simdjson\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 15 Dec 2023 11:27:30 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 15, 2023 at 9:53 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoCZv3cVU+NxR2s9J_dWvjrS350GFFr2vMgCH8wWxQ5hTQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 15 Dec 2023 05:19:43 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > To avoid collisions, extensions can be created in a\n> > different schema than public.\n>\n> Thanks. I didn't notice it.\n>\n> > And note that built-in format copy handler doesn't need to\n> > declare its handler function.\n>\n> Right. I know it.\n>\n> > Adding a prefix or suffix would be one option but to give extensions\n> > more flexibility, another option would be to support format = 'custom'\n> > and add the \"handler\" option to specify a copy handler function name\n> > to call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\n> > HANDLER = 'arrow_copy_handler').\n>\n> Interesting. If we use this option, users can choose an COPY\n> FORMAT implementation they like from multiple\n> implementations. For example, a developer may implement a\n> COPY FROM FORMAT = 'json' handler with PostgreSQL's JSON\n> related API and another developer may implement a handler\n> with simdjson[1] which is a fast JSON parser. Users can\n> choose whichever they like.\n>\n> But specifying HANDLER = '...' explicitly is a bit\n> inconvenient. Because only one handler will be installed in\n> most use cases. In the case, users don't need to choose one\n> handler.\n>\n> If we choose this option, it may be better that we also\n> provide a mechanism that can work without HANDLER. Searching\n> a function by name like tablesample method does is an option.\n\nAgreed. We can search the function by format name by default and the\nuser can optionally specify the handler function name in case where\nthe names of the installed custom copy handler collide. 
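For illustration, a minimal sketch of how that default lookup plus an
optional HANDLER override could work (the function name, the missing
schema qualification and the option handling are assumptions for the
example, not from any posted patch):

----
#include "postgres.h"

#include "catalog/pg_type.h"
#include "nodes/pg_list.h"
#include "nodes/value.h"
#include "parser/parse_func.h"

/*
 * Hypothetical sketch: resolve the COPY format handler function.
 * By default the function is looked up by the FORMAT name itself;
 * an explicit HANDLER option (if given) overrides that, which is
 * enough to disambiguate two extensions installing the same name.
 * Schema-qualified HANDLER values are not handled here.
 */
static Oid
lookup_copy_format_handler(const char *format, const char *handler)
{
	List	   *funcname;
	Oid			argtypes[1] = {INTERNALOID};

	if (handler != NULL)
		funcname = list_make1(makeString(pstrdup(handler)));
	else
		funcname = list_make1(makeString(pstrdup(format)));

	/* missing_ok = false: error out if no such function exists */
	return LookupFuncName(funcname, 1, argtypes, false);
}
----

If we later settle on a naming convention like copy_to_${format} /
copy_from_${format} as discussed upthread, only the way funcname is
built here would change.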
Probably the\nhandler option stuff could be a follow-up patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 15 Dec 2023 12:48:17 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3JuShA6g19Nt_Ejk15BrNA6PmeCbK7p81izZi71muGq3g@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 15 Dec 2023 11:27:30 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n>> > Adding a prefix or suffix would be one option but to give extensions\n>> > more flexibility, another option would be to support format = 'custom'\n>> > and add the \"handler\" option to specify a copy handler function name\n>> > to call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\n>> > HANDLER = 'arrow_copy_handler').\n>>\n> I like the prefix/suffix idea, easy to implement. *custom* is not a FORMAT,\n> and user has to know the name of the specific handler names, not\n> intuitive.\n\nAh! I misunderstood this idea. \"custom\" is the special\nformat to use \"HANDLER\". I thought that we can use it like\n\n (FORMAT = 'arrow', HANDLER = 'arrow_copy_handler_impl1')\n\nand\n\n (FORMAT = 'arrow', HANDLER = 'arrow_copy_handler_impl2')\n\n.\n\n>> Interesting. If we use this option, users can choose an COPY\n>> FORMAT implementation they like from multiple\n>> implementations. For example, a developer may implement a\n>> COPY FROM FORMAT = 'json' handler with PostgreSQL's JSON\n>> related API and another developer may implement a handler\n>> with simdjson[1] which is a fast JSON parser. Users can\n>> choose whichever they like.\n> Not sure about this, why not move Json copy handler to contrib\n> as an example for others, any extensions share the same format\n> function name and just install one? No bound would implement\n> another CSV or TEXT copy handler IMHO.\n\nI should have used a different format not JSON as an example\nfor easy to understand. I just wanted to say that extension\ndevelopers can implement another implementation without\nconflicting another implementation.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 15 Dec 2023 13:45:31 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 15, 2023 at 12:45 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3JuShA6g19Nt_Ejk15BrNA6PmeCbK7p81izZi71muGq3g@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 15 Dec 2023 11:27:30 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> >> > Adding a prefix or suffix would be one option but to give extensions\n> >> > more flexibility, another option would be to support format = 'custom'\n> >> > and add the \"handler\" option to specify a copy handler function name\n> >> > to call. For example, COPY ... FROM ... WITH (FORMAT = 'custom',\n> >> > HANDLER = 'arrow_copy_handler').\n> >>\n> > I like the prefix/suffix idea, easy to implement. *custom* is not a FORMAT,\n> > and user has to know the name of the specific handler names, not\n> > intuitive.\n>\n> Ah! I misunderstood this idea. \"custom\" is the special\n> format to use \"HANDLER\". 
I thought that we can use it like\n>\n> (FORMAT = 'arrow', HANDLER = 'arrow_copy_handler_impl1')\n>\n> and\n>\n> (FORMAT = 'arrow', HANDLER = 'arrow_copy_handler_impl2')\n>\n> .\n>\n> >> Interesting. If we use this option, users can choose an COPY\n> >> FORMAT implementation they like from multiple\n> >> implementations. For example, a developer may implement a\n> >> COPY FROM FORMAT = 'json' handler with PostgreSQL's JSON\n> >> related API and another developer may implement a handler\n> >> with simdjson[1] which is a fast JSON parser. Users can\n> >> choose whichever they like.\n> > Not sure about this, why not move Json copy handler to contrib\n> > as an example for others, any extensions share the same format\n> > function name and just install one? No bound would implement\n> > another CSV or TEXT copy handler IMHO.\n>\n> I should have used a different format not JSON as an example\n> for easy to understand. I just wanted to say that extension\n> developers can implement another implementation without\n> conflicting another implementation.\n\nYeah, I can see the value of the HANDLER option now. The possibility\nof two extensions for the same format using same hanlder name should\nbe rare I guess ;)\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 15 Dec 2023 14:02:49 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoCunywHird3GaPzWe6s9JG1wzxj3Cr6vGN36DDheGjOjA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Dec 2023 23:31:29 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> I've sketched the above idea including a test module in\n> src/test/module/test_copy_format, based on v2 patch. It's not splitted\n> and is dirty so just for discussion.\n\nI implemented a sample COPY TO handler for Apache Arrow that\nsupports only integer and text.\n\nI needed to extend the patch:\n\n1. Add an opaque space for custom COPY TO handler\n * Add CopyToState{Get,Set}Opaque()\n https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n\n2. Export CopyToState::attnumlist\n * Add CopyToStateGetAttNumList()\n https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n\n3. Export CopySend*()\n * Rename CopySend*() to CopyToStateSend*() and export them\n * Exception: CopySendEndOfRow() to CopyToStateFlush() because\n it just flushes the internal buffer now.\n https://github.com/kou/postgres/commit/289a5640135bde6733a1b8e2c412221ad522901e\n\nThe attached patch is based on the Sawada-san's patch and\nincludes the above changes. Note that this patch is also\ndirty so just for discussion.\n\nMy suggestions from this experience:\n\n1. Split COPY handler to COPY TO handler and COPY FROM handler\n\n * CopyFormatRoutine is a bit tricky. An extension needs\n to create a CopyFormatRoutine node and\n a CopyToFormatRoutine node.\n\n * If we just require \"copy_to_${FORMAT}(internal)\"\n function and \"copy_from_${FORMAT}(internal)\" function,\n we can remove the tricky approach. And it also avoid\n name collisions with other handler such as tablesample\n handler.\n See also:\n https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com#af71f364d0a9f5c144e45b447e5c16c9\n\n2. 
Need an opaque space like IndexScanDesc::opaque does\n\n * A custom COPY TO handler needs to keep its data\n\n3. Export CopySend*()\n\n * If we like minimum API, we just need to export\n CopySendData() and CopySendEndOfRow(). But\n CopySend{String,Char,Int32,Int16}() will be convenient\n custom COPY TO handlers. (A custom COPY TO handler for\n Apache Arrow doesn't need them.)\n\nQuestions:\n\n1. What value should be used for \"format\" in\n PgMsg_CopyOutResponse message?\n\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/copyto.c;h=c66a047c4a79cc614784610f385f1cd0935350f3;hb=9ca6e7b9411e36488ef539a2c1f6846ac92a7072#l144\n\n It's 1 for binary format and 0 for text/csv format.\n\n Should we make it customizable by custom COPY TO handler?\n If so, what value should be used for this?\n\n2. Do we need more tries for design discussion for the first\n implementation? If we need, what should we try?\n\n\nThanks,\n-- \nkou", "msg_date": "Thu, 21 Dec 2023 18:35:04 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 21, 2023 at 06:35:04PM +0900, Sutou Kouhei wrote:\n> * If we just require \"copy_to_${FORMAT}(internal)\"\n> function and \"copy_from_${FORMAT}(internal)\" function,\n> we can remove the tricky approach. And it also avoid\n> name collisions with other handler such as tablesample\n> handler.\n> See also:\n> https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com#af71f364d0a9f5c144e45b447e5c16c9\n\nHmm. I prefer the unique name approach for the COPY portions without\nenforcing any naming policy on the function names returning the\nhandlers, actually, though I can see your point.\n\n> 2. Need an opaque space like IndexScanDesc::opaque does\n> \n> * A custom COPY TO handler needs to keep its data\n\nSounds useful to me to have a private area passed down to the\ncallbacks.\n\n> 3. Export CopySend*()\n> \n> * If we like minimum API, we just need to export\n> CopySendData() and CopySendEndOfRow(). But\n> CopySend{String,Char,Int32,Int16}() will be convenient\n> custom COPY TO handlers. (A custom COPY TO handler for\n> Apache Arrow doesn't need them.)\n\nHmm. Not sure on this one. This may come down to externalize the\nmanipulation of fe_msgbuf. Particularly, could it be possible that\nsome custom formats don't care at all about the network order?\n\n> Questions:\n> \n> 1. What value should be used for \"format\" in\n> PgMsg_CopyOutResponse message?\n> \n> It's 1 for binary format and 0 for text/csv format.\n> \n> Should we make it customizable by custom COPY TO handler?\n> If so, what value should be used for this?\n\nInteresting point. It looks very tempting to give more flexibility to\npeople who'd like to use their own code as we have one byte in the\nprotocol but just use 0/1. Hence it feels natural to have a callback\nfor that.\n\nIt also means that we may want to think harder about copy_is_binary in\nlibpq in the future step. Now, having a backend implementation does\nnot need any libpq bits, either, because a client stack may just want\nto speak the Postgres protocol directly. Perhaps a custom COPY\nimplementation would be OK with how things are in libpq, as well,\ntweaking its way through with just text or binary.\n\n> 2. Do we need more tries for design discussion for the first\n> implementation? 
If we need, what should we try?\n\nA makeNode() is used with an allocation in the current memory context\nin the function returning the handler. I would have assume that this\nstuff returns a handler as a const struct like table AMs.\n--\nMichael", "msg_date": "Fri, 22 Dec 2023 10:00:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Dec 22, 2023 at 10:00 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Dec 21, 2023 at 06:35:04PM +0900, Sutou Kouhei wrote:\n> > * If we just require \"copy_to_${FORMAT}(internal)\"\n> > function and \"copy_from_${FORMAT}(internal)\" function,\n> > we can remove the tricky approach. And it also avoid\n> > name collisions with other handler such as tablesample\n> > handler.\n> > See also:\n> > https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com#af71f364d0a9f5c144e45b447e5c16c9\n>\n> Hmm. I prefer the unique name approach for the COPY portions without\n> enforcing any naming policy on the function names returning the\n> handlers, actually, though I can see your point.\n\nYeah, another idea is to provide support functions to return a\nCopyFormatRoutine wrapping either CopyToFormatRoutine or\nCopyFromFormatRoutine. For example:\n\nextern CopyFormatRoutine *MakeCopyToFormatRoutine(const\nCopyToFormatRoutine *routine);\n\nextensions can do like:\n\nstatic const CopyToFormatRoutine testfmt_handler = {\n .type = T_CopyToFormatRoutine,\n .start_fn = testfmt_copyto_start,\n .onerow_fn = testfmt_copyto_onerow,\n .end_fn = testfmt_copyto_end\n};\n\nDatum\ncopy_testfmt_handler(PG_FUNCTION_ARGS)\n{\n CopyFormatRoutine *routine = MakeCopyToFormatRoutine(&testfmt_handler);\n :\n\n>\n> > 2. Need an opaque space like IndexScanDesc::opaque does\n> >\n> > * A custom COPY TO handler needs to keep its data\n>\n> Sounds useful to me to have a private area passed down to the\n> callbacks.\n>\n\n+1\n\n>\n> > Questions:\n> >\n> > 1. What value should be used for \"format\" in\n> > PgMsg_CopyOutResponse message?\n> >\n> > It's 1 for binary format and 0 for text/csv format.\n> >\n> > Should we make it customizable by custom COPY TO handler?\n> > If so, what value should be used for this?\n>\n> Interesting point. It looks very tempting to give more flexibility to\n> people who'd like to use their own code as we have one byte in the\n> protocol but just use 0/1. Hence it feels natural to have a callback\n> for that.\n\n+1\n\n>\n> It also means that we may want to think harder about copy_is_binary in\n> libpq in the future step. Now, having a backend implementation does\n> not need any libpq bits, either, because a client stack may just want\n> to speak the Postgres protocol directly. Perhaps a custom COPY\n> implementation would be OK with how things are in libpq, as well,\n> tweaking its way through with just text or binary.\n>\n> > 2. Do we need more tries for design discussion for the first\n> > implementation? If we need, what should we try?\n>\n> A makeNode() is used with an allocation in the current memory context\n> in the function returning the handler. 
I would have assume that this\n> stuff returns a handler as a const struct like table AMs.\n\n+1\n\nThe example I mentioned above does that.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Dec 2023 10:23:28 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 21, 2023 at 6:35 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoCunywHird3GaPzWe6s9JG1wzxj3Cr6vGN36DDheGjOjA@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Dec 2023 23:31:29 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > I've sketched the above idea including a test module in\n> > src/test/module/test_copy_format, based on v2 patch. It's not splitted\n> > and is dirty so just for discussion.\n>\n> I implemented a sample COPY TO handler for Apache Arrow that\n> supports only integer and text.\n>\n> I needed to extend the patch:\n>\n> 1. Add an opaque space for custom COPY TO handler\n> * Add CopyToState{Get,Set}Opaque()\n> https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n>\n> 2. Export CopyToState::attnumlist\n> * Add CopyToStateGetAttNumList()\n> https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n\nI think we can move CopyToState to copy.h and we don't need to have\nset/get functions for its fields.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Dec 2023 10:48:18 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Dec 21, 2023 at 5:35 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoCunywHird3GaPzWe6s9JG1wzxj3Cr6vGN36DDheGjOjA@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Dec 2023 23:31:29 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > I've sketched the above idea including a test module in\n> > src/test/module/test_copy_format, based on v2 patch. It's not splitted\n> > and is dirty so just for discussion.\n>\n> I implemented a sample COPY TO handler for Apache Arrow that\n> supports only integer and text.\n>\n> I needed to extend the patch:\n>\n> 1. Add an opaque space for custom COPY TO handler\n> * Add CopyToState{Get,Set}Opaque()\n> https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n>\n> 2. Export CopyToState::attnumlist\n> * Add CopyToStateGetAttNumList()\n> https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n>\n> 3. Export CopySend*()\n> * Rename CopySend*() to CopyToStateSend*() and export them\n> * Exception: CopySendEndOfRow() to CopyToStateFlush() because\n> it just flushes the internal buffer now.\n> https://github.com/kou/postgres/commit/289a5640135bde6733a1b8e2c412221ad522901e\n>\nI guess the purpose of these helpers is to avoid expose CopyToState to\ncopy.h, but I\nthink expose CopyToState to user might make life easier, users might want to use\nthe memory contexts of the structure (though I agree not all the\nfields are necessary\nfor extension handers).\n\n> The attached patch is based on the Sawada-san's patch and\n> includes the above changes. 
Note that this patch is also\n> dirty so just for discussion.\n>\n> My suggestions from this experience:\n>\n> 1. Split COPY handler to COPY TO handler and COPY FROM handler\n>\n> * CopyFormatRoutine is a bit tricky. An extension needs\n> to create a CopyFormatRoutine node and\n> a CopyToFormatRoutine node.\n>\n> * If we just require \"copy_to_${FORMAT}(internal)\"\n> function and \"copy_from_${FORMAT}(internal)\" function,\n> we can remove the tricky approach. And it also avoid\n> name collisions with other handler such as tablesample\n> handler.\n> See also:\n> https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com#af71f364d0a9f5c144e45b447e5c16c9\n>\n> 2. Need an opaque space like IndexScanDesc::opaque does\n>\n> * A custom COPY TO handler needs to keep its data\n\nI once thought users might want to parse their own options, maybe this\nis a use case for this opaque space.\n\nFor the name, I thought private_data might be a better candidate than\nopaque, but I do not insist.\n>\n> 3. Export CopySend*()\n>\n> * If we like minimum API, we just need to export\n> CopySendData() and CopySendEndOfRow(). But\n> CopySend{String,Char,Int32,Int16}() will be convenient\n> custom COPY TO handlers. (A custom COPY TO handler for\n> Apache Arrow doesn't need them.)\n\nDo you use the arrow library to control the memory? Is there a way that\nwe can let the arrow use postgres' memory context? I'm not sure this\nis necessary, just raise the question for discussion.\n>\n> Questions:\n>\n> 1. What value should be used for \"format\" in\n> PgMsg_CopyOutResponse message?\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/copyto.c;h=c66a047c4a79cc614784610f385f1cd0935350f3;hb=9ca6e7b9411e36488ef539a2c1f6846ac92a7072#l144\n>\n> It's 1 for binary format and 0 for text/csv format.\n>\n> Should we make it customizable by custom COPY TO handler?\n> If so, what value should be used for this?\n>\n> 2. Do we need more tries for design discussion for the first\n> implementation? If we need, what should we try?\n>\n>\n> Thanks,\n> --\n> kou\n\n+PG_FUNCTION_INFO_V1(copy_testfmt_handler);\n+Datum\n+copy_testfmt_handler(PG_FUNCTION_ARGS)\n+{\n+ bool is_from = PG_GETARG_BOOL(0);\n+ CopyFormatRoutine *cp = makeNode(CopyFormatRoutine);;\n+\n\nextra semicolon.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 22 Dec 2023 10:58:05 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 22 Dec 2023 10:00:24 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> 3. Export CopySend*()\n>> \n>> * If we like minimum API, we just need to export\n>> CopySendData() and CopySendEndOfRow(). But\n>> CopySend{String,Char,Int32,Int16}() will be convenient\n>> custom COPY TO handlers. (A custom COPY TO handler for\n>> Apache Arrow doesn't need them.)\n> \n> Hmm. Not sure on this one. This may come down to externalize the\n> manipulation of fe_msgbuf. Particularly, could it be possible that\n> some custom formats don't care at all about the network order?\n\nIt means that all custom formats should control byte order\nby themselves instead of using CopySendInt*() that always\nuse network byte order, right? It makes sense. Let's export\nonly CopySendData() and CopySendEndOfRow().\n\n\n>> 1. 
What value should be used for \"format\" in\n>> PgMsg_CopyOutResponse message?\n>> \n>> It's 1 for binary format and 0 for text/csv format.\n>> \n>> Should we make it customizable by custom COPY TO handler?\n>> If so, what value should be used for this?\n> \n> Interesting point. It looks very tempting to give more flexibility to\n> people who'd like to use their own code as we have one byte in the\n> protocol but just use 0/1. Hence it feels natural to have a callback\n> for that.\n\nOK. Let's add a callback something like:\n\ntypedef int16 (*CopyToGetFormat_function) (CopyToState cstate);\n\n> It also means that we may want to think harder about copy_is_binary in\n> libpq in the future step. Now, having a backend implementation does\n> not need any libpq bits, either, because a client stack may just want\n> to speak the Postgres protocol directly. Perhaps a custom COPY\n> implementation would be OK with how things are in libpq, as well,\n> tweaking its way through with just text or binary.\n\nCan we defer this discussion after we commit a basic custom\nCOPY format handler mechanism?\n\n>> 2. Do we need more tries for design discussion for the first\n>> implementation? If we need, what should we try?\n> \n> A makeNode() is used with an allocation in the current memory context\n> in the function returning the handler. I would have assume that this\n> stuff returns a handler as a const struct like table AMs.\n\nIf we use this approach, we can't use the Sawada-san's\nidea[1] that provides a convenient API to hide\nCopyFormatRoutine internal. The idea provides\nMakeCopy{To,From}FormatRoutine(). They return a new\nCopyFormatRoutine* with suitable is_from member. They can't\nuse static const CopyFormatRoutine because they may be called\nmultiple times in the same process.\n\nWe can use the satic const struct approach by choosing one\nof the followings:\n\n1. Use separated function for COPY {TO,FROM} format handlers\n as I suggested.\n\n2. Don't provide convenient API. Developers construct\n CopyFormatRoutine by themselves. But it may be a bit\n tricky.\n\n3. Similar to 2. but don't use a bit tricky approach (don't\n embed Copy{To,From}FormatRoutine nodes into\n CopyFormatRoutine).\n\n Use unified function for COPY {TO,FROM} format handlers\n but CopyFormatRoutine always have both of COPY {TO,FROM}\n format routines and these routines aren't nodes:\n\n typedef struct CopyToFormatRoutine\n {\n CopyToStart_function start_fn;\n CopyToOneRow_function onerow_fn;\n CopyToEnd_function end_fn;\n } CopyToFormatRoutine;\n\n /* XXX: just copied from COPY TO routines */\n typedef struct CopyFromFormatRoutine\n {\n CopyFromStart_function start_fn;\n CopyFromOneRow_function onerow_fn;\n CopyFromEnd_function end_fn;\n } CopyFromFormatRoutine;\n\n typedef struct CopyFormatRoutine\n {\n NodeTag\t\ttype;\n\n CopyToFormatRoutine\t to_routine;\n CopyFromFormatRoutine\t from_routine;\n } CopyFormatRoutine;\n\n ----\n\n static const CopyFormatRoutine testfmt_handler = {\n .type = T_CopyFormatRoutine,\n .to_routine = {\n .start_fn = testfmt_copyto_start,\n .onerow_fn = testfmt_copyto_onerow,\n .end_fn = testfmt_copyto_end,\n },\n .from_routine = {\n .start_fn = testfmt_copyfrom_start,\n .onerow_fn = testfmt_copyfrom_onerow,\n .end_fn = testfmt_copyfrom_end,\n },\n };\n\n PG_FUNCTION_INFO_V1(copy_testfmt_handler);\n Datum\n copy_testfmt_handler(PG_FUNCTION_ARGS)\n {\n PG_RETURN_POINTER(&testfmt_handler);\n }\n\n4. ... 
other idea?\n\n\n[1] https://www.postgresql.org/message-id/flat/CAD21AoDs9cOjuVbA_krGizAdc50KE%2BFjAuEXWF0NZwbMnc7F3Q%40mail.gmail.com#71bb03d9237252382b245dd33e705a3a\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 10 Jan 2024 12:00:34 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoD=UapH4Wh06G6H5XAzPJ0iJg9YcW8r7E2UEJkZ8QsosA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 22 Dec 2023 10:48:18 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n>> I needed to extend the patch:\n>>\n>> 1. Add an opaque space for custom COPY TO handler\n>> * Add CopyToState{Get,Set}Opaque()\n>> https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n>>\n>> 2. Export CopyToState::attnumlist\n>> * Add CopyToStateGetAttNumList()\n>> https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n> \n> I think we can move CopyToState to copy.h and we don't need to have\n> set/get functions for its fields.\n\nI don't object the idea if other PostgreSQL developers\nprefer the approach. Is there any PostgreSQL developer who\nobjects that we export Copy{To,From}StateData as public API?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 10 Jan 2024 12:06:44 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3+jG_NKOUmcxDyEX2xSggBXReZ4H=e3RFsUtedY88A03w@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 22 Dec 2023 10:58:05 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n>> 1. Add an opaque space for custom COPY TO handler\n>> * Add CopyToState{Get,Set}Opaque()\n>> https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n>>\n>> 2. Export CopyToState::attnumlist\n>> * Add CopyToStateGetAttNumList()\n>> https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n>>\n>> 3. Export CopySend*()\n>> * Rename CopySend*() to CopyToStateSend*() and export them\n>> * Exception: CopySendEndOfRow() to CopyToStateFlush() because\n>> it just flushes the internal buffer now.\n>> https://github.com/kou/postgres/commit/289a5640135bde6733a1b8e2c412221ad522901e\n>>\n> I guess the purpose of these helpers is to avoid expose CopyToState to\n> copy.h,\n\nYes.\n\n> but I\n> think expose CopyToState to user might make life easier, users might want to use\n> the memory contexts of the structure (though I agree not all the\n> fields are necessary\n> for extension handers).\n\nOK. I don't object it as I said in another e-mail:\nhttps://www.postgresql.org/message-id/flat/20240110.120644.1876591646729327180.kou%40clear-code.com#d923173e9625c20319750155083cbd72\n\n>> 2. Need an opaque space like IndexScanDesc::opaque does\n>>\n>> * A custom COPY TO handler needs to keep its data\n> \n> I once thought users might want to parse their own options, maybe this\n> is a use case for this opaque space.\n\nGood catch! I forgot to suggest a callback for custom format\noptions. 
How about the following API?\n\n----\n...\ntypedef bool (*CopyToProcessOption_function) (CopyToState cstate, DefElem *defel);\n\n...\ntypedef bool (*CopyFromProcessOption_function) (CopyFromState cstate, DefElem *defel);\n\ntypedef struct CopyToFormatRoutine\n{\n\t...\n\tCopyToProcessOption_function process_option_fn;\n} CopyToFormatRoutine;\n\ntypedef struct CopyFromFormatRoutine\n{\n\t...\n\tCopyFromProcessOption_function process_option_fn;\n} CopyFromFormatRoutine;\n----\n\n----\ndiff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\nindex e7597894bf..1aa8b62551 100644\n--- a/src/backend/commands/copy.c\n+++ b/src/backend/commands/copy.c\n@@ -416,6 +416,7 @@ void\n ProcessCopyOptions(ParseState *pstate,\n \t\t\t\t CopyFormatOptions *opts_out,\n \t\t\t\t bool is_from,\n+\t\t\t\t void *cstate, /* CopyToState* for !is_from, CopyFromState* for is_from */\n \t\t\t\t List *options)\n {\n \tbool\t\tformat_specified = false;\n@@ -593,11 +594,19 @@ ProcessCopyOptions(ParseState *pstate,\n \t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n \t\t}\n \t\telse\n-\t\t\tereport(ERROR,\n-\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n-\t\t\t\t\t errmsg(\"option \\\"%s\\\" not recognized\",\n-\t\t\t\t\t\t\tdefel->defname),\n-\t\t\t\t\t parser_errposition(pstate, defel->location)));\n+\t\t{\n+\t\t\tbool processed;\n+\t\t\tif (is_from)\n+\t\t\t\tprocessed = opts_out->from_ops->process_option_fn(cstate, defel);\n+\t\t\telse\n+\t\t\t\tprocessed = opts_out->to_ops->process_option_fn(cstate, defel);\n+\t\t\tif (!processed)\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n+\t\t\t\t\t\t errmsg(\"option \\\"%s\\\" not recognized\",\n+\t\t\t\t\t\t\t\tdefel->defname),\n+\t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n+\t\t}\n \t}\n \n \t/*\n----\n\n> For the name, I thought private_data might be a better candidate than\n> opaque, but I do not insist.\n\nI don't have a strong opinion for this. Here are the number\nof headers that use \"private_data\" and \"opaque\":\n\n$ grep -r private_data --files-with-matches src/include | wc -l\n6\n$ grep -r opaque --files-with-matches src/include | wc -l\n38\n\nIt seems that we use \"opaque\" than \"private_data\" in general.\n\nbut it seems that we use\n\"opaque\" than \"private_data\" in our code.\n\n> Do you use the arrow library to control the memory?\n\nYes.\n\n> Is there a way that\n> we can let the arrow use postgres' memory context?\n\nYes. Apache Arrow C++ provides a memory pool feature and we\ncan implement PostgreSQL's memory context based memory pool\nfor this. (But this is a custom COPY TO/FROM handler's\nimplementation details.)\n\n> I'm not sure this\n> is necessary, just raise the question for discussion.\n\nCould you clarify what should we discuss? We should require\nthat COPY TO/FROM handlers should use PostgreSQL's memory\ncontext for all internal memory allocations?\n\n> +PG_FUNCTION_INFO_V1(copy_testfmt_handler);\n> +Datum\n> +copy_testfmt_handler(PG_FUNCTION_ARGS)\n> +{\n> + bool is_from = PG_GETARG_BOOL(0);\n> + CopyFormatRoutine *cp = makeNode(CopyFormatRoutine);;\n> +\n> \n> extra semicolon.\n\nI noticed it too :-)\nBut I ignored it because the current implementation is only\nfor discussion. 
We know that it may be dirty.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:20:23 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 10, 2024 at 12:00 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 22 Dec 2023 10:00:24 +0900,\n> Michael Paquier <[email protected]> wrote:\n>\n> >> 3. Export CopySend*()\n> >>\n> >> * If we like minimum API, we just need to export\n> >> CopySendData() and CopySendEndOfRow(). But\n> >> CopySend{String,Char,Int32,Int16}() will be convenient\n> >> custom COPY TO handlers. (A custom COPY TO handler for\n> >> Apache Arrow doesn't need them.)\n> >\n> > Hmm. Not sure on this one. This may come down to externalize the\n> > manipulation of fe_msgbuf. Particularly, could it be possible that\n> > some custom formats don't care at all about the network order?\n>\n> It means that all custom formats should control byte order\n> by themselves instead of using CopySendInt*() that always\n> use network byte order, right? It makes sense. Let's export\n> only CopySendData() and CopySendEndOfRow().\n>\n>\n> >> 1. What value should be used for \"format\" in\n> >> PgMsg_CopyOutResponse message?\n> >>\n> >> It's 1 for binary format and 0 for text/csv format.\n> >>\n> >> Should we make it customizable by custom COPY TO handler?\n> >> If so, what value should be used for this?\n> >\n> > Interesting point. It looks very tempting to give more flexibility to\n> > people who'd like to use their own code as we have one byte in the\n> > protocol but just use 0/1. Hence it feels natural to have a callback\n> > for that.\n>\n> OK. Let's add a callback something like:\n>\n> typedef int16 (*CopyToGetFormat_function) (CopyToState cstate);\n>\n> > It also means that we may want to think harder about copy_is_binary in\n> > libpq in the future step. Now, having a backend implementation does\n> > not need any libpq bits, either, because a client stack may just want\n> > to speak the Postgres protocol directly. Perhaps a custom COPY\n> > implementation would be OK with how things are in libpq, as well,\n> > tweaking its way through with just text or binary.\n>\n> Can we defer this discussion after we commit a basic custom\n> COPY format handler mechanism?\n>\n> >> 2. Do we need more tries for design discussion for the first\n> >> implementation? If we need, what should we try?\n> >\n> > A makeNode() is used with an allocation in the current memory context\n> > in the function returning the handler. I would have assume that this\n> > stuff returns a handler as a const struct like table AMs.\n>\n> If we use this approach, we can't use the Sawada-san's\n> idea[1] that provides a convenient API to hide\n> CopyFormatRoutine internal. The idea provides\n> MakeCopy{To,From}FormatRoutine(). They return a new\n> CopyFormatRoutine* with suitable is_from member. They can't\n> use static const CopyFormatRoutine because they may be called\n> multiple times in the same process.\n>\n> We can use the satic const struct approach by choosing one\n> of the followings:\n>\n> 1. Use separated function for COPY {TO,FROM} format handlers\n> as I suggested.\n>\n> 2. Don't provide convenient API. Developers construct\n> CopyFormatRoutine by themselves. But it may be a bit\n> tricky.\n>\n> 3. Similar to 2. 
but don't use a bit tricky approach (don't\n> embed Copy{To,From}FormatRoutine nodes into\n> CopyFormatRoutine).\n>\n> Use unified function for COPY {TO,FROM} format handlers\n> but CopyFormatRoutine always have both of COPY {TO,FROM}\n> format routines and these routines aren't nodes:\n>\n> typedef struct CopyToFormatRoutine\n> {\n> CopyToStart_function start_fn;\n> CopyToOneRow_function onerow_fn;\n> CopyToEnd_function end_fn;\n> } CopyToFormatRoutine;\n>\n> /* XXX: just copied from COPY TO routines */\n> typedef struct CopyFromFormatRoutine\n> {\n> CopyFromStart_function start_fn;\n> CopyFromOneRow_function onerow_fn;\n> CopyFromEnd_function end_fn;\n> } CopyFromFormatRoutine;\n>\n> typedef struct CopyFormatRoutine\n> {\n> NodeTag type;\n>\n> CopyToFormatRoutine to_routine;\n> CopyFromFormatRoutine from_routine;\n> } CopyFormatRoutine;\n>\n> ----\n>\n> static const CopyFormatRoutine testfmt_handler = {\n> .type = T_CopyFormatRoutine,\n> .to_routine = {\n> .start_fn = testfmt_copyto_start,\n> .onerow_fn = testfmt_copyto_onerow,\n> .end_fn = testfmt_copyto_end,\n> },\n> .from_routine = {\n> .start_fn = testfmt_copyfrom_start,\n> .onerow_fn = testfmt_copyfrom_onerow,\n> .end_fn = testfmt_copyfrom_end,\n> },\n> };\n>\n> PG_FUNCTION_INFO_V1(copy_testfmt_handler);\n> Datum\n> copy_testfmt_handler(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_POINTER(&testfmt_handler);\n> }\n>\n> 4. ... other idea?\n\nIt's a just idea but the fourth idea is to provide a convenient macro\nto make it easy to construct the CopyFormatRoutine. For example,\n\n#define COPYTO_ROUTINE(...) (Node *) &(CopyToFormatRoutine) {__VA_ARGS__}\n\nstatic const CopyFormatRoutine testfmt_copyto_handler = {\n .type = T_CopyFormatRoutine,\n .is_from = true,\n .routine = COPYTO_ROUTINE (\n .start_fn = testfmt_copyto_start,\n .onerow_fn = testfmt_copyto_onerow,\n .end_fn = testfmt_copyto_end\n )\n};\n\nDatum\ncopy_testfmt_handler(PG_FUNCTION_ARGS)\n{\n PG_RETURN_POINTER(& testfmt_copyto_handler);\n}\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:33:22 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoC_dhfS97DKwTL+2nvgBOYrmN9XVYrE8w2SuDgghb-yzg@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 10 Jan 2024 15:33:22 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n>> We can use the satic const struct approach by choosing one\n>> of the followings:\n>>\n>> ...\n>>\n>> 4. ... other idea?\n> \n> It's a just idea but the fourth idea is to provide a convenient macro\n> to make it easy to construct the CopyFormatRoutine. For example,\n> \n> #define COPYTO_ROUTINE(...) (Node *) &(CopyToFormatRoutine) {__VA_ARGS__}\n> \n> static const CopyFormatRoutine testfmt_copyto_handler = {\n> .type = T_CopyFormatRoutine,\n> .is_from = true,\n> .routine = COPYTO_ROUTINE (\n> .start_fn = testfmt_copyto_start,\n> .onerow_fn = testfmt_copyto_onerow,\n> .end_fn = testfmt_copyto_end\n> )\n> };\n> \n> Datum\n> copy_testfmt_handler(PG_FUNCTION_ARGS)\n> {\n> PG_RETURN_POINTER(& testfmt_copyto_handler);\n> }\n\nInteresting. 
But I feel that it introduces another (a bit)\ntricky mechanism...\n\nBTW, we also need to set .type:\n\n .routine = COPYTO_ROUTINE (\n .type = T_CopyToFormatRoutine,\n .start_fn = testfmt_copyto_start,\n .onerow_fn = testfmt_copyto_onerow,\n .end_fn = testfmt_copyto_end\n )\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:40:28 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 10, 2024 at 3:40 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoC_dhfS97DKwTL+2nvgBOYrmN9XVYrE8w2SuDgghb-yzg@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 10 Jan 2024 15:33:22 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> >> We can use the satic const struct approach by choosing one\n> >> of the followings:\n> >>\n> >> ...\n> >>\n> >> 4. ... other idea?\n> >\n> > It's a just idea but the fourth idea is to provide a convenient macro\n> > to make it easy to construct the CopyFormatRoutine. For example,\n> >\n> > #define COPYTO_ROUTINE(...) (Node *) &(CopyToFormatRoutine) {__VA_ARGS__}\n> >\n> > static const CopyFormatRoutine testfmt_copyto_handler = {\n> > .type = T_CopyFormatRoutine,\n> > .is_from = true,\n> > .routine = COPYTO_ROUTINE (\n> > .start_fn = testfmt_copyto_start,\n> > .onerow_fn = testfmt_copyto_onerow,\n> > .end_fn = testfmt_copyto_end\n> > )\n> > };\n> >\n> > Datum\n> > copy_testfmt_handler(PG_FUNCTION_ARGS)\n> > {\n> > PG_RETURN_POINTER(& testfmt_copyto_handler);\n> > }\n>\n> Interesting. But I feel that it introduces another (a bit)\n> tricky mechanism...\n\nRight. On the other hand, I don't think the idea 3 is good for the\nsame reason Michael-san pointed out before[1][2].\n\n>\n> BTW, we also need to set .type:\n>\n> .routine = COPYTO_ROUTINE (\n> .type = T_CopyToFormatRoutine,\n> .start_fn = testfmt_copyto_start,\n> .onerow_fn = testfmt_copyto_onerow,\n> .end_fn = testfmt_copyto_end\n> )\n\nI think it's fine as the same is true for table AM.\n\n[1] https://www.postgresql.org/message-id/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n[2] https://www.postgresql.org/message-id/ZXKm9tmnSPIVrqZz%40paquier.xyz\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 16:53:48 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoC4HVuxOrsX1fLwj=5hdEmjvZoQw6PJGzxqxHNnYSQUVQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 10 Jan 2024 16:53:48 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n>> Interesting. But I feel that it introduces another (a bit)\n>> tricky mechanism...\n> \n> Right. On the other hand, I don't think the idea 3 is good for the\n> same reason Michael-san pointed out before[1][2].\n>\n> [1] https://www.postgresql.org/message-id/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n> [2] https://www.postgresql.org/message-id/ZXKm9tmnSPIVrqZz%40paquier.xyz\n\nI think that the important part of the Michael-san's opinion\nis \"keep COPY TO implementation and COPY FROM implementation\nseparated for maintainability\".\n\nThe patch focused in [1][2] uses one routine for both of\nCOPY TO and COPY FROM. 
If we use the approach, we need to\nchange one common routine from copyto.c and copyfrom.c (or\nexport callbacks from copyto.c and copyfrom.c and use them\nin copyto.c to construct one common routine). It's\nthe problem.\n\nThe idea 3 still has separated routines for COPY TO and COPY\nFROM. So I think that it still keeps COPY TO implementation\nand COPY FROM implementation separated.\n\n>> BTW, we also need to set .type:\n>>\n>> .routine = COPYTO_ROUTINE (\n>> .type = T_CopyToFormatRoutine,\n>> .start_fn = testfmt_copyto_start,\n>> .onerow_fn = testfmt_copyto_onerow,\n>> .end_fn = testfmt_copyto_end\n>> )\n> \n> I think it's fine as the same is true for table AM.\n\nAh, sorry. I should have said explicitly. I don't this that\nit's not a problem. I just wanted to say that it's missing.\n\n\nDefining one more static const struct instead of providing a\nconvenient (but a bit tricky) macro may be straightforward:\n\nstatic const CopyToFormatRoutine testfmt_copyto_routine = {\n .type = T_CopyToFormatRoutine,\n .start_fn = testfmt_copyto_start,\n .onerow_fn = testfmt_copyto_onerow,\n .end_fn = testfmt_copyto_end\n};\n\nstatic const CopyFormatRoutine testfmt_copyto_handler = {\n .type = T_CopyFormatRoutine,\n .is_from = false,\n .routine = (Node *) &testfmt_copyto_routine\n};\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 11 Jan 2024 10:24:45 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nHere is the current summary for a this discussion to make\nCOPY format extendable. It's for reaching consensus and\nstarting implementing the feature. (I'll start implementing\nthe feature once we reach consensus.) If you have any\nopinion, please share it.\n\nConfirmed:\n\n1.1 Making COPY format extendable will not reduce performance.\n [1]\n\nDecisions:\n\n2.1 Use separated handler for COPY TO and COPY FROM because\n our COPY TO implementation (copyto.c) and COPY FROM\n implementation (coypfrom.c) are separated.\n [2]\n\n2.2 Don't use system catalog for COPY TO/FROM handlers. We can\n just use a function(internal) that returns a handler instead.\n [3]\n\n2.3 The implementation must include documentation.\n [5]\n\n2.4 The implementation must include test.\n [6]\n\n2.5 The implementation should be consist of small patches\n for easy to review.\n [6]\n\n2.7 Copy{To,From}State must have a opaque space for\n handlers.\n [8]\n\n2.8 Export CopySendData() and CopySendEndOfRow() for COPY TO\n handlers.\n [8]\n\n2.9 Make \"format\" in PgMsg_CopyOutResponse message\n extendable.\n [9]\n\n2.10 Make makeNode() call avoidable in function(internal)\n that returns COPY TO/FROM handler.\n [9]\n\n2.11 Custom COPY TO/FROM handlers must be able to parse\n their options.\n [11]\n\nDiscussing:\n\n3.1 Should we use one function(internal) for COPY TO/FROM\n handlers or two function(internal)s (one is for COPY TO\n handler and another is for COPY FROM handler)?\n [4]\n\n3.2 If we use separated function(internal) for COPY TO/FROM\n handlers, we need to define naming rule. For example,\n <method_name>_to(internal) for COPY TO handler and\n <method_name>_from(internal) for COPY FROM handler.\n [4]\n\n3.3 Should we use prefix or suffix for function(internal)\n name to avoid name conflict with other handlers such as\n tablesample handlers?\n [7]\n\n3.4 Should we export Copy{To,From}State? 
Or should we just\n provide getters/setters to access Copy{To,From}State\n internal?\n [10]\n\n\n[1] https://www.postgresql.org/message-id/flat/20231204.153548.2126325458835528809.kou%40clear-code.com\n[2] https://www.postgresql.org/message-id/flat/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n[3] https://www.postgresql.org/message-id/flat/CAD21AoAhcZkAp_WDJ4sSv_%2Bg2iCGjfyMFgeu7MxjnjX_FutZAg%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/flat/CAD21AoDkoGL6yJ_HjNOg9cU%3DaAdW8uQ3rSQOeRS0SX85LPPNwQ%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/flat/TY3PR01MB9889C9234CD220A3A7075F0DF589A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n[6] https://www.postgresql.org/message-id/flat/ZXbiPNriHHyUrcTF%40paquier.xyz\n[7] https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com\n[8] https://www.postgresql.org/message-id/flat/20231221.183504.1240642084042888377.kou%40clear-code.com\n[9] https://www.postgresql.org/message-id/flat/ZYTfqGppMc9e_w2k%40paquier.xyz\n[10] https://www.postgresql.org/message-id/flat/CAD21AoD%3DUapH4Wh06G6H5XAzPJ0iJg9YcW8r7E2UEJkZ8QsosA%40mail.gmail.com\n[11] https://www.postgresql.org/message-id/flat/20240110.152023.1920937326588672387.kou%40clear-code.com\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 12 Jan 2024 14:46:15 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 10, 2024 at 2:20 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3+jG_NKOUmcxDyEX2xSggBXReZ4H=e3RFsUtedY88A03w@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 22 Dec 2023 10:58:05 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> >> 1. Add an opaque space for custom COPY TO handler\n> >> * Add CopyToState{Get,Set}Opaque()\n> >> https://github.com/kou/postgres/commit/5a610b6a066243f971e029432db67152cfe5e944\n> >>\n> >> 2. Export CopyToState::attnumlist\n> >> * Add CopyToStateGetAttNumList()\n> >> https://github.com/kou/postgres/commit/15fcba8b4e95afa86edb3f677a7bdb1acb1e7688\n> >>\n> >> 3. Export CopySend*()\n> >> * Rename CopySend*() to CopyToStateSend*() and export them\n> >> * Exception: CopySendEndOfRow() to CopyToStateFlush() because\n> >> it just flushes the internal buffer now.\n> >> https://github.com/kou/postgres/commit/289a5640135bde6733a1b8e2c412221ad522901e\n> >>\n> > I guess the purpose of these helpers is to avoid expose CopyToState to\n> > copy.h,\n>\n> Yes.\n>\n> > but I\n> > think expose CopyToState to user might make life easier, users might want to use\n> > the memory contexts of the structure (though I agree not all the\n> > fields are necessary\n> > for extension handers).\n>\n> OK. I don't object it as I said in another e-mail:\n> https://www.postgresql.org/message-id/flat/20240110.120644.1876591646729327180.kou%40clear-code.com#d923173e9625c20319750155083cbd72\n>\n> >> 2. Need an opaque space like IndexScanDesc::opaque does\n> >>\n> >> * A custom COPY TO handler needs to keep its data\n> >\n> > I once thought users might want to parse their own options, maybe this\n> > is a use case for this opaque space.\n>\n> Good catch! I forgot to suggest a callback for custom format\n> options. 
How about the following API?\n>\n> ----\n> ...\n> typedef bool (*CopyToProcessOption_function) (CopyToState cstate, DefElem *defel);\n>\n> ...\n> typedef bool (*CopyFromProcessOption_function) (CopyFromState cstate, DefElem *defel);\n>\n> typedef struct CopyToFormatRoutine\n> {\n> ...\n> CopyToProcessOption_function process_option_fn;\n> } CopyToFormatRoutine;\n>\n> typedef struct CopyFromFormatRoutine\n> {\n> ...\n> CopyFromProcessOption_function process_option_fn;\n> } CopyFromFormatRoutine;\n> ----\n>\n> ----\n> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> index e7597894bf..1aa8b62551 100644\n> --- a/src/backend/commands/copy.c\n> +++ b/src/backend/commands/copy.c\n> @@ -416,6 +416,7 @@ void\n> ProcessCopyOptions(ParseState *pstate,\n> CopyFormatOptions *opts_out,\n> bool is_from,\n> + void *cstate, /* CopyToState* for !is_from, CopyFromState* for is_from */\n> List *options)\n> {\n> bool format_specified = false;\n> @@ -593,11 +594,19 @@ ProcessCopyOptions(ParseState *pstate,\n> parser_errposition(pstate, defel->location)));\n> }\n> else\n> - ereport(ERROR,\n> - (errcode(ERRCODE_SYNTAX_ERROR),\n> - errmsg(\"option \\\"%s\\\" not recognized\",\n> - defel->defname),\n> - parser_errposition(pstate, defel->location)));\n> + {\n> + bool processed;\n> + if (is_from)\n> + processed = opts_out->from_ops->process_option_fn(cstate, defel);\n> + else\n> + processed = opts_out->to_ops->process_option_fn(cstate, defel);\n> + if (!processed)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" not recognized\",\n> + defel->defname),\n> + parser_errposition(pstate, defel->location)));\n> + }\n> }\n>\n> /*\n> ----\n\nLooks good.\n\n>\n> > For the name, I thought private_data might be a better candidate than\n> > opaque, but I do not insist.\n>\n> I don't have a strong opinion for this. Here are the number\n> of headers that use \"private_data\" and \"opaque\":\n>\n> $ grep -r private_data --files-with-matches src/include | wc -l\n> 6\n> $ grep -r opaque --files-with-matches src/include | wc -l\n> 38\n>\n> It seems that we use \"opaque\" than \"private_data\" in general.\n>\n> but it seems that we use\n> \"opaque\" than \"private_data\" in our code.\n>\n> > Do you use the arrow library to control the memory?\n>\n> Yes.\n>\n> > Is there a way that\n> > we can let the arrow use postgres' memory context?\n>\n> Yes. Apache Arrow C++ provides a memory pool feature and we\n> can implement PostgreSQL's memory context based memory pool\n> for this. (But this is a custom COPY TO/FROM handler's\n> implementation details.)\n>\n> > I'm not sure this\n> > is necessary, just raise the question for discussion.\n>\n> Could you clarify what should we discuss? We should require\n> that COPY TO/FROM handlers should use PostgreSQL's memory\n> context for all internal memory allocations?\n\nYes, handlers should use PostgreSQL's memory context, and I think\ncreating other memory context under CopyToStateData.copycontext\nshould be suggested for handler creators, so I proposed exporting\nCopyToStateData to public header.\n>\n> > +PG_FUNCTION_INFO_V1(copy_testfmt_handler);\n> > +Datum\n> > +copy_testfmt_handler(PG_FUNCTION_ARGS)\n> > +{\n> > + bool is_from = PG_GETARG_BOOL(0);\n> > + CopyFormatRoutine *cp = makeNode(CopyFormatRoutine);;\n> > +\n> >\n> > extra semicolon.\n>\n> I noticed it too :-)\n> But I ignored it because the current implementation is only\n> for discussion. 
We know that it may be dirty.\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 12 Jan 2024 14:40:41 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3J02NzGBxG1rP9C4u7qRLOqUjSOdy3q5_5v__fydS3XcA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 12 Jan 2024 14:40:41 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n>> Could you clarify what should we discuss? We should require\n>> that COPY TO/FROM handlers should use PostgreSQL's memory\n>> context for all internal memory allocations?\n> \n> Yes, handlers should use PostgreSQL's memory context, and I think\n> creating other memory context under CopyToStateData.copycontext\n> should be suggested for handler creators, so I proposed exporting\n> CopyToStateData to public header.\n\nI see.\n\nWe can provide a getter for CopyToStateData::copycontext if\nwe don't want to export CopyToStateData. Note that I don't\nhave a strong opinion whether we should export\nCopyToStateData or not.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 15 Jan 2024 15:23:50 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIf there are no more comments for the current design, I'll\nstart implementing this feature with the following\napproaches for \"Discussing\" items:\n\n> 3.1 Should we use one function(internal) for COPY TO/FROM\n> handlers or two function(internal)s (one is for COPY TO\n> handler and another is for COPY FROM handler)?\n> [4]\n\nI'll choose \"one function(internal) for COPY TO/FROM handlers\".\n\n> 3.4 Should we export Copy{To,From}State? Or should we just\n> provide getters/setters to access Copy{To,From}State\n> internal?\n> [10]\n\nI'll export Copy{To,From}State.\n\n\nThanks,\n-- \nkou\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 12 Jan 2024 14:46:15 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> Hi,\n> \n> Here is the current summary for a this discussion to make\n> COPY format extendable. It's for reaching consensus and\n> starting implementing the feature. (I'll start implementing\n> the feature once we reach consensus.) If you have any\n> opinion, please share it.\n> \n> Confirmed:\n> \n> 1.1 Making COPY format extendable will not reduce performance.\n> [1]\n> \n> Decisions:\n> \n> 2.1 Use separated handler for COPY TO and COPY FROM because\n> our COPY TO implementation (copyto.c) and COPY FROM\n> implementation (coypfrom.c) are separated.\n> [2]\n> \n> 2.2 Don't use system catalog for COPY TO/FROM handlers. 
We can\n> just use a function(internal) that returns a handler instead.\n> [3]\n> \n> 2.3 The implementation must include documentation.\n> [5]\n> \n> 2.4 The implementation must include test.\n> [6]\n> \n> 2.5 The implementation should be consist of small patches\n> for easy to review.\n> [6]\n> \n> 2.7 Copy{To,From}State must have a opaque space for\n> handlers.\n> [8]\n> \n> 2.8 Export CopySendData() and CopySendEndOfRow() for COPY TO\n> handlers.\n> [8]\n> \n> 2.9 Make \"format\" in PgMsg_CopyOutResponse message\n> extendable.\n> [9]\n> \n> 2.10 Make makeNode() call avoidable in function(internal)\n> that returns COPY TO/FROM handler.\n> [9]\n> \n> 2.11 Custom COPY TO/FROM handlers must be able to parse\n> their options.\n> [11]\n> \n> Discussing:\n> \n> 3.1 Should we use one function(internal) for COPY TO/FROM\n> handlers or two function(internal)s (one is for COPY TO\n> handler and another is for COPY FROM handler)?\n> [4]\n> \n> 3.2 If we use separated function(internal) for COPY TO/FROM\n> handlers, we need to define naming rule. For example,\n> <method_name>_to(internal) for COPY TO handler and\n> <method_name>_from(internal) for COPY FROM handler.\n> [4]\n> \n> 3.3 Should we use prefix or suffix for function(internal)\n> name to avoid name conflict with other handlers such as\n> tablesample handlers?\n> [7]\n> \n> 3.4 Should we export Copy{To,From}State? Or should we just\n> provide getters/setters to access Copy{To,From}State\n> internal?\n> [10]\n> \n> \n> [1] https://www.postgresql.org/message-id/flat/20231204.153548.2126325458835528809.kou%40clear-code.com\n> [2] https://www.postgresql.org/message-id/flat/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n> [3] https://www.postgresql.org/message-id/flat/CAD21AoAhcZkAp_WDJ4sSv_%2Bg2iCGjfyMFgeu7MxjnjX_FutZAg%40mail.gmail.com\n> [4] https://www.postgresql.org/message-id/flat/CAD21AoDkoGL6yJ_HjNOg9cU%3DaAdW8uQ3rSQOeRS0SX85LPPNwQ%40mail.gmail.com\n> [5] https://www.postgresql.org/message-id/flat/TY3PR01MB9889C9234CD220A3A7075F0DF589A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n> [6] https://www.postgresql.org/message-id/flat/ZXbiPNriHHyUrcTF%40paquier.xyz\n> [7] https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com\n> [8] https://www.postgresql.org/message-id/flat/20231221.183504.1240642084042888377.kou%40clear-code.com\n> [9] https://www.postgresql.org/message-id/flat/ZYTfqGppMc9e_w2k%40paquier.xyz\n> [10] https://www.postgresql.org/message-id/flat/CAD21AoD%3DUapH4Wh06G6H5XAzPJ0iJg9YcW8r7E2UEJkZ8QsosA%40mail.gmail.com\n> [11] https://www.postgresql.org/message-id/flat/20240110.152023.1920937326588672387.kou%40clear-code.com\n> \n> \n> Thanks,\n> -- \n> kou\n> \n> \n\n\n", "msg_date": "Mon, 15 Jan 2024 15:27:02 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:24 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoC4HVuxOrsX1fLwj=5hdEmjvZoQw6PJGzxqxHNnYSQUVQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 10 Jan 2024 16:53:48 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> >> Interesting. But I feel that it introduces another (a bit)\n> >> tricky mechanism...\n> >\n> > Right. 
On the other hand, I don't think the idea 3 is good for the\n> > same reason Michael-san pointed out before[1][2].\n> >\n> > [1] https://www.postgresql.org/message-id/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n> > [2] https://www.postgresql.org/message-id/ZXKm9tmnSPIVrqZz%40paquier.xyz\n>\n> I think that the important part of the Michael-san's opinion\n> is \"keep COPY TO implementation and COPY FROM implementation\n> separated for maintainability\".\n>\n> The patch focused in [1][2] uses one routine for both of\n> COPY TO and COPY FROM. If we use the approach, we need to\n> change one common routine from copyto.c and copyfrom.c (or\n> export callbacks from copyto.c and copyfrom.c and use them\n> in copyto.c to construct one common routine). It's\n> the problem.\n>\n> The idea 3 still has separated routines for COPY TO and COPY\n> FROM. So I think that it still keeps COPY TO implementation\n> and COPY FROM implementation separated.\n>\n> >> BTW, we also need to set .type:\n> >>\n> >> .routine = COPYTO_ROUTINE (\n> >> .type = T_CopyToFormatRoutine,\n> >> .start_fn = testfmt_copyto_start,\n> >> .onerow_fn = testfmt_copyto_onerow,\n> >> .end_fn = testfmt_copyto_end\n> >> )\n> >\n> > I think it's fine as the same is true for table AM.\n>\n> Ah, sorry. I should have said explicitly. I don't this that\n> it's not a problem. I just wanted to say that it's missing.\n\nThank you for pointing it out.\n\n>\n>\n> Defining one more static const struct instead of providing a\n> convenient (but a bit tricky) macro may be straightforward:\n>\n> static const CopyToFormatRoutine testfmt_copyto_routine = {\n> .type = T_CopyToFormatRoutine,\n> .start_fn = testfmt_copyto_start,\n> .onerow_fn = testfmt_copyto_onerow,\n> .end_fn = testfmt_copyto_end\n> };\n>\n> static const CopyFormatRoutine testfmt_copyto_handler = {\n> .type = T_CopyFormatRoutine,\n> .is_from = false,\n> .routine = (Node *) &testfmt_copyto_routine\n> };\n\nYeah, IIUC this is the option 2 you mentioned[1]. I think we can go\nwith this idea as it's the simplest. If we find a better way, we can\nchange it later. 
That is CopyFormatRoutine will be like:\n\ntypedef struct CopyFormatRoutine\n{\n NodeTag type;\n\n /* either CopyToFormatRoutine or CopyFromFormatRoutine */\n Node *routine;\n} CopyFormatRoutine;\n\nAnd the core can check the node type of the 'routine7 in the\nCopyFormatRoutine returned by extensions.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20240110.120034.501385498034538233.kou%40clear-code.com\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 16:03:41 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoB5x86TTyer90iSFivnSD8MFRU8V4ALzmQ=rQFw4QqiXQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 15 Jan 2024 16:03:41 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n>> Defining one more static const struct instead of providing a\n>> convenient (but a bit tricky) macro may be straightforward:\n>>\n>> static const CopyToFormatRoutine testfmt_copyto_routine = {\n>> .type = T_CopyToFormatRoutine,\n>> .start_fn = testfmt_copyto_start,\n>> .onerow_fn = testfmt_copyto_onerow,\n>> .end_fn = testfmt_copyto_end\n>> };\n>>\n>> static const CopyFormatRoutine testfmt_copyto_handler = {\n>> .type = T_CopyFormatRoutine,\n>> .is_from = false,\n>> .routine = (Node *) &testfmt_copyto_routine\n>> };\n> \n> Yeah, IIUC this is the option 2 you mentioned[1]. I think we can go\n> with this idea as it's the simplest.\n>\n> [1] https://www.postgresql.org/message-id/20240110.120034.501385498034538233.kou%40clear-code.com\n\nAh, you're right. I forgot it...\n\n> That is CopyFormatRoutine will be like:\n> \n> typedef struct CopyFormatRoutine\n> {\n> NodeTag type;\n> \n> /* either CopyToFormatRoutine or CopyFromFormatRoutine */\n> Node *routine;\n> } CopyFormatRoutine;\n> \n> And the core can check the node type of the 'routine7 in the\n> CopyFormatRoutine returned by extensions.\n\nIt makes sense.\n\n\nIf no more comments about the current design, I'll start\nimplementing this feature based on the current design.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:53:00 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nI've implemented custom COPY format feature based on the\ncurrent design discussion. See the attached patches for\ndetails.\n\nI also implemented a PoC COPY format handler for Apache\nArrow with this implementation and it worked.\nhttps://github.com/kou/pg-copy-arrow\n\nThe patches implement not only custom COPY TO format feature\nbut also custom COPY FROM format feature.\n\n0001-0004 is for COPY TO and 0005-0008 is for COPY FROM.\n\nFor COPY TO:\n\n0001: This adds CopyToRoutine and use it for text/csv/binary\nformats. No implementation change. This just move codes.\n\n0002: This adds support for adding custom COPY TO format by\n\"CREATE FUNCTION ${FORMAT_NAME}\". This uses the same\napproach provided by Sawada-san[1] but this doesn't\nintroduce a wrapper CopyRoutine struct for\nCopy{To,From}Routine. Because I noticed that a wrapper\nCopyRoutine struct is needless. Copy handler can just return\nCopyToRoutine or CopyFromRtouine because both of them have\nNodeTag. 
We can distinct a returned struct by the NodeTag.\n\n[1] https://www.postgresql.org/message-id/CAD21AoCunywHird3GaPzWe6s9JG1wzxj3Cr6vGN36DDheGjOjA@mail.gmail.com\n\n0003: This exports CopyToStateData. No implementation change\nexcept CopyDest enum values. I changed COPY_ prefix to\nCOPY_DEST_ to avoid name conflict with CopySource enum\nvalues. This just moves codes.\n\n0004: This adds CopyToState::opaque and exports\nCopySendEndOfRow(). CopySendEndOfRow() is renamed to\nCopyToStateFlush().\n\nFor COPY FROM:\n\n0005: Same as 0001 but for COPY FROM. This adds\nCopyFromRoutine and use it for text/csv/binary formats. No\nimplementation change. This just move codes.\n\n0006: Same as 0002 but for COPY FROM. This adds support for\nadding custom COPY FROM format by \"CREATE FUNCTION\n${FORMAT_NAME}\".\n\n0007: Same as 0003 but for COPY FROM. This exports\nCopyFromStateData. No implementation change except\nCopySource enum values. I changed COPY_ prefix to\nCOPY_SOURCE_ to align CopyDest changes in 0003. This just\nmoves codes.\n\n0008: Same as 0004 but for COPY FROM. This adds\nCopyFromState::opaque and exports\nCopyReadBinaryData(). CopyReadBinaryData() is renamed to\nCopyFromStateRead().\n\n\nThanks,\n-- \nkou\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 15 Jan 2024 15:27:02 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> Hi,\n> \n> If there are no more comments for the current design, I'll\n> start implementing this feature with the following\n> approaches for \"Discussing\" items:\n> \n>> 3.1 Should we use one function(internal) for COPY TO/FROM\n>> handlers or two function(internal)s (one is for COPY TO\n>> handler and another is for COPY FROM handler)?\n>> [4]\n> \n> I'll choose \"one function(internal) for COPY TO/FROM handlers\".\n> \n>> 3.4 Should we export Copy{To,From}State? Or should we just\n>> provide getters/setters to access Copy{To,From}State\n>> internal?\n>> [10]\n> \n> I'll export Copy{To,From}State.\n> \n> \n> Thanks,\n> -- \n> kou\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 12 Jan 2024 14:46:15 +0900 (JST),\n> Sutou Kouhei <[email protected]> wrote:\n> \n>> Hi,\n>> \n>> Here is the current summary for a this discussion to make\n>> COPY format extendable. It's for reaching consensus and\n>> starting implementing the feature. (I'll start implementing\n>> the feature once we reach consensus.) If you have any\n>> opinion, please share it.\n>> \n>> Confirmed:\n>> \n>> 1.1 Making COPY format extendable will not reduce performance.\n>> [1]\n>> \n>> Decisions:\n>> \n>> 2.1 Use separated handler for COPY TO and COPY FROM because\n>> our COPY TO implementation (copyto.c) and COPY FROM\n>> implementation (coypfrom.c) are separated.\n>> [2]\n>> \n>> 2.2 Don't use system catalog for COPY TO/FROM handlers. 
We can\n>> just use a function(internal) that returns a handler instead.\n>> [3]\n>> \n>> 2.3 The implementation must include documentation.\n>> [5]\n>> \n>> 2.4 The implementation must include test.\n>> [6]\n>> \n>> 2.5 The implementation should be consist of small patches\n>> for easy to review.\n>> [6]\n>> \n>> 2.7 Copy{To,From}State must have a opaque space for\n>> handlers.\n>> [8]\n>> \n>> 2.8 Export CopySendData() and CopySendEndOfRow() for COPY TO\n>> handlers.\n>> [8]\n>> \n>> 2.9 Make \"format\" in PgMsg_CopyOutResponse message\n>> extendable.\n>> [9]\n>> \n>> 2.10 Make makeNode() call avoidable in function(internal)\n>> that returns COPY TO/FROM handler.\n>> [9]\n>> \n>> 2.11 Custom COPY TO/FROM handlers must be able to parse\n>> their options.\n>> [11]\n>> \n>> Discussing:\n>> \n>> 3.1 Should we use one function(internal) for COPY TO/FROM\n>> handlers or two function(internal)s (one is for COPY TO\n>> handler and another is for COPY FROM handler)?\n>> [4]\n>> \n>> 3.2 If we use separated function(internal) for COPY TO/FROM\n>> handlers, we need to define naming rule. For example,\n>> <method_name>_to(internal) for COPY TO handler and\n>> <method_name>_from(internal) for COPY FROM handler.\n>> [4]\n>> \n>> 3.3 Should we use prefix or suffix for function(internal)\n>> name to avoid name conflict with other handlers such as\n>> tablesample handlers?\n>> [7]\n>> \n>> 3.4 Should we export Copy{To,From}State? Or should we just\n>> provide getters/setters to access Copy{To,From}State\n>> internal?\n>> [10]\n>> \n>> \n>> [1] https://www.postgresql.org/message-id/flat/20231204.153548.2126325458835528809.kou%40clear-code.com\n>> [2] https://www.postgresql.org/message-id/flat/ZXEUIy6wl4jHy6Nm%40paquier.xyz\n>> [3] https://www.postgresql.org/message-id/flat/CAD21AoAhcZkAp_WDJ4sSv_%2Bg2iCGjfyMFgeu7MxjnjX_FutZAg%40mail.gmail.com\n>> [4] https://www.postgresql.org/message-id/flat/CAD21AoDkoGL6yJ_HjNOg9cU%3DaAdW8uQ3rSQOeRS0SX85LPPNwQ%40mail.gmail.com\n>> [5] https://www.postgresql.org/message-id/flat/TY3PR01MB9889C9234CD220A3A7075F0DF589A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n>> [6] https://www.postgresql.org/message-id/flat/ZXbiPNriHHyUrcTF%40paquier.xyz\n>> [7] https://www.postgresql.org/message-id/flat/20231214.184414.2179134502876898942.kou%40clear-code.com\n>> [8] https://www.postgresql.org/message-id/flat/20231221.183504.1240642084042888377.kou%40clear-code.com\n>> [9] https://www.postgresql.org/message-id/flat/ZYTfqGppMc9e_w2k%40paquier.xyz\n>> [10] https://www.postgresql.org/message-id/flat/CAD21AoD%3DUapH4Wh06G6H5XAzPJ0iJg9YcW8r7E2UEJkZ8QsosA%40mail.gmail.com\n>> [11] https://www.postgresql.org/message-id/flat/20240110.152023.1920937326588672387.kou%40clear-code.com\n>> \n>> \n>> Thanks,\n>> -- \n>> kou\n>> \n>> \n> \n>", "msg_date": "Wed, 24 Jan 2024 14:49:36 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 24, 2024 at 02:49:36PM +0900, Sutou Kouhei wrote:\n> For COPY TO:\n> \n> 0001: This adds CopyToRoutine and use it for text/csv/binary\n> formats. No implementation change. 
This just move codes.\n\n10M without this change:\n\n format,elapsed time (ms)\n text,1090.763\n csv,1136.103\n binary,1137.141\n\n10M with this change:\n\n format,elapsed time (ms)\n text,1082.654\n csv,1196.991\n binary,1069.697\n\nThese numbers point out that binary is faster by 6%, csv is slower by\n5%, while text stays around what looks like noise range. That's not\nnegligible. Are these numbers reproducible? If they are, that could\nbe a problem for anybody doing bulk-loading of large data sets. I am\nnot sure to understand where the improvement for binary comes from by\nreading the patch, but perhaps perf would tell more for each format?\nThe loss with csv could be blamed on the extra manipulations of the\nfunction pointers, likely.\n--\nMichael", "msg_date": "Wed, 24 Jan 2024 17:11:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "\nOn 2024-01-24 We 03:11, Michael Paquier wrote:\n> On Wed, Jan 24, 2024 at 02:49:36PM +0900, Sutou Kouhei wrote:\n>> For COPY TO:\n>>\n>> 0001: This adds CopyToRoutine and use it for text/csv/binary\n>> formats. No implementation change. This just move codes.\n> 10M without this change:\n>\n> format,elapsed time (ms)\n> text,1090.763\n> csv,1136.103\n> binary,1137.141\n>\n> 10M with this change:\n>\n> format,elapsed time (ms)\n> text,1082.654\n> csv,1196.991\n> binary,1069.697\n>\n> These numbers point out that binary is faster by 6%, csv is slower by\n> 5%, while text stays around what looks like noise range. That's not\n> negligible. Are these numbers reproducible? If they are, that could\n> be a problem for anybody doing bulk-loading of large data sets. I am\n> not sure to understand where the improvement for binary comes from by\n> reading the patch, but perhaps perf would tell more for each format?\n> The loss with csv could be blamed on the extra manipulations of the\n> function pointers, likely.\n\n\nI don't think that's at all acceptable.\n\nWe've spent quite a lot of blood sweat and tears over the years to make \nCOPY fast, and we should not sacrifice any of that lightly.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 24 Jan 2024 07:15:55 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 24 Jan 2024 07:15:55 -0500,\n Andrew Dunstan <[email protected]> wrote:\n\n> \n> On 2024-01-24 We 03:11, Michael Paquier wrote:\n>> On Wed, Jan 24, 2024 at 02:49:36PM +0900, Sutou Kouhei wrote:\n>>> For COPY TO:\n>>>\n>>> 0001: This adds CopyToRoutine and use it for text/csv/binary\n>>> formats. No implementation change. This just move codes.\n>> 10M without this change:\n>>\n>> format,elapsed time (ms)\n>> text,1090.763\n>> csv,1136.103\n>> binary,1137.141\n>>\n>> 10M with this change:\n>>\n>> format,elapsed time (ms)\n>> text,1082.654\n>> csv,1196.991\n>> binary,1069.697\n>>\n>> These numbers point out that binary is faster by 6%, csv is slower by\n>> 5%, while text stays around what looks like noise range. That's not\n>> negligible. Are these numbers reproducible? If they are, that could\n>> be a problem for anybody doing bulk-loading of large data sets. 
I am\n>> not sure to understand where the improvement for binary comes from by\n>> reading the patch, but perhaps perf would tell more for each format?\n>> The loss with csv could be blamed on the extra manipulations of the\n>> function pointers, likely.\n> \n> \n> I don't think that's at all acceptable.\n> \n> We've spent quite a lot of blood sweat and tears over the years to make COPY\n> fast, and we should not sacrifice any of that lightly.\n\nThese numbers aren't reproducible. Because these benchmarks\nexecuted on my normal machine not a machine only for\nbenchmarking. The machine runs another processes such as\neditor and Web browser.\n\nFor example, here are some results with master\n(94edfe250c6a200d2067b0debfe00b4122e9b11e):\n\nFormat,N records,Elapsed time (ms)\ncsv,10000000,1073.715\ncsv,10000000,1022.830\ncsv,10000000,1073.584\ncsv,10000000,1090.651\ncsv,10000000,1052.259\n\nHere are some results with master + the 0001 patch:\n\nFormat,N records,Elapsed time (ms)\ncsv,10000000,1025.356\ncsv,10000000,1067.202\ncsv,10000000,1014.563\ncsv,10000000,1032.088\ncsv,10000000,1058.110\n\n\nI uploaded my benchmark script so that you can run the same\nbenchmark on your machine:\n\nhttps://gist.github.com/kou/be02e02e5072c91969469dbf137b5de5\n\nCould anyone try the benchmark with master and master+0001?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 24 Jan 2024 23:17:26 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 24 Jan 2024 14:49:36 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> I've implemented custom COPY format feature based on the\n> current design discussion. See the attached patches for\n> details.\n\nI forgot to mention one note. Documentation isn't included\nin these patches. I'll write it after all (or some) patches\nare merged. Is it OK?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 24 Jan 2024 23:20:22 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 24, 2024 at 10:17 PM Sutou Kouhei <[email protected]> wrote:\n>\n> I uploaded my benchmark script so that you can run the same\n> benchmark on your machine:\n>\n> https://gist.github.com/kou/be02e02e5072c91969469dbf137b5de5\n>\n> Could anyone try the benchmark with master and master+0001?\n>\n\nsorry. I made a mistake. 
I applied v6, 0001 to 0008 all the patches.\n\nmy tests:\nCREATE unlogged TABLE data (a bigint);\nSELECT setseed(0.29);\nINSERT INTO data SELECT random() * 10000 FROM generate_series(1, 1e7);\n\nmy setup:\nmeson setup --reconfigure ${BUILD} \\\n-Dprefix=${PG_PREFIX} \\\n-Dpgport=5462 \\\n-Dbuildtype=release \\\n-Ddocs_html_style=website \\\n-Ddocs_pdf=disabled \\\n-Dllvm=disabled \\\n-Dextra_version=_release_build\n\ngcc version: PostgreSQL 17devel_release_build on x86_64-linux,\ncompiled by gcc-11.4.0, 64-bit\n\napply your patch:\nCOPY data TO '/dev/null' WITH (FORMAT csv) \\watch count=5\nTime: 668.996 ms\nTime: 596.254 ms\nTime: 592.723 ms\nTime: 591.663 ms\nTime: 590.803 ms\n\nnot apply your patch, at git 729439607ad210dbb446e31754e8627d7e3f7dda\nCOPY data TO '/dev/null' WITH (FORMAT csv) \\watch count=5\nTime: 644.246 ms\nTime: 583.075 ms\nTime: 568.670 ms\nTime: 569.463 ms\nTime: 569.201 ms\n\nI forgot to test other formats.\n\n\n", "msg_date": "Thu, 25 Jan 2024 10:53:58 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 24, 2024 at 11:17:26PM +0900, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 24 Jan 2024 07:15:55 -0500,\n> Andrew Dunstan <[email protected]> wrote:\n>> We've spent quite a lot of blood sweat and tears over the years to make COPY\n>> fast, and we should not sacrifice any of that lightly.\n\nClearly.\n\n> I uploaded my benchmark script so that you can run the same\n> benchmark on your machine:\n> \n> https://gist.github.com/kou/be02e02e5072c91969469dbf137b5de5\n\nThanks, that saves time. I am attaching it to this email as well, for\nthe sake of the archives if this link is removed in the future.\n\n> Could anyone try the benchmark with master and master+0001?\n\nYep. It is one point we need to settle before deciding what to do\nwith this patch set, and I've done so to reach my own conclusion.\n\nI have a rather good machine at my disposal in the cloud, so I did a\nfew runs with HEAD and HEAD+0001, with PGDATA mounted on a tmpfs.\nHere are some results for the 10M row case, as these should be the\nleast prone to noise, 5 runs each: \n\nmaster\ntext 10M 1732.570 1684.542 1693.430 1687.696 1714.845\ncsv 10M 1729.113 1724.926 1727.414 1726.237 1728.865\nbin 10M 1679.097 1677.887 1676.764 1677.554 1678.120\n\nmaster+0001\ntext 10M 1702.207 1654.818 1647.069 1690.568 1654.446\ncsv 10M 1764.939 1714.313 1712.444 1712.323 1716.952\nbin 10M 1703.061 1702.719 1702.234 1703.346 1704.137\n\nHmm. The point of contention in the patch is the change to use the\nCopyToOneRow callback in CopyOneRowTo(), as we go through it for each\nrow and we should habe a worst-case scenario with a relation that has\na small attribute size. The more rows, the more effect it would have.\nThe memory context switches and the StringInfo manipulations are\nequally important, and there are a bunch of the latter, actually, with\noptimizations around fe_msgbuf.\n\nI've repeated a few runs across these two builds, and there is some\nvariance and noise, but I am going to agree with your point that the\neffect 0001 cannot be seen. Even HEAD is showing some noise. 
So I am\ndiscarding the concerns I had after seeing the numbers you posted\nupthread.\n\n+typedef bool (*CopyToProcessOption_function) (CopyToState cstate, DefElem *defel);\n+typedef int16 (*CopyToGetFormat_function) (CopyToState cstate);\n+typedef void (*CopyToStart_function) (CopyToState cstate, TupleDesc tupDesc);\n+typedef void (*CopyToOneRow_function) (CopyToState cstate, TupleTableSlot *slot);\n+typedef void (*CopyToEnd_function) (CopyToState cstate);\n\nWe don't really need a set of typedefs here, let's put the definitions\nin the CopyToRoutine struct instead.\n\n+extern CopyToRoutine CopyToRoutineText;\n+extern CopyToRoutine CopyToRoutineCSV;\n+extern CopyToRoutine CopyToRoutineBinary;\n\nAll that should IMO remain in copyto.c and copyfrom.c in the initial\npatch doing the refactoring. Why not using a fetch function instead\nthat uses a string in input? Then you can call that once after\nparsing the List of options in ProcessCopyOptions().\n\nIntroducing copyapi.h in the initial patch makes sense here for the TO\nand FROM routines.\n\n+/* All \"text\" and \"csv\" options are parsed in ProcessCopyOptions(). We may\n+ * move the code to here later. */\nSome areas, like this comment, are written in an incorrect format.\n\n+ if (cstate->opts.csv_mode)\n+ CopyAttributeOutCSV(cstate, colname, false,\n+ list_length(cstate->attnumlist) == 1);\n+ else\n+ CopyAttributeOutText(cstate, colname);\n\nYou are right that this is not worth the trouble of creating a\ndifferent set of callbacks for CSV. This makes the result cleaner.\n\n+ getTypeBinaryOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n+ fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n\nActually, this split is interesting. It is possible for a custom\nformat to plug in a custom set of out functions. Did you make use of\nsomething custom for your own stuff? Actually, could it make sense to\nsplit the assignment of cstate->out_functions into its own callback?\nSure, that's part of the start phase, but at least it would make clear\nthat a custom method *has* to assign these OIDs to work. The patch\nimplies that as a rule, without a comment that CopyToStart *must* set\nup these OIDs.\n\nI think that 0001 and 0005 should be handled first, as pieces\nindependent of the rest. Then we could move on with 0002~0004 and\n0006~0008.\n--\nMichael", "msg_date": "Thu, 25 Jan 2024 12:17:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 25, 2024 at 10:53:58AM +0800, jian he wrote:\n> apply your patch:\n> COPY data TO '/dev/null' WITH (FORMAT csv) \\watch count=5\n> Time: 668.996 ms\n> Time: 596.254 ms\n> Time: 592.723 ms\n> Time: 591.663 ms\n> Time: 590.803 ms\n> \n> not apply your patch, at git 729439607ad210dbb446e31754e8627d7e3f7dda\n> COPY data TO '/dev/null' WITH (FORMAT csv) \\watch count=5\n> Time: 644.246 ms\n> Time: 583.075 ms\n> Time: 568.670 ms\n> Time: 569.463 ms\n> Time: 569.201 ms\n> \n> I forgot to test other formats.\n\nThere can be some variance in the tests, so you'd better run much more\ntests so as you can get a better idea of the mean. 
Discarding the N\nhighest and lowest values also reduces slightly the effects of the\nnoise you would get across single runs.\n--\nMichael", "msg_date": "Thu, 25 Jan 2024 12:28:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 24, 2024 at 11:17 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 24 Jan 2024 07:15:55 -0500,\n> Andrew Dunstan <[email protected]> wrote:\n>\n> >\n> > On 2024-01-24 We 03:11, Michael Paquier wrote:\n> >> On Wed, Jan 24, 2024 at 02:49:36PM +0900, Sutou Kouhei wrote:\n> >>> For COPY TO:\n> >>>\n> >>> 0001: This adds CopyToRoutine and use it for text/csv/binary\n> >>> formats. No implementation change. This just move codes.\n> >> 10M without this change:\n> >>\n> >> format,elapsed time (ms)\n> >> text,1090.763\n> >> csv,1136.103\n> >> binary,1137.141\n> >>\n> >> 10M with this change:\n> >>\n> >> format,elapsed time (ms)\n> >> text,1082.654\n> >> csv,1196.991\n> >> binary,1069.697\n> >>\n> >> These numbers point out that binary is faster by 6%, csv is slower by\n> >> 5%, while text stays around what looks like noise range. That's not\n> >> negligible. Are these numbers reproducible? If they are, that could\n> >> be a problem for anybody doing bulk-loading of large data sets. I am\n> >> not sure to understand where the improvement for binary comes from by\n> >> reading the patch, but perhaps perf would tell more for each format?\n> >> The loss with csv could be blamed on the extra manipulations of the\n> >> function pointers, likely.\n> >\n> >\n> > I don't think that's at all acceptable.\n> >\n> > We've spent quite a lot of blood sweat and tears over the years to make COPY\n> > fast, and we should not sacrifice any of that lightly.\n>\n> These numbers aren't reproducible. Because these benchmarks\n> executed on my normal machine not a machine only for\n> benchmarking. The machine runs another processes such as\n> editor and Web browser.\n>\n> For example, here are some results with master\n> (94edfe250c6a200d2067b0debfe00b4122e9b11e):\n>\n> Format,N records,Elapsed time (ms)\n> csv,10000000,1073.715\n> csv,10000000,1022.830\n> csv,10000000,1073.584\n> csv,10000000,1090.651\n> csv,10000000,1052.259\n>\n> Here are some results with master + the 0001 patch:\n>\n> Format,N records,Elapsed time (ms)\n> csv,10000000,1025.356\n> csv,10000000,1067.202\n> csv,10000000,1014.563\n> csv,10000000,1032.088\n> csv,10000000,1058.110\n>\n>\n> I uploaded my benchmark script so that you can run the same\n> benchmark on your machine:\n>\n> https://gist.github.com/kou/be02e02e5072c91969469dbf137b5de5\n>\n> Could anyone try the benchmark with master and master+0001?\n>\n\nI've run a similar scenario:\n\ncreate unlogged table test (a int);\ninsert into test select c from generate_series(1, 25000000) c;\ncopy test to '/tmp/result.csv' with (format csv); -- generates 230MB file\n\nI've run it on HEAD and HEAD+0001 patch and here are the medians of 10\nexecutions for each format:\n\nHEAD:\nbinary 2930.353 ms\ntext 2754.852 ms\ncsv 2890.012 ms\n\nHEAD w/ 0001 patch:\nbinary 2814.838 ms\ntext 2900.845 ms\ncsv 3015.210 ms\n\nHmm I can see a similar trend that Suto-san had; the binary format got\nslightly faster whereas both text and csv format has small regression\n(4%~5%). 
I think that the improvement for binary came from the fact\nthat we removed \"if (cstate->opts.binary)\" branches from the original\nCopyOneRowTo(). I've experimented with a similar optimization for csv\nand text format; have different callbacks for text and csv format and\nremove \"if (cstate->opts.csv_mode)\" branches. I've attached a patch\nfor that. Here are results:\n\nHEAD w/ 0001 patch + remove branches:\nbinary 2824.502 ms\ntext 2715.264 ms\ncsv 2803.381 ms\n\nThe numbers look better now. I'm not sure these are within a noise\nrange but it might be worth considering having different callbacks for\ntext and csv formats.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 25 Jan 2024 13:36:03 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 25, 2024 at 01:36:03PM +0900, Masahiko Sawada wrote:\n> Hmm I can see a similar trend that Suto-san had; the binary format got\n> slightly faster whereas both text and csv format has small regression\n> (4%~5%). I think that the improvement for binary came from the fact\n> that we removed \"if (cstate->opts.binary)\" branches from the original\n> CopyOneRowTo(). I've experimented with a similar optimization for csv\n> and text format; have different callbacks for text and csv format and\n> remove \"if (cstate->opts.csv_mode)\" branches. I've attached a patch\n> for that. Here are results:\n> \n> HEAD w/ 0001 patch + remove branches:\n> binary 2824.502 ms\n> text 2715.264 ms\n> csv 2803.381 ms\n> \n> The numbers look better now. I'm not sure these are within a noise\n> range but it might be worth considering having different callbacks for\n> text and csv formats.\n\nInteresting.\n\nYour numbers imply a 0.3% speedup for text, 0.7% speedup for csv and\n0.9% speedup for binary, which may be around the noise range assuming\na ~1% range. While this does not imply a regression, that seems worth\nthe duplication IMO. The patch had better document the reason why the\nsplit is done, as well.\n\nCopyFromTextOneRow() has also specific branches for binary and\nnon-binary removed in 0005, so assuming that I/O is not a bottleneck,\nthe operation would be faster because we would not evaluate this \"if\"\ncondition for each row. Wouldn't we also see improvements for COPY\nFROM with short row values, say when mounting PGDATA into a\ntmpfs/ramfs?\n--\nMichael", "msg_date": "Thu, 25 Jan 2024 13:53:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 25, 2024 at 1:53 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 01:36:03PM +0900, Masahiko Sawada wrote:\n> > Hmm I can see a similar trend that Suto-san had; the binary format got\n> > slightly faster whereas both text and csv format has small regression\n> > (4%~5%). I think that the improvement for binary came from the fact\n> > that we removed \"if (cstate->opts.binary)\" branches from the original\n> > CopyOneRowTo(). I've experimented with a similar optimization for csv\n> > and text format; have different callbacks for text and csv format and\n> > remove \"if (cstate->opts.csv_mode)\" branches. I've attached a patch\n> > for that. 
Here are results:\n> >\n> > HEAD w/ 0001 patch + remove branches:\n> > binary 2824.502 ms\n> > text 2715.264 ms\n> > csv 2803.381 ms\n> >\n> > The numbers look better now. I'm not sure these are within a noise\n> > range but it might be worth considering having different callbacks for\n> > text and csv formats.\n>\n> Interesting.\n>\n> Your numbers imply a 0.3% speedup for text, 0.7% speedup for csv and\n> 0.9% speedup for binary, which may be around the noise range assuming\n> a ~1% range. While this does not imply a regression, that seems worth\n> the duplication IMO.\n\nAgreed. In addition to that, now that each format routine has its own\ncallbacks, there would be chances that we can do other optimizations\ndedicated to the format type in the future if available.\n\n> The patch had better document the reason why the\n> split is done, as well.\n\n+1\n\n>\n> CopyFromTextOneRow() has also specific branches for binary and\n> non-binary removed in 0005, so assuming that I/O is not a bottleneck,\n> the operation would be faster because we would not evaluate this \"if\"\n> condition for each row. Wouldn't we also see improvements for COPY\n> FROM with short row values, say when mounting PGDATA into a\n> tmpfs/ramfs?\n\nProbably. Seems worth evaluating.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Jan 2024 14:28:38 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nThanks for trying these patches!\n\nIn <CACJufxF9NS3xQ2d79jN0V1CGvF7cR16uJo-C3nrY7vZrwvxF7w@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 25 Jan 2024 10:53:58 +0800,\n jian he <[email protected]> wrote:\n\n> COPY data TO '/dev/null' WITH (FORMAT csv) \\watch count=5\n\nWow! I didn't know the \"\\watch count=\"!\nI'll use it.\n\n> Time: 668.996 ms\n> Time: 596.254 ms\n> Time: 592.723 ms\n> Time: 591.663 ms\n> Time: 590.803 ms\n\nIt seems that 5 times isn't enough for this case as Michael\nsaid. But thanks for trying!\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 25 Jan 2024 17:05:30 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 25 Jan 2024 12:17:55 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> +typedef bool (*CopyToProcessOption_function) (CopyToState cstate, DefElem *defel);\n> +typedef int16 (*CopyToGetFormat_function) (CopyToState cstate);\n> +typedef void (*CopyToStart_function) (CopyToState cstate, TupleDesc tupDesc);\n> +typedef void (*CopyToOneRow_function) (CopyToState cstate, TupleTableSlot *slot);\n> +typedef void (*CopyToEnd_function) (CopyToState cstate);\n> \n> We don't really need a set of typedefs here, let's put the definitions\n> in the CopyToRoutine struct instead.\n\nOK. I'll do it.\n\n> +extern CopyToRoutine CopyToRoutineText;\n> +extern CopyToRoutine CopyToRoutineCSV;\n> +extern CopyToRoutine CopyToRoutineBinary;\n> \n> All that should IMO remain in copyto.c and copyfrom.c in the initial\n> patch doing the refactoring. Why not using a fetch function instead\n> that uses a string in input? 
Then you can call that once after\n> parsing the List of options in ProcessCopyOptions().\n\nOK. How about the following for the fetch function\nsignature?\n\nextern CopyToRoutine *GetBuiltinCopyToRoutine(const char *format);\n\nWe may introduce an enum and use it:\n\ntypedef enum CopyBuiltinFormat\n{\n\tCOPY_BUILTIN_FORMAT_TEXT = 0,\n\tCOPY_BUILTIN_FORMAT_CSV,\n\tCOPY_BUILTIN_FORMAT_BINARY,\n} CopyBuiltinFormat;\n\nextern CopyToRoutine *GetBuiltinCopyToRoutine(CopyBuiltinFormat format);\n\n> +/* All \"text\" and \"csv\" options are parsed in ProcessCopyOptions(). We may\n> + * move the code to here later. */\n> Some areas, like this comment, are written in an incorrect format.\n\nOh, sorry. I assumed that the comment style was adjusted by\npgindent.\n\nI'll use the following style:\n\n/*\n * ...\n */\n\n> + getTypeBinaryOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n> + fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n> \n> Actually, this split is interesting. It is possible for a custom\n> format to plug in a custom set of out functions. Did you make use of\n> something custom for your own stuff?\n\nI didn't. My PoC custom COPY format handler for Apache Arrow\njust handles integer and text for now. It doesn't use\ncstate->out_functions because cstate->out_functions may not\nreturn a valid binary format value for Apache Arrow. So it\nformats each value by itself.\n\nI'll chose one of them for a custom type (that isn't\nsupported by Apache Arrow, e.g. PostGIS types):\n\n1. Report an unsupported error\n2. Call output function for Apache Arrow provided by the\n custom type\n\n> Actually, could it make sense to\n> split the assignment of cstate->out_functions into its own callback?\n\nYes. Because we need to use getTypeBinaryOutputInfo() for\n\"binary\" and use getTypeOutputInfo() for \"text\" and \"csv\".\n\n> Sure, that's part of the start phase, but at least it would make clear\n> that a custom method *has* to assign these OIDs to work. The patch\n> implies that as a rule, without a comment that CopyToStart *must* set\n> up these OIDs.\n\nCopyToStart doesn't need to set up them if the handler\ndoesn't use cstate->out_functions.\n\n> I think that 0001 and 0005 should be handled first, as pieces\n> independent of the rest. Then we could move on with 0002~0004 and\n> 0006~0008.\n\nOK. I'll focus on 0001 and 0005 for now. I'll restart\n0002-0004/0006-0008 after 0001 and 0005 are accepted.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 25 Jan 2024 17:45:43 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoALxEZz33NpcSk99ad_DT3A2oFNMa2KNjGBCMVFeCiUaA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 25 Jan 2024 13:36:03 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> I've experimented with a similar optimization for csv\n> and text format; have different callbacks for text and csv format and\n> remove \"if (cstate->opts.csv_mode)\" branches. I've attached a patch\n> for that. Here are results:\n> \n> HEAD w/ 0001 patch + remove branches:\n> binary 2824.502 ms\n> text 2715.264 ms\n> csv 2803.381 ms\n> \n> The numbers look better now. I'm not sure these are within a noise\n> range but it might be worth considering having different callbacks for\n> text and csv formats.\n\nWow! Interesting. 
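Coming back to the fetch-function signature proposed above, a minimal body would be little more than the existing format dispatch, keyed by the FORMAT string. The three routine variables follow the names used in the patch, and returning NULL for an unknown format (so the caller raises the usual "format not recognized" error) is just one possible convention:

CopyToRoutine *
GetBuiltinCopyToRoutine(const char *format)
{
	if (strcmp(format, "text") == 0)
		return &CopyToRoutineText;
	else if (strcmp(format, "csv") == 0)
		return &CopyToRoutineCSV;
	else if (strcmp(format, "binary") == 0)
		return &CopyToRoutineBinary;

	return NULL;				/* caller reports the unknown format */
}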
I tried the approach before but I didn't\nsee any difference by the approach. But it may depend on my\nenvironment.\n\nI'll import the approach to the next patch set so that\nothers can try the approach easily.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 25 Jan 2024 17:52:55 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 25, 2024 at 05:45:43PM +0900, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 25 Jan 2024 12:17:55 +0900,\n> Michael Paquier <[email protected]> wrote:\n>> +extern CopyToRoutine CopyToRoutineText;\n>> +extern CopyToRoutine CopyToRoutineCSV;\n>> +extern CopyToRoutine CopyToRoutineBinary;\n>> \n>> All that should IMO remain in copyto.c and copyfrom.c in the initial\n>> patch doing the refactoring. Why not using a fetch function instead\n>> that uses a string in input? Then you can call that once after\n>> parsing the List of options in ProcessCopyOptions().\n> \n> OK. How about the following for the fetch function\n> signature?\n> \n> extern CopyToRoutine *GetBuiltinCopyToRoutine(const char *format);\n\nOr CopyToRoutineGet()? I am not wedded to my suggestion, got a bad\nhistory with naming things around here.\n\n> We may introduce an enum and use it:\n> \n> typedef enum CopyBuiltinFormat\n> {\n> \tCOPY_BUILTIN_FORMAT_TEXT = 0,\n> \tCOPY_BUILTIN_FORMAT_CSV,\n> \tCOPY_BUILTIN_FORMAT_BINARY,\n> } CopyBuiltinFormat;\n> \n> extern CopyToRoutine *GetBuiltinCopyToRoutine(CopyBuiltinFormat format);\n\nI am not sure that this is necessary as the option value is a string.\n\n> Oh, sorry. I assumed that the comment style was adjusted by\n> pgindent.\n\nNo worries, that's just something we get used to. I tend to fix a lot\nof these things by myself when editing patches.\n\n>> + getTypeBinaryOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n>> + fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n>> \n>> Actually, this split is interesting. It is possible for a custom\n>> format to plug in a custom set of out functions. Did you make use of\n>> something custom for your own stuff?\n> \n> I didn't. My PoC custom COPY format handler for Apache Arrow\n> just handles integer and text for now. It doesn't use\n> cstate->out_functions because cstate->out_functions may not\n> return a valid binary format value for Apache Arrow. So it\n> formats each value by itself.\n\nI mean, if you use a custom output function, you could tweak things\neven more with byteas or such.. If a callback is expected to do\nsomething, like setting the output function OIDs in the start\ncallback, we'd better document it rather than letting that be implied.\n\n>> Actually, could it make sense to\n>> split the assignment of cstate->out_functions into its own callback?\n> \n> Yes. Because we need to use getTypeBinaryOutputInfo() for\n> \"binary\" and use getTypeOutputInfo() for \"text\" and \"csv\".\n\nOkay. After sleeping on it, a split makes sense here, because it also\nreduces the presence of TupleDesc in the start callback.\n\n>> Sure, that's part of the start phase, but at least it would make clear\n>> that a custom method *has* to assign these OIDs to work. 
The patch\n>> implies that as a rule, without a comment that CopyToStart *must* set\n>> up these OIDs.\n> \n> CopyToStart doesn't need to set up them if the handler\n> doesn't use cstate->out_functions.\n\nNoted.\n\n>> I think that 0001 and 0005 should be handled first, as pieces\n>> independent of the rest. Then we could move on with 0002~0004 and\n>> 0006~0008.\n> \n> OK. I'll focus on 0001 and 0005 for now. I'll restart\n> 0002-0004/0006-0008 after 0001 and 0005 are accepted.\n\nOnce you get these, I'd be interested in re-doing an evaluation of\nCOPY TO and more tests with COPY FROM while running Postgres on\nscissors. One thing I was thinking to use here is my blackhole_am for\nCOPY FROM:\nhttps://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n\nAs per its name, it does nothing on INSERT, so you could create a\ntable using it as access method, and stress the COPY FROM execution\npaths without having to mount Postgres on a tmpfs because the data is\nsent to the void. Perhaps it does not matter, but that moves the\ntests to the bottlenecks we want to stress (aka the per-row callback\nfor large data sets).\n\nI've switched the patch as waiting on author for now. Thanks for your\nperseverance here. I understand that's not easy to follow up with\npatches and reviews (^_^;)\n--\nMichael", "msg_date": "Fri, 26 Jan 2024 08:35:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Jan 25, 2024 at 4:52 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAD21AoALxEZz33NpcSk99ad_DT3A2oFNMa2KNjGBCMVFeCiUaA@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 25 Jan 2024 13:36:03 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n>\n> > I've experimented with a similar optimization for csv\n> > and text format; have different callbacks for text and csv format and\n> > remove \"if (cstate->opts.csv_mode)\" branches. I've attached a patch\n> > for that. Here are results:\n> >\n> > HEAD w/ 0001 patch + remove branches:\n> > binary 2824.502 ms\n> > text 2715.264 ms\n> > csv 2803.381 ms\n> >\n> > The numbers look better now. I'm not sure these are within a noise\n> > range but it might be worth considering having different callbacks for\n> > text and csv formats.\n>\n> Wow! Interesting. I tried the approach before but I didn't\n> see any difference by the approach. But it may depend on my\n> environment.\n>\n> I'll import the approach to the next patch set so that\n> others can try the approach easily.\n>\n>\n> Thanks,\n> --\n> kou\n\nHi Kou-san,\n\nIn the current implementation, there is no way that one can check\nincompatibility\noptions in ProcessCopyOptions, we can postpone the check in CopyFromStart\nor CopyToStart, but I think it is a little bit late. 
Do you think\nadding an extra\ncheck for incompatible options hook is acceptable (PFA)?\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 26 Jan 2024 16:18:14 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3+-oG63GeG6v0L8EWi_8Fhuj9vJBhOteLxuBZwtun3GVA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:18:14 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> In the current implementation, there is no way that one can check\n> incompatibility\n> options in ProcessCopyOptions, we can postpone the check in CopyFromStart\n> or CopyToStart, but I think it is a little bit late. Do you think\n> adding an extra\n> check for incompatible options hook is acceptable (PFA)?\n\nThanks for the suggestion! But I think that a custom handler\ncan do it in\nCopyToProcessOption()/CopyFromProcessOption(). What do you\nthink about this? Or could you share a sample COPY TO/FROM\nWITH() SQL you think?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 26 Jan 2024 17:32:46 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Jan 26, 2024 at 4:32 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3+-oG63GeG6v0L8EWi_8Fhuj9vJBhOteLxuBZwtun3GVA@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:18:14 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > In the current implementation, there is no way that one can check\n> > incompatibility\n> > options in ProcessCopyOptions, we can postpone the check in CopyFromStart\n> > or CopyToStart, but I think it is a little bit late. Do you think\n> > adding an extra\n> > check for incompatible options hook is acceptable (PFA)?\n>\n> Thanks for the suggestion! But I think that a custom handler\n> can do it in\n> CopyToProcessOption()/CopyFromProcessOption(). What do you\n> think about this? Or could you share a sample COPY TO/FROM\n> WITH() SQL you think?\n\nCopyToProcessOption()/CopyFromProcessOption() can only handle\nsingle option, and store the options in the opaque field, but it can not\ncheck the relation of two options, for example, considering json format,\nthe `header` option can not be handled by these two functions.\n\nI want to find a way when the user specifies the header option, customer\nhandler can error out.\n\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 26 Jan 2024 16:41:50 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 08:35:19 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> OK. How about the following for the fetch function\n>> signature?\n>> \n>> extern CopyToRoutine *GetBuiltinCopyToRoutine(const char *format);\n> \n> Or CopyToRoutineGet()? 
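To illustrate the kind of check being asked for here: a single-column custom format that cannot honour the built-in "header" option could refuse it once all options have been parsed, for example in its start callback. This is only a sketch; the header_line field name and the error wording are assumptions, and where exactly such a check should live is precisely the open question in this exchange.

static void
MyFormatCopyToStart(CopyToState cstate, TupleDesc tupDesc)
{
	if (cstate->opts.header_line)
		ereport(ERROR,
				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
				 errmsg("the \"header\" option is not supported by this format")));

	/* per-format start-up work continues here */
}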
I am not wedded to my suggestion, got a bad\n> history with naming things around here.\n\nThanks for the suggestion.\nI rethink about this and use the following:\n\n+extern void ProcessCopyOptionFormatTo(ParseState *pstate, CopyFormatOptions *opts_out, DefElem *defel);\n\nIt's not a fetch function. It sets CopyToRoutine opts_out\ninstead. But it hides CopyToRoutine* to copyto.c. Is it\nacceptable?\n\n>> OK. I'll focus on 0001 and 0005 for now. I'll restart\n>> 0002-0004/0006-0008 after 0001 and 0005 are accepted.\n> \n> Once you get these, I'd be interested in re-doing an evaluation of\n> COPY TO and more tests with COPY FROM while running Postgres on\n> scissors. One thing I was thinking to use here is my blackhole_am for\n> COPY FROM:\n> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n\nThanks!\n\nCould you evaluate the attached patch set with COPY FROM?\n\nI attach v7 patch set. It includes only the 0001 and 0005\nparts in v6 patch set because we focus on them for now.\n\n0001: This is based on 0001 in v6.\n\nChanges since v6:\n\n* Fix comment style\n* Hide CopyToRoutine{Text,CSV,Binary}\n* Add more comments\n* Eliminate \"if (cstate->opts.csv_mode)\" branches from \"text\"\n and \"csv\" callbacks\n* Remove CopyTo*_function typedefs\n* Update benchmark results in commit message but the results\n are measured on my environment that isn't suitable for\n accurate benchmark\n\n0002: This is based on 0005 in v6.\n\nChanges since v6:\n\n* Fix comment style\n* Hide CopyFromRoutine{Text,CSV,Binary}\n* Add more comments\n* Eliminate a \"if (cstate->opts.csv_mode)\" branch from \"text\"\n and \"csv\" callbacks\n * NOTE: We can eliminate more \"if (cstate->opts.csv_mode)\" branches\n such as one in NextCopyFromRawFields(). Should we do it\n in this feature improvement (make COPY format\n extendable)? Can we defer this as a separated improvement?\n* Remove CopyFrom*_function typedefs\n\n\n\nThanks,\n-- \nkou", "msg_date": "Fri, 26 Jan 2024 17:49:47 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> CopyToProcessOption()/CopyFromProcessOption() can only handle\n> single option, and store the options in the opaque field, but it can not\n> check the relation of two options, for example, considering json format,\n> the `header` option can not be handled by these two functions.\n> \n> I want to find a way when the user specifies the header option, customer\n> handler can error out.\n\nAh, you want to use a built-in option (such as \"header\")\nvalue from a custom handler, right? 
Hmm, it may be better\nthat we call CopyToProcessOption()/CopyFromProcessOption()\nfor all options including built-in options.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 26 Jan 2024 17:55:11 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > single option, and store the options in the opaque field, but it can not\n> > check the relation of two options, for example, considering json format,\n> > the `header` option can not be handled by these two functions.\n> >\n> > I want to find a way when the user specifies the header option, customer\n> > handler can error out.\n>\n> Ah, you want to use a built-in option (such as \"header\")\n> value from a custom handler, right? Hmm, it may be better\n> that we call CopyToProcessOption()/CopyFromProcessOption()\n> for all options including built-in options.\n>\nHmm, still I don't think it can handle all cases, since we don't know\nthe sequence of the options, we need all the options been parsed\nbefore we check the compatibility of the options, or customer\nhandlers will need complicated logic to resolve that, which might\nlead to ugly code :(\n\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 26 Jan 2024 17:02:23 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Kou-san,\n\nOn Fri, Jan 26, 2024 at 5:02 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> > \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> > Junwang Zhao <[email protected]> wrote:\n> >\n> > > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > > single option, and store the options in the opaque field, but it can not\n> > > check the relation of two options, for example, considering json format,\n> > > the `header` option can not be handled by these two functions.\n> > >\n> > > I want to find a way when the user specifies the header option, customer\n> > > handler can error out.\n> >\n> > Ah, you want to use a built-in option (such as \"header\")\n> > value from a custom handler, right? 
Hmm, it may be better\n> > that we call CopyToProcessOption()/CopyFromProcessOption()\n> > for all options including built-in options.\n> >\n> Hmm, still I don't think it can handle all cases, since we don't know\n> the sequence of the options, we need all the options been parsed\n> before we check the compatibility of the options, or customer\n> handlers will need complicated logic to resolve that, which might\n> lead to ugly code :(\n>\n\nI have been working on a *COPY TO JSON* extension since yesterday,\nwhich is based on your V6 patch set, I'd like to give you more input\nso you can make better decisions about the implementation(with only\npg-copy-arrow you might not get everything considered).\n\nV8 is based on V6, so anybody involved in the performance issue\nshould still review the V7 patch set.\n\n0001-0008 is your original V6 implementations\n\n0009 is some changes made by me, I changed CopyToGetFormat to\nCopyToSendCopyBegin because pg_copy_json need to send different bytes\nin SendCopyBegin, get the format code along is not enough, I once had\na thought that may be we should merge SendCopyBegin/SendCopyEnd into\nCopyToStart/CopyToEnd but I don't do that in this patch. I have also\nexported more APIs for extension usage.\n\n00010 is the pg_copy_json extension, I think this should be a good\ncase which can utilize the *extendable copy format* feature, maybe we\nshould delete copy_test_format if we have this extension as an\nexample?\n\n> >\n> > Thanks,\n> > --\n> > kou\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sat, 27 Jan 2024 14:15:02 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Jan 26, 2024 at 6:02 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> > \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> > Junwang Zhao <[email protected]> wrote:\n> >\n> > > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > > single option, and store the options in the opaque field, but it can not\n> > > check the relation of two options, for example, considering json format,\n> > > the `header` option can not be handled by these two functions.\n> > >\n> > > I want to find a way when the user specifies the header option, customer\n> > > handler can error out.\n> >\n> > Ah, you want to use a built-in option (such as \"header\")\n> > value from a custom handler, right? Hmm, it may be better\n> > that we call CopyToProcessOption()/CopyFromProcessOption()\n> > for all options including built-in options.\n> >\n> Hmm, still I don't think it can handle all cases, since we don't know\n> the sequence of the options, we need all the options been parsed\n> before we check the compatibility of the options, or customer\n> handlers will need complicated logic to resolve that, which might\n> lead to ugly code :(\n>\n\nDoes it make sense to pass only non-builtin options to the custom\nformat callback after parsing and evaluating the builtin options? That\nis, we parse and evaluate only the builtin options and populate\nopts_out first, then pass each rest option to the custom format\nhandler callback. 
The callback can refer to the builtin option values.\nThe callback is expected to return false if the passed option is not\nsupported. If one of the builtin formats is specified and the rest\noptions list has at least one option, we raise \"option %s not\nrecognized\" error. IOW it's the core's responsibility to ranse the\n\"option %s not recognized\" error, which is in order to raise a\nconsistent error message. Also, I think the core should check the\nredundant options including bultiin and custom options.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:41:59 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Jan 29, 2024 at 10:42 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 6:02 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> > > \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> > > Junwang Zhao <[email protected]> wrote:\n> > >\n> > > > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > > > single option, and store the options in the opaque field, but it can not\n> > > > check the relation of two options, for example, considering json format,\n> > > > the `header` option can not be handled by these two functions.\n> > > >\n> > > > I want to find a way when the user specifies the header option, customer\n> > > > handler can error out.\n> > >\n> > > Ah, you want to use a built-in option (such as \"header\")\n> > > value from a custom handler, right? Hmm, it may be better\n> > > that we call CopyToProcessOption()/CopyFromProcessOption()\n> > > for all options including built-in options.\n> > >\n> > Hmm, still I don't think it can handle all cases, since we don't know\n> > the sequence of the options, we need all the options been parsed\n> > before we check the compatibility of the options, or customer\n> > handlers will need complicated logic to resolve that, which might\n> > lead to ugly code :(\n> >\n>\n> Does it make sense to pass only non-builtin options to the custom\n> format callback after parsing and evaluating the builtin options? That\n> is, we parse and evaluate only the builtin options and populate\n> opts_out first, then pass each rest option to the custom format\n> handler callback. The callback can refer to the builtin option values.\n\nYeah, I think this makes sense.\n\n> The callback is expected to return false if the passed option is not\n> supported. If one of the builtin formats is specified and the rest\n> options list has at least one option, we raise \"option %s not\n> recognized\" error. IOW it's the core's responsibility to ranse the\n> \"option %s not recognized\" error, which is in order to raise a\n> consistent error message. Also, I think the core should check the\n> redundant options including bultiin and custom options.\n\nIt would be good that core could check all the redundant options,\nbut where should core do the book-keeping of all the options? 
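One possible shape for that core-side book-keeping, sketched here with an invented helper name (is_builtin_copy_option) purely for brevity: evaluate the built-in options as today, collect everything else into a separate list, and reject duplicates while doing so. The surviving entries are then handed to the per-format callback, which returns false for anything it does not recognize.

	List	   *unknown_options = NIL;
	ListCell   *option;

	foreach(option, options)
	{
		DefElem    *defel = lfirst_node(DefElem, option);
		ListCell   *lc;

		if (is_builtin_copy_option(defel->defname))
			continue;			/* evaluated into opts_out as before */

		/* duplicate detection for options the core does not understand */
		foreach(lc, unknown_options)
		{
			if (strcmp(lfirst_node(DefElem, lc)->defname, defel->defname) == 0)
				errorConflictingDefElem(defel, pstate);
		}
		unknown_options = lappend(unknown_options, defel);
	}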
I have\nno idea about this, in my implementation of pg_copy_json extension,\nI handle redundant options by adding a xxx_specified field for each\nxxx.\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:10:45 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Jan 29, 2024 at 12:10 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Mon, Jan 29, 2024 at 10:42 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Jan 26, 2024 at 6:02 PM Junwang Zhao <[email protected]> wrote:\n> > >\n> > > On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> > > > \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> > > > Junwang Zhao <[email protected]> wrote:\n> > > >\n> > > > > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > > > > single option, and store the options in the opaque field, but it can not\n> > > > > check the relation of two options, for example, considering json format,\n> > > > > the `header` option can not be handled by these two functions.\n> > > > >\n> > > > > I want to find a way when the user specifies the header option, customer\n> > > > > handler can error out.\n> > > >\n> > > > Ah, you want to use a built-in option (such as \"header\")\n> > > > value from a custom handler, right? Hmm, it may be better\n> > > > that we call CopyToProcessOption()/CopyFromProcessOption()\n> > > > for all options including built-in options.\n> > > >\n> > > Hmm, still I don't think it can handle all cases, since we don't know\n> > > the sequence of the options, we need all the options been parsed\n> > > before we check the compatibility of the options, or customer\n> > > handlers will need complicated logic to resolve that, which might\n> > > lead to ugly code :(\n> > >\n> >\n> > Does it make sense to pass only non-builtin options to the custom\n> > format callback after parsing and evaluating the builtin options? That\n> > is, we parse and evaluate only the builtin options and populate\n> > opts_out first, then pass each rest option to the custom format\n> > handler callback. The callback can refer to the builtin option values.\n>\n> Yeah, I think this makes sense.\n>\n> > The callback is expected to return false if the passed option is not\n> > supported. If one of the builtin formats is specified and the rest\n> > options list has at least one option, we raise \"option %s not\n> > recognized\" error. IOW it's the core's responsibility to ranse the\n> > \"option %s not recognized\" error, which is in order to raise a\n> > consistent error message. Also, I think the core should check the\n> > redundant options including bultiin and custom options.\n>\n> It would be good that core could check all the redundant options,\n> but where should core do the book-keeping of all the options? I have\n> no idea about this, in my implementation of pg_copy_json extension,\n> I handle redundant options by adding a xxx_specified field for each\n> xxx.\n\nWhat I imagined is that while parsing the all specified options, we\nevaluate builtin options and we add non-builtin options to another\nlist. 
Then when parsing a non-builtin option, we check if this option\nalready exists in the list. If there is, we raise the \"option %s not\nrecognized\" error.\". Once we complete checking all options, we pass\neach option in the list to the callback.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 12:21:48 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Jan 29, 2024 at 11:22 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Jan 29, 2024 at 12:10 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > On Mon, Jan 29, 2024 at 10:42 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Fri, Jan 26, 2024 at 6:02 PM Junwang Zhao <[email protected]> wrote:\n> > > >\n> > > > On Fri, Jan 26, 2024 at 4:55 PM Sutou Kouhei <[email protected]> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > In <CAEG8a3KhS6s1XQgDSvc8vFTb4GkhBmS8TxOoVSDPFX+MPExxxQ@mail.gmail.com>\n> > > > > \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 26 Jan 2024 16:41:50 +0800,\n> > > > > Junwang Zhao <[email protected]> wrote:\n> > > > >\n> > > > > > CopyToProcessOption()/CopyFromProcessOption() can only handle\n> > > > > > single option, and store the options in the opaque field, but it can not\n> > > > > > check the relation of two options, for example, considering json format,\n> > > > > > the `header` option can not be handled by these two functions.\n> > > > > >\n> > > > > > I want to find a way when the user specifies the header option, customer\n> > > > > > handler can error out.\n> > > > >\n> > > > > Ah, you want to use a built-in option (such as \"header\")\n> > > > > value from a custom handler, right? Hmm, it may be better\n> > > > > that we call CopyToProcessOption()/CopyFromProcessOption()\n> > > > > for all options including built-in options.\n> > > > >\n> > > > Hmm, still I don't think it can handle all cases, since we don't know\n> > > > the sequence of the options, we need all the options been parsed\n> > > > before we check the compatibility of the options, or customer\n> > > > handlers will need complicated logic to resolve that, which might\n> > > > lead to ugly code :(\n> > > >\n> > >\n> > > Does it make sense to pass only non-builtin options to the custom\n> > > format callback after parsing and evaluating the builtin options? That\n> > > is, we parse and evaluate only the builtin options and populate\n> > > opts_out first, then pass each rest option to the custom format\n> > > handler callback. The callback can refer to the builtin option values.\n> >\n> > Yeah, I think this makes sense.\n> >\n> > > The callback is expected to return false if the passed option is not\n> > > supported. If one of the builtin formats is specified and the rest\n> > > options list has at least one option, we raise \"option %s not\n> > > recognized\" error. IOW it's the core's responsibility to ranse the\n> > > \"option %s not recognized\" error, which is in order to raise a\n> > > consistent error message. Also, I think the core should check the\n> > > redundant options including bultiin and custom options.\n> >\n> > It would be good that core could check all the redundant options,\n> > but where should core do the book-keeping of all the options? 
I have\n> > no idea about this, in my implementation of pg_copy_json extension,\n> > I handle redundant options by adding a xxx_specified field for each\n> > xxx.\n>\n> What I imagined is that while parsing the all specified options, we\n> evaluate builtin options and we add non-builtin options to another\n> list. Then when parsing a non-builtin option, we check if this option\n> already exists in the list. If there is, we raise the \"option %s not\n> recognized\" error.\". Once we complete checking all options, we pass\n> each option in the list to the callback.\n\nLGTM.\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:37:07 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3JDPks7XU5-NvzjzuKQYQqR8pDfS7CDGZonQTXfdWtnnw@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 27 Jan 2024 14:15:02 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> I have been working on a *COPY TO JSON* extension since yesterday,\n> which is based on your V6 patch set, I'd like to give you more input\n> so you can make better decisions about the implementation(with only\n> pg-copy-arrow you might not get everything considered).\n\nThanks!\n\n> 0009 is some changes made by me, I changed CopyToGetFormat to\n> CopyToSendCopyBegin because pg_copy_json need to send different bytes\n> in SendCopyBegin, get the format code along is not enough\n\nOh, I haven't cared about the case.\nHow about the following API instead?\n\nstatic void\nSendCopyBegin(CopyToState cstate)\n{\n\tStringInfoData buf;\n\n\tpq_beginmessage(&buf, PqMsg_CopyOutResponse);\n\tcstate->opts.to_routine->CopyToFillCopyOutResponse(cstate, &buf);\n\tpq_endmessage(&buf);\n\tcstate->copy_dest = COPY_FRONTEND;\n}\n\nstatic void\nCopyToJsonFillCopyOutResponse(CopyToState cstate, StringInfoData &buf)\n{\n\tint16\t\tformat = 0;\n\n\tpq_sendbyte(&buf, format); /* overall format */\n\t/*\n\t * JSON mode is always one non-binary column\n\t */\n\tpq_sendint16(&buf, 1);\n\tpq_sendint16(&buf, format);\n}\n\n> 00010 is the pg_copy_json extension, I think this should be a good\n> case which can utilize the *extendable copy format* feature\n\nIt seems that it's convenient that we have one more callback\nfor initializing CopyToState::opaque. It's called only once\nwhen Copy{To,From}Routine is chosen:\n\ntypedef struct CopyToRoutine\n{\n\tvoid\t\t(*CopyToInit) (CopyToState cstate);\n...\n};\n\nvoid\nProcessCopyOptions(ParseState *pstate,\n\t\t\t\t CopyFormatOptions *opts_out,\n\t\t\t\t bool is_from,\n\t\t\t\t void *cstate,\n\t\t\t\t List *options)\n{\n...\n\tforeach(option, options)\n\t{\n\t\tDefElem *defel = lfirst_node(DefElem, option);\n\n\t\tif (strcmp(defel->defname, \"format\") == 0)\n\t\t{\n\t\t\t...\n\t\t\topts_out->to_routine = &CopyToRoutineXXX;\n\t\t\topts_out->to_routine->CopyToInit(cstate);\n\t\t\t...\n\t\t}\n\t}\n...\n}\n\n\n> maybe we\n> should delete copy_test_format if we have this extension as an\n> example?\n\nI haven't read the COPY TO format json thread[1] carefully\n(sorry), but we may add the JSON format as a built-in\nformat. 
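As an aside on the init callback sketched above: the typical use in an extension would be to allocate its per-COPY state there, so that the later callbacks only have to look at the opaque pointer. A hypothetical example (the MyFormatState struct is invented for illustration, and the opaque member is the one proposed in this discussion, not a committed field):

typedef struct MyFormatState
{
	bool		sent_first_row;
	StringInfoData line_buf;
} MyFormatState;

static void
MyFormatCopyToInit(CopyToState cstate)
{
	MyFormatState *state = palloc0(sizeof(MyFormatState));

	initStringInfo(&state->line_buf);
	cstate->opaque = state;
}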
If we do it, copy_test_format is useful to test the\nextension API.\n\n[1] https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 29 Jan 2024 15:03:32 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Jan 29, 2024 at 2:03 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3JDPks7XU5-NvzjzuKQYQqR8pDfS7CDGZonQTXfdWtnnw@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Sat, 27 Jan 2024 14:15:02 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> > I have been working on a *COPY TO JSON* extension since yesterday,\n> > which is based on your V6 patch set, I'd like to give you more input\n> > so you can make better decisions about the implementation(with only\n> > pg-copy-arrow you might not get everything considered).\n>\n> Thanks!\n>\n> > 0009 is some changes made by me, I changed CopyToGetFormat to\n> > CopyToSendCopyBegin because pg_copy_json need to send different bytes\n> > in SendCopyBegin, get the format code along is not enough\n>\n> Oh, I haven't cared about the case.\n> How about the following API instead?\n>\n> static void\n> SendCopyBegin(CopyToState cstate)\n> {\n> StringInfoData buf;\n>\n> pq_beginmessage(&buf, PqMsg_CopyOutResponse);\n> cstate->opts.to_routine->CopyToFillCopyOutResponse(cstate, &buf);\n> pq_endmessage(&buf);\n> cstate->copy_dest = COPY_FRONTEND;\n> }\n>\n> static void\n> CopyToJsonFillCopyOutResponse(CopyToState cstate, StringInfoData &buf)\n> {\n> int16 format = 0;\n>\n> pq_sendbyte(&buf, format); /* overall format */\n> /*\n> * JSON mode is always one non-binary column\n> */\n> pq_sendint16(&buf, 1);\n> pq_sendint16(&buf, format);\n> }\n\nMake sense to me.\n\n>\n> > 00010 is the pg_copy_json extension, I think this should be a good\n> > case which can utilize the *extendable copy format* feature\n>\n> It seems that it's convenient that we have one more callback\n> for initializing CopyToState::opaque. It's called only once\n> when Copy{To,From}Routine is chosen:\n>\n> typedef struct CopyToRoutine\n> {\n> void (*CopyToInit) (CopyToState cstate);\n> ...\n> };\n\nI like this, we can alloc private data in this hook.\n\n>\n> void\n> ProcessCopyOptions(ParseState *pstate,\n> CopyFormatOptions *opts_out,\n> bool is_from,\n> void *cstate,\n> List *options)\n> {\n> ...\n> foreach(option, options)\n> {\n> DefElem *defel = lfirst_node(DefElem, option);\n>\n> if (strcmp(defel->defname, \"format\") == 0)\n> {\n> ...\n> opts_out->to_routine = &CopyToRoutineXXX;\n> opts_out->to_routine->CopyToInit(cstate);\n> ...\n> }\n> }\n> ...\n> }\n>\n>\n> > maybe we\n> > should delete copy_test_format if we have this extension as an\n> > example?\n>\n> I haven't read the COPY TO format json thread[1] carefully\n> (sorry), but we may add the JSON format as a built-in\n> format. 
If we do it, copy_test_format is useful to test the\n> extension API.\n\nYeah, maybe, I have no strong opinion here, pg_copy_json is\njust a toy extension for discussion.\n\n>\n> [1] https://www.postgresql.org/message-id/flat/CALvfUkBxTYy5uWPFVwpk_7ii2zgT07t3d-yR_cy4sfrrLU%3Dkcg%40mail.gmail.com\n>\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 29 Jan 2024 14:48:40 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3Jnmbjw82OiSvRK3v9XN2zSshsB8ew1mZCQDAkKq6r9YQ@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 29 Jan 2024 11:37:07 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n>> > > Does it make sense to pass only non-builtin options to the custom\n>> > > format callback after parsing and evaluating the builtin options? That\n>> > > is, we parse and evaluate only the builtin options and populate\n>> > > opts_out first, then pass each rest option to the custom format\n>> > > handler callback. The callback can refer to the builtin option values.\n>>\n>> What I imagined is that while parsing the all specified options, we\n>> evaluate builtin options and we add non-builtin options to another\n>> list. Then when parsing a non-builtin option, we check if this option\n>> already exists in the list. If there is, we raise the \"option %s not\n>> recognized\" error.\". Once we complete checking all options, we pass\n>> each option in the list to the callback.\n\nI implemented this idea and the following ideas:\n\n1. Add init callback for initialization\n2. Change GetFormat() to FillCopyXXXResponse()\n because JSON format always use 1 column\n3. FROM only: Eliminate more cstate->opts.csv_mode branches\n (This is for performance.)\n\nSee the attached v9 patch set for details. Changes since v7:\n\n0001:\n\n* Move CopyToProcessOption() calls to the end of\n ProcessCopyOptions() for easy to option validation\n* Add CopyToState::CopyToInit() and call it in\n ProcessCopyOptionFormatTo()\n* Change CopyToState::CopyToGetFormat() to\n CopyToState::CopyToFillCopyOutResponse() and use it in\n SendCopyBegin()\n\n0002:\n\n* Move CopyFromProcessOption() calls to the end of\n ProcessCopyOptions() for easy to option validation\n* Add CopyFromState::CopyFromInit() and call it in\n ProcessCopyOptionFormatFrom()\n* Change CopyFromState::CopyFromGetFormat() to\n CopyFromState::CopyFromFillCopyOutResponse() and use it in\n ReceiveCopyBegin()\n* Rename NextCopyFromRawFields() to\n NextCopyFromRawFieldsInternal() and pass the read\n attributes callback explicitly to eliminate more\n cstate->opts.csv_mode branches\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 29 Jan 2024 18:45:23 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Jan 29, 2024 at 6:45 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CAEG8a3Jnmbjw82OiSvRK3v9XN2zSshsB8ew1mZCQDAkKq6r9YQ@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 29 Jan 2024 11:37:07 +0800,\n> Junwang Zhao <[email protected]> wrote:\n>\n> >> > > Does it make sense to pass only non-builtin options to the custom\n> >> > > format callback after parsing and evaluating the builtin options? 
That\n> >> > > is, we parse and evaluate only the builtin options and populate\n> >> > > opts_out first, then pass each rest option to the custom format\n> >> > > handler callback. The callback can refer to the builtin option values.\n> >>\n> >> What I imagined is that while parsing the all specified options, we\n> >> evaluate builtin options and we add non-builtin options to another\n> >> list. Then when parsing a non-builtin option, we check if this option\n> >> already exists in the list. If there is, we raise the \"option %s not\n> >> recognized\" error.\". Once we complete checking all options, we pass\n> >> each option in the list to the callback.\n>\n> I implemented this idea and the following ideas:\n>\n> 1. Add init callback for initialization\n> 2. Change GetFormat() to FillCopyXXXResponse()\n> because JSON format always use 1 column\n> 3. FROM only: Eliminate more cstate->opts.csv_mode branches\n> (This is for performance.)\n>\n> See the attached v9 patch set for details. Changes since v7:\n>\n> 0001:\n>\n> * Move CopyToProcessOption() calls to the end of\n> ProcessCopyOptions() for easy to option validation\n> * Add CopyToState::CopyToInit() and call it in\n> ProcessCopyOptionFormatTo()\n> * Change CopyToState::CopyToGetFormat() to\n> CopyToState::CopyToFillCopyOutResponse() and use it in\n> SendCopyBegin()\n\nThank you for updating the patch! Here are comments on 0001 patch:\n\n---\n+ if (!format_specified)\n+ /* Set the default format. */\n+ ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n+\n\nI think we can pass \"text\" in this case instead of NULL. That way,\nProcessCopyOptionFormatTo doesn't need to handle NULL case.\n\nWe need curly brackets for this \"if branch\" as follows:\n\nif (!format_specifed)\n{\n /* Set the default format. */\n ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n}\n\n---\n+ /* Process not built-in options. */\n+ foreach(option, unknown_options)\n+ {\n+ DefElem *defel = lfirst_node(DefElem, option);\n+ bool processed = false;\n+\n+ if (!is_from)\n+ processed =\nopts_out->to_routine->CopyToProcessOption(cstate, defel);\n+ if (!processed)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"option \\\"%s\\\" not recognized\",\n+ defel->defname),\n+ parser_errposition(pstate,\ndefel->location)));\n+ }\n+ list_free(unknown_options);\n\nI think we can check the duplicated options in the core as we discussed.\n\n---\n+static void\n+CopyToTextBasedInit(CopyToState cstate)\n+{\n+}\n\nand\n\n+static void\n+CopyToBinaryInit(CopyToState cstate)\n+{\n+}\n\nDo we really need separate callbacks for the same behavior? I think we\ncan have a common init function say CopyToBuitinInit() that does\nnothing. 
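If the shared no-op route is taken, the two empty functions collapse into one that all built-in routines point at; nothing more than this is needed:

static void
CopyToBuiltinInit(CopyToState cstate)
{
	/* nothing to do for the built-in text, csv and binary formats */
}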
Or we can make the init callback optional.\n\nThe same is true for process-option callback.\n\n---\n List *convert_select; /* list of column names (can be NIL) */\n+ const CopyToRoutine *to_routine; /* callback\nroutines for COPY TO */\n } CopyFormatOptions;\n\nI think CopyToStateData is a better place to have CopyToRoutine.\ncopy_data_dest_cb is also there.\n\n---\n- if (strcmp(fmt, \"text\") == 0)\n- /* default format */ ;\n- else if (strcmp(fmt, \"csv\") == 0)\n- opts_out->csv_mode = true;\n- else if (strcmp(fmt, \"binary\") == 0)\n- opts_out->binary = true;\n+\n+ if (is_from)\n+ {\n+ char *fmt = defGetString(defel);\n+\n+ if (strcmp(fmt, \"text\") == 0)\n+ /* default format */ ;\n+ else if (strcmp(fmt, \"csv\") == 0)\n+ {\n+ opts_out->csv_mode = true;\n+ }\n+ else if (strcmp(fmt, \"binary\") == 0)\n+ {\n+ opts_out->binary = true;\n+ }\n else\n- ereport(ERROR,\n-\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"COPY format\n\\\"%s\\\" not recognized\", fmt\\),\n-\nparser_errposition(pstate, defel->location)));\n+ ProcessCopyOptionFormatTo(pstate,\nopts_out, cstate, defel);\n\nThe 0002 patch replaces the options checks with\nProcessCopyOptionFormatFrom(). However, both\nProcessCopyOptionFormatTo() and ProcessCOpyOptionFormatFrom() would\nset format-related options such as opts_out->csv_mode etc, which seems\nnot elegant. IIUC the reason why we process only the \"format\" option\nfirst is to set the callback functions and call the init callback. So\nI think we don't necessarily need to do both setting callbacks and\nsetting format-related options together. Probably we can do only the\ncallback stuff first and then set format-related options in the\noriginal place we used to do?\n\n---\n+static void\n+CopyToTextBasedFillCopyOutResponse(CopyToState cstate, StringInfoData *buf)\n+{\n+ int16 format = 0;\n+ int natts = list_length(cstate->attnumlist);\n+ int i;\n+\n+ pq_sendbyte(buf, format); /* overall format */\n+ pq_sendint16(buf, natts);\n+ for (i = 0; i < natts; i++)\n+ pq_sendint16(buf, format); /* per-column formats */\n+}\n\nThis function and CopyToBinaryFillCopyOutResponse() fill three things:\noverall format, the number of columns, and per-column formats. While\nthis approach is flexible, extensions will have to understand the\nformat of CopyOutResponse message. An alternative is to have one or\nmore callbacks that return these three things.\n\n---\n+ /* Get info about the columns we need to process. */\n+ cstate->out_functions = (FmgrInfo *) palloc(num_phys_attrs *\nsizeof(Fmgr\\Info));\n+ foreach(cur, cstate->attnumlist)\n+ {\n+ int attnum = lfirst_int(cur);\n+ Oid out_func_oid;\n+ bool isvarlena;\n+ Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);\n+\n+ getTypeOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n+ fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n+ }\n\nIs preparing the out functions an extension's responsibility? I\nthought the core could prepare them based on the overall format\nspecified by extensions, as long as the overall format matches the\nactual data format to send. What do you think?\n\n---\n+ /*\n+ * Called when COPY TO via the PostgreSQL protocol is\nstarted. 
This must\n+ * fill buf as a valid CopyOutResponse message:\n+ *\n+ */\n+ /*--\n+ * +--------+--------+--------+--------+--------+ +--------+--------+\n+ * | Format | N attributes | Attr1's format |...| AttrN's format |\n+ * +--------+--------+--------+--------+--------+ +--------+--------+\n+ * 0: text 0: text 0: text\n+ * 1: binary 1: binary 1: binary\n+ */\n\nI think this kind of diagram could be missed from being updated when\nwe update the CopyOutResponse format. It's better to refer to the\ndocumentation instead.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Jan 2024 11:11:59 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoBmNiWwrspuedgAPgbAqsn7e7NoZYF6gNnYBf+gXEk9Mg@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 30 Jan 2024 11:11:59 +0900,\n Masahiko Sawada <[email protected]> wrote:\n\n> ---\n> + if (!format_specified)\n> + /* Set the default format. */\n> + ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n> +\n> \n> I think we can pass \"text\" in this case instead of NULL. That way,\n> ProcessCopyOptionFormatTo doesn't need to handle NULL case.\n\nYes, we can do it. But it needs a DefElem allocation. Is it\nacceptable?\n\n> We need curly brackets for this \"if branch\" as follows:\n> \n> if (!format_specifed)\n> {\n> /* Set the default format. */\n> ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n> }\n\nOh, sorry. I assumed that pgindent adjusts the style too.\n\n> ---\n> + /* Process not built-in options. */\n> + foreach(option, unknown_options)\n> + {\n> + DefElem *defel = lfirst_node(DefElem, option);\n> + bool processed = false;\n> +\n> + if (!is_from)\n> + processed =\n> opts_out->to_routine->CopyToProcessOption(cstate, defel);\n> + if (!processed)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"option \\\"%s\\\" not recognized\",\n> + defel->defname),\n> + parser_errposition(pstate,\n> defel->location)));\n> + }\n> + list_free(unknown_options);\n> \n> I think we can check the duplicated options in the core as we discussed.\n\nOh, sorry. I missed the part. I'll implement it.\n\n> ---\n> +static void\n> +CopyToTextBasedInit(CopyToState cstate)\n> +{\n> +}\n> \n> and\n> \n> +static void\n> +CopyToBinaryInit(CopyToState cstate)\n> +{\n> +}\n> \n> Do we really need separate callbacks for the same behavior? I think we\n> can have a common init function say CopyToBuitinInit() that does\n> nothing. Or we can make the init callback optional.\n> \n> The same is true for process-option callback.\n\nOK. I'll make them optional.\n\n> ---\n> List *convert_select; /* list of column names (can be NIL) */\n> + const CopyToRoutine *to_routine; /* callback\n> routines for COPY TO */\n> } CopyFormatOptions;\n> \n> I think CopyToStateData is a better place to have CopyToRoutine.\n> copy_data_dest_cb is also there.\n\nWe can do it but ProcessCopyOptions() accepts NULL\nCopyToState for file_fdw. Can we create an empty\nCopyToStateData internally like we did for opts_out in\nProcessCopyOptions()? (But it requires exporting\nCopyToStateData. We'll export it in a later patch but it's\nnot yet in 0001.)\n\n> The 0002 patch replaces the options checks with\n> ProcessCopyOptionFormatFrom(). 
However, both\n> ProcessCopyOptionFormatTo() and ProcessCOpyOptionFormatFrom() would\n> set format-related options such as opts_out->csv_mode etc, which seems\n> not elegant. IIUC the reason why we process only the \"format\" option\n> first is to set the callback functions and call the init callback. So\n> I think we don't necessarily need to do both setting callbacks and\n> setting format-related options together. Probably we can do only the\n> callback stuff first and then set format-related options in the\n> original place we used to do?\n\nIf we do it, we need to write the (strcmp(format, \"csv\") ==\n0) condition in copyto.c and copy.c. I wanted to avoid it. I\nthink that the duplication (setting opts_out->csv_mode in\ncopyto.c and copyfrom.c) is not a problem. But it's not a\nstrong opinion. If (strcmp(format, \"csv\") == 0) duplication\nis better than opts_out->csv_mode = true duplication, I'll\ndo it.\n\nBTW, if we want to make the CSV format implementation more\nmodularized, we will remove opts_out->csv_mode, move CSV\nrelated options to CopyToCSVProcessOption() and keep CSV\nrelated options in its opaque space. For example,\nopts_out->force_quote exists in COPY TO opaque space but\ndoesn't exist in COPY FROM opaque space because it's not\nused in COPY FROM.\n\n\n> +static void\n> +CopyToTextBasedFillCopyOutResponse(CopyToState cstate, StringInfoData *buf)\n> +{\n> + int16 format = 0;\n> + int natts = list_length(cstate->attnumlist);\n> + int i;\n> +\n> + pq_sendbyte(buf, format); /* overall format */\n> + pq_sendint16(buf, natts);\n> + for (i = 0; i < natts; i++)\n> + pq_sendint16(buf, format); /* per-column formats */\n> +}\n> \n> This function and CopyToBinaryFillCopyOutResponse() fill three things:\n> overall format, the number of columns, and per-column formats. While\n> this approach is flexible, extensions will have to understand the\n> format of CopyOutResponse message. An alternative is to have one or\n> more callbacks that return these three things.\n\nYes, we can choose the approach. I don't have a strong\nopinion on which approach to choose.\n\n> + /* Get info about the columns we need to process. */\n> + cstate->out_functions = (FmgrInfo *) palloc(num_phys_attrs *\n> sizeof(Fmgr\\Info));\n> + foreach(cur, cstate->attnumlist)\n> + {\n> + int attnum = lfirst_int(cur);\n> + Oid out_func_oid;\n> + bool isvarlena;\n> + Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);\n> +\n> + getTypeOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n> + fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n> + }\n> \n> Is preparing the out functions an extension's responsibility? I\n> thought the core could prepare them based on the overall format\n> specified by extensions, as long as the overall format matches the\n> actual data format to send. What do you think?\n\nHmm. I want to keep the preparation as an extension's\nresponsibility. Because it's not needed for all formats. For\nexample, Apache Arrow FORMAT doesn't need it. And JSON\nFORMAT doesn't need it too because it use\ncomposite_to_json().\n\n> + /*\n> + * Called when COPY TO via the PostgreSQL protocol is\n> started. 
This must\n> + * fill buf as a valid CopyOutResponse message:\n> + *\n> + */\n> + /*--\n> + * +--------+--------+--------+--------+--------+ +--------+--------+\n> + * | Format | N attributes | Attr1's format |...| AttrN's format |\n> + * +--------+--------+--------+--------+--------+ +--------+--------+\n> + * 0: text 0: text 0: text\n> + * 1: binary 1: binary 1: binary\n> + */\n> \n> I think this kind of diagram could be missed from being updated when\n> we update the CopyOutResponse format. It's better to refer to the\n> documentation instead.\n\nIt makes sense. I couldn't find the documentation when I\nwrote it but I found it now...:\nhttps://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-COPY\n\nIs there recommended comment style to refer a documentation?\n\"See doc/src/sgml/protocol.sgml for the CopyOutResponse\nmessage details\" is OK?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 30 Jan 2024 14:45:31 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Jan 30, 2024 at 02:45:31PM +0900, Sutou Kouhei wrote:\n> In <CAD21AoBmNiWwrspuedgAPgbAqsn7e7NoZYF6gNnYBf+gXEk9Mg@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 30 Jan 2024 11:11:59 +0900,\n> Masahiko Sawada <[email protected]> wrote:\n> \n>> ---\n>> + if (!format_specified)\n>> + /* Set the default format. */\n>> + ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n>> +\n>> \n>> I think we can pass \"text\" in this case instead of NULL. That way,\n>> ProcessCopyOptionFormatTo doesn't need to handle NULL case.\n> \n> Yes, we can do it. But it needs a DefElem allocation. Is it\n> acceptable?\n\nI don't think that there is a need for a DelElem at all here? While I\nam OK with the choice of calling CopyToInit() in the\nProcessCopyOption*() routines that exist to keep the set of callbacks\nlocal to copyto.c and copyfrom.c, I think that this should not bother\nabout setting opts_out->csv_mode or opts_out->csv_mode but just set \nthe opts_out->{to,from}_routine callbacks.\n\n>> +static void\n>> +CopyToTextBasedInit(CopyToState cstate)\n>> +{\n>> +}\n>> \n>> and\n>> \n>> +static void\n>> +CopyToBinaryInit(CopyToState cstate)\n>> +{\n>> +}\n>> \n>> Do we really need separate callbacks for the same behavior? I think we\n>> can have a common init function say CopyToBuitinInit() that does\n>> nothing. Or we can make the init callback optional.\n\nKeeping empty options does not strike as a bad idea, because this\nforces extension developers to think about this code path rather than\njust ignore it. Now, all the Init() callbacks are empty for the\nin-core callbacks, so I think that we should just remove it entirely\nfor now. Let's keep the core patch a maximum simple. It is always\npossible to build on top of it depending on what people need. It's\nbeen mentioned that JSON would want that, but this also proves that we\njust don't care about that for all the in-core callbacks, as well. I\nwould choose a minimalistic design for now.\n\n>> + /* Get info about the columns we need to process. 
*/\n>> + cstate->out_functions = (FmgrInfo *) palloc(num_phys_attrs *\n>> sizeof(Fmgr\\Info));\n>> + foreach(cur, cstate->attnumlist)\n>> + {\n>> + int attnum = lfirst_int(cur);\n>> + Oid out_func_oid;\n>> + bool isvarlena;\n>> + Form_pg_attribute attr = TupleDescAttr(tupDesc, attnum - 1);\n>> +\n>> + getTypeOutputInfo(attr->atttypid, &out_func_oid, &isvarlena);\n>> + fmgr_info(out_func_oid, &cstate->out_functions[attnum - 1]);\n>> + }\n>> \n>> Is preparing the out functions an extension's responsibility? I\n>> thought the core could prepare them based on the overall format\n>> specified by extensions, as long as the overall format matches the\n>> actual data format to send. What do you think?\n> \n> Hmm. I want to keep the preparation as an extension's\n> responsibility. Because it's not needed for all formats. For\n> example, Apache Arrow FORMAT doesn't need it. And JSON\n> FORMAT doesn't need it too because it use\n> composite_to_json().\n\nI agree that it could be really useful for extensions to be able to\nforce that. We already know that for the in-core formats we've cared\nabout being able to enforce the way data is handled in input and\noutput.\n\n> It makes sense. I couldn't find the documentation when I\n> wrote it but I found it now...:\n> https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-COPY\n> \n> Is there recommended comment style to refer a documentation?\n> \"See doc/src/sgml/protocol.sgml for the CopyOutResponse\n> message details\" is OK?\n\nThere are a couple of places in the C code where we refer to SGML docs\nwhen it comes to specific details, so using a method like that here to\navoid a duplication with the docs sounds sensible for me.\n\nI would be really tempted to put my hands on this patch to put into\nshape a minimal set of changes because I'm caring quite a lot about\nthe performance gains reported with the removal of the \"if\" checks in\nthe per-row callbacks, and that's one goal of this thread quite\nindependent on the extensibility. Sutou-san, would you be OK with\nthat?\n--\nMichael", "msg_date": "Tue, 30 Jan 2024 16:20:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 30 Jan 2024 16:20:54 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>>> + if (!format_specified)\n>>> + /* Set the default format. */\n>>> + ProcessCopyOptionFormatTo(pstate, opts_out, cstate, NULL);\n>>> +\n>>> \n>>> I think we can pass \"text\" in this case instead of NULL. That way,\n>>> ProcessCopyOptionFormatTo doesn't need to handle NULL case.\n>> \n>> Yes, we can do it. But it needs a DefElem allocation. Is it\n>> acceptable?\n> \n> I don't think that there is a need for a DelElem at all here?\n\nWe use defel->location for an error message. (We don't need\nto set location for the default \"text\" DefElem.)\n\n> While I\n> am OK with the choice of calling CopyToInit() in the\n> ProcessCopyOption*() routines that exist to keep the set of callbacks\n> local to copyto.c and copyfrom.c, I think that this should not bother\n> about setting opts_out->csv_mode or opts_out->csv_mode but just set \n> the opts_out->{to,from}_routine callbacks.\n\nOK. 
I'll keep opts_out->{csv_mode,binary} in copy.c.\n\n> Now, all the Init() callbacks are empty for the\n> in-core callbacks, so I think that we should just remove it entirely\n> for now. Let's keep the core patch a maximum simple. It is always\n> possible to build on top of it depending on what people need. It's\n> been mentioned that JSON would want that, but this also proves that we\n> just don't care about that for all the in-core callbacks, as well. I\n> would choose a minimalistic design for now.\n\nOK. Let's remove Init() callbacks from the first patch set.\n\n> I would be really tempted to put my hands on this patch to put into\n> shape a minimal set of changes because I'm caring quite a lot about\n> the performance gains reported with the removal of the \"if\" checks in\n> the per-row callbacks, and that's one goal of this thread quite\n> independent on the extensibility. Sutou-san, would you be OK with\n> that?\n\nYes, sure.\n(We want to focus on the performance gains in the first\npatch set and then focus on extensibility again, right?)\n\nFor the purpose, I think that the v7 patch set is more\nsuitable than the v9 patch set. The v7 patch set doesn't\ninclude Init() callbacks, custom options validation support\nor extra Copy{In,Out}Response support. But the v7 patch set\nmisses the removal of the \"if\" checks in\nNextCopyFromRawFields() that exists in the v9 patch set. I'm\nnot sure how much performance will improve by this but it\nmay be worth a try.\n\nCan I prepare the v10 patch set as \"the v7 patch set\" + \"the\nremoval of the \"if\" checks in NextCopyFromRawFields()\"?\n(+ reverting opts_out->{csv_mode,binary} changes in\nProcessCopyOptions().)\n\n\nThanks,\n-- \nkou\n\n\n\n", "msg_date": "Tue, 30 Jan 2024 17:15:11 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Jan 30, 2024 at 05:15:11PM +0900, Sutou Kouhei wrote:\n> We use defel->location for an error message. (We don't need\n> to set location for the default \"text\" DefElem.)\n\nYeah, but you should not need to have this error in the paths that set\nthe callback routines in opts_out if the same validation happens a few\nlines before, in copy.c.\n\n> Yes, sure.\n> (We want to focus on the performance gains in the first\n> patch set and then focus on extensibility again, right?)\n\nYep, exactly, the numbers are too good to just ignore. I don't want\nto hijack the thread, but I am really worried about the complexities\nthis thread is getting into because we are trying to shape the\ncallbacks in the most generic way possible based on *two* use cases.\nThis is going to be a never-ending discussion. I'd rather get some\nsimple basics, and then we can discuss if tweaking the callbacks is\nreally necessary or not. Even after introducing the pg_proc lookups\nto get custom callbacks.\n\n> For the purpose, I think that the v7 patch set is more\n> suitable than the v9 patch set. The v7 patch set doesn't\n> include Init() callbacks, custom options validation support\n> or extra Copy{In,Out}Response support. But the v7 patch set\n> misses the removal of the \"if\" checks in\n> NextCopyFromRawFields() that exists in the v9 patch set. I'm\n> not sure how much performance will improve by this but it\n> may be worth a try.\n\nYeah.. 
The custom options don't seem like an absolute strong\nrequirement for the first shot with the callbacks or even the\npossibility to retrieve the callbacks from a function call. I mean,\nyou could provide some control with SET commands and a few GUCs, at\nleast, even if that would be strange. Manipulations with a list of\nDefElems is the intuitive way to have custom options at query level,\nbut we also have to guess the set of callbacks from this list of\nDefElems coming from the query. You see my point, I am not sure \nif it would be the best thing to process twice the options, especially\nwhen it comes to decide if a DefElem should be valid or not depending\non the callbacks used. Or we could use a kind of \"special\" DefElem\nwhere we could store a set of key:value fed to a callback :)\n\n> Can I prepare the v10 patch set as \"the v7 patch set\" + \"the\n> removal of the \"if\" checks in NextCopyFromRawFields()\"?\n> (+ reverting opts_out->{csv_mode,binary} changes in\n> ProcessCopyOptions().)\n\nYep, if I got it that would make sense to me. If you can do that,\nthat would help quite a bit. :)\n--\nMichael", "msg_date": "Tue, 30 Jan 2024 17:37:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 30 Jan 2024 17:37:35 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> We use defel->location for an error message. (We don't need\n>> to set location for the default \"text\" DefElem.)\n> \n> Yeah, but you should not need to have this error in the paths that set\n> the callback routines in opts_out if the same validation happens a few\n> lines before, in copy.c.\n\nAh, yes. defel->location is used in later patches. For\nexample, it's used when a COPY handler for the specified\nFORMAT isn't found.\n\n> I am really worried about the complexities\n> this thread is getting into because we are trying to shape the\n> callbacks in the most generic way possible based on *two* use cases.\n> This is going to be a never-ending discussion. I'd rather get some\n> simple basics, and then we can discuss if tweaking the callbacks is\n> really necessary or not. Even after introducing the pg_proc lookups\n> to get custom callbacks.\n\nI understand your concern. Let's introduce minimal callbacks\nas the first step. I think that we completed our design\ndiscussion for this feature. We can choose minimal callbacks\nbased on the discussion.\n\n> The custom options don't seem like an absolute strong\n> requirement for the first shot with the callbacks or even the\n> possibility to retrieve the callbacks from a function call. I mean,\n> you could provide some control with SET commands and a few GUCs, at\n> least, even if that would be strange. Manipulations with a list of\n> DefElems is the intuitive way to have custom options at query level,\n> but we also have to guess the set of callbacks from this list of\n> DefElems coming from the query. You see my point, I am not sure \n> if it would be the best thing to process twice the options, especially\n> when it comes to decide if a DefElem should be valid or not depending\n> on the callbacks used. Or we could use a kind of \"special\" DefElem\n> where we could store a set of key:value fed to a callback :)\n\nInteresting. 
Let's remove custom options support from the\ninitial minimal callbacks.\n\n>> Can I prepare the v10 patch set as \"the v7 patch set\" + \"the\n>> removal of the \"if\" checks in NextCopyFromRawFields()\"?\n>> (+ reverting opts_out->{csv_mode,binary} changes in\n>> ProcessCopyOptions().)\n> \n> Yep, if I got it that would make sense to me. If you can do that,\n> that would help quite a bit. :)\n\nI've prepared the v10 patch set. Could you try this?\n\nChanges since the v7 patch set:\n\n0001:\n\n* Remove CopyToProcessOption() callback\n* Remove CopyToGetFormat() callback\n* Revert passing CopyToState to ProcessCopyOptions()\n* Revert moving \"opts_out->{csv_mode,binary} = true\" to\n ProcessCopyOptionFormatTo()\n* Change to receive \"const char *format\" instead \"DefElem *defel\"\n by ProcessCopyOptionFormatTo()\n\n0002:\n\n* Remove CopyFromProcessOption() callback\n* Remove CopyFromGetFormat() callback\n* Change to receive \"const char *format\" instead \"DefElem\n *defel\" by ProcessCopyOptionFormatFrom()\n* Remove \"if (cstate->opts.csv_mode)\" branches from\n NextCopyFromRawFields()\n\n\n\nFYI: Here are Copy{From,To}Routine in the v10 patch set. I\nthink that only Copy{From,To}OneRow are minimal callbacks\nfor the performance gain. But can we keep Copy{From,To}Start\nand Copy{From,To}End for consistency? We can remove a few\n{csv_mode,binary} conditions by Copy{From,To}{Start,End}. It\ndoesn't depend on the number of COPY target tuples. So they\nwill not affect performance.\n\n/* Routines for a COPY FROM format implementation. */\ntypedef struct CopyFromRoutine\n{\n\t/*\n\t * Called when COPY FROM is started. This will initialize something and\n\t * receive a header.\n\t */\n\tvoid\t\t(*CopyFromStart) (CopyFromState cstate, TupleDesc tupDesc);\n\n\t/* Copy one row. It returns false if no more tuples. */\n\tbool\t\t(*CopyFromOneRow) (CopyFromState cstate, ExprContext *econtext, Datum *values, bool *nulls);\n\n\t/* Called when COPY FROM is ended. This will finalize something. */\n\tvoid\t\t(*CopyFromEnd) (CopyFromState cstate);\n}\t\t\tCopyFromRoutine;\n\n/* Routines for a COPY TO format implementation. */\ntypedef struct CopyToRoutine\n{\n\t/* Called when COPY TO is started. This will send a header. */\n\tvoid\t\t(*CopyToStart) (CopyToState cstate, TupleDesc tupDesc);\n\n\t/* Copy one row for COPY TO. */\n\tvoid\t\t(*CopyToOneRow) (CopyToState cstate, TupleTableSlot *slot);\n\n\t/* Called when COPY TO is ended. This will send a trailer. */\n\tvoid\t\t(*CopyToEnd) (CopyToState cstate);\n}\t\t\tCopyToRoutine;\n\n\n\n\nThanks,\n-- \nkou", "msg_date": "Wed, 31 Jan 2024 14:11:22 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 31, 2024 at 02:11:22PM +0900, Sutou Kouhei wrote:\n> Ah, yes. defel->location is used in later patches. For\n> example, it's used when a COPY handler for the specified\n> FORMAT isn't found.\n\nI see.\n\n> I've prepared the v10 patch set. Could you try this?\n\nThanks, I'm looking into that now.\n\n> FYI: Here are Copy{From,To}Routine in the v10 patch set. I\n> think that only Copy{From,To}OneRow are minimal callbacks\n> for the performance gain. But can we keep Copy{From,To}Start\n> and Copy{From,To}End for consistency? We can remove a few\n> {csv_mode,binary} conditions by Copy{From,To}{Start,End}. It\n> doesn't depend on the number of COPY target tuples. 
So they\n> will not affect performance.\n\nI think I'm OK to keep the start/end callbacks. This makes the code\nmore consistent as a whole, as well.\n--\nMichael", "msg_date": "Wed, 31 Jan 2024 14:39:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Jan 31, 2024 at 02:39:54PM +0900, Michael Paquier wrote:\n> Thanks, I'm looking into that now.\n\nI have much to say about the patch, but for now I have begun running\nsome performance tests using the patches, because this thread won't\nget far until we are sure that the callbacks do not impact performance\nin some kind of worst-case scenario. First, here is what I used to\nsetup a set of tables used for COPY FROM and COPY TO (requires [1] to\nfeed COPY FROM's data to the void, and note that default values is to\nhave a strict control on the size of the StringInfos used in the copy\npaths):\nCREATE EXTENSION blackhole_am;\nCREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' int default 1';\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\n-- Tables used for COPY TO\nSELECT create_table_cols ('to_tab_1', 1);\nSELECT create_table_cols ('to_tab_10', 10);\nINSERT INTO to_tab_1 SELECT FROM generate_series(1, 10000000);\nINSERT INTO to_tab_10 SELECT FROM generate_series(1, 10000000);\n-- Data for COPY FROM\nCOPY to_tab_1 TO '/tmp/to_tab_1.bin' WITH (format binary);\nCOPY to_tab_10 TO '/tmp/to_tab_10.bin' WITH (format binary);\nCOPY to_tab_1 TO '/tmp/to_tab_1.txt' WITH (format text);\nCOPY to_tab_10 TO '/tmp/to_tab_10.txt' WITH (format text);\n-- Tables used for COPY FROM\nSELECT create_table_cols ('from_tab_1', 1);\nSELECT create_table_cols ('from_tab_10', 10);\nALTER TABLE from_tab_1 SET ACCESS METHOD blackhole_am;\nALTER TABLE from_tab_10 SET ACCESS METHOD blackhole_am;\n\nThen I have run a set of tests using HEAD, v7 and v10 with queries\nlike that (adapt them depending on the format and table):\nCOPY to_tab_1 TO '/dev/null' WITH (FORMAT text) \\watch count=5\nSET client_min_messages TO error; -- for blackhole_am\nCOPY from_tab_1 FROM '/tmp/to_tab_1.txt' with (FORMAT 'text') \\watch count=5\nCOPY from_tab_1 FROM '/tmp/to_tab_1.bin' with (FORMAT 'binary') \\watch count=5\n\nAll the patches have been compiled with -O2, without assertions, etc.\nPostgres is run in tmpfs mode, on scissors, without fsync. Unlogged\ntables help a bit in focusing on the execution paths as we don't care\nabout WAL, of course. 
I have also included v7 in the test of tests,\nas this version uses more simple per-row callbacks.\n\nAnd here are the results I get for text and binary (ms, average of 15\nqueries after discarding the three highest and three lowest values):\n test | master | v7 | v10 \n-----------------+--------+------+------\n from_bin_1col | 1575 | 1546 | 1575\n from_bin_10col | 5364 | 5208 | 5230\n from_text_1col | 1690 | 1715 | 1684\n from_text_10col | 4875 | 4793 | 4757\n to_bin_1col | 1717 | 1730 | 1731\n to_bin_10col | 7728 | 7707 | 7513\n to_text_1col | 1710 | 1730 | 1698\n to_text_10col | 5998 | 5960 | 5987\n\nI am getting an interesting trend here in terms of a speedup between\nHEAD and the patches with a table that has 10 attributes filled with\nintegers, especially for binary and text with COPY FROM. COPY TO\nbinary also gets nice numbers, while text looks rather stable. Hmm.\n\nThese were on my buildfarm animal, but we need to be more confident\nabout all this. Could more people run these tests? I am going to do\na second session on a local machine I have at hand and see what\nhappens. Will publish the numbers here, the method will be the same.\n\n[1]: https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n--\nMichael", "msg_date": "Thu, 1 Feb 2024 10:57:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Michael,\n\nOn Thu, Feb 1, 2024 at 9:58 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jan 31, 2024 at 02:39:54PM +0900, Michael Paquier wrote:\n> > Thanks, I'm looking into that now.\n>\n> I have much to say about the patch, but for now I have begun running\n> some performance tests using the patches, because this thread won't\n> get far until we are sure that the callbacks do not impact performance\n> in some kind of worst-case scenario. 
First, here is what I used to\n> setup a set of tables used for COPY FROM and COPY TO (requires [1] to\n> feed COPY FROM's data to the void, and note that default values is to\n> have a strict control on the size of the StringInfos used in the copy\n> paths):\n> CREATE EXTENSION blackhole_am;\n> CREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\n> RETURNS VOID AS\n> $func$\n> DECLARE\n> query text;\n> BEGIN\n> query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n> FOR i IN 1..num_cols LOOP\n> query := query || 'a_' || i::text || ' int default 1';\n> IF i != num_cols THEN\n> query := query || ', ';\n> END IF;\n> END LOOP;\n> query := query || ')';\n> EXECUTE format(query);\n> END\n> $func$ LANGUAGE plpgsql;\n> -- Tables used for COPY TO\n> SELECT create_table_cols ('to_tab_1', 1);\n> SELECT create_table_cols ('to_tab_10', 10);\n> INSERT INTO to_tab_1 SELECT FROM generate_series(1, 10000000);\n> INSERT INTO to_tab_10 SELECT FROM generate_series(1, 10000000);\n> -- Data for COPY FROM\n> COPY to_tab_1 TO '/tmp/to_tab_1.bin' WITH (format binary);\n> COPY to_tab_10 TO '/tmp/to_tab_10.bin' WITH (format binary);\n> COPY to_tab_1 TO '/tmp/to_tab_1.txt' WITH (format text);\n> COPY to_tab_10 TO '/tmp/to_tab_10.txt' WITH (format text);\n> -- Tables used for COPY FROM\n> SELECT create_table_cols ('from_tab_1', 1);\n> SELECT create_table_cols ('from_tab_10', 10);\n> ALTER TABLE from_tab_1 SET ACCESS METHOD blackhole_am;\n> ALTER TABLE from_tab_10 SET ACCESS METHOD blackhole_am;\n>\n> Then I have run a set of tests using HEAD, v7 and v10 with queries\n> like that (adapt them depending on the format and table):\n> COPY to_tab_1 TO '/dev/null' WITH (FORMAT text) \\watch count=5\n> SET client_min_messages TO error; -- for blackhole_am\n> COPY from_tab_1 FROM '/tmp/to_tab_1.txt' with (FORMAT 'text') \\watch count=5\n> COPY from_tab_1 FROM '/tmp/to_tab_1.bin' with (FORMAT 'binary') \\watch count=5\n>\n> All the patches have been compiled with -O2, without assertions, etc.\n> Postgres is run in tmpfs mode, on scissors, without fsync. Unlogged\n> tables help a bit in focusing on the execution paths as we don't care\n> about WAL, of course. I have also included v7 in the test of tests,\n> as this version uses more simple per-row callbacks.\n>\n> And here are the results I get for text and binary (ms, average of 15\n> queries after discarding the three highest and three lowest values):\n> test | master | v7 | v10\n> -----------------+--------+------+------\n> from_bin_1col | 1575 | 1546 | 1575\n> from_bin_10col | 5364 | 5208 | 5230\n> from_text_1col | 1690 | 1715 | 1684\n> from_text_10col | 4875 | 4793 | 4757\n> to_bin_1col | 1717 | 1730 | 1731\n> to_bin_10col | 7728 | 7707 | 7513\n> to_text_1col | 1710 | 1730 | 1698\n> to_text_10col | 5998 | 5960 | 5987\n>\n> I am getting an interesting trend here in terms of a speedup between\n> HEAD and the patches with a table that has 10 attributes filled with\n> integers, especially for binary and text with COPY FROM. COPY TO\n> binary also gets nice numbers, while text looks rather stable. Hmm.\n>\n> These were on my buildfarm animal, but we need to be more confident\n> about all this. Could more people run these tests? I am going to do\n> a second session on a local machine I have at hand and see what\n> happens. 
Will publish the numbers here, the method will be the same.\n>\n> [1]: https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n> --\n> Michael\n\nI'm running the benchmark, but I got some strong numbers:\n\npostgres=# \\timing\nTiming is on.\npostgres=# COPY to_tab_10 TO '/dev/null' WITH (FORMAT binary) \\watch count=15\nCOPY 10000000\nTime: 3168.497 ms (00:03.168)\nCOPY 10000000\nTime: 3255.464 ms (00:03.255)\nCOPY 10000000\nTime: 3270.625 ms (00:03.271)\nCOPY 10000000\nTime: 3285.112 ms (00:03.285)\nCOPY 10000000\nTime: 3322.304 ms (00:03.322)\nCOPY 10000000\nTime: 3341.328 ms (00:03.341)\nCOPY 10000000\nTime: 3621.564 ms (00:03.622)\nCOPY 10000000\nTime: 3700.911 ms (00:03.701)\nCOPY 10000000\nTime: 3717.992 ms (00:03.718)\nCOPY 10000000\nTime: 3708.350 ms (00:03.708)\nCOPY 10000000\nTime: 3704.367 ms (00:03.704)\nCOPY 10000000\nTime: 3724.281 ms (00:03.724)\nCOPY 10000000\nTime: 3703.335 ms (00:03.703)\nCOPY 10000000\nTime: 3728.629 ms (00:03.729)\nCOPY 10000000\nTime: 3758.135 ms (00:03.758)\n\nThe first 6 rounds are like 10% better than the later 9 rounds, is this normal?\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 1 Feb 2024 11:43:07 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 01, 2024 at 10:57:58AM +0900, Michael Paquier wrote:\n> And here are the results I get for text and binary (ms, average of 15\n> queries after discarding the three highest and three lowest values):\n> test | master | v7 | v10 \n> -----------------+--------+------+------\n> from_bin_1col | 1575 | 1546 | 1575\n> from_bin_10col | 5364 | 5208 | 5230\n> from_text_1col | 1690 | 1715 | 1684\n> from_text_10col | 4875 | 4793 | 4757\n> to_bin_1col | 1717 | 1730 | 1731\n> to_bin_10col | 7728 | 7707 | 7513\n> to_text_1col | 1710 | 1730 | 1698\n> to_text_10col | 5998 | 5960 | 5987\n\nHere are some numbers from a second local machine:\n test | master | v7 | v10 \n-----------------+--------+------+------\n from_bin_1col | 508 | 467 | 461\n from_bin_10col | 2192 | 2083 | 2098\n from_text_1col | 510 | 499 | 517\n from_text_10col | 1970 | 1678 | 1654\n to_bin_1col | 575 | 577 | 573\n to_bin_10col | 2680 | 2678 | 2722\n to_text_1col | 516 | 506 | 527\n to_text_10col | 2250 | 2245 | 2235\n\nThis is confirming a speedup with COPY FROM for both text and binary,\nwith more impact with a larger number of attributes. That's harder to\nconclude about COPY TO in both cases, but at least I'm not seeing any\nregression even with some variance caused by what looks like noise.\nWe need more numbers from more people. Sutou-san or Sawada-san, or\nany volunteers?\n--\nMichael", "msg_date": "Thu, 1 Feb 2024 12:49:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 01, 2024 at 11:43:07AM +0800, Junwang Zhao wrote:\n> The first 6 rounds are like 10% better than the later 9 rounds, is this normal?\n\nEven with HEAD? 
Perhaps you have some OS cache eviction in play here?\nFWIW, I'm not seeing any of that with longer runs after 7~ tries in a\nloop of 15.\n--\nMichael", "msg_date": "Thu, 1 Feb 2024 12:56:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 1, 2024 at 11:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Feb 01, 2024 at 11:43:07AM +0800, Junwang Zhao wrote:\n> > The first 6 rounds are like 10% better than the later 9 rounds, is this normal?\n>\n> Even with HEAD? Perhaps you have some OS cache eviction in play here?\n> FWIW, I'm not seeing any of that with longer runs after 7~ tries in a\n> loop of 15.\n\nYeah, with HEAD. I'm on ubuntu 22.04, I did not change any gucs, maybe I should\nset a higher shared_buffers? But I dought that's related ;(\n\n\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 1 Feb 2024 12:20:11 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nThanks for preparing benchmark.\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 1 Feb 2024 12:49:59 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> On Thu, Feb 01, 2024 at 10:57:58AM +0900, Michael Paquier wrote:\n>> And here are the results I get for text and binary (ms, average of 15\n>> queries after discarding the three highest and three lowest values):\n>> test | master | v7 | v10 \n>> -----------------+--------+------+------\n>> from_bin_1col | 1575 | 1546 | 1575\n>> from_bin_10col | 5364 | 5208 | 5230\n>> from_text_1col | 1690 | 1715 | 1684\n>> from_text_10col | 4875 | 4793 | 4757\n>> to_bin_1col | 1717 | 1730 | 1731\n>> to_bin_10col | 7728 | 7707 | 7513\n>> to_text_1col | 1710 | 1730 | 1698\n>> to_text_10col | 5998 | 5960 | 5987\n> \n> Here are some numbers from a second local machine:\n> test | master | v7 | v10 \n> -----------------+--------+------+------\n> from_bin_1col | 508 | 467 | 461\n> from_bin_10col | 2192 | 2083 | 2098\n> from_text_1col | 510 | 499 | 517\n> from_text_10col | 1970 | 1678 | 1654\n> to_bin_1col | 575 | 577 | 573\n> to_bin_10col | 2680 | 2678 | 2722\n> to_text_1col | 516 | 506 | 527\n> to_text_10col | 2250 | 2245 | 2235\n> \n> This is confirming a speedup with COPY FROM for both text and binary,\n> with more impact with a larger number of attributes. That's harder to\n> conclude about COPY TO in both cases, but at least I'm not seeing any\n> regression even with some variance caused by what looks like noise.\n> We need more numbers from more people. Sutou-san or Sawada-san, or\n> any volunteers?\n\nHere are some numbers on my local machine (Note that my\nlocal machine isn't suitable for benchmark as I said\nbefore. 
Each number is median of \"\\watch 15\" results):\n\n1:\n direction format n_columns master v7 v10\n to text 1 1077.254 1016.953 1028.434\n to csv 1 1079.88 1055.545 1053.95\n to binary 1 1051.247 1033.93 1003.44\n to text 10 4373.168 3980.442 3955.94\n to csv 10 4753.842 4719.2 4677.643\n to binary 10 4598.374 4431.238 4285.757\n from text 1 875.729 916.526 869.283\n from csv 1 909.355 1001.277 918.655\n from binary 1 872.943 907.778 859.433\n from text 10 2594.429 2345.292 2587.603\n from csv 10 2968.972 3039.544 2964.468\n from binary 10 3072.01 3109.267 3093.983\n\n2:\n direction format n_columns master v7 v10\n to text 1 1061.908 988.768 978.291\n to csv 1 1095.109 1037.015 1041.613\n to binary 1 1076.992 1000.212 983.318\n to text 10 4336.517 3901.833 3841.789\n to csv 10 4679.411 4640.975 4570.774\n to binary 10 4465.04 4508.063 4261.749\n from text 1 866.689 917.54 830.417\n from csv 1 917.973 1695.401 871.991\n from binary 1 841.104 1422.012 820.786\n from text 10 2523.607 3147.738 2517.505\n from csv 10 2917.018 3042.685 2950.338\n from binary 10 2998.051 3128.542 3018.954\n\n3:\n direction format n_columns master v7 v10\n to text 1 1021.168 1031.183 962.945\n to csv 1 1076.549 1069.661 1060.258\n to binary 1 1024.611 1022.143 975.768\n to text 10 4327.24 3936.703 4049.893\n to csv 10 4620.436 4531.676 4685.672\n to binary 10 4457.165 4390.992 4301.463\n from text 1 887.532 907.365 888.892\n from csv 1 945.167 1012.29 895.921\n from binary 1 853.06 854.652 849.661\n from text 10 2660.509 2304.256 2527.071\n from csv 10 2913.644 2968.204 2935.081\n from binary 10 3020.812 3081.162 3090.803\n\nI'll measure again on my local machine later. I'll stop\nother processes such as Web browser, editor and so on as\nmuch as possible when I do.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 02 Feb 2024 00:19:51 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 02, 2024 at 12:19:51AM +0900, Sutou Kouhei wrote:\n> Here are some numbers on my local machine (Note that my\n> local machine isn't suitable for benchmark as I said\n> before. Each number is median of \"\\watch 15\" results):\n>>\n> I'll measure again on my local machine later. I'll stop\n> other processes such as Web browser, editor and so on as\n> much as possible when I do.\n\nThanks for compiling some numbers. This is showing a lot of variance.\nExpecially, these two lines in table 2 are showing surprising results\nfor v7:\n direction format n_columns master v7 v10\n from csv 1 917.973 1695.401 871.991\n from binary 1 841.104 1422.012 820.786\n\nI am going to try to plug in some rusage() calls in the backend for\nthe COPY paths. I hope that gives more precision about the backend\nactivity. I'll post that with more numbers.\n--\nMichael", "msg_date": "Fri, 2 Feb 2024 06:51:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 06:51:02 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> On Fri, Feb 02, 2024 at 12:19:51AM +0900, Sutou Kouhei wrote:\n>> Here are some numbers on my local machine (Note that my\n>> local machine isn't suitable for benchmark as I said\n>> before. 
Each number is median of \"\\watch 15\" results):\n>>>\n>> I'll measure again on my local machine later. I'll stop\n>> other processes such as Web browser, editor and so on as\n>> much as possible when I do.\n> \n> Thanks for compiling some numbers. This is showing a lot of variance.\n> Expecially, these two lines in table 2 are showing surprising results\n> for v7:\n> direction format n_columns master v7 v10\n> from csv 1 917.973 1695.401 871.991\n> from binary 1 841.104 1422.012 820.786\n\nHere are more numbers:\n\n1:\n direction format n_columns master v7 v10\n to text 1 1053.844 978.998 956.575\n to csv 1 1091.316 1020.584 1098.314\n to binary 1 1034.685 969.224 980.458\n to text 10 4216.264 3886.515 4111.417\n to csv 10 4649.228 4530.882 4682.988\n to binary 10 4219.228 4189.99 4211.942\n from text 1 851.697 896.968 890.458\n from csv 1 890.229 936.231 887.15\n from binary 1 784.407 817.07 938.736\n from text 10 2549.056 2233.899 2630.892\n from csv 10 2809.441 2868.411 2895.196\n from binary 10 2985.674 3027.522 3397.5\n\n2:\n direction format n_columns master v7 v10\n to text 1 1013.764 1011.968 940.855\n to csv 1 1060.431 1065.468 1040.68\n to binary 1 1013.652 1009.956 965.675\n to text 10 4411.484 4031.571 3896.836\n to csv 10 4739.625 4715.81 4631.002\n to binary 10 4374.077 4357.942 4227.215\n from text 1 955.078 922.346 866.222\n from csv 1 1040.717 986.524 905.657\n from binary 1 849.316 864.859 833.152\n from text 10 2703.209 2361.651 2533.992\n from csv 10 2990.35 3059.167 2930.632\n from binary 10 3008.375 3368.714 3055.723\n\n3:\n direction format n_columns master v7 v10\n to text 1 1084.756 1003.822 994.409\n to csv 1 1092.4 1062.536 1079.027\n to binary 1 1046.774 994.168 993.633\n to text 10 4363.51 3978.205 4124.359\n to csv 10 4866.762 4616.001 4715.052\n to binary 10 4382.412 4363.269 4213.456\n from text 1 852.976 907.315 860.749\n from csv 1 925.187 962.632 897.833\n from binary 1 824.997 897.046 828.231\n from text 10 2591.07 2358.541 2540.431\n from csv 10 2907.033 3018.486 2915.997\n from binary 10 3069.027 3209.21 3119.128\n\nOther processes are stopped while I measure them. But I'm\nnot sure these numbers are more reliable than before...\n\n> I am going to try to plug in some rusage() calls in the backend for\n> the COPY paths. I hope that gives more precision about the backend\n> activity. I'll post that with more numbers.\n\nThanks. It'll help us.\n\n\n-- \nkou\n\n\n", "msg_date": "Fri, 02 Feb 2024 09:40:56 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 02, 2024 at 06:51:02AM +0900, Michael Paquier wrote:\n> I am going to try to plug in some rusage() calls in the backend for\n> the COPY paths. I hope that gives more precision about the backend\n> activity. 
I'll post that with more numbers.\n\nAnd here they are with log_statement_stats enabled to get rusage() fot\nthese queries:\n test | user_s | system_s | elapsed_s \n----------------------+----------+----------+-----------\n head_to_bin_1col | 1.639761 | 0.007998 | 1.647762\n v7_to_bin_1col | 1.645499 | 0.004003 | 1.649498\n v10_to_bin_1col | 1.639466 | 0.004008 | 1.643488\n\n head_to_bin_10col | 7.486369 | 0.056007 | 7.542485\n v7_to_bin_10col | 7.314341 | 0.039990 | 7.354743\n v10_to_bin_10col | 7.329355 | 0.052007 | 7.381408\n\n head_to_text_1col | 1.581140 | 0.012000 | 1.593166\n v7_to_text_1col | 1.615441 | 0.003992 | 1.619446\n v10_to_text_1col | 1.613443 | 0.000000 | 1.613454\n\n head_to_text_10col | 5.897014 | 0.011990 | 5.909063\n v7_to_text_10col | 5.722872 | 0.016014 | 5.738979\n v10_to_text_10col | 5.762286 | 0.011993 | 5.774265\n\n head_from_bin_1col | 1.524038 | 0.020000 | 1.544046\n v7_from_bin_1col | 1.551367 | 0.016015 | 1.567408\n v10_from_bin_1col | 1.560087 | 0.016001 | 1.576115\n\n head_from_bin_10col | 5.238444 | 0.139993 | 5.378595\n v7_from_bin_10col | 5.170503 | 0.076021 | 5.246588\n v10_from_bin_10col | 5.106496 | 0.112020 | 5.218565\n\n head_from_text_1col | 1.664124 | 0.003998 | 1.668172\n v7_from_text_1col | 1.720616 | 0.007990 | 1.728617\n v10_from_text_1col | 1.683950 | 0.007990 | 1.692098\n\n head_from_text_10col | 4.859651 | 0.015996 | 4.875747\n v7_from_text_10col | 4.775975 | 0.032000 | 4.808051\n v10_from_text_10col | 4.737512 | 0.028012 | 4.765522\n(24 rows)\n\nI'm looking at this table, and what I can see is still a lot of\nvariance in the tests with tables involving 1 attribute. However, a\nsecond thing stands out to me here: there is a speedup with the\n10-attribute case for all both COPY FROM and COPY TO, and both\nformats. The data posted at [1] is showing me the same trend. In\nshort, let's move on with this split refactoring with the per-row\ncallbacks. That clearly shows benefits.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Fri, 2 Feb 2024 09:52:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 02, 2024 at 09:40:56AM +0900, Sutou Kouhei wrote:\n> Thanks. It'll help us.\n\nI have done a review of v10, see v11 attached which is still WIP, with\nthe patches for COPY TO and COPY FROM merged together. Note that I'm\nthinking to merge them into a single commit.\n\n@@ -74,11 +75,11 @@ typedef struct CopyFormatOptions\n bool convert_selectively; /* do selective binary conversion? */\n CopyOnErrorChoice on_error; /* what to do when error happened */\n List *convert_select; /* list of column names (can be NIL) */\n+ const CopyToRoutine *to_routine; /* callback routines for COPY TO */\n } CopyFormatOptions;\n\nAdding the routines to the structure for the format options is in my\nopinion incorrect. The elements of this structure are first processed\nin the option deparsing path, and then we need to use the options to\nguess which routines we need. A more natural location is cstate\nitself, so as the pointer to the routines is isolated within copyto.c\nand copyfrom_internal.h. My point is: the routines are an\nimplementation detail that the centralized copy.c has no need to know\nabout. 
This also led to a strange separation with\nProcessCopyOptionFormatFrom() and ProcessCopyOptionFormatTo() to fit\nthe hole in-between.\n\nThe separation between cstate and the format-related fields could be\nmuch better, though I am not sure if it is worth doing as it\nintroduces more duplication. For example, max_fields and raw_fields\nare specific to text and csv, while binary does not care much.\nPerhaps this is just useful to be for custom formats.\n\ncopyapi.h needs more documentation, like what is expected for\nextension developers when using these, what are the arguments, etc. I\nhave added what I had in mind for now.\n\n+typedef char *(*PostpareColumnValue) (CopyFromState cstate, char *string, int m);\n\nCopyReadAttributes and PostpareColumnValue are also callbacks specific\nto text and csv, except that they are used within the per-row\ncallbacks. The same can be said about CopyAttributeOutHeaderFunction.\nIt seems to me that it would be less confusing to store pointers to\nthem in the routine structures, where the final picture involves not\nhaving multiple layers of APIs like CopyToCSVStart,\nCopyAttributeOutTextValue, etc. These *have* to be documented\nproperly in copyapi.h, and this is much easier now that cstate stores\nthe routine pointers. That would also make simpler function stacks.\nNote that I have not changed that in the v11 attached.\n\nThis business with the extra callbacks required for csv and text is my\nmain point of contention, but I'd be OK once the model of the APIs is\nmore linear, with everything in Copy{From,To}State. The changes would\nbe rather simple, and I'd be OK to put my hands on it. Just,\nSutou-san, would you agree with my last point about these extra\ncallbacks?\n--\nMichael", "msg_date": "Fri, 2 Feb 2024 15:21:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 2, 2024 at 2:21 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Feb 02, 2024 at 09:40:56AM +0900, Sutou Kouhei wrote:\n> > Thanks. It'll help us.\n>\n> I have done a review of v10, see v11 attached which is still WIP, with\n> the patches for COPY TO and COPY FROM merged together. Note that I'm\n> thinking to merge them into a single commit.\n>\n> @@ -74,11 +75,11 @@ typedef struct CopyFormatOptions\n> bool convert_selectively; /* do selective binary conversion? */\n> CopyOnErrorChoice on_error; /* what to do when error happened */\n> List *convert_select; /* list of column names (can be NIL) */\n> + const CopyToRoutine *to_routine; /* callback routines for COPY TO */\n> } CopyFormatOptions;\n>\n> Adding the routines to the structure for the format options is in my\n> opinion incorrect. The elements of this structure are first processed\n> in the option deparsing path, and then we need to use the options to\n> guess which routines we need. A more natural location is cstate\n> itself, so as the pointer to the routines is isolated within copyto.c\n\nI agree CopyToRoutine should be placed into CopyToStateData, but\nwhy set it after ProcessCopyOptions, the implementation of\nCopyToGetRoutine doesn't make sense if we want to support custom\nformat in the future.\n\nSeems the refactor of v11 only considered performance but not\n*extendable copy format*.\n\n> and copyfrom_internal.h. My point is: the routines are an\n> implementation detail that the centralized copy.c has no need to know\n> about. 
This also led to a strange separation with\n> ProcessCopyOptionFormatFrom() and ProcessCopyOptionFormatTo() to fit\n> the hole in-between.\n>\n> The separation between cstate and the format-related fields could be\n> much better, though I am not sure if it is worth doing as it\n> introduces more duplication. For example, max_fields and raw_fields\n> are specific to text and csv, while binary does not care much.\n> Perhaps this is just useful to be for custom formats.\n\nI think those can be placed in format specific fields by utilizing the opaque\nspace, but yeah, this will introduce duplication.\n\n>\n> copyapi.h needs more documentation, like what is expected for\n> extension developers when using these, what are the arguments, etc. I\n> have added what I had in mind for now.\n>\n> +typedef char *(*PostpareColumnValue) (CopyFromState cstate, char *string, int m);\n>\n> CopyReadAttributes and PostpareColumnValue are also callbacks specific\n> to text and csv, except that they are used within the per-row\n> callbacks. The same can be said about CopyAttributeOutHeaderFunction.\n> It seems to me that it would be less confusing to store pointers to\n> them in the routine structures, where the final picture involves not\n> having multiple layers of APIs like CopyToCSVStart,\n> CopyAttributeOutTextValue, etc. These *have* to be documented\n> properly in copyapi.h, and this is much easier now that cstate stores\n> the routine pointers. That would also make simpler function stacks.\n> Note that I have not changed that in the v11 attached.\n>\n> This business with the extra callbacks required for csv and text is my\n> main point of contention, but I'd be OK once the model of the APIs is\n> more linear, with everything in Copy{From,To}State. The changes would\n> be rather simple, and I'd be OK to put my hands on it. Just,\n> Sutou-san, would you agree with my last point about these extra\n> callbacks?\n> --\n> Michael\n\nIf V7 and V10 have no performance reduction, then I think V6 is also\ngood with performance, since most of the time goes to CopyToOneRow\nand CopyFromOneRow.\n\nI just think we should take the *extendable* into consideration at\nthe beginning.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 2 Feb 2024 15:27:15 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 15:21:31 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> I have done a review of v10, see v11 attached which is still WIP, with\n> the patches for COPY TO and COPY FROM merged together. Note that I'm\n> thinking to merge them into a single commit.\n\nOK. I don't have a strong opinion for commit unit.\n\n> @@ -74,11 +75,11 @@ typedef struct CopyFormatOptions\n> bool convert_selectively; /* do selective binary conversion? */\n> CopyOnErrorChoice on_error; /* what to do when error happened */\n> List *convert_select; /* list of column names (can be NIL) */\n> + const CopyToRoutine *to_routine; /* callback routines for COPY TO */\n> } CopyFormatOptions;\n> \n> Adding the routines to the structure for the format options is in my\n> opinion incorrect. 
The elements of this structure are first processed\n> in the option deparsing path, and then we need to use the options to\n> guess which routines we need.\n\nThis was discussed with Sawada-san a bit before. [1][2]\n\n[1] https://www.postgresql.org/message-id/flat/CAD21AoBmNiWwrspuedgAPgbAqsn7e7NoZYF6gNnYBf%2BgXEk9Mg%40mail.gmail.com#bfd19262d261c67058fdb8d64e6a723c\n[2] https://www.postgresql.org/message-id/flat/20240130.144531.1257430878438173740.kou%40clear-code.com#fc55392d77f400fc74e42686fe7e348a\n\nI kept the routines in CopyFormatOptions for custom option\nprocessing. But I should have not cared about it in this\npatch set because this patch set doesn't include custom\noption processing.\n\nSo I'm OK that we move the routines to\nCopy{From,To}StateData.\n\n> This also led to a strange separation with\n> ProcessCopyOptionFormatFrom() and ProcessCopyOptionFormatTo() to fit\n> the hole in-between.\n\nThey are also for custom option processing. We don't need to\ncare about them in this patch set either.\n\n> copyapi.h needs more documentation, like what is expected for\n> extension developers when using these, what are the arguments, etc. I\n> have added what I had in mind for now.\n\nThanks! I'm not good at writing documentation in English...\n\n> +typedef char *(*PostpareColumnValue) (CopyFromState cstate, char *string, int m);\n> \n> CopyReadAttributes and PostpareColumnValue are also callbacks specific\n> to text and csv, except that they are used within the per-row\n> callbacks. The same can be said about CopyAttributeOutHeaderFunction.\n> It seems to me that it would be less confusing to store pointers to\n> them in the routine structures, where the final picture involves not\n> having multiple layers of APIs like CopyToCSVStart,\n> CopyAttributeOutTextValue, etc. These *have* to be documented\n> properly in copyapi.h, and this is much easier now that cstate stores\n> the routine pointers. That would also make simpler function stacks.\n> Note that I have not changed that in the v11 attached.\n> \n> This business with the extra callbacks required for csv and text is my\n> main point of contention, but I'd be OK once the model of the APIs is\n> more linear, with everything in Copy{From,To}State. The changes would\n> be rather simple, and I'd be OK to put my hands on it. Just,\n> Sutou-san, would you agree with my last point about these extra\n> callbacks?\n\nI'm OK with the approach. But how about adding the extra\ncallbacks to Copy{From,To}StateData not\nCopy{From,To}Routines like CopyToStateData::data_dest_cb and\nCopyFromStateData::data_source_cb? They are only needed for\n\"text\" and \"csv\". So we don't need to add them to\nCopy{From,To}Routines to keep required callback minimum.\n\nWhat is the better next action for us? 
Do you want to\ncomplete the WIP v11 patch set by yourself (and commit it)?\nOr should I take over it?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 02 Feb 2024 16:33:19 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3LxnBwNRPRwvmimDvOkPvYL8pB1+rhLBnxjeddFt3MeNw@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 15:27:15 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> I agree CopyToRoutine should be placed into CopyToStateData, but\n> why set it after ProcessCopyOptions, the implementation of\n> CopyToGetRoutine doesn't make sense if we want to support custom\n> format in the future.\n> \n> Seems the refactor of v11 only considered performance but not\n> *extendable copy format*.\n\nRight.\nWe focus on performance for now. And then we will focus on\nextendability. [1]\n\n[1] https://www.postgresql.org/message-id/flat/20240130.171511.2014195814665030502.kou%40clear-code.com#757a48c273f140081656ec8eb69f502b\n\n> If V7 and V10 have no performance reduction, then I think V6 is also\n> good with performance, since most of the time goes to CopyToOneRow\n> and CopyFromOneRow.\n\nDon't worry. I'll re-submit changes in the v6 patch set\nagain after the current patch set that focuses on\nperformance is merged.\n\n> I just think we should take the *extendable* into consideration at\n> the beginning.\n\nIntroducing Copy{To,From}Routine is also valuable for\nextendability. We can improve extendability later. Let's\nfocus on only performance for now to introduce\nCopy{To,From}Routine.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 02 Feb 2024 16:47:02 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 02, 2024 at 04:33:19PM +0900, Sutou Kouhei wrote:\n> Hi,\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 15:21:31 +0900,\n> Michael Paquier <[email protected]> wrote:\n> \n> > I have done a review of v10, see v11 attached which is still WIP, with\n> > the patches for COPY TO and COPY FROM merged together. Note that I'm\n> > thinking to merge them into a single commit.\n> \n> OK. I don't have a strong opinion for commit unit.\n> \n> > @@ -74,11 +75,11 @@ typedef struct CopyFormatOptions\n> > bool convert_selectively; /* do selective binary conversion? */\n> > CopyOnErrorChoice on_error; /* what to do when error happened */\n> > List *convert_select; /* list of column names (can be NIL) */\n> > + const CopyToRoutine *to_routine; /* callback routines for COPY TO */\n> > } CopyFormatOptions;\n> > \n> > Adding the routines to the structure for the format options is in my\n> > opinion incorrect. The elements of this structure are first processed\n> > in the option deparsing path, and then we need to use the options to\n> > guess which routines we need.\n> \n> This was discussed with Sawada-san a bit before. 
[1][2]\n> \n> [1] https://www.postgresql.org/message-id/flat/CAD21AoBmNiWwrspuedgAPgbAqsn7e7NoZYF6gNnYBf%2BgXEk9Mg%40mail.gmail.com#bfd19262d261c67058fdb8d64e6a723c\n> [2] https://www.postgresql.org/message-id/flat/20240130.144531.1257430878438173740.kou%40clear-code.com#fc55392d77f400fc74e42686fe7e348a\n> \n> I kept the routines in CopyFormatOptions for custom option\n> processing. But I should have not cared about it in this\n> patch set because this patch set doesn't include custom\n> option processing.\n\nOne idea I was considering is whether we should use a special value in\nthe \"format\" DefElem, say \"custom:$my_custom_format\" where it would be\npossible to bypass the formay check when processing options and find\nthe routines after processing all the options. I'm not wedded to\nthat, but attaching the routines to the state data is IMO the correct\nthing, because this has nothing to do with CopyFormatOptions.\n\n> So I'm OK that we move the routines to\n> Copy{From,To}StateData.\n\nOkay.\n\n>> copyapi.h needs more documentation, like what is expected for\n>> extension developers when using these, what are the arguments, etc. I\n>> have added what I had in mind for now.\n> \n> Thanks! I'm not good at writing documentation in English...\n\nNo worries.\n\n> I'm OK with the approach. But how about adding the extra\n> callbacks to Copy{From,To}StateData not\n> Copy{From,To}Routines like CopyToStateData::data_dest_cb and\n> CopyFromStateData::data_source_cb? They are only needed for\n> \"text\" and \"csv\". So we don't need to add them to\n> Copy{From,To}Routines to keep required callback minimum.\n\nAnd set them in cstate while we are in the Start routine, right? Hmm.\nWhy not.. That would get rid of the multiples layers v11 has, which\nis my pain point, and we have many fields in cstate that are already\nused on a per-format basis.\n\n> What is the better next action for us? Do you want to\n> complete the WIP v11 patch set by yourself (and commit it)?\n> Or should I take over it?\n\nI was planning to work on that, but wanted to be sure how you felt\nabout the problem with text and csv first.\n--\nMichael", "msg_date": "Fri, 2 Feb 2024 17:04:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 17:04:28 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> One idea I was considering is whether we should use a special value in\n> the \"format\" DefElem, say \"custom:$my_custom_format\" where it would be\n> possible to bypass the formay check when processing options and find\n> the routines after processing all the options. I'm not wedded to\n> that, but attaching the routines to the state data is IMO the correct\n> thing, because this has nothing to do with CopyFormatOptions.\n\nThanks for sharing your idea.\nLet's discuss how to support custom options after we\ncomplete the current performance changes.\n\n>> I'm OK with the approach. But how about adding the extra\n>> callbacks to Copy{From,To}StateData not\n>> Copy{From,To}Routines like CopyToStateData::data_dest_cb and\n>> CopyFromStateData::data_source_cb? They are only needed for\n>> \"text\" and \"csv\". 
So we don't need to add them to\n>> Copy{From,To}Routines to keep required callback minimum.\n> \n> And set them in cstate while we are in the Start routine, right?\n\nI imagined that it's done around the following part:\n\n@@ -1418,6 +1579,9 @@ BeginCopyFrom(ParseState *pstate,\n /* Extract options from the statement node tree */\n ProcessCopyOptions(pstate, &cstate->opts, true /* is_from */ , options);\n \n+ /* Set format routine */\n+ cstate->routine = CopyFromGetRoutine(cstate->opts);\n+\n /* Process the target relation */\n cstate->rel = rel;\n \n\nExample1:\n\n/* Set format routine */\ncstate->routine = CopyFromGetRoutine(cstate->opts);\nif (!cstate->opts.binary)\n if (cstate->opts.csv_mode)\n cstate->copy_read_attributes = CopyReadAttributesCSV;\n else\n cstate->copy_read_attributes = CopyReadAttributesText;\n\nExample2:\n\nstatic void\nCopyFromSetRoutine(CopyFromState cstate)\n{\n if (cstate->opts.csv_mode)\n {\n cstate->routine = &CopyFromRoutineCSV;\n cstate->copy_read_attributes = CopyReadAttributesCSV;\n }\n else if (cstate.binary)\n cstate->routine = &CopyFromRoutineBinary;\n else\n {\n cstate->routine = &CopyFromRoutineText;\n cstate->copy_read_attributes = CopyReadAttributesText;\n }\n}\n\nBeginCopyFrom()\n{\n /* Set format routine */\n CopyFromSetRoutine(cstate);\n}\n\n\nBut I don't object your original approach. If we have the\nextra callbacks in Copy{From,To}Routines, I just don't use\nthem for my custom format extension.\n\n>> What is the better next action for us? Do you want to\n>> complete the WIP v11 patch set by yourself (and commit it)?\n>> Or should I take over it?\n> \n> I was planning to work on that, but wanted to be sure how you felt\n> about the problem with text and csv first.\n\nOK.\nMy opinion is the above. I have an idea how to implement it\nbut it's not a strong idea. You can choose whichever you like.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 02 Feb 2024 17:46:18 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 02, 2024 at 05:46:18PM +0900, Sutou Kouhei wrote:\n> Hi,\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 2 Feb 2024 17:04:28 +0900,\n> Michael Paquier <[email protected]> wrote:\n> \n> > One idea I was considering is whether we should use a special value in\n> > the \"format\" DefElem, say \"custom:$my_custom_format\" where it would be\n> > possible to bypass the formay check when processing options and find\n> > the routines after processing all the options. I'm not wedded to\n> > that, but attaching the routines to the state data is IMO the correct\n> > thing, because this has nothing to do with CopyFormatOptions.\n> \n> Thanks for sharing your idea.\n> Let's discuss how to support custom options after we\n> complete the current performance changes.\n> \n> >> I'm OK with the approach. But how about adding the extra\n> >> callbacks to Copy{From,To}StateData not\n> >> Copy{From,To}Routines like CopyToStateData::data_dest_cb and\n> >> CopyFromStateData::data_source_cb? They are only needed for\n> >> \"text\" and \"csv\". 
So we don't need to add them to\n> >> Copy{From,To}Routines to keep required callback minimum.\n> > \n> > And set them in cstate while we are in the Start routine, right?\n> \n> I imagined that it's done around the following part:\n> \n> @@ -1418,6 +1579,9 @@ BeginCopyFrom(ParseState *pstate,\n> /* Extract options from the statement node tree */\n> ProcessCopyOptions(pstate, &cstate->opts, true /* is_from */ , options);\n> \n> + /* Set format routine */\n> + cstate->routine = CopyFromGetRoutine(cstate->opts);\n> +\n> /* Process the target relation */\n> cstate->rel = rel;\n> \n> \n> Example1:\n> \n> /* Set format routine */\n> cstate->routine = CopyFromGetRoutine(cstate->opts);\n> if (!cstate->opts.binary)\n> if (cstate->opts.csv_mode)\n> cstate->copy_read_attributes = CopyReadAttributesCSV;\n> else\n> cstate->copy_read_attributes = CopyReadAttributesText;\n> \n> Example2:\n> \n> static void\n> CopyFromSetRoutine(CopyFromState cstate)\n> {\n> if (cstate->opts.csv_mode)\n> {\n> cstate->routine = &CopyFromRoutineCSV;\n> cstate->copy_read_attributes = CopyReadAttributesCSV;\n> }\n> else if (cstate.binary)\n> cstate->routine = &CopyFromRoutineBinary;\n> else\n> {\n> cstate->routine = &CopyFromRoutineText;\n> cstate->copy_read_attributes = CopyReadAttributesText;\n> }\n> }\n> \n> BeginCopyFrom()\n> {\n> /* Set format routine */\n> CopyFromSetRoutine(cstate);\n> }\n> \n> \n> But I don't object your original approach. If we have the\n> extra callbacks in Copy{From,To}Routines, I just don't use\n> them for my custom format extension.\n> \n> >> What is the better next action for us? Do you want to\n> >> complete the WIP v11 patch set by yourself (and commit it)?\n> >> Or should I take over it?\n> > \n> > I was planning to work on that, but wanted to be sure how you felt\n> > about the problem with text and csv first.\n> \n> OK.\n> My opinion is the above. I have an idea how to implement it\n> but it's not a strong idea. You can choose whichever you like.\n\nSo, I've looked at all that today, and finished by applying two\npatches as of 2889fd23be56 and 95fb5b49024a to get some of the\nweirdness with the workhorse routines out of the way. Both have added\ncallbacks assigned in their respective cstate data for text and csv.\nAs this is called within the OneRow routine, I can live with that. If\nthere is an opposition to that, we could just attach it within the\nroutines. The CopyAttributeOut routines had a strange argument\nlayout, actually, the flag for the quotes is required as a header uses\nno quotes, but there was little point in the \"single arg\" case, so\nI've removed it.\n\nI am attaching a v12 which is close to what I want it to be, with\nmuch more documentation and comments. There are two things that I've\nchanged compared to the previous versions though:\n1) I have added a callback to set up the input and output functions\nrather than attach that in the Start callback. These routines are now\ncalled once per argument, where we know that the argument is valid.\nThe callbacks are in charge of filling the FmgrInfos. There are some\ngood reasons behind that:\n- No need for plugins to think about how to allocate this data. v11\nand other versions were doing things the wrong way by allocating this\nstuff in the wrong memory context as we switch to the COPY context\nwhen we are in the Start routines.\n- This avoids attisdropped problems, and we have a long history of\nbugs regarding that. 
I'm ready to bet that custom formats would get\nthat wrong.\n2) I have backpedaled on the postpare callback, which did not bring\nmuch in clarity IMO while being a CSV-only callback. Note that we\nhave in copyfromparse.c more paths that are only for CSV but the past\nversions of the patch never cared about that. This makes the text and\nCSV implementations much closer to each other, as a result.\n\nI had mixed feelings about CopySendEndOfRow() being split to\nCopyToTextSendEndOfRow() to send the line terminations when sending a\nCSV/text row, but I'm OK with that at the end. v12 is mostly about\nmoving code around at this point, making it kind of straight-forward\nto follow as the code blocks are the same. I'm still planning to do a\nfew more measurements, just lacked of time. Let me know if you have\ncomments about all that.\n--\nMichael", "msg_date": "Mon, 5 Feb 2024 16:14:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 5 Feb 2024 16:14:08 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> So, I've looked at all that today, and finished by applying two\n> patches as of 2889fd23be56 and 95fb5b49024a to get some of the\n> weirdness with the workhorse routines out of the way.\n\nThanks!\n\n> As this is called within the OneRow routine, I can live with that. If\n> there is an opposition to that, we could just attach it within the\n> routines.\n\nI don't object the approach.\n\n> I am attaching a v12 which is close to what I want it to be, with\n> much more documentation and comments. There are two things that I've\n> changed compared to the previous versions though:\n> 1) I have added a callback to set up the input and output functions\n> rather than attach that in the Start callback.\n\nI'm OK with this. I just don't use them in Apache Arrow COPY\nFORMAT extension.\n\n> - No need for plugins to think about how to allocate this data. v11\n> and other versions were doing things the wrong way by allocating this\n> stuff in the wrong memory context as we switch to the COPY context\n> when we are in the Start routines.\n\nOh, sorry. I missed it when I moved them.\n\n> 2) I have backpedaled on the postpare callback, which did not bring\n> much in clarity IMO while being a CSV-only callback. Note that we\n> have in copyfromparse.c more paths that are only for CSV but the past\n> versions of the patch never cared about that. This makes the text and\n> CSV implementations much closer to each other, as a result.\n\nAh, sorry. I forgot to eliminate cstate->opts.csv_mode in\nCopyReadLineText(). The postpare callback is for\noptimization. If it doesn't improve performance, we don't\nneed to introduce it.\n\nWe may want to try eliminating cstate->opts.csv_mode in\nCopyReadLineText() for performance. But we don't need to\ndo this in introducing CopyFromRoutine. We can defer it.\n\nSo I don't object removing the postpare callback.\n\n> Let me know if you have\n> comments about all that.\n\nHere are some comments for the patch:\n\n+\t/*\n+\t * Called when COPY FROM is started to set up the input functions\n+\t * associated to the relation's attributes writing to. `fmgr_info` can be\n\nfmgr_info ->\nfinfo\n\n+\t * optionally filled to provide the catalog information of the input\n+\t * function. 
`typioparam` can be optinally filled to define the OID of\n\noptinally ->\noptionally\n\n+\t * the type to pass to the input function. `atttypid` is the OID of data\n+\t * type used by the relation's attribute.\n+\t */\n+\tvoid\t\t(*CopyFromInFunc) (Oid atttypid, FmgrInfo *finfo,\n+\t\t\t\t\t\t\t\t Oid *typioparam);\n\nHow about passing CopyFromState cstate too like other\ncallbacks for consistency?\n\n+\t/*\n+\t * Copy one row to a set of `values` and `nulls` of size tupDesc->natts.\n+\t *\n+\t * 'econtext' is used to evaluate default expression for each column that\n+\t * is either not read from the file or is using the DEFAULT option of COPY\n\nor is ->\nor\n\n(I'm not sure...)\n\n+\t * FROM. It is NULL if no default values are used.\n+\t *\n+\t * Returns false if there are no more tuples to copy.\n+\t */\n+\tbool\t\t(*CopyFromOneRow) (CopyFromState cstate, ExprContext *econtext,\n+\t\t\t\t\t\t\t\t Datum *values, bool *nulls);\n\n+typedef struct CopyToRoutine\n+{\n+\t/*\n+\t * Called when COPY TO is started to set up the output functions\n+\t * associated to the relation's attributes reading from. `fmgr_info` can\n\nfmgr_info ->\nfinfo\n\n+\t * be optionally filled. `atttypid` is the OID of data type used by the\n+\t * relation's attribute.\n+\t */\n+\tvoid\t\t(*CopyToOutFunc) (Oid atttypid, FmgrInfo *finfo);\n\nHow about passing CopyToState cstate too like other\ncallbacks for consistency?\n\n\n@@ -200,4 +204,10 @@ extern void ReceiveCopyBinaryHeader(CopyFromState cstate);\n extern int\tCopyReadAttributesCSV(CopyFromState cstate);\n extern int\tCopyReadAttributesText(CopyFromState cstate);\n \n+/* Callbacks for CopyFromRoutine->OneRow */\n\nCopyFromRoutine->OneRow ->\nCopyFromRoutine->CopyFromOneRow\n\n+extern bool CopyFromTextOneRow(CopyFromState cstate, ExprContext *econtext,\n+\t\t\t\t\t\t\t Datum *values, bool *nulls);\n+extern bool CopyFromBinaryOneRow(CopyFromState cstate, ExprContext *econtext,\n+\t\t\t\t\t\t\t\t Datum *values, bool *nulls);\n+\n #endif\t\t\t\t\t\t\t/* COPYFROM_INTERNAL_H */\n\n+/*\n+ * CopyFromTextStart\n\nCopyFromTextStart ->\nCopyFromBinaryStart\n\n+ *\n+ * Start of COPY FROM for binary format.\n+ */\n+static void\n+CopyFromBinaryStart(CopyFromState cstate, TupleDesc tupDesc)\n+{\n+\t/* Read and verify binary header */\n+\tReceiveCopyBinaryHeader(cstate);\n+}\n+\n+/*\n+ * CopyFromTextEnd\n\nCopyFromTextEnd ->\nCopyFromBinaryEnd\n\n+ *\n+ * End of COPY FROM for binary format.\n+ */\n+static void\n+CopyFromBinaryEnd(CopyFromState cstate)\n+{\n+\t/* nothing to do */\n+}\n\n\ndiff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list\nindex 91433d439b..d02a7773e3 100644\n--- a/src/tools/pgindent/typedefs.list\n+++ b/src/tools/pgindent/typedefs.list\n@@ -473,6 +473,7 @@ ConvertRowtypeExpr\n CookedConstraint\n CopyDest\n CopyFormatOptions\n+CopyFromRoutine\n CopyFromState\n CopyFromStateData\n CopyHeaderChoice\n@@ -482,6 +483,7 @@ CopyMultiInsertInfo\n CopyOnErrorChoice\n CopySource\n CopyStmt\n+CopyToRoutine\n CopyToState\n CopyToStateData\n Cost\n\nWow! I didn't know that we need to update typedefs.list when\nI add a \"typedef struct\".\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 05 Feb 2024 18:05:15 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nHave you benchmarked the performance effects of 2889fd23be5 ? 
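\n\n(Anything where the per-attribute path dominates should be enough to see it. As a sketch of the kind of query I mean, something like\n\nCOPY (SELECT 1::int2, 2::int2, 3::int2, 4::int2, 5::int2, generate_series(1, 1000000::int4)) TO '/dev/null';\n\nrepeated a few times under pgbench -t, with more int2 columns added so the per-attribute cost dominates.)\n\n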
I'd not at all\nbe surprised if it lead to a measurable performance regression.\n\nI think callbacks for individual attributes is the wrong approach - the\ndispatch needs to happen at a higher level, otherwise there are too many\nindirect function calls.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Feb 2024 10:21:18 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Feb 05, 2024 at 06:05:15PM +0900, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 5 Feb 2024 16:14:08 +0900,\n> Michael Paquier <[email protected]> wrote:\n>> 2) I have backpedaled on the postpare callback, which did not bring\n>> much in clarity IMO while being a CSV-only callback. Note that we\n>> have in copyfromparse.c more paths that are only for CSV but the past\n>> versions of the patch never cared about that. This makes the text and\n>> CSV implementations much closer to each other, as a result.\n> \n> Ah, sorry. I forgot to eliminate cstate->opts.csv_mode in\n> CopyReadLineText(). The postpare callback is for\n> optimization. If it doesn't improve performance, we don't\n> need to introduce it.\n\nNo worries.\n\n> We may want to try eliminating cstate->opts.csv_mode in\n> CopyReadLineText() for performance. But we don't need to\n> do this in introducing CopyFromRoutine. We can defer it.\n> \n> So I don't object removing the postpare callback.\n\nRather related, but there has been a comment from Andres about this\nkind of splits a few hours ago, so perhaps this is for the best:\nhttps://www.postgresql.org/message-id/20240205182118.h5rkbnjgujwzuxip%40awork3.anarazel.de\n\nI'll reply to this one in a bit.\n\n>> Let me know if you have\n>> comments about all that.\n> \n> Here are some comments for the patch:\n\nThanks. My head was spinning after reading the diffs more than 20\ntimes :)\n\n> fmgr_info ->\n> finfo\n> optinally ->\n> optionally\n> CopyFromRoutine->OneRow ->\n> CopyFromRoutine->CopyFromOneRow\n> CopyFromTextStart ->\n> CopyFromBinaryStart\n> CopyFromTextEnd ->\n> CopyFromBinaryEnd\n\nFixed all these.\n\n> How about passing CopyFromState cstate too like other\n> callbacks for consistency?\n\nYes, I was wondering a bit if this can be useful for the custom\nformats.\n\n> +\t/*\n> +\t * Copy one row to a set of `values` and `nulls` of size tupDesc->natts.\n> +\t *\n> +\t * 'econtext' is used to evaluate default expression for each column that\n> +\t * is either not read from the file or is using the DEFAULT option of COPY\n> \n> or is ->\n> or\n\n\"or is\" is correct here IMO.\n\n> Wow! I didn't know that we need to update typedefs.list when\n> I add a \"typedef struct\".\n\nThat's for the automated indentation. This is a habit I have when it\ncomes to work on shaping up patches to avoid weird diffs with pgindent\nand new structure names. It's OK to forget about it :)\n\nAttaching a v13 for now.\n--\nMichael", "msg_date": "Tue, 6 Feb 2024 08:48:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Feb 05, 2024 at 10:21:18AM -0800, Andres Freund wrote:\n> Have you benchmarked the performance effects of 2889fd23be5 ? 
I'd not at all\n> be surprised if it lead to a measurable performance regression.\n\nYes, I was looking at runtimes and some profiles around CopyOneRowTo()\nto see the effects that this has yesterday. The principal point of\ncontention is CopyOneRowTo() where the callback is called once per\nattribute, so more attributes stress it more. The method I've used is\ndescribed in [1], where I've used up to 50 int attributes (fixed value\nsize to limit appendBinaryStringInfo) with 5 million rows, with\nshared_buffers large enough that all the data fits in it, while\nprewarming the whole. Postgres runs on a tmpfs, and COPY TO is\nredirected to /dev/null.\n\nFor reference, I still have some reports lying around (-g attached to\nthe backend process running the COPY TO queries with text format), so\nhere you go:\n* At 95fb5b49024a:\n- 83.04% 11.46% postgres postgres [.] CopyOneRowTo\n - 71.58% CopyOneRowTo\n - 30.37% OutputFunctionCall\n + 27.77% int4out\n + 13.18% CopyAttributeOutText\n + 10.19% appendBinaryStringInfo\n 3.76% 0xffffa7096234\n 2.78% 0xffffa7096214\n + 2.49% CopySendEndOfRow\n 1.21% int4out\n 0.83% memcpy@plt\n 0.76% 0xffffa7094ba8\n 0.75% 0xffffa7094ba4\n 0.69% pgstat_progress_update_param\n 0.57% enlargeStringInfo\n 0.52% 0xffffa7096204\n 0.52% 0xffffa7094b8c\n + 11.46% _start\n* At 2889fd23be56:\n- 83.53% 14.24% postgres postgres [.] CopyOneRowTo\n - 69.29% CopyOneRowTo\n - 29.89% OutputFunctionCall\n + 27.43% int4out\n - 12.89% CopyAttributeOutText\n pg_server_to_any\n + 9.31% appendBinaryStringInfo\n 3.68% 0xffffa6940234\n + 2.74% CopySendEndOfRow\n 2.43% 0xffffa6940214\n 1.36% int4out\n 0.74% 0xffffa693eba8\n 0.73% pgstat_progress_update_param\n 0.65% memcpy@plt\n 0.53% MemoryContextReset\n + 14.24% _start\n\nIf you have concerns about that, I'm OK to revert, I'm not wedded to\nthis level of control. Note that I've actually seen *better*\nruntimes.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n\n> I think callbacks for individual attributes is the wrong approach - the\n> dispatch needs to happen at a higher level, otherwise there are too many\n> indirect function calls.\n\nHmm. Do you have concerns about v13 posted on [2] then? If yes, then\nI'd assume that this shuts down the whole thread or that it needs a\ncompletely different approach, because we will multiply indirect\nfunction calls that can control how data is generated for each row,\nwhich is the original case that Sutou-san wanted to tackle. There\ncould be many indirect calls with custom callbacks that control how\nthings should be processed at row-level, and COPY likes doing work\nwith loads of data. The End, Start and In/OutFunc callbacks are\ncalled only once per query, so these don't matter AFAIU.\n\n[2]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Tue, 6 Feb 2024 10:01:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn 2024-02-06 10:01:36 +0900, Michael Paquier wrote:\n> On Mon, Feb 05, 2024 at 10:21:18AM -0800, Andres Freund wrote:\n> > Have you benchmarked the performance effects of 2889fd23be5 ? I'd not at all\n> > be surprised if it lead to a measurable performance regression.\n>\n> Yes, I was looking at runtimes and some profiles around CopyOneRowTo()\n> to see the effects that this has yesterday. 
The principal point of\n> contention is CopyOneRowTo() where the callback is called once per\n> attribute, so more attributes stress it more.\n\nRight.\n\n\n> If you have concerns about that, I'm OK to revert, I'm not wedded to\n> this level of control. Note that I've actually seen *better*\n> runtimes.\n\nI'm somewhat worried that handling the different formats at that level will\nmake it harder to improve copy performance - it's quite attrociously slow\nright now. The more we reduce the per-row/field overhead, the more the\ndispatch overhead will matter.\n\n\n\n> [1]: https://www.postgresql.org/message-id/[email protected]\n>\n> > I think callbacks for individual attributes is the wrong approach - the\n> > dispatch needs to happen at a higher level, otherwise there are too many\n> > indirect function calls.\n>\n> Hmm. Do you have concerns about v13 posted on [2] then?\n\nAs is I'm indeed not a fan. It imo doesn't make sense to have an indirect\ndispatch for *both* ->copy_attribute_out *and* ->CopyToOneRow. After all, when\nin ->CopyToOneRow for text, we could know that we need to call\nCopyAttributeOutText etc.\n\n\n> If yes, then I'd assume that this shuts down the whole thread or that it\n> needs a completely different approach, because we will multiply indirect\n> function calls that can control how data is generated for each row, which is\n> the original case that Sutou-san wanted to tackle.\n\nI think it could be rescued fairly easily - remove the dispatch via\n->copy_attribute_out(). To avoid duplicating code you could use a static\ninline function that's used with constant arguments by both csv and text mode.\n\nI think it might also be worth ensuring that future patches can move branches\nlike\n\tif (cstate->encoding_embeds_ascii)\n\tif (cstate->need_transcoding)\ninto the choice of per-row callback.\n\n\n> The End, Start and In/OutFunc callbacks are called only once per query, so\n> these don't matter AFAIU.\n\nRight.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Feb 2024 17:41:25 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Feb 05, 2024 at 05:41:25PM -0800, Andres Freund wrote:\n> On 2024-02-06 10:01:36 +0900, Michael Paquier wrote:\n>> If you have concerns about that, I'm OK to revert, I'm not wedded to\n>> this level of control. Note that I've actually seen *better*\n>> runtimes.\n> \n> I'm somewhat worried that handling the different formats at that level will\n> make it harder to improve copy performance - it's quite attrociously slow\n> right now. The more we reduce the per-row/field overhead, the more the\n> dispatch overhead will matter.\n\nYep. That's the hard part when it comes to design these callbacks.\nWe don't want something too high level because this leads to more code\nduplication churns when someone wants to plug in its own routine set,\nand we don't want to be at a too low level because of the indirect\ncalls as you said. I'd like to think that the current CopyFromOneRow\noffers a good balance here, avoiding the \"if\" branch with the binary\nand non-binary paths.\n\n>> Hmm. Do you have concerns about v13 posted on [2] then?\n> \n> As is I'm indeed not a fan. It imo doesn't make sense to have an indirect\n> dispatch for *both* ->copy_attribute_out *and* ->CopyToOneRow. 
After all, when\n> in ->CopyToOneRow for text, we could know that we need to call\n> CopyAttributeOutText etc.\n\nRight.\n\n>> If yes, then I'd assume that this shuts down the whole thread or that it\n>> needs a completely different approach, because we will multiply indirect\n>> function calls that can control how data is generated for each row, which is\n>> the original case that Sutou-san wanted to tackle.\n> \n> I think it could be rescued fairly easily - remove the dispatch via\n> ->copy_attribute_out(). To avoid duplicating code you could use a static\n> inline function that's used with constant arguments by both csv and text mode.\n\nHmm. So you basically mean to tweak the beginning of\nCopyToTextOneRow() and CopyToTextStart() so as copy_attribute_out is\nsaved in a local variable outside of cstate and we'd save the \"if\"\nchecked for each attribute. If I got that right, it would mean\nsomething like the v13-0002 attached, on top of the v13-0001 of\nupthread. Is that what you meant?\n\n> I think it might also be worth ensuring that future patches can move branches\n> like\n> \tif (cstate->encoding_embeds_ascii)\n> \tif (cstate->need_transcoding)\n> into the choice of per-row callback.\n\nYeah, I'm still not sure how much we should split CopyToStateData in\nthe initial patch set. I'd like to think that the best result would\nbe to have in the state data an opaque (void *) that points to a\nstructure that can be set for each format, so as there is a clean\nsplit between which variable gets set and used where (same remark\napplies to COPY FROM with its raw_fields, raw_fields, for example).\n--\nMichael", "msg_date": "Tue, 6 Feb 2024 11:41:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn 2024-02-06 11:41:06 +0900, Michael Paquier wrote:\n> On Mon, Feb 05, 2024 at 05:41:25PM -0800, Andres Freund wrote:\n> > On 2024-02-06 10:01:36 +0900, Michael Paquier wrote:\n> >> If you have concerns about that, I'm OK to revert, I'm not wedded to\n> >> this level of control. Note that I've actually seen *better*\n> >> runtimes.\n> > \n> > I'm somewhat worried that handling the different formats at that level will\n> > make it harder to improve copy performance - it's quite attrociously slow\n> > right now. The more we reduce the per-row/field overhead, the more the\n> > dispatch overhead will matter.\n> \n> Yep. That's the hard part when it comes to design these callbacks.\n> We don't want something too high level because this leads to more code\n> duplication churns when someone wants to plug in its own routine set,\n> and we don't want to be at a too low level because of the indirect\n> calls as you said. 
I'd like to think that the current CopyFromOneRow\n> offers a good balance here, avoiding the \"if\" branch with the binary\n> and non-binary paths.\n\nOne way to address code duplication is to use static inline helper functions\nthat do a lot of the work in a generic fashion, but where the compiler can\noptimize the branches away, because it can do constant folding.\n\n\n> >> If yes, then I'd assume that this shuts down the whole thread or that it\n> >> needs a completely different approach, because we will multiply indirect\n> >> function calls that can control how data is generated for each row, which is\n> >> the original case that Sutou-san wanted to tackle.\n> > \n> > I think it could be rescued fairly easily - remove the dispatch via\n> > ->copy_attribute_out(). To avoid duplicating code you could use a static\n> > inline function that's used with constant arguments by both csv and text mode.\n> \n> Hmm. So you basically mean to tweak the beginning of\n> CopyToTextOneRow() and CopyToTextStart() so as copy_attribute_out is\n> saved in a local variable outside of cstate and we'd save the \"if\"\n> checked for each attribute. If I got that right, it would mean\n> something like the v13-0002 attached, on top of the v13-0001 of\n> upthread. Is that what you meant?\n\nNo - what I mean is that it doesn't make sense to have copy_attribute_out(),\nas e.g. CopyToTextOneRow() already knows that it's dealing with text, so it\ncan directly call the right function. That does require splitting a bit more\nbetween csv and text output, but I think that can be done without much\nduplication.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Feb 2024 21:46:42 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Feb 05, 2024 at 09:46:42PM -0800, Andres Freund wrote:\n> No - what I mean is that it doesn't make sense to have copy_attribute_out(),\n> as e.g. CopyToTextOneRow() already knows that it's dealing with text, so it\n> can directly call the right function. That does require splitting a bit more\n> between csv and text output, but I think that can be done without much\n> duplication.\n\nI am not sure to understand here. In what is that different from\nreverting 2889fd23be56 then mark CopyAttributeOutCSV and\nCopyAttributeOutText as static inline? Or you mean to merge\nCopyAttributeOutText and CopyAttributeOutCSV together into a single\ninlined function, reducing a bit code readability? Both routines have\ntheir own roadmap for encoding_embeds_ascii with quoting and escaping,\nso keeping them separated looks kinda cleaner here.\n--\nMichael", "msg_date": "Tue, 6 Feb 2024 15:11:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn 2024-02-06 15:11:05 +0900, Michael Paquier wrote:\n> On Mon, Feb 05, 2024 at 09:46:42PM -0800, Andres Freund wrote:\n> > No - what I mean is that it doesn't make sense to have copy_attribute_out(),\n> > as e.g. CopyToTextOneRow() already knows that it's dealing with text, so it\n> > can directly call the right function. That does require splitting a bit more\n> > between csv and text output, but I think that can be done without much\n> > duplication.\n> \n> I am not sure to understand here. 
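\n\nIf the idea is a single shared helper taking a constant boolean, so that the compiler can fold the branch away at each call site, I guess it would look roughly like this (only a sketch of the shape, with a made-up helper name and trimmed argument lists):\n\nstatic inline void\nCopyAttributeOutTextLikeValue(CopyToState cstate, const char *string,\n\t\t\t\t\t\t\t bool is_csv, bool use_quote)\n{\n\tif (is_csv)\n\t{\n\t\t/* CSV-specific quoting and escaping */\n\t}\n\telse\n\t{\n\t\t/* plain text escaping */\n\t}\n}\n\nstatic void\nCopyAttributeOutText(CopyToState cstate, const char *string)\n{\n\tCopyAttributeOutTextLikeValue(cstate, string, false, false);\n}\n\nstatic void\nCopyAttributeOutCSV(CopyToState cstate, const char *string, bool use_quote)\n{\n\tCopyAttributeOutTextLikeValue(cstate, string, true, use_quote);\n}\n\n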
In what is that different from\n> reverting 2889fd23be56 then mark CopyAttributeOutCSV and\n> CopyAttributeOutText as static inline?\n\nWell, you can't just do that, because there's only one caller, namely\nCopyToTextOneRow(). What I am trying to suggest is something like the\nattached, just a quick hacky POC. Namely to split out CSV support from\nCopyToTextOneRow() by introducing CopyToCSVOneRow(), and to avoid code\nduplication by moving the code into a new CopyToTextLikeOneRow().\n\nI named it CopyToTextLike* here, because it seems confusing that some Text*\nare used for both CSV and text and others are actually just for text. But if\nwere to go for that, we should go further.\n\n\nTo test the performnce effects I chose to remove the pointless encoding\n\"check\" we're discussing in the other thread, as it makes it harder to see the\ntime differences due to the per-attribute code. I did three runs of pgbench\n-t of [1] and chose the fastest result for each.\n\n\nWith turbo mode and power saving disabled:\n\n Avg Time\nHEAD 995.349\nRemove Encoding Check 870.793\nv13-0001 869.678\nRemove out callback 839.508\n\nGreetings,\n\nAndres Freund\n\n[1] COPY (SELECT 1::int2,2::int2,3::int2,4::int2,5::int2,6::int2,7::int2,8::int2,9::int2,10::int2,11::int2,12::int2,13::int2,14::int2,15::int2,16::int2,17::int2,18::int2,19::int2,20::int2, generate_series(1, 1000000::int4)) TO '/dev/null';", "msg_date": "Tue, 6 Feb 2024 15:33:36 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Feb 06, 2024 at 03:33:36PM -0800, Andres Freund wrote:\n> Well, you can't just do that, because there's only one caller, namely\n> CopyToTextOneRow(). What I am trying to suggest is something like the\n> attached, just a quick hacky POC. Namely to split out CSV support from\n> CopyToTextOneRow() by introducing CopyToCSVOneRow(), and to avoid code\n> duplication by moving the code into a new CopyToTextLikeOneRow().\n\nAh, OK. Got it now.\n\n> I named it CopyToTextLike* here, because it seems confusing that some Text*\n> are used for both CSV and text and others are actually just for text. But if\n> were to go for that, we should go further.\n\nThis can always be argued later.\n\n> To test the performnce effects I chose to remove the pointless encoding\n> \"check\" we're discussing in the other thread, as it makes it harder to see the\n> time differences due to the per-attribute code. I did three runs of pgbench\n> -t of [1] and chose the fastest result for each.\n> \n> With turbo mode and power saving disabled:\n> Avg Time\n> HEAD 995.349\n> Remove Encoding Check 870.793\n> v13-0001 869.678\n> Remove out callback 839.508\n\nHmm. That explains why I was not seeing any differences with this\ncallback then. It seems to me that the order of actions to take is\nclear, like:\n- Revert 2889fd23be56 to keep a clean state of the tree, now done with\n1aa8324b81fa.\n- Dive into the strlen() issue, as it really looks like this can\ncreate more simplifications for the patch discussed on this thread\nwith COPY TO.\n- Revisit what we have here, looking at more profiles to see how HEAD\nan v13 compare. 
It looks like we are on a good path, but let's tackle\nthings one step at a time.\n--\nMichael", "msg_date": "Wed, 7 Feb 2024 13:33:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 01, 2024 at 10:57:58AM +0900, Michael Paquier wrote:\n> CREATE EXTENSION blackhole_am;\n\nOne thing I have forgotten here is to provide a copy of this AM for\nfuture references, so here you go with a blackhole_am.tar.gz attached.\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 09:54:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 7 Feb 2024 13:33:18 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> Hmm. That explains why I was not seeing any differences with this\n> callback then. It seems to me that the order of actions to take is\n> clear, like:\n> - Revert 2889fd23be56 to keep a clean state of the tree, now done with\n> 1aa8324b81fa.\n\nDone.\n\n> - Dive into the strlen() issue, as it really looks like this can\n> create more simplifications for the patch discussed on this thread\n> with COPY TO.\n\nDone: b619852086ed2b5df76631f5678f60d3bebd3745\n\n> - Revisit what we have here, looking at more profiles to see how HEAD\n> an v13 compare. It looks like we are on a good path, but let's tackle\n> things one step at a time.\n\nAre you already working on this? Do you want me to write the\nnext patch based on the current master?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 09 Feb 2024 13:19:50 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Feb 07, 2024 at 01:33:18PM +0900, Michael Paquier wrote:\n> Hmm. That explains why I was not seeing any differences with this\n> callback then. It seems to me that the order of actions to take is\n> clear, like:\n> - Revert 2889fd23be56 to keep a clean state of the tree, now done with\n> 1aa8324b81fa.\n> - Dive into the strlen() issue, as it really looks like this can\n> create more simplifications for the patch discussed on this thread\n> with COPY TO.\n\nThis has been done this morning with b619852086ed.\n\n> - Revisit what we have here, looking at more profiles to see how HEAD\n> an v13 compare. It looks like we are on a good path, but let's tackle\n> things one step at a time.\n\nAnd attached is a v14 that's rebased on HEAD. While on it, I've\nlooked at more profiles and did more runtime checks.\n\nSome runtimes, in (ms), average of 15 runs, 30 int attributes on 5M\nrows as mentioned above:\nCOPY FROM text binary\nHEAD 6066 7110\nv14 6087 7105\nCOPY TO text binary\nHEAD 6591 10161\nv14 6508 10189\n\nAnd here are some profiles, where I'm not seeing an impact at\nrow-level with the addition of the callbacks:\nCOPY FROM, text, master:\n- 66.59% 16.10% postgres postgres [.] NextCopyFrom ▒ - 50.50% NextCopyFrom\n - 30.75% NextCopyFromRawFields\n + 15.93% CopyReadLine\n 13.73% CopyReadAttributesText\n - 19.43% InputFunctionCallSafe\n + 13.49% int4in\n 0.77% pg_strtoint32_safe\n + 16.10% _start\nCOPY FROM, text, v14:\n- 66.42% 0.74% postgres postgres [.] 
NextCopyFrom\n - 65.67% NextCopyFrom\n - 65.51% CopyFromTextOneRow\n - 30.25% NextCopyFromRawFields\n + 16.14% CopyReadLine\n 13.40% CopyReadAttributesText\n - 18.96% InputFunctionCallSafe\n + 13.15% int4in\n 0.70% pg_strtoint32_safe\n + 0.74% _start\n\nCOPY TO, binary, master\n- 90.32% 7.14% postgres postgres [.] CopyOneRowTo\n - 83.18% CopyOneRowTo\n + 60.30% SendFunctionCall\n + 10.99% appendBinaryStringInfo\n + 3.67% MemoryContextReset\n + 2.89% CopySendEndOfRow\n 0.89% memcpy@plt\n 0.66% 0xffffa052db5c\n 0.62% enlargeStringInfo\n 0.56% pgstat_progress_update_param\n + 7.14% _start\nCOPY TO, binary, v14\n- 90.96% 0.21% postgres postgres [.] CopyOneRowTo\n - 90.75% CopyOneRowTo\n - 81.86% CopyToBinaryOneRow\n + 59.17% SendFunctionCall\n + 10.56% appendBinaryStringInfo\n 1.10% enlargeStringInfo\n 0.59% int4send\n 0.57% memcpy@plt\n + 3.68% MemoryContextReset\n + 2.83% CopySendEndOfRow\n 1.13% appendBinaryStringInfo\n 0.58% SendFunctionCall\n 0.58% pgstat_progress_update_param\n\nAre there any comments about this v14? Sutou-san?\n\nA next step I think we could take is to split the binary-only and the\ntext/csv-only data in each cstate into their own structure to make the\nstructure, with an opaque pointer that custom formats could use, but a\nlot of fields are shared as well. This patch is already complicated\nenough IMO, so I'm OK to leave it out for the moment, and focus on\nmaking this infra pluggable as a next step.\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 13:21:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 09, 2024 at 01:19:50PM +0900, Sutou Kouhei wrote:\n> Are you already working on this? Do you want me to write the\n> next patch based on the current master?\n\nNo need for a new patch, thanks. I've spent some time today doing a\nrebase and measuring the whole, without seeing a degradation with what\nshould be the worst cases for COPY TO and FROM:\nhttps://www.postgresql.org/message-id/ZcWoTr1N0GELFA9E%40paquier.xyz\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 13:40:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 9 Feb 2024 13:21:34 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> - Revisit what we have here, looking at more profiles to see how HEAD\n>> an v13 compare. It looks like we are on a good path, but let's tackle\n>> things one step at a time.\n> \n> And attached is a v14 that's rebased on HEAD.\n\nThanks!\n\n> A next step I think we could take is to split the binary-only and the\n> text/csv-only data in each cstate into their own structure to make the\n> structure, with an opaque pointer that custom formats could use, but a\n> lot of fields are shared as well.\n\nIt'll make COPY code base cleaner but it may decrease\nperformance. How about just adding an opaque pointer to each\ncstate as the next step and then try the split?\n\nMy suggestion:\n1. Introduce Copy{To,From}Routine\n (We can do it based on the v14 patch.)\n2. Add an opaque pointer to Copy{To,From}Routine\n (This must not have performance impact.)\n3.a. Split format specific data to the opaque space\n3.b. 
Add support for registering custom format handler by\n creating a function\n4. ...\n\n> This patch is already complicated\n> enough IMO, so I'm OK to leave it out for the moment, and focus on\n> making this infra pluggable as a next step.\n\nI agree with you.\n\n> Are there any comments about this v14? Sutou-san?\n\nHere are my comments:\n\n\n+\t/* Set read attribute callback */\n+\tif (cstate->opts.csv_mode)\n+\t\tcstate->copy_read_attributes = CopyReadAttributesCSV;\n+\telse\n+\t\tcstate->copy_read_attributes = CopyReadAttributesText;\n\nI think that we should not use this approach for\nperformance. We need to use \"static inline\" and constant\nargument instead something like the attached\nremove-copy-read-attributes.diff.\n\nWe have similar codes for\nCopyReadLine()/CopyReadLineText(). The attached\nremove-copy-read-attributes-and-optimize-copy-read-line.diff\nalso applies the same optimization to\nCopyReadLine()/CopyReadLineText().\n\nI hope that this improved performance of COPY FROM.\n\n+/*\n+ * Routines assigned to each format.\n++\n\nGarbage \"+\"\n\n+ * CSV and text share the same implementation, at the exception of the\n+ * copy_read_attributes callback.\n+ */\n\n\n+/*\n+ * CopyToTextOneRow\n+ *\n+ * Process one row for text/CSV format.\n+ */\n+static void\n+CopyToTextOneRow(CopyToState cstate,\n+\t\t\t\t TupleTableSlot *slot)\n+{\n...\n+\t\t\tif (cstate->opts.csv_mode)\n+\t\t\t\tCopyAttributeOutCSV(cstate, string,\n+\t\t\t\t\t\t\t\t\tcstate->opts.force_quote_flags[attnum - 1]);\n+\t\t\telse\n+\t\t\t\tCopyAttributeOutText(cstate, string);\n...\n\nHow about use \"static inline\" and constant argument approach\nhere too?\n\nstatic inline void\nCopyToTextBasedOneRow(CopyToState cstate,\n\t\t\t\t\t TupleTableSlot *slot,\n\t\t\t\t\t bool csv_mode)\n{\n...\n\t\t\tif (cstate->opts.csv_mode)\n\t\t\t\tCopyAttributeOutCSV(cstate, string,\n\t\t\t\t\t\t\t\t\tcstate->opts.force_quote_flags[attnum - 1]);\n\t\t\telse\n\t\t\t\tCopyAttributeOutText(cstate, string);\n...\n}\n\nstatic void\nCopyToTextOneRow(CopyToState cstate,\n\t\t\t\t TupleTableSlot *slot,\n\t\t\t\t bool csv_mode)\n{\n\tCopyToTextBasedOneRow(cstate, slot, false);\n}\n\nstatic void\nCopyToCSVOneRow(CopyToState cstate,\n\t\t\t\tTupleTableSlot *slot,\n\t\t\t\tbool csv_mode)\n{\n\tCopyToTextBasedOneRow(cstate, slot, true);\n}\n\nstatic const CopyToRoutine CopyCSVRoutineText = {\n\t...\n\t.CopyToOneRow = CopyToCSVOneRow,\n\t...\n};\n\n\nThanks,\n-- \nkou", "msg_date": "Fri, 09 Feb 2024 16:32:05 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 09, 2024 at 04:32:05PM +0900, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 9 Feb 2024 13:21:34 +0900,\n> Michael Paquier <[email protected]> wrote:\n>> A next step I think we could take is to split the binary-only and the\n>> text/csv-only data in each cstate into their own structure to make the\n>> structure, with an opaque pointer that custom formats could use, but a\n>> lot of fields are shared as well.\n> \n> It'll make COPY code base cleaner but it may decrease\n> performance.\n\nPerhaps, but I'm not sure, TBH. But perhaps others can comment on\nthis point. This surely needs to be studied closely.\n\n> My suggestion:\n> 1. Introduce Copy{To,From}Routine\n> (We can do it based on the v14 patch.)\n> 2. 
Add an opaque pointer to Copy{To,From}Routine\n> (This must not have performance impact.)\n> 3.a. Split format specific data to the opaque space\n> 3.b. Add support for registering custom format handler by\n> creating a function\n> 4. ...\n\n4. is going to need 3. At this point 3.b sounds like the main thing\nto tackle first if we want to get something usable for the end-user\ninto this release, at least. Still 2 is important for pluggability\nas we pass the cstates across all the routines and custom formats want\nto save their own data, so this split sounds OK. I am not sure how\nmuch of 3.a we really need to do for the in-core formats.\n\n> I think that we should not use this approach for\n> performance. We need to use \"static inline\" and constant\n> argument instead something like the attached\n> remove-copy-read-attributes.diff.\n\nFWIW, using inlining did not show any performance change here.\nPerhaps that's only because this is called in the COPY FROM path once\nper row (even for the case of using 1 attribute with blackhole_am).\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 17:25:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn 2024-02-09 13:21:34 +0900, Michael Paquier wrote:\n> +static void\n> +CopyFromTextInFunc(CopyFromState cstate, Oid atttypid,\n> +\t\t\t\t FmgrInfo *finfo, Oid *typioparam)\n> +{\n> +\tOid\t\t\tfunc_oid;\n> +\n> +\tgetTypeInputInfo(atttypid, &func_oid, typioparam);\n> +\tfmgr_info(func_oid, finfo);\n> +}\n\nFWIW, we should really change the copy code to initialize FunctionCallInfoData\ninstead of re-initializing that on every call, realy makes a difference\nperformance wise.\n\n\n> +/*\n> + * CopyFromTextStart\n> + *\n> + * Start of COPY FROM for text/CSV format.\n> + */\n> +static void\n> +CopyFromTextStart(CopyFromState cstate, TupleDesc tupDesc)\n> +{\n> +\tAttrNumber\tattr_count;\n> +\n> +\t/*\n> +\t * If encoding conversion is needed, we need another buffer to hold the\n> +\t * converted input data. Otherwise, we can just point input_buf to the\n> +\t * same buffer as raw_buf.\n> +\t */\n> +\tif (cstate->need_transcoding)\n> +\t{\n> +\t\tcstate->input_buf = (char *) palloc(INPUT_BUF_SIZE + 1);\n> +\t\tcstate->input_buf_index = cstate->input_buf_len = 0;\n> +\t}\n> +\telse\n> +\t\tcstate->input_buf = cstate->raw_buf;\n> +\tcstate->input_reached_eof = false;\n> +\n> +\tinitStringInfo(&cstate->line_buf);\n\nSeems kinda odd that we have a supposedly extensible API that then stores all\nthis stuff in the non-extensible CopyFromState.\n\n\n> +\t/* create workspace for CopyReadAttributes results */\n> +\tattr_count = list_length(cstate->attnumlist);\n> +\tcstate->max_fields = attr_count;\n\nWhy is this here? This seems like generic code, not text format specific.\n\n\n> +\tcstate->raw_fields = (char **) palloc(attr_count * sizeof(char *));\n> +\t/* Set read attribute callback */\n> +\tif (cstate->opts.csv_mode)\n> +\t\tcstate->copy_read_attributes = CopyReadAttributesCSV;\n> +\telse\n> +\t\tcstate->copy_read_attributes = CopyReadAttributesText;\n> +}\n\nIsn't this precisely repeating the mistake of 2889fd23be56?\n\nAnd, why is this done here? 
Shouldn't this decision have been made prior to\neven calling CopyFromTextStart()?\n\n> +/*\n> + * CopyFromTextOneRow\n> + *\n> + * Copy one row to a set of `values` and `nulls` for the text and CSV\n> + * formats.\n> + */\n\nI'm very doubtful it's a good idea to combine text and CSV here. They have\nbasically no shared parsing code, so what's the point in sending them through\none input routine?\n\n\n> +bool\n> +CopyFromTextOneRow(CopyFromState cstate,\n> +\t\t\t\t ExprContext *econtext,\n> +\t\t\t\t Datum *values,\n> +\t\t\t\t bool *nulls)\n> +{\n> +\tTupleDesc\ttupDesc;\n> +\tAttrNumber\tattr_count;\n> +\tFmgrInfo *in_functions = cstate->in_functions;\n> +\tOid\t\t *typioparams = cstate->typioparams;\n> +\tExprState **defexprs = cstate->defexprs;\n> +\tchar\t **field_strings;\n> +\tListCell *cur;\n> +\tint\t\t\tfldct;\n> +\tint\t\t\tfieldno;\n> +\tchar\t *string;\n> +\n> +\ttupDesc = RelationGetDescr(cstate->rel);\n> +\tattr_count = list_length(cstate->attnumlist);\n> +\n> +\t/* read raw fields in the next line */\n> +\tif (!NextCopyFromRawFields(cstate, &field_strings, &fldct))\n> +\t\treturn false;\n> +\n> +\t/* check for overflowing fields */\n> +\tif (attr_count > 0 && fldct > attr_count)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n> +\t\t\t\t errmsg(\"extra data after last expected column\")));\n\nIt bothers me that we look to be ending up with different error handling\nacross the various output formats, particularly if they're ending up in\nextensions. That'll make it harder to evolve this code in the future.\n\n\n> +\tfieldno = 0;\n> +\n> +\t/* Loop to read the user attributes on the line. */\n> +\tforeach(cur, cstate->attnumlist)\n> +\t{\n> +\t\tint\t\t\tattnum = lfirst_int(cur);\n> +\t\tint\t\t\tm = attnum - 1;\n> +\t\tForm_pg_attribute att = TupleDescAttr(tupDesc, m);\n> +\n> +\t\tif (fieldno >= fldct)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n> +\t\t\t\t\t errmsg(\"missing data for column \\\"%s\\\"\",\n> +\t\t\t\t\t\t\tNameStr(att->attname))));\n> +\t\tstring = field_strings[fieldno++];\n> +\n> +\t\tif (cstate->convert_select_flags &&\n> +\t\t\t!cstate->convert_select_flags[m])\n> +\t\t{\n> +\t\t\t/* ignore input field, leaving column as NULL */\n> +\t\t\tcontinue;\n> +\t\t}\n> +\n> +\t\tcstate->cur_attname = NameStr(att->attname);\n> +\t\tcstate->cur_attval = string;\n> +\n> +\t\tif (cstate->opts.csv_mode)\n> +\t\t{\n\nMore unfortunate intermingling of multiple formats in a single routine.\n\n\n> +\n> +\t\tif (cstate->defaults[m])\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * The caller must supply econtext and have switched into the\n> +\t\t\t * per-tuple memory context in it.\n> +\t\t\t */\n> +\t\t\tAssert(econtext != NULL);\n> +\t\t\tAssert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);\n> +\n> +\t\t\tvalues[m] = ExecEvalExpr(defexprs[m], econtext, &nulls[m]);\n> +\t\t}\n\nI don't think it's good that we end up with this code in different copy\nimplementations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 11:27:05 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Feb 09, 2024 at 11:27:05AM -0800, Andres Freund wrote:\n> On 2024-02-09 13:21:34 +0900, Michael Paquier wrote:\n>> +static void\n>> +CopyFromTextInFunc(CopyFromState cstate, Oid atttypid,\n>> +\t\t\t\t FmgrInfo *finfo, Oid *typioparam)\n>> +{\n>> +\tOid\t\t\tfunc_oid;\n>> 
+\n>> +\tgetTypeInputInfo(atttypid, &func_oid, typioparam);\n>> +\tfmgr_info(func_oid, finfo);\n>> +}\n> \n> FWIW, we should really change the copy code to initialize FunctionCallInfoData\n> instead of re-initializing that on every call, realy makes a difference\n> performance wise.\n\nYou mean to initialize once its memory and let the internal routines\ncall InitFunctionCallInfoData for each attribute. Sounds like a good\nidea, doing that for HEAD before the main patch. More impact with\nmore attributes.\n\n>> +/*\n>> + * CopyFromTextStart\n>> + *\n>> + * Start of COPY FROM for text/CSV format.\n>> + */\n>> +static void\n>> +CopyFromTextStart(CopyFromState cstate, TupleDesc tupDesc)\n>> +{\n>> +\tAttrNumber\tattr_count;\n>> +\n>> +\t/*\n>> +\t * If encoding conversion is needed, we need another buffer to hold the\n>> +\t * converted input data. Otherwise, we can just point input_buf to the\n>> +\t * same buffer as raw_buf.\n>> +\t */\n>> +\tif (cstate->need_transcoding)\n>> +\t{\n>> +\t\tcstate->input_buf = (char *) palloc(INPUT_BUF_SIZE + 1);\n>> +\t\tcstate->input_buf_index = cstate->input_buf_len = 0;\n>> +\t}\n>> +\telse\n>> +\t\tcstate->input_buf = cstate->raw_buf;\n>> +\tcstate->input_reached_eof = false;\n>> +\n>> +\tinitStringInfo(&cstate->line_buf);\n> \n> Seems kinda odd that we have a supposedly extensible API that then stores all\n> this stuff in the non-extensible CopyFromState.\n\nThat relates to the introduction of the the opaque pointer mentioned \nupthread to point to a per-format structure, where we'd store data\nspecific to each format.\n\n>> +\t/* create workspace for CopyReadAttributes results */\n>> +\tattr_count = list_length(cstate->attnumlist);\n>> +\tcstate->max_fields = attr_count;\n> \n> Why is this here? This seems like generic code, not text format specific.\n\nWe don't care about that for binary.\n\n>> +/*\n>> + * CopyFromTextOneRow\n>> + *\n>> + * Copy one row to a set of `values` and `nulls` for the text and CSV\n>> + * formats.\n>> + */\n> \n> I'm very doubtful it's a good idea to combine text and CSV here. They have\n> basically no shared parsing code, so what's the point in sending them through\n> one input routine?\n\nThe code shared between text and csv involves a path called once per\nattribute. TBH, I am not sure how much of the NULL handling should be\nput outside the per-row routine as these options are embedded in the\ncore options. So I don't have a better idea on this one than what's\nproposed here if we cannot dispatch the routine calls once per\nattribute.\n\n>> +\t/* read raw fields in the next line */\n>> +\tif (!NextCopyFromRawFields(cstate, &field_strings, &fldct))\n>> +\t\treturn false;\n>> +\n>> +\t/* check for overflowing fields */\n>> +\tif (attr_count > 0 && fldct > attr_count)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n>> +\t\t\t\t errmsg(\"extra data after last expected column\")));\n> \n> It bothers me that we look to be ending up with different error handling\n> across the various output formats, particularly if they're ending up in\n> extensions. That'll make it harder to evolve this code in the future.\n\nBut different formats may have different requirements, including the\nnumber of attributes detected vs expected. That was not really\nnothing me.\n\n>> +\t\tif (cstate->opts.csv_mode)\n>> +\t\t{\n> \n> More unfortunate intermingling of multiple formats in a single\n> routine.\n\nSimilar answer as a few paragraphs above. 
Sutou-san was suggesting to\nuse an internal routine with fixed arguments instead, which would be\nenough at the end with some inline instructions?\n\n>> +\n>> +\t\tif (cstate->defaults[m])\n>> +\t\t{\n>> +\t\t\t/*\n>> +\t\t\t * The caller must supply econtext and have switched into the\n>> +\t\t\t * per-tuple memory context in it.\n>> +\t\t\t */\n>> +\t\t\tAssert(econtext != NULL);\n>> +\t\t\tAssert(CurrentMemoryContext == econtext->ecxt_per_tuple_memory);\n>> +\n>> +\t\t\tvalues[m] = ExecEvalExpr(defexprs[m], econtext, &nulls[m]);\n>> +\t\t}\n> \n> I don't think it's good that we end up with this code in different copy\n> implementations.\n\nYeah, still we don't care about that for binary.\n--\nMichael", "msg_date": "Sat, 10 Feb 2024 10:02:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 9 Feb 2024 11:27:05 -0800,\n Andres Freund <[email protected]> wrote:\n\n>> +static void\n>> +CopyFromTextInFunc(CopyFromState cstate, Oid atttypid,\n>> +\t\t\t\t FmgrInfo *finfo, Oid *typioparam)\n>> +{\n>> +\tOid\t\t\tfunc_oid;\n>> +\n>> +\tgetTypeInputInfo(atttypid, &func_oid, typioparam);\n>> +\tfmgr_info(func_oid, finfo);\n>> +}\n> \n> FWIW, we should really change the copy code to initialize FunctionCallInfoData\n> instead of re-initializing that on every call, realy makes a difference\n> performance wise.\n\nHow about the attached patch approach? If it's a desired\napproach, I can also write a separated patch for COPY TO.\n\n>> +\tcstate->raw_fields = (char **) palloc(attr_count * sizeof(char *));\n>> +\t/* Set read attribute callback */\n>> +\tif (cstate->opts.csv_mode)\n>> +\t\tcstate->copy_read_attributes = CopyReadAttributesCSV;\n>> +\telse\n>> +\t\tcstate->copy_read_attributes = CopyReadAttributesText;\n>> +}\n> \n> Isn't this precisely repeating the mistake of 2889fd23be56?\n\nWhat do you think about the approach in my previous mail's\nattachments?\nhttps://www.postgresql.org/message-id/flat/20240209.163205.704848659612151781.kou%40clear-code.com#dbb1f8d7f2f0e8fe3c7e37a757fcfc54\n\nIf it's a desired approach, I can prepare a v15 patch set\nbased on the v14 patch set and the approach.\n\n\nI'll reply other comments later...\n\n\nThanks,\n-- \nkou", "msg_date": "Tue, 13 Feb 2024 17:33:40 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Feb 13, 2024 at 05:33:40PM +0900, Sutou Kouhei wrote:\n> Hi,\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 9 Feb 2024 11:27:05 -0800,\n> Andres Freund <[email protected]> wrote:\n> \n>>> +static void\n>>> +CopyFromTextInFunc(CopyFromState cstate, Oid atttypid,\n>>> +\t\t\t\t FmgrInfo *finfo, Oid *typioparam)\n>>> +{\n>>> +\tOid\t\t\tfunc_oid;\n>>> +\n>>> +\tgetTypeInputInfo(atttypid, &func_oid, typioparam);\n>>> +\tfmgr_info(func_oid, finfo);\n>>> +}\n>> \n>> FWIW, we should really change the copy code to initialize FunctionCallInfoData\n>> instead of re-initializing that on every call, realy makes a difference\n>> performance wise.\n> \n> How about the attached patch approach? 
If it's a desired\n> approach, I can also write a separated patch for COPY TO.\n\nHmm, I have not studied that much, but my first impression was that we\nwould not require any new facility in fmgr.c, but perhaps you're right\nand it's more elegant to pass a InitFunctionCallInfoData this way.\n\nPrepareInputFunctionCallInfo() looks OK as a name, but I'm less a fan\nof PreparedInputFunctionCallSafe() and its \"Prepared\" part. How about\nsomething like ExecuteInputFunctionCallSafe()?\n\nI may be able to look more at that next week, and I would surely check\nthe impact of that with a simple COPY query throttled by CPU (more\nrows and more attributes the better).\n\n>>> +\tcstate->raw_fields = (char **) palloc(attr_count * sizeof(char *));\n>>> +\t/* Set read attribute callback */\n>>> +\tif (cstate->opts.csv_mode)\n>>> +\t\tcstate->copy_read_attributes = CopyReadAttributesCSV;\n>>> +\telse\n>>> +\t\tcstate->copy_read_attributes = CopyReadAttributesText;\n>>> +}\n>> \n>> Isn't this precisely repeating the mistake of 2889fd23be56?\n> \n> What do you think about the approach in my previous mail's\n> attachments?\n> https://www.postgresql.org/message-id/flat/20240209.163205.704848659612151781.kou%40clear-code.com#dbb1f8d7f2f0e8fe3c7e37a757fcfc54\n>\n> If it's a desired approach, I can prepare a v15 patch set\n> based on the v14 patch set and the approach.\n\nYes, this one looks like it's using the right angle: we don't rely\nanymore in cstate to decide which CopyReadAttributes to use, the\nroutines do that instead. Note that I've reverted 06bd311bce24 for\nthe moment, as this is just getting in the way of the main patch, and\nthat was non-optimal once there is a per-row callback.\n\n> diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c\n> index 41f6bc43e4..a43c853e99 100644\n> --- a/src/backend/commands/copyfrom.c\n> +++ b/src/backend/commands/copyfrom.c\n> @@ -1691,6 +1691,10 @@ BeginCopyFrom(ParseState *pstate,\n> \t/* We keep those variables in cstate. */\n> \tcstate->in_functions = in_functions;\n> \tcstate->typioparams = typioparams;\n> +\tif (cstate->opts.binary)\n> +\t\tcstate->fcinfo = PrepareInputFunctionCallInfo();\n> +\telse\n> +\t\tcstate->fcinfo = PrepareReceiveFunctionCallInfo();\n\nPerhaps we'd better avoid more callbacks like that, for now.\n--\nMichael", "msg_date": "Wed, 14 Feb 2024 12:28:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 14 Feb 2024 12:28:38 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> How about the attached patch approach? If it's a desired\n>> approach, I can also write a separated patch for COPY TO.\n> \n> Hmm, I have not studied that much, but my first impression was that we\n> would not require any new facility in fmgr.c, but perhaps you're right\n> and it's more elegant to pass a InitFunctionCallInfoData this way.\n\nI'm not familiar with the fmgr.c related code base but it\nseems that we abstract {,binary-}input function call by\nfmgr.c. So I think that it's better that we follow the\ndesign. (If there is a person who knows the fmgr.c related\ncode base, please help us.)\n\n> PrepareInputFunctionCallInfo() looks OK as a name, but I'm less a fan\n> of PreparedInputFunctionCallSafe() and its \"Prepared\" part. 
How about\n> something like ExecuteInputFunctionCallSafe()?\n\nI understand the feeling. SQL uses \"prepared\" for \"prepared\nstatement\". There are similar function names such as\nInputFunctionCall()/InputFunctionCallSafe()/DirectInputFunctionCallSafe(). They\nexecute (call) an input function but they use \"call\" not\n\"execute\" for it... So \"Execute...Call...\" may be\nredundant...\n\nHow about InputFunctionCallSafeWithInfo(),\nInputFunctionCallSafeInfo() or\nInputFunctionCallInfoCallSafe()?\n\n> I may be able to look more at that next week, and I would surely check\n> the impact of that with a simple COPY query throttled by CPU (more\n> rows and more attributes the better).\n\nThanks!\n\n> Note that I've reverted 06bd311bce24 for\n> the moment, as this is just getting in the way of the main patch, and\n> that was non-optimal once there is a per-row callback.\n\nThanks for sharing the information. I'll rebase on master\nwhen I create the v15 patch.\n\n\n>> diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c\n>> index 41f6bc43e4..a43c853e99 100644\n>> --- a/src/backend/commands/copyfrom.c\n>> +++ b/src/backend/commands/copyfrom.c\n>> @@ -1691,6 +1691,10 @@ BeginCopyFrom(ParseState *pstate,\n>> \t/* We keep those variables in cstate. */\n>> \tcstate->in_functions = in_functions;\n>> \tcstate->typioparams = typioparams;\n>> +\tif (cstate->opts.binary)\n>> +\t\tcstate->fcinfo = PrepareInputFunctionCallInfo();\n>> +\telse\n>> +\t\tcstate->fcinfo = PrepareReceiveFunctionCallInfo();\n> \n> Perhaps we'd better avoid more callbacks like that, for now.\n\nI'll not use a callback for this. I'll not change this part\nafter we introduce Copy{To,From}Routine. cstate->fcinfo\nisn't used some custom COPY format handlers such as Apache\nArrow handler like cstate->in_functions and\ncstate->typioparams. But they will be always allocated. It's\na bit wasteful for those handlers but we may not care about\nit. So we can always use \"if (state->opts.binary)\" condition\nhere.\n\nBTW... This part was wrong... Sorry... It should be:\n\n\n\tif (cstate->opts.binary)\n\t\tcstate->fcinfo = PrepareReceiveFunctionCallInfo();\n\telse\n\t\tcstate->fcinfo = PrepareInputFunctionCallInfo();\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Wed, 14 Feb 2024 14:08:51 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Feb 14, 2024 at 02:08:51PM +0900, Sutou Kouhei wrote:\n> I understand the feeling. SQL uses \"prepared\" for \"prepared\n> statement\". There are similar function names such as\n> InputFunctionCall()/InputFunctionCallSafe()/DirectInputFunctionCallSafe(). They\n> execute (call) an input function but they use \"call\" not\n> \"execute\" for it... So \"Execute...Call...\" may be\n> redundant...\n> \n> How about InputFunctionCallSafeWithInfo(),\n> InputFunctionCallSafeInfo() or\n> InputFunctionCallInfoCallSafe()?\n\nWithInfo() would not be a new thing. 
There are a couple of APIs named\nlike this when manipulating catalogs, so that sounds kind of a good\nchoice from here.\n--\nMichael", "msg_date": "Wed, 14 Feb 2024 15:52:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 14 Feb 2024 15:52:36 +0900,\n Michael Paquier <[email protected]> wrote:\n\n>> How about InputFunctionCallSafeWithInfo(),\n>> InputFunctionCallSafeInfo() or\n>> InputFunctionCallInfoCallSafe()?\n> \n> WithInfo() would not be a new thing. There are a couple of APIs named\n> like this when manipulating catalogs, so that sounds kind of a good\n> choice from here.\n\nThanks for the info. Let's use InputFunctionCallSafeWithInfo().\nSee that attached patch:\nv2-0001-Reuse-fcinfo-used-in-COPY-FROM.patch\n\nI also attach a patch for COPY TO:\nv1-0001-Reuse-fcinfo-used-in-COPY-TO.patch\n\nI measured the COPY TO patch on my environment with:\nCOPY (SELECT 1::int2,2::int2,3::int2,4::int2,5::int2,6::int2,7::int2,8::int2,9::int2,10::int2,11::int2,12::int2,13::int2,14::int2,15::int2,16::int2,17::int2,18::int2,19::int2,20::int2, generate_series(1, 1000000::int4)) TO '/dev/null' \\watch c=5\n\nmaster:\n740.066ms\n734.884ms\n738.579ms\n734.170ms\n727.953ms\n\npatched:\n730.714ms\n741.483ms\n714.149ms\n715.436ms\n713.578ms\n\nIt seems that it improves performance a bit but my\nenvironment isn't suitable for benchmark. So they may not\nbe valid numbers.\n\n\nThanks,\n-- \nkou", "msg_date": "Thu, 15 Feb 2024 15:34:21 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 13 Feb 2024 17:33:40 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> I'll reply other comments later...\n\nI've read other comments and my answers for them are same as\nMichael's one.\n\n\nI'll prepare the v15 patch with static inline functions and\nfixed arguments after the fcinfo cache patches are merged. I\nthink that the v15 patch will be conflicted with fcinfo\ncache patches.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 15 Feb 2024 15:51:29 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 15, 2024 at 2:34 PM Sutou Kouhei <[email protected]> wrote:\n>\n>\n> Thanks for the info. Let's use InputFunctionCallSafeWithInfo().\n> See that attached patch:\n> v2-0001-Reuse-fcinfo-used-in-COPY-FROM.patch\n>\n> I also attach a patch for COPY TO:\n> v1-0001-Reuse-fcinfo-used-in-COPY-TO.patch\n>\n> I measured the COPY TO patch on my environment with:\n> COPY (SELECT 1::int2,2::int2,3::int2,4::int2,5::int2,6::int2,7::int2,8::int2,9::int2,10::int2,11::int2,12::int2,13::int2,14::int2,15::int2,16::int2,17::int2,18::int2,19::int2,20::int2, generate_series(1, 1000000::int4)) TO '/dev/null' \\watch c=5\n>\n> master:\n> 740.066ms\n> 734.884ms\n> 738.579ms\n> 734.170ms\n> 727.953ms\n>\n> patched:\n> 730.714ms\n> 741.483ms\n> 714.149ms\n> 715.436ms\n> 713.578ms\n>\n> It seems that it improves performance a bit but my\n> environment isn't suitable for benchmark. 
So they may not\n> be valid numbers.\n\nMy environment is slow (around 10x) but consistent.\nI see around 2-3 percent increase consistently.\n(with patch 7369.068 ms, without patch 7574.802 ms)\n\nthe patchset looks good in my eyes, i can understand it.\nhowever I cannot apply it cleanly against the HEAD.\n\n+/*\n+ * Prepare callinfo for InputFunctionCallSafeWithInfo to reuse one callinfo\n+ * instead of initializing it for each call. This is for performance.\n+ */\n+FunctionCallInfoBaseData *\n+PrepareInputFunctionCallInfo(void)\n+{\n+ FunctionCallInfoBaseData *fcinfo;\n+\n+ fcinfo = (FunctionCallInfoBaseData *) palloc(SizeForFunctionCallInfo(3));\n\njust wondering, I saw other similar places using palloc0,\ndo we need to use palloc0?\n\n\n", "msg_date": "Thu, 15 Feb 2024 17:09:20 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CACJufxE=m8kMC92JpaqNMg02P_Pi1sZJ1w=xNec0=j_W6d9GDw@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 15 Feb 2024 17:09:20 +0800,\n jian he <[email protected]> wrote:\n\n> My environment is slow (around 10x) but consistent.\n> I see around 2-3 percent increase consistently.\n> (with patch 7369.068 ms, without patch 7574.802 ms)\n\nThanks for sharing your numbers! It will help us to\ndetermine whether these changes improve performance or not.\n\n> the patchset looks good in my eyes, i can understand it.\n> however I cannot apply it cleanly against the HEAD.\n\nHmm, I used 9bc1eee988c31e66a27e007d41020664df490214 as the\nbase version. But both patches based on the same\nrevision. So we may not be able to apply both patches at\nonce cleanly.\n\n> +/*\n> + * Prepare callinfo for InputFunctionCallSafeWithInfo to reuse one callinfo\n> + * instead of initializing it for each call. This is for performance.\n> + */\n> +FunctionCallInfoBaseData *\n> +PrepareInputFunctionCallInfo(void)\n> +{\n> + FunctionCallInfoBaseData *fcinfo;\n> +\n> + fcinfo = (FunctionCallInfoBaseData *) palloc(SizeForFunctionCallInfo(3));\n> \n> just wondering, I saw other similar places using palloc0,\n> do we need to use palloc0?\n\nI think that we don't need to use palloc0() here because the\nfollowing InitFunctionCallInfoData() call initializes all\nmembers explicitly.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 15 Feb 2024 18:15:54 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 15, 2024 at 03:34:21PM +0900, Sutou Kouhei wrote:\n> It seems that it improves performance a bit but my\n> environment isn't suitable for benchmark. So they may not\n> be valid numbers.\n\nI was comparing what you have here, and what's been attached by Andres\nat [1] and the top of the changes on my development branch at [2]\n(v3-0008, mostly). And, it strikes me that there is no need to do any\nmajor changes in any of the callbacks proposed up to v13 and v14 in\nthis thread, as all the changes proposed want to plug in more data\ninto each StateData for COPY FROM and COPY TO, the best part being\nthat v3-0008 can just reuse the proposed callbacks as-is. 
v1-0001\nfrom Sutou-san would need one slight tweak in the per-row callback,\nstill that's minor.\n\nI have been spending more time on the patch to introduce the COPY\nAPIs, leading me to the v15 attached, where I have replaced the\nprevious attribute callbacks for the output representation and the\nreads with hardcoded routines that should be optimized by compilers,\nand I have done more profiling with -O2. I'm aware of the disparities\nin the per-row and start callbacks for the text/csv cases as well as\nthe default expressions, but these are really format-dependent with\ntheir own assumptions so splitting them is something that makes\nlimited sense to me. I've also looks at externalizing some of the\nerror handling, though the result was not that beautiful, so what I\ngot here is what makes the callbacks leaner and easier to work with.\n\nFirst, some results for COPY FROM using the previous tests (30 int\nattributes, running on scissors, data sent to blackhole_am, etc.) in\nNextCopyFrom() which becomes the hot-spot:\n* Using v15:\n Children Self Command Shared Object Symbol\n- 66.42% 0.71% postgres postgres [.] NextCopyFrom\n - 65.70% NextCopyFrom\n - 65.49% CopyFromTextLikeOneRow\n + 19.29% InputFunctionCallSafe\n + 15.81% CopyReadLine\n 13.89% CopyReadAttributesText\n + 0.71% _start\n* Using HEAD (today's 011d60c4352c):\n Children Self Command Shared Object Symbol\n- 67.09% 16.64% postgres postgres [.] NextCopyFrom\n - 50.45% NextCopyFrom\n - 30.89% NextCopyFromRawFields\n + 16.26% CopyReadLine\n 13.59% CopyReadAttributesText\n + 19.24% InputFunctionCallSafe\n + 16.64% _start\n\nIn this case, I have been able to limit the effects of the per-row\ncallback by making NextCopyFromRawFields() local to copyfromparse.c\nwhile applying some inlining to it. This brings me to a different\npoint, why don't we do this change independently on HEAD? It's not \nreally complicated to make NextCopyFromRawFields show high in the\nprofiles. I was looking at external projects, and noticed that\nthere's nothing calling NextCopyFromRawFields() directly.\n\nSecond, some profiles with COPY TO (30 int integers, running on\nscissors) where data is sent /dev/null:\n* Using v15:\n Children Self Command Shared Object Symbol\n- 85.61% 0.34% postgres postgres [.] CopyOneRowTo\n - 85.26% CopyOneRowTo\n - 75.86% CopyToTextOneRow\n + 36.49% OutputFunctionCall\n + 10.53% appendBinaryStringInfo\n 9.66% CopyAttributeOutText\n 1.34% int4out\n 0.92% 0xffffa9803be8\n 0.79% enlargeStringInfo\n 0.77% memcpy@plt\n 0.69% 0xffffa9803be4\n + 3.12% CopySendEndOfRow\n 2.81% CopySendChar\n 0.95% pgstat_progress_update_param\n 0.95% appendBinaryStringInfo\n 0.55% MemoryContextReset\n* Using HEAD (today's 011d60c4352c):\n Children Self Command Shared Object Symbol\n- 80.35% 14.23% postgres postgres [.] CopyOneRowTo\n - 66.12% CopyOneRowTo\n + 35.40% OutputFunctionCall\n + 11.00% appendBinaryStringInfo\n 8.38% CopyAttributeOutText\n + 2.98% CopySendEndOfRow\n 1.52% int4out\n 0.88% pgstat_progress_update_param\n 0.87% 0xffff8ab32be8\n 0.74% memcpy@plt\n 0.68% enlargeStringInfo\n 0.61% 0xffff8ab32be4\n 0.51% MemoryContextReset\n + 14.23% _start\n\nThe increase in CopyOneRowTo from 80% to 85% worries me but I am not\nquite sure how to optimize that with the current structure of the\ncode, so the dispatch caused by per-row callback is noticeable in\nwhat's my worst test case. I am not quite sure how to avoid that,\nTBH. 
A result that has been puzzling me is that I am getting faster\nruntimes with v15 (6232ms in average) vs HEAD (6550ms) at 5M rows with\nCOPY TO for what led to these profiles (for tests without perf\nattached to the backends).\n\nAny thoughts overall?\n\n[1]: https://www.postgresql.org/message-id/20240218015955.rmw5mcmobt5hbene%40awork3.anarazel.de\n[2]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Thu, 22 Feb 2024 15:44:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 22 Feb 2024 15:44:16 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> I was comparing what you have here, and what's been attached by Andres\n> at [1] and the top of the changes on my development branch at [2]\n> (v3-0008, mostly). And, it strikes me that there is no need to do any\n> major changes in any of the callbacks proposed up to v13 and v14 in\n> this thread, as all the changes proposed want to plug in more data\n> into each StateData for COPY FROM and COPY TO, the best part being\n> that v3-0008 can just reuse the proposed callbacks as-is. v1-0001\n> from Sutou-san would need one slight tweak in the per-row callback,\n> still that's minor.\n\nI think so too. But I thought that some minor conflicts will\nbe happen with this and the v15. So I worked on this before\nthe v15.\n\nWe agreed that this optimization doesn't block v15: [1]\nSo we can work on the v15 without this optimization for now.\n\n[1] https://www.postgresql.org/message-id/flat/20240219195351.5vy7cdl3wxia66kg%40awork3.anarazel.de#20f9677e074fb0f8c5bb3994ef059a15\n\n> I have been spending more time on the patch to introduce the COPY\n> APIs, leading me to the v15 attached, where I have replaced the\n> previous attribute callbacks for the output representation and the\n> reads with hardcoded routines that should be optimized by compilers,\n> and I have done more profiling with -O2.\n\nThanks! 
I wanted to work on it but I didn't have enough time\nfor it in a few days...\n\nI've reviewed the v15.\n\n----\n> @@ -751,8 +751,9 @@ CopyReadBinaryData(CopyFromState cstate, char *dest, int nbytes)\n> *\n> * NOTE: force_not_null option are not applied to the returned fields.\n> */\n> -bool\n> -NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n> +static bool\n\n\"inline\" is missing here.\n\n> +NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields,\n> +\t\t\t\t\t bool is_csv)\n> {\n> \tint\t\t\tfldct;\n----\n\nHow about adding \"is_csv\" to CopyReadline() and\nCopyReadLineText() too?\n\n----\ndiff --git a/src/backend/commands/copyfromparse.c b/src/backend/commands/copyfromparse.c\nindex 25b8d4bc52..79fabecc69 100644\n--- a/src/backend/commands/copyfromparse.c\n+++ b/src/backend/commands/copyfromparse.c\n@@ -150,8 +150,8 @@ static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n \n \n /* non-export function prototypes */\n-static bool CopyReadLine(CopyFromState cstate);\n-static bool CopyReadLineText(CopyFromState cstate);\n+static inline bool CopyReadLine(CopyFromState cstate, bool is_csv);\n+static inline bool CopyReadLineText(CopyFromState cstate, bool is_csv);\n static inline int CopyReadAttributesText(CopyFromState cstate);\n static inline int CopyReadAttributesCSV(CopyFromState cstate);\n static Datum CopyReadBinaryAttribute(CopyFromState cstate, FmgrInfo *flinfo,\n@@ -770,7 +770,7 @@ NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields,\n \t\ttupDesc = RelationGetDescr(cstate->rel);\n \n \t\tcstate->cur_lineno++;\n-\t\tdone = CopyReadLine(cstate);\n+\t\tdone = CopyReadLine(cstate, is_csv);\n \n \t\tif (cstate->opts.header_line == COPY_HEADER_MATCH)\n \t\t{\n@@ -823,7 +823,7 @@ NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields,\n \tcstate->cur_lineno++;\n \n \t/* Actually read the line into memory here */\n-\tdone = CopyReadLine(cstate);\n+\tdone = CopyReadLine(cstate, is_csv);\n \n \t/*\n \t * EOF at start of line means we're done. If we see EOF after some\n@@ -1133,8 +1133,8 @@ NextCopyFrom(CopyFromState cstate, ExprContext *econtext,\n * by newline. 
The terminating newline or EOF marker is not included\n * in the final value of line_buf.\n */\n-static bool\n-CopyReadLine(CopyFromState cstate)\n+static inline bool\n+CopyReadLine(CopyFromState cstate, bool is_csv)\n {\n \tbool\t\tresult;\n \n@@ -1142,7 +1142,7 @@ CopyReadLine(CopyFromState cstate)\n \tcstate->line_buf_valid = false;\n \n \t/* Parse data and transfer into line_buf */\n-\tresult = CopyReadLineText(cstate);\n+\tresult = CopyReadLineText(cstate, is_csv);\n \n \tif (result)\n \t{\n@@ -1209,8 +1209,8 @@ CopyReadLine(CopyFromState cstate)\n /*\n * CopyReadLineText - inner loop of CopyReadLine for text mode\n */\n-static bool\n-CopyReadLineText(CopyFromState cstate)\n+static inline bool\n+CopyReadLineText(CopyFromState cstate, bool is_csv)\n {\n \tchar\t *copy_input_buf;\n \tint\t\t\tinput_buf_ptr;\n@@ -1226,7 +1226,7 @@ CopyReadLineText(CopyFromState cstate)\n \tchar\t\tquotec = '\\0';\n \tchar\t\tescapec = '\\0';\n \n-\tif (cstate->opts.csv_mode)\n+\tif (is_csv)\n \t{\n \t\tquotec = cstate->opts.quote[0];\n \t\tescapec = cstate->opts.escape[0];\n@@ -1306,7 +1306,7 @@ CopyReadLineText(CopyFromState cstate)\n \t\tprev_raw_ptr = input_buf_ptr;\n \t\tc = copy_input_buf[input_buf_ptr++];\n \n-\t\tif (cstate->opts.csv_mode)\n+\t\tif (is_csv)\n \t\t{\n \t\t\t/*\n \t\t\t * If character is '\\\\' or '\\r', we may need to look ahead below.\n@@ -1345,7 +1345,7 @@ CopyReadLineText(CopyFromState cstate)\n \t\t}\n \n \t\t/* Process \\r */\n-\t\tif (c == '\\r' && (!cstate->opts.csv_mode || !in_quote))\n+\t\tif (c == '\\r' && (!is_csv || !in_quote))\n \t\t{\n \t\t\t/* Check for \\r\\n on first line, _and_ handle \\r\\n. */\n \t\t\tif (cstate->eol_type == EOL_UNKNOWN ||\n@@ -1373,10 +1373,10 @@ CopyReadLineText(CopyFromState cstate)\n \t\t\t\t\tif (cstate->eol_type == EOL_CRNL)\n \t\t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n-\t\t\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t\t\t errmsg(\"literal carriage return found in data\") :\n \t\t\t\t\t\t\t\t errmsg(\"unquoted carriage return found in data\"),\n-\t\t\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t\t\t errhint(\"Use \\\"\\\\r\\\" to represent carriage return.\") :\n \t\t\t\t\t\t\t\t errhint(\"Use quoted CSV field to represent carriage return.\")));\n \n@@ -1390,10 +1390,10 @@ CopyReadLineText(CopyFromState cstate)\n \t\t\telse if (cstate->eol_type == EOL_NL)\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n-\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t errmsg(\"literal carriage return found in data\") :\n \t\t\t\t\t\t errmsg(\"unquoted carriage return found in data\"),\n-\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t errhint(\"Use \\\"\\\\r\\\" to represent carriage return.\") :\n \t\t\t\t\t\t errhint(\"Use quoted CSV field to represent carriage return.\")));\n \t\t\t/* If reach here, we have found the line terminator */\n@@ -1401,15 +1401,15 @@ CopyReadLineText(CopyFromState cstate)\n \t\t}\n \n \t\t/* Process \\n */\n-\t\tif (c == '\\n' && (!cstate->opts.csv_mode || !in_quote))\n+\t\tif (c == '\\n' && (!is_csv || !in_quote))\n \t\t{\n \t\t\tif (cstate->eol_type == EOL_CR || cstate->eol_type == EOL_CRNL)\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n-\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t errmsg(\"literal newline found in data\") :\n \t\t\t\t\t\t errmsg(\"unquoted 
newline found in data\"),\n-\t\t\t\t\t\t !cstate->opts.csv_mode ?\n+\t\t\t\t\t\t !is_csv ?\n \t\t\t\t\t\t errhint(\"Use \\\"\\\\n\\\" to represent newline.\") :\n \t\t\t\t\t\t errhint(\"Use quoted CSV field to represent newline.\")));\n \t\t\tcstate->eol_type = EOL_NL;\t/* in case not set yet */\n@@ -1421,7 +1421,7 @@ CopyReadLineText(CopyFromState cstate)\n \t\t * In CSV mode, we only recognize \\. alone on a line. This is because\n \t\t * \\. is a valid CSV data value.\n \t\t */\n-\t\tif (c == '\\\\' && (!cstate->opts.csv_mode || first_char_in_line))\n+\t\tif (c == '\\\\' && (!is_csv || first_char_in_line))\n \t\t{\n \t\t\tchar\t\tc2;\n \n@@ -1454,7 +1454,7 @@ CopyReadLineText(CopyFromState cstate)\n \n \t\t\t\t\tif (c2 == '\\n')\n \t\t\t\t\t{\n-\t\t\t\t\t\tif (!cstate->opts.csv_mode)\n+\t\t\t\t\t\tif (!is_csv)\n \t\t\t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n \t\t\t\t\t\t\t\t\t errmsg(\"end-of-copy marker does not match previous newline style\")));\n@@ -1463,7 +1463,7 @@ CopyReadLineText(CopyFromState cstate)\n \t\t\t\t\t}\n \t\t\t\t\telse if (c2 != '\\r')\n \t\t\t\t\t{\n-\t\t\t\t\t\tif (!cstate->opts.csv_mode)\n+\t\t\t\t\t\tif (!is_csv)\n \t\t\t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n \t\t\t\t\t\t\t\t\t errmsg(\"end-of-copy marker corrupt\")));\n@@ -1479,7 +1479,7 @@ CopyReadLineText(CopyFromState cstate)\n \n \t\t\t\tif (c2 != '\\r' && c2 != '\\n')\n \t\t\t\t{\n-\t\t\t\t\tif (!cstate->opts.csv_mode)\n+\t\t\t\t\tif (!is_csv)\n \t\t\t\t\t\tereport(ERROR,\n \t\t\t\t\t\t\t\t(errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n \t\t\t\t\t\t\t\t errmsg(\"end-of-copy marker corrupt\")));\n@@ -1508,7 +1508,7 @@ CopyReadLineText(CopyFromState cstate)\n \t\t\t\tresult = true;\t/* report EOF */\n \t\t\t\tbreak;\n \t\t\t}\n-\t\t\telse if (!cstate->opts.csv_mode)\n+\t\t\telse if (!is_csv)\n \t\t\t{\n \t\t\t\t/*\n \t\t\t\t * If we are here, it means we found a backslash followed by\n----\n\n> In this case, I have been able to limit the effects of the per-row\n> callback by making NextCopyFromRawFields() local to copyfromparse.c\n> while applying some inlining to it. This brings me to a different\n> point, why don't we do this change independently on HEAD?\n\nDoes this mean that changing\n\nbool NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n\nto (adding \"static\")\n\nstatic bool NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n\nnot (adding \"static\" and \"bool is_csv\")\n\nstatic bool NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields, bool is_csv)\n\nimproves performance?\n\nIf so, adding the change independently on HEAD makes\nsense. But I don't know why that improves\nperformance... Inlining?\n\n> It's not \n> really complicated to make NextCopyFromRawFields show high in the\n> profiles. 
I was looking at external projects, and noticed that\n> there's nothing calling NextCopyFromRawFields() directly.\n\nIt means that we can hide NextCopyFromRawFields() without\nbreaking compatibility (because nobody uses it), right?\n\nIf so, I also think that we can change\nNextCopyFromRawFields() directly.\n\nIf we assume that someone (not public code) may use it, we\ncan create a new internal function and use it something\nlike:\n\n----\ndiff --git a/src/backend/commands/copyfromparse.c b/src/backend/commands/copyfromparse.c\nindex 7cacd0b752..b1515ead82 100644\n--- a/src/backend/commands/copyfromparse.c\n+++ b/src/backend/commands/copyfromparse.c\n@@ -751,8 +751,8 @@ CopyReadBinaryData(CopyFromState cstate, char *dest, int nbytes)\n *\n * NOTE: force_not_null option are not applied to the returned fields.\n */\n-bool\n-NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n+static bool\n+NextCopyFromRawFieldsInternal(CopyFromState cstate, char ***fields, int *nfields)\n {\n \tint\t\t\tfldct;\n \tbool\t\tdone;\n@@ -840,6 +840,12 @@ NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n \treturn true;\n }\n \n+bool\n+NextCopyFromRawFields(CopyFromState cstate, char ***fields, int *nfields)\n+{\n+\treturn NextCopyFromRawFieldsInternal(cstate, fields, nfields);\n+}\n+\n /*\n * Read next tuple from file for COPY FROM. Return false if no more tuples.\n *\n----\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 22 Feb 2024 18:39:48 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Thu, Feb 22, 2024 at 06:39:48PM +0900, Sutou Kouhei wrote:\n> If so, adding the change independently on HEAD makes\n> sense. But I don't know why that improves\n> performance... Inlining?\n\nI guess so. It does not make much of a difference, though. The thing\nis that the dispatch caused by the custom callbacks called for each\nrow is noticeable in any profiles I'm taking (not that much in the\nworst-case scenarios, still a few percents), meaning that this impacts\nthe performance for all the in-core formats (text, csv, binary) as\nlong as we refactor text/csv/binary to use the routines of copyapi.h.\nI don't really see a way forward, except if we don't dispatch the\nin-core formats to not impact the default cases. That makes the code\na bit less elegant, but equally efficient for the existing formats.\n--\nMichael", "msg_date": "Fri, 1 Mar 2024 14:31:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 22 Feb 2024 18:39:48 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> How about adding \"is_csv\" to CopyReadline() and\n> CopyReadLineText() too?\n\nI tried this on my environment. This is a change for COPY\nFROM not COPY TO but this decreases COPY TO\nperformance with [1]... 
Hmm...\n\nmaster: 697.693 msec (the best case)\nv15: 576.374 msec (the best case)\nv15+this: 593.559 msec (the best case)\n\n[1] COPY (SELECT 1::int2,2::int2,3::int2,4::int2,5::int2,6::int2,7::int2,8::int2,9::int2,10::int2,11::int2,12::int2,13::int2,14::int2,15::int2,16::int2,17::int2,18::int2,19::int2,20::int2, generate_series(1, 1000000::int4)) TO '/dev/null' \\watch c=15\n\nSo I think that v15 is good.\n\n\nperf result of master:\n\n# Children Self Command Shared Object Symbol \n# ........ ........ ........ ................. .........................................\n#\n 31.39% 14.54% postgres postgres [.] CopyOneRowTo\n |--17.00%--CopyOneRowTo\n | |--10.61%--FunctionCall1Coll\n | | --8.40%--int2out\n | | |--2.58%--pg_ltoa\n | | | --0.68%--pg_ultoa_n\n | | |--1.11%--pg_ultoa_n\n | | |--0.83%--AllocSetAlloc\n | | |--0.69%--__memcpy_avx_unaligned_erms (inlined)\n | | |--0.58%--FunctionCall1Coll\n | | --0.55%--memcpy@plt\n | |--3.25%--appendBinaryStringInfo\n | | --0.56%--pg_ultoa_n\n | --0.69%--CopyAttributeOutText\n\nperf result of v15:\n\n# Children Self Command Shared Object Symbol \n# ........ ........ ........ ................. .........................................\n#\n 25.60% 10.47% postgres postgres [.] CopyToTextOneRow\n |--15.39%--CopyToTextOneRow\n | |--10.44%--FunctionCall1Coll\n | | |--7.25%--int2out\n | | | |--2.60%--pg_ltoa\n | | | | --0.71%--pg_ultoa_n\n | | | |--0.90%--FunctionCall1Coll\n | | | |--0.84%--pg_ultoa_n\n | | | --0.66%--AllocSetAlloc\n | | |--0.79%--ExecProjectSet\n | | --0.68%--int4out\n | |--2.50%--appendBinaryStringInfo\n | --0.53%--CopyAttributeOutText\n\n\nThe profiles on Michael's environment [2] showed that\nCopyOneRow() % was increased by v15. But it\n(CopyToTextOneRow() % not CopyOneRow() %) wasn't increased\nby v15. It's decreased instead.\n\n[2] https://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz#6439e6ad574f2d47cd7220e9bfed3889\n\nSo I think that v15 doesn't have performance regression but\nmy environment isn't suitable for benchmark...\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 01 Mar 2024 15:29:17 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 1 Mar 2024 14:31:38 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> I guess so. It does not make much of a difference, though. The thing\n> is that the dispatch caused by the custom callbacks called for each\n> row is noticeable in any profiles I'm taking (not that much in the\n> worst-case scenarios, still a few percents), meaning that this impacts\n> the performance for all the in-core formats (text, csv, binary) as\n> long as we refactor text/csv/binary to use the routines of copyapi.h.\n> I don't really see a way forward, except if we don't dispatch the\n> in-core formats to not impact the default cases. 
That makes the code\n> a bit less elegant, but equally efficient for the existing formats.\n\nIt's an option based on your profile result but your\nexecution result also shows that v15 is faster than HEAD [1]:\n\n> I am getting faster runtimes with v15 (6232ms in average)\n> vs HEAD (6550ms) at 5M rows with COPY TO\n\n[1] https://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz#6439e6ad574f2d47cd7220e9bfed3889\n\nI think that faster runtime is beneficial than mysterious\nprofile for users. So I think that we can merge v15 to\nmaster.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 01 Mar 2024 15:44:43 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 01 Mar 2024 15:44:43 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n>> I guess so. It does not make much of a difference, though. The thing\n>> is that the dispatch caused by the custom callbacks called for each\n>> row is noticeable in any profiles I'm taking (not that much in the\n>> worst-case scenarios, still a few percents), meaning that this impacts\n>> the performance for all the in-core formats (text, csv, binary) as\n>> long as we refactor text/csv/binary to use the routines of copyapi.h.\n>> I don't really see a way forward, except if we don't dispatch the\n>> in-core formats to not impact the default cases. That makes the code\n>> a bit less elegant, but equally efficient for the existing formats.\n> \n> It's an option based on your profile result but your\n> execution result also shows that v15 is faster than HEAD [1]:\n> \n>> I am getting faster runtimes with v15 (6232ms in average)\n>> vs HEAD (6550ms) at 5M rows with COPY TO\n> \n> [1] https://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz#6439e6ad574f2d47cd7220e9bfed3889\n> \n> I think that faster runtime is beneficial than mysterious\n> profile for users. So I think that we can merge v15 to\n> master.\n\nIf this is a blocker of making COPY format extendable, can\nwe defer moving the existing text/csv/binary format\nimplementations to Copy{From,To}Routine for now as Michael\nsuggested to proceed making COPY format extendable? (Can we\nadd Copy{From,To}Routine without changing the existing\ntext/csv/binary format implementations?)\n\nI attach a patch for it.\n\nThere is a large hunk for CopyOneRowTo() that is caused by\nindent change. I also attach \"...-w.patch\" that uses \"git\n-w\" to remove space only changes. \"...-w.patch\" is only for\nreview. We should use .patch without -w for push.\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 04 Mar 2024 14:11:08 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Mar 04, 2024 at 02:11:08PM +0900, Sutou Kouhei wrote:\n> If this is a blocker of making COPY format extendable, can\n> we defer moving the existing text/csv/binary format\n> implementations to Copy{From,To}Routine for now as Michael\n> suggested to proceed making COPY format extendable? (Can we\n> add Copy{From,To}Routine without changing the existing\n> text/csv/binary format implementations?)\n\nYeah, I assume that it would be the way to go so as we don't do any\ndispatching in default cases. 
A different approach that could be done\nis to hide some of the parts of binary and text/csv in inline static\nfunctions that are equivalent to the routine callbacks. That's\nsimilar to the previous versions of the patch set, but if we come back\nto the argument that there is a risk of blocking optimizations of more\nof the local areas of the per-row processing in NextCopyFrom() and\nCopyOneRowTo(), what you have sounds like a good balance.\n\nCopyOneRowTo() could do something like that to avoid the extra\nindentation:\nif (cstate->routine)\n{\n cstate->routine->CopyToOneRow(cstate, slot);\n MemoryContextSwitchTo(oldcontext);\n return;\n}\n\nNextCopyFrom() does not need to be concerned by that.\n\n> I attach a patch for it.\n\n> There is a large hunk for CopyOneRowTo() that is caused by\n> indent change. I also attach \"...-w.patch\" that uses \"git\n> -w\" to remove space only changes. \"...-w.patch\" is only for\n> review. We should use .patch without -w for push.\n\nI didn't know this trick. That's indeed nice.. I may use that for\nother stuff to make patches more presentable to the eyes. And that's\navailable as well with `git diff`.\n\nIf we basically agree about this part, how would the rest work out\nwith this set of APIs and the possibility to plug in a custom value\nfor FORMAT to do a pg_proc lookup, including an example of how these\nAPIs can be used?\n--\nMichael", "msg_date": "Tue, 5 Mar 2024 15:16:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 5 Mar 2024 15:16:33 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> CopyOneRowTo() could do something like that to avoid the extra\n> indentation:\n> if (cstate->routine)\n> {\n> cstate->routine->CopyToOneRow(cstate, slot);\n> MemoryContextSwitchTo(oldcontext);\n> return;\n> }\n\nOK. The v17 patch uses this style. Others are same as the\nv16.\n\n> I didn't know this trick. That's indeed nice.. I may use that for\n> other stuff to make patches more presentable to the eyes. And that's\n> available as well with `git diff`.\n\n:-)\n\n> If we basically agree about this part, how would the rest work out\n> with this set of APIs and the possibility to plug in a custom value\n> for FORMAT to do a pg_proc lookup, including an example of how these\n> APIs can be used?\n\nI'll send the following patches after this patch is\nmerged. They are based on the v6 patch[1]:\n\n1. Add copy_handler\n * This also adds a pg_proc lookup for custom FORMAT\n * This also adds a test for copy_handler\n2. Export CopyToStateData\n * We need it to implement custom copy TO handler\n3. Add needed APIs to implement custom copy TO handler\n * Add CopyToStateData::opaque\n * Export CopySendEndOfRow()\n4. Export CopyFromStateData\n * We need it to implement custom copy FROM handler\n5. 
Add needed APIs to implement custom copy FROM handler\n * Add CopyFromStateData::opaque\n * Export CopyReadBinaryData()\n\n[1] https://www.postgresql.org/message-id/flat/20240124.144936.67229716500876806.kou%40clear-code.com#f1ad092fc5e81fe38d3c376559efd52c\n\n\nThanks,\n-- \nkou", "msg_date": "Tue, 05 Mar 2024 17:18:08 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Tue, Mar 05, 2024 at 05:18:08PM +0900, Sutou Kouhei wrote:\n> I'll send the following patches after this patch is\n> merged.\n\nI am not sure that my schedule is on track to allow that for this\nrelease, unfortunately, especially with all the other items to review\nand discuss to make this thread feature-complete. There should be\na bit more than four weeks until the feature freeze (date not set in\nstone, should be around the 8th of April AoE), but I have less than\nthe half due to personal issues. Perhaps if somebody jumps on this\nthread, that will be possible..\n\n> They are based on the v6 patch[1]:\n> \n> 1. Add copy_handler\n> * This also adds a pg_proc lookup for custom FORMAT\n> * This also adds a test for copy_handler\n> 2. Export CopyToStateData\n> * We need it to implement custom copy TO handler\n> 3. Add needed APIs to implement custom copy TO handler\n> * Add CopyToStateData::opaque\n> * Export CopySendEndOfRow()\n> 4. Export CopyFromStateData\n> * We need it to implement custom copy FROM handler\n> 5. Add needed APIs to implement custom copy FROM handler\n> * Add CopyFromStateData::opaque\n> * Export CopyReadBinaryData()\n\nHmm. Sounds like a good plan for a split.\n--\nMichael", "msg_date": "Wed, 6 Mar 2024 15:34:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Wed, Mar 06, 2024 at 03:34:04PM +0900, Michael Paquier wrote:\n> I am not sure that my schedule is on track to allow that for this\n> release, unfortunately, especially with all the other items to review\n> and discuss to make this thread feature-complete. There should be\n> a bit more than four weeks until the feature freeze (date not set in\n> stone, should be around the 8th of April AoE), but I have less than\n> the half due to personal issues. Perhaps if somebody jumps on this\n> thread, that will be possible..\n\nWhile on it, here are some profiles based on HEAD and v17 with the\nprevious tests (COPY TO /dev/null, COPY FROM data sent to the void).\n\nCOPY FROM, text format with 30 attributes and HEAD:\n- 66.53% 16.33% postgres postgres [.] NextCopyFrom\n - 50.20% NextCopyFrom\n - 30.83% NextCopyFromRawFields\n + 16.09% CopyReadLine\n 13.72% CopyReadAttributesText\n + 19.11% InputFunctionCallSafe\n + 16.33% _start\nCOPY FROM, text format with 30 attributes and v17:\n- 66.60% 16.10% postgres postgres [.] NextCopyFrom\n - 50.50% NextCopyFrom\n - 30.44% NextCopyFromRawFields\n + 15.71% CopyReadLine\n 13.73% CopyReadAttributesText\n + 19.81% InputFunctionCallSafe\n + 16.10% _start\n\nCOPY TO, text format with 30 attributes and HEAD:\n- 79.55% 15.54% postgres postgres [.] 
CopyOneRowTo\n - 64.01% CopyOneRowTo\n + 30.01% OutputFunctionCall\n + 11.71% appendBinaryStringInfo\n 9.36% CopyAttributeOutText\n + 3.03% CopySendEndOfRow\n 1.65% int4out\n 1.01% 0xffff83e46be4\n 0.93% 0xffff83e46be8\n 0.93% memcpy@plt\n 0.87% pgstat_progress_update_param\n 0.78% enlargeStringInfo\n 0.67% 0xffff83e46bb4\n 0.66% 0xffff83e46bcc\n 0.57% MemoryContextReset\n + 15.54% _start\nCOPY TO, text format with 30 attributes and v17:\n- 79.35% 16.08% postgres postgres [.] CopyOneRowTo\n - 62.27% CopyOneRowTo\n + 28.92% OutputFunctionCall\n + 10.88% appendBinaryStringInfo\n 9.54% CopyAttributeOutText\n + 3.03% CopySendEndOfRow\n 1.60% int4out\n 0.97% pgstat_progress_update_param\n 0.95% 0xffff8c46cbe8\n 0.89% memcpy@plt\n 0.87% 0xffff8c46cbe4\n 0.79% enlargeStringInfo\n 0.64% 0xffff8c46cbcc\n 0.61% 0xffff8c46cbb4\n 0.58% MemoryContextReset\n + 16.08% _start\n\nSo, in short, and that's not really a surprise, there is no effect\nonce we use the dispatching with the routines only when a format would\nwant to plug-in with the APIs, but a custom format would still have a\npenalty of a few percents for both if bottlenecked on CPU.\n--\nMichael", "msg_date": "Thu, 7 Mar 2024 15:32:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 7 Mar 2024 15:32:01 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> While on it, here are some profiles based on HEAD and v17 with the\n> previous tests (COPY TO /dev/null, COPY FROM data sent to the void).\n> \n...\n> \n> So, in short, and that's not really a surprise, there is no effect\n> once we use the dispatching with the routines only when a format would\n> want to plug-in with the APIs, but a custom format would still have a\n> penalty of a few percents for both if bottlenecked on CPU.\n\nThanks for sharing these profiles!\nI agree with you.\n\nThis shows that the v17 approach doesn't affect the current\ntext/csv/binary implementations. (The v17 approach just adds\n2 new structs, Copy{From,To}Rountine, without changing the\ncurrent text/csv/binary implementations.)\n\nCan we push the v17 patch and proceed following\nimplementations? Could someone (especially a PostgreSQL\ncommitter) take a look at this for double-check?\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 08 Mar 2024 09:22:54 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Fri, Mar 8, 2024 at 8:23 AM Sutou Kouhei <[email protected]> wrote:\n>\n>\n> This shows that the v17 approach doesn't affect the current\n> text/csv/binary implementations. (The v17 approach just adds\n> 2 new structs, Copy{From,To}Rountine, without changing the\n> current text/csv/binary implementations.)\n>\n> Can we push the v17 patch and proceed following\n> implementations? 
Could someone (especially a PostgreSQL\n> committer) take a look at this for double-check?\n>\n\nHi, here are my cents:\nCurrently in v17, we have 3 extra functions within DoCopyTo\nCopyToStart, one time, start, doing some preliminary work.\nCopyToOneRow, doing the repetitive work, called many times, row by row.\nCopyToEnd, one time doing the closing work.\n\nseems to need a function pointer for processing the format and other options.\nor maybe the reason is we need a one time function call before doing DoCopyTo,\nlike one time initialization.\n\nWe can placed the function pointer after:\n`\ncstate = BeginCopyTo(pstate, rel, query, relid,\nstmt->filename, stmt->is_program,\nNULL, stmt->attlist, stmt->options);\n`\n\n\ngenerally in v17, the code pattern looks like this.\nif (cstate->opts.binary)\n{\n/* handle binary format */\n}\nelse if (cstate->routine)\n{\n/* custom code, make the copy format extensible */\n}\nelse\n{\n/* handle non-binary, (csv or text) format */\n}\nmaybe we need another bool flag like `bool buildin_format`.\nif the copy format is {csv|text|binary} then buildin_format is true else false.\n\nso the code pattern would be:\nif (cstate->opts.binary)\n{\n/* handle binary format */\n}\nelse if (cstate->routine && !buildin_format)\n{\n/* custom code, make the copy format extensible */\n}\nelse\n{\n/* handle non-binary, (csv or text) format */\n}\n\notherwise the {CopyToRoutine| CopyFromRoutine} needs a function pointer\nto distinguish native copy format and extensible supported format,\nlike I mentioned above?\n\n\n", "msg_date": "Mon, 11 Mar 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CACJufxEgn3=j-UWg-f2-DbLO+uVSKGcofpkX5trx+=YX6icSFg@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Mar 2024 08:00:00 +0800,\n jian he <[email protected]> wrote:\n\n> Hi, here are my cents:\n> Currently in v17, we have 3 extra functions within DoCopyTo\n> CopyToStart, one time, start, doing some preliminary work.\n> CopyToOneRow, doing the repetitive work, called many times, row by row.\n> CopyToEnd, one time doing the closing work.\n> \n> seems to need a function pointer for processing the format and other options.\n> or maybe the reason is we need a one time function call before doing DoCopyTo,\n> like one time initialization.\n\nI know that JSON format wants it but can we defer it? We can\nadd more options later. I want to proceed this improvement\nstep by step.\n\nMore use cases will help us which callbacks are needed. 
We\nwill be able to collect more use cases by providing basic\ncallbacks.\n\n> generally in v17, the code pattern looks like this.\n> if (cstate->opts.binary)\n> {\n> /* handle binary format */\n> }\n> else if (cstate->routine)\n> {\n> /* custom code, make the copy format extensible */\n> }\n> else\n> {\n> /* handle non-binary, (csv or text) format */\n> }\n> maybe we need another bool flag like `bool buildin_format`.\n> if the copy format is {csv|text|binary} then buildin_format is true else false.\n> \n> so the code pattern would be:\n> if (cstate->opts.binary)\n> {\n> /* handle binary format */\n> }\n> else if (cstate->routine && !buildin_format)\n> {\n> /* custom code, make the copy format extensible */\n> }\n> else\n> {\n> /* handle non-binary, (csv or text) format */\n> }\n> \n> otherwise the {CopyToRoutine| CopyFromRoutine} needs a function pointer\n> to distinguish native copy format and extensible supported format,\n> like I mentioned above?\n\nHmm. I may miss something but I think that we don't need the\nbool flag. Because we don't set cstate->routine for native\ncopy formats. So we can distinguish native copy format and\nextensible supported format by checking only\ncstate->routine.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 11 Mar 2024 09:56:24 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On Mon, Mar 11, 2024 at 8:56 AM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <CACJufxEgn3=j-UWg-f2-DbLO+uVSKGcofpkX5trx+=YX6icSFg@mail.gmail.com>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 11 Mar 2024 08:00:00 +0800,\n> jian he <[email protected]> wrote:\n>\n> > Hi, here are my cents:\n> > Currently in v17, we have 3 extra functions within DoCopyTo\n> > CopyToStart, one time, start, doing some preliminary work.\n> > CopyToOneRow, doing the repetitive work, called many times, row by row.\n> > CopyToEnd, one time doing the closing work.\n> >\n> > seems to need a function pointer for processing the format and other options.\n> > or maybe the reason is we need a one time function call before doing DoCopyTo,\n> > like one time initialization.\n>\n> I know that JSON format wants it but can we defer it? We can\n> add more options later. I want to proceed this improvement\n> step by step.\n>\n> More use cases will help us which callbacks are needed. 
We\n> will be able to collect more use cases by providing basic\n> callbacks.\n\nI guess one of the ultimate goals would be that COPY can export data\nto a customized format.\nLet's say the customized format is \"csv1\", but it is just analogous to\nthe csv format.\npeople should be able to create an extension, with serval C functions,\nthen they can do `copy (select 1 ) to stdout (format 'csv1');`\nbut the output will be exact same as `copy (select 1 ) to stdout\n(format 'csv');`\n\nIn such a scenario, we require a function akin to ProcessCopyOptions\nto handle situations\nwhere CopyFormatOptions->csv_mode is true, while the format is \"csv1\".\n\nbut CopyToStart is already within the DoCopyTo function, so you do\nneed an extra function pointer?\nI do agree with the incremental improvement method.\n\n\n", "msg_date": "Wed, 13 Mar 2024 16:00:46 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CACJufxFbffGaxW1LiTNEQAPcuvP1s7GL1Ghi--kbSqsjwh7XeA@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 13 Mar 2024 16:00:46 +0800,\n jian he <[email protected]> wrote:\n\n>> More use cases will help us which callbacks are needed. We\n>> will be able to collect more use cases by providing basic\n>> callbacks.\n\n> Let's say the customized format is \"csv1\", but it is just analogous to\n> the csv format.\n> people should be able to create an extension, with serval C functions,\n> then they can do `copy (select 1 ) to stdout (format 'csv1');`\n> but the output will be exact same as `copy (select 1 ) to stdout\n> (format 'csv');`\n\nThanks for sharing one use case but I think that we need\nreal-world use cases to consider our APIs.\n\nFor example, JSON support that is currently discussing in\nanother thread is a real-world use case. 
My Apache Arrow\nsupport is also another real-world use case.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 15 Mar 2024 17:37:54 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nCould someone review the v17 patch to proceed this?\n\nThe v17 patch:\nhttps://www.postgresql.org/message-id/flat/20240305.171808.667980402249336456.kou%40clear-code.com#d2ee079b75ebcf00c410300ecc4a357a\n\nSome profiles by Michael:\nhttps://www.postgresql.org/message-id/flat/ZelfYatRdVZN3FbE%40paquier.xyz#eccfd1a0131af93c48026d691cc247f4\n\nThanks,\n-- \nkou\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 08 Mar 2024 09:22:54 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> Hi,\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 7 Mar 2024 15:32:01 +0900,\n> Michael Paquier <[email protected]> wrote:\n> \n>> While on it, here are some profiles based on HEAD and v17 with the\n>> previous tests (COPY TO /dev/null, COPY FROM data sent to the void).\n>> \n> ...\n>> \n>> So, in short, and that's not really a surprise, there is no effect\n>> once we use the dispatching with the routines only when a format would\n>> want to plug-in with the APIs, but a custom format would still have a\n>> penalty of a few percents for both if bottlenecked on CPU.\n> \n> Thanks for sharing these profiles!\n> I agree with you.\n> \n> This shows that the v17 approach doesn't affect the current\n> text/csv/binary implementations. (The v17 approach just adds\n> 2 new structs, Copy{From,To}Rountine, without changing the\n> current text/csv/binary implementations.)\n> \n> Can we push the v17 patch and proceed following\n> implementations? Could someone (especially a PostgreSQL\n> committer) take a look at this for double-check?\n> \n> \n> Thanks,\n> -- \n> kou\n> \n> \n\n\n", "msg_date": "Wed, 20 Mar 2024 23:27:32 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Andres,\n\nCould you take a look at this? I think that you don't want\nto touch the current text/csv/binary implementations. The\nv17 patch approach doesn't touch the current text/csv/binary\nimplementations. 
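To make this concrete, here is a rough sketch of the
extension side of that approach. This is illustrative only:
the header path ("commands/copyapi.h"), the callback field
names (CopyToStart/CopyToOneRow/CopyToEnd, following the
start/one-row/end split discussed earlier in this thread),
their signatures and the "my_format_*" names are placeholders
and may not match the exact v17 code.

----
#include "postgres.h"
#include "access/tupdesc.h"
#include "executor/tuptable.h"
/* assumed: the header that exposes CopyToRoutine in the v17 patch */
#include "commands/copyapi.h"

/* called once before the first row; a real format would look up
 * per-attribute output functions and emit any header here */
static void
my_format_copy_to_start(CopyToState cstate, TupleDesc tupDesc)
{
}

/* called for every row; a real format would serialize one row here */
static void
my_format_copy_to_one_row(CopyToState cstate, TupleTableSlot *slot)
{
}

/* called once after the last row; a real format would emit any
 * trailer and flush here */
static void
my_format_copy_to_end(CopyToState cstate)
{
}

static const CopyToRoutine my_format_copy_to_routine = {
	.CopyToStart = my_format_copy_to_start,
	.CopyToOneRow = my_format_copy_to_one_row,
	.CopyToEnd = my_format_copy_to_end,
};

/*
 * The built-in text/csv/binary formats never set cstate->routine, so
 * they keep their existing code path; only a custom format like this
 * one is dispatched through the callbacks.
 */
----
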
What do you think about this approach?\n\n\nThanks,\n-- \nkou\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Wed, 20 Mar 2024 23:27:32 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> Hi,\n> \n> Could someone review the v17 patch to proceed this?\n> \n> The v17 patch:\n> https://www.postgresql.org/message-id/flat/20240305.171808.667980402249336456.kou%40clear-code.com#d2ee079b75ebcf00c410300ecc4a357a\n> \n> Some profiles by Michael:\n> https://www.postgresql.org/message-id/flat/ZelfYatRdVZN3FbE%40paquier.xyz#eccfd1a0131af93c48026d691cc247f4\n> \n> Thanks,\n> -- \n> kou\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 08 Mar 2024 09:22:54 +0900 (JST),\n> Sutou Kouhei <[email protected]> wrote:\n> \n>> Hi,\n>> \n>> In <[email protected]>\n>> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Thu, 7 Mar 2024 15:32:01 +0900,\n>> Michael Paquier <[email protected]> wrote:\n>> \n>>> While on it, here are some profiles based on HEAD and v17 with the\n>>> previous tests (COPY TO /dev/null, COPY FROM data sent to the void).\n>>> \n>> ...\n>>> \n>>> So, in short, and that's not really a surprise, there is no effect\n>>> once we use the dispatching with the routines only when a format would\n>>> want to plug-in with the APIs, but a custom format would still have a\n>>> penalty of a few percents for both if bottlenecked on CPU.\n>> \n>> Thanks for sharing these profiles!\n>> I agree with you.\n>> \n>> This shows that the v17 approach doesn't affect the current\n>> text/csv/binary implementations. (The v17 approach just adds\n>> 2 new structs, Copy{From,To}Rountine, without changing the\n>> current text/csv/binary implementations.)\n>> \n>> Can we push the v17 patch and proceed following\n>> implementations? Could someone (especially a PostgreSQL\n>> committer) take a look at this for double-check?\n>> \n>> \n>> Thanks,\n>> -- \n>> kou\n>> \n>> \n> \n> \n\n\n", "msg_date": "Wed, 10 Apr 2024 17:16:26 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hello Kouhei-san,\n\nI think it'd be helpful if you could post a patch status, i.e. a message\nre-explaininig what it aims to achieve, summary of the discussion so\nfar, and what you think are the open questions. Otherwise every reviewer\nhas to read the whole thread to learn this.\n\nFWIW I realize there are other related patches, and maybe some of the\ndiscussion is happening on those threads. But that's just another reason\nto post the summary here - as a reviewer I'm not going to read random\nother patches that \"might\" have relevant info.\n\n-----\n\nThe way I understand it, the ultimate goal is to allow extensions to\ndefine formats using CREATE XYZ. And I agree that would be a very\nvaluable feature. But the proposed patch does not do that, right? It\nonly does some basic things at the C level, there's no DDL etc.\n\nPer the commit message, none of the existing formats (text/csv/binary)\nis implemented as \"copy routine\". IMHO that's a bit strange, because\nthat's exactly what I'd expect this patch to do - to define all the\ninfrastructure (catalogs, ...) 
and switch the existing formats to it.\n\nYes, the patch will be larger, but it'll also simplify some of the code\n(right now there's a bunch of branches to handle these \"old\" formats).\nHow would you even know the new code is correct, when there's nothing\nusing using the \"copy routine\" branch?\n\nIn fact, doesn't this mean that the benchmarks presented earlier are not\nvery useful? We still use the old code, except there are a couple \"if\"\nbranches that are never taken? I don't think this measures the new\napproach would not be slower once everything gets to be copy routine.\n\nOr what am I missing?\n\nAlso, how do we know this API is suitable for the alternative formats?\nFor example you mentioned Arrow, and I suppose people will want to add\nsupport for other column-oriented formats. I assume that will require\nstashing a batch of rows (or some other internal state) somewhere, but\ndoes the proposed API plan for that?\n\nMy guess would be we'll need to add a \"private_data\" pointer to the\nCopyFromStateData/CopyToStateData structs, but maybe I'm wrong.\n\nAlso, won't the alternative formats require custom parameters. For\nexample, for column-oriented-formats it might be useful to specify a\nstripe size (rows per batch), etc. I'm not saying this patch needs to\nimplement that, but maybe the API should expect it?\n\n-----\n\nTo sum this up, what I think needs to happen for this patch to move forward:\n\n1) Switch the existing formats to the new API, to validate the API works\nat least for them, allow testing and benchmarking the code.\n\n2) Try implementing some of the more exotic formats (column-oriented) to\ntest the API works for those too.\n\n3) Maybe try implementing a PoC version to do the DDL, so that it\nactually is extensible.\n\nIt's not my intent to \"move the goalposts\" - I think it's fine if the\npatches (2) and (3) are just PoC, to validate (1) goes in the right\ndirection. For example, it's fine if (2) just hard-codes the new format\nnext to the build-in ones - that's not something we'd commit, I think,\nbut for validation of (1) it's good enough.\n\nMost of the DDL stuff can probably be \"copied\" from FDW handlers. It's\npretty similar, and the \"routine\" idea is what FDW does too. It probably\nalso shows a good way to \"initialize\" the routine, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Jul 2024 14:40:05 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Kou,\r\n\r\nI tried to follow the thread but had to skip quite some discussions in the middle part of the thread. From what I read, it appears to me that there were a lot of back-and-forth discussions on the specific implementation details (i.e. do not touch existing format implementation), performance concerns and how to split the patches to make it more manageable.\r\n\r\nMy understanding is that the provided v17 patch aims to achieve the followings:\r\n- Retain existing format implementations as built-in formats, and do not go through the new interface for them.\r\n- Make sure that there is no sign of performance degradation.\r\n- Refactoring the existing code to make it easier and possible to make copy handlers extensible. However, some of the infrastructure work that are required to make copy handler extensible are intentionally delayed for future patches. 
Some of the work were proposed as patches in earlier messages, but they were not explicitly referenced in recent messages.\r\n\r\nOverall, the current v17 patch applies cleanly to HEAD. “make check-world” also runs cleanly. If my understanding of the current status of the patch is correct, the patch looks good to me.\r\n\r\n\r\nRegards,\r\nYong", "msg_date": "Mon, 22 Jul 2024 07:11:15 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Tomas,\n\nThanks for joining this thread!\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 19 Jul 2024 14:40:05 +0200,\n Tomas Vondra <[email protected]> wrote:\n\n> I think it'd be helpful if you could post a patch status, i.e. a message\n> re-explaininig what it aims to achieve, summary of the discussion so\n> far, and what you think are the open questions. Otherwise every reviewer\n> has to read the whole thread to learn this.\n\nIt makes sense. It seems your questions covers all important\npoints in this thread. So my answers of your questions\nsummarize the latest information.\n\n> FWIW I realize there are other related patches, and maybe some of the\n> discussion is happening on those threads. But that's just another reason\n> to post the summary here - as a reviewer I'm not going to read random\n> other patches that \"might\" have relevant info.\n\nIt makes sense too. To clarify it, other threads are\nunrelated. We can focus on only this thread for this propose.\n\n> The way I understand it, the ultimate goal is to allow extensions to\n> define formats using CREATE XYZ.\n\nRight.\n\n> But the proposed patch does not do that, right? It\n> only does some basic things at the C level, there's no DDL etc.\n\nRight. The latest patch set includes only the basic things\nfor the first implementation.\n\n> Per the commit message, none of the existing formats (text/csv/binary)\n> is implemented as \"copy routine\".\n\nRight.\n\n> IMHO that's a bit strange, because\n> that's exactly what I'd expect this patch to do - to define all the\n> infrastructure (catalogs, ...) and switch the existing formats to it.\n\nWe did it in the v1-v15 patch sets. But the v16/v17 patch\nsets remove it because of a profiling result. (It's\ndescribed later.)\n\nIn general, we don't want to decrease the current\nperformance of the existing formats:\n\nhttps://www.postgresql.org/message-id/flat/10025bac-158c-ffe7-fbec-32b42629121f%40dunslane.net#81cf82c219f2f2d77a616bbf5e511a5c\n\n> We've spent quite a lot of blood sweat and tears over the years to make\n> COPY fast, and we should not sacrifice any of that lightly.\n\nThe v15 patch set is faster than HEAD but there is a\nmysterious profiling result:\n\nhttps://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz#6439e6ad574f2d47cd7220e9bfed3889\n\n> The increase in CopyOneRowTo from 80% to 85% worries me\n...\n> I am getting faster\n> runtimes with v15 (6232ms in average) vs HEAD (6550ms).\n\nI think that it's not a blocker because the v15 patch set\napproach is faster. But someone may think that it's a\nblocker. So the v16 or later patch sets don't include codes\nto use this extension mechanism for the existing formats. 
We\ncan work on it after we introduce the basic features if it's\nvaluable.\n\n> How would you even know the new code is correct, when there's nothing\n> using using the \"copy routine\" branch?\n\nWe can't test it only with the v16/v17 patch set\nchanges. But we can do it by adding more changes we did in\nthe v6 patch set.\nhttps://www.postgresql.org/message-id/flat/20240124.144936.67229716500876806.kou%40clear-code.com#f1ad092fc5e81fe38d3c376559efd52c\n\nIf we should commit the basic changes with tests, I can\nadjust the test mechanism in v6 patch set and add it to the\nlatest patch set. But it needs CREATE XYZ mechanism and\nso on too. Is it OK?\n\n> In fact, doesn't this mean that the benchmarks presented earlier are not\n> very useful? We still use the old code, except there are a couple \"if\"\n> branches that are never taken? I don't think this measures the new\n> approach would not be slower once everything gets to be copy routine.\n\nHere is a benchmark result with the v17 and HEAD:\n\nhttps://www.postgresql.org/message-id/flat/ZelfYatRdVZN3FbE%40paquier.xyz#eccfd1a0131af93c48026d691cc247f4\n\nIt shows that no performance difference for the existing\nformats.\n\nThe added mechanism may be slower than the existing formats\nmechanism but it's not a blocker. Because it's never\nperformance regression. (Because this is a new feature.)\n\nWe can improve it later if it's needed.\n\n> Also, how do we know this API is suitable for the alternative formats?\n\nThe v6 patch set has more APIs built on this API. These APIs\nare for implementing the alternative formats.\n\nhttps://www.postgresql.org/message-id/flat/20240124.144936.67229716500876806.kou%40clear-code.com#f1ad092fc5e81fe38d3c376559efd52c\n\nThis is an Apache Arrow format implementation based on the\nv6 patch set: https://github.com/kou/pg-copy-arrow\n\n> For example you mentioned Arrow, and I suppose people will want to add\n> support for other column-oriented formats. I assume that will require\n> stashing a batch of rows (or some other internal state) somewhere, but\n> does the proposed API plan for that?\n>\n> My guess would be we'll need to add a \"private_data\" pointer to the\n> CopyFromStateData/CopyToStateData structs, but maybe I'm wrong.\n\nI think so too. The v6 patch set has a \"private_data\"\npointer. But the v17 patch set doesn't have it because the\nv17 patch set has only basic changes. We'll add it and other\nfeatures in the following patches:\n\nhttps://www.postgresql.org/message-id/flat/20240305.171808.667980402249336456.kou%40clear-code.com\n\n> I'll send the following patches after this patch is\n> merged. They are based on the v6 patch[1]:\n> \n> 1. Add copy_handler\n> * This also adds a pg_proc lookup for custom FORMAT\n> * This also adds a test for copy_handler\n> 2. Export CopyToStateData\n> * We need it to implement custom copy TO handler\n> 3. Add needed APIs to implement custom copy TO handler\n> * Add CopyToStateData::opaque\n> * Export CopySendEndOfRow()\n> 4. Export CopyFromStateData\n> * We need it to implement custom copy FROM handler\n> 5. Add needed APIs to implement custom copy FROM handler\n> * Add CopyFromStateData::opaque\n> * Export CopyReadBinaryData()\n\n\"Copy{To,From}StateDate::opaque\" are the \"private_data\"\npointer in the v6 patch.\n\n> Also, won't the alternative formats require custom parameters. For\n> example, for column-oriented-formats it might be useful to specify a\n> stripe size (rows per batch), etc. 
I'm not saying this patch needs to\n> implement that, but maybe the API should expect it?\n\nYes. The v6 patch set also has the API. But we want to\nminimize API set as much as possible in the first\nimplementation.\n\nhttps://www.postgresql.org/message-id/flat/Zbi1TwPfAvUpKqTd%40paquier.xyz#00abc60c5a1ad9eee395849b7b5a5e0d\n\n> I am really worried about the complexities\n> this thread is getting into because we are trying to shape the\n> callbacks in the most generic way possible based on *two* use cases.\n> This is going to be a never-ending discussion. I'd rather get some\n> simple basics, and then we can discuss if tweaking the callbacks is\n> really necessary or not.\n\nAnd I agree with this approach.\n\n> 1) Switch the existing formats to the new API, to validate the API works\n> at least for them, allow testing and benchmarking the code.\n\nI want to keep the current style for the first\nimplementation to avoid affecting the existing formats\nperformance. If it's not allowed to move forward this\nproposal, could someone help us to solve the mysterious\nresult (why are %s of CopyOneRowTo() different?) in the\nfollowing v15 patch set benchmark result?\n\nhttps://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz#6439e6ad574f2d47cd7220e9bfed3889\n\n> 2) Try implementing some of the more exotic formats (column-oriented) to\n> test the API works for those too.\n>\n> 3) Maybe try implementing a PoC version to do the DDL, so that it\n> actually is extensible.\n> \n> It's not my intent to \"move the goalposts\" - I think it's fine if the\n> patches (2) and (3) are just PoC, to validate (1) goes in the right\n> direction. For example, it's fine if (2) just hard-codes the new format\n> next to the build-in ones - that's not something we'd commit, I think,\n> but for validation of (1) it's good enough.\n>\n> Most of the DDL stuff can probably be \"copied\" from FDW handlers. It's\n> pretty similar, and the \"routine\" idea is what FDW does too. It probably\n> also shows a good way to \"initialize\" the routine, etc.\n\nIs the v6 patch set enough for it?\nhttps://www.postgresql.org/message-id/flat/20240124.144936.67229716500876806.kou%40clear-code.com#f1ad092fc5e81fe38d3c376559efd52c\n\nOr should we do it based on the v17 patch set? If so, I'll\nwork on it now. It was a plan that I'll do after the v17\npatch set is merged.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 22 Jul 2024 16:45:40 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Yong,\n\nThanks for joining this thread!\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 22 Jul 2024 07:11:15 +0000,\n \"Li, Yong\" <[email protected]> wrote:\n\n> My understanding is that the provided v17 patch aims to achieve the followings:\n> - Retain existing format implementations as built-in formats, and do not go through the new interface for them.\n> - Make sure that there is no sign of performance degradation.\n> - Refactoring the existing code to make it easier and possible to make copy handlers extensible. However, some of the infrastructure work that are required to make copy handler extensible are intentionally delayed for future patches. Some of the work were proposed as patches in earlier messages, but they were not explicitly referenced in recent messages.\n\nRight.\n\nSorry for bothering you. 
As Tomas suggested, I should have\nprepared the current summary.\n\nMy last e-mail summarized the current information:\nhttps://www.postgresql.org/message-id/flat/20240722.164540.889091645042390373.kou%40clear-code.com#0be14c4eeb041e70438ab7a423b728da\n\nIt also shows that your understanding is right.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Mon, 22 Jul 2024 17:01:49 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On 7/22/24 09:45, Sutou Kouhei wrote:\n> Hi Tomas,\n> \n> Thanks for joining this thread!\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 19 Jul 2024 14:40:05 +0200,\n> Tomas Vondra <[email protected]> wrote:\n> \n>> I think it'd be helpful if you could post a patch status, i.e. a message\n>> re-explaininig what it aims to achieve, summary of the discussion so\n>> far, and what you think are the open questions. Otherwise every reviewer\n>> has to read the whole thread to learn this.\n> \n> It makes sense. It seems your questions covers all important\n> points in this thread. So my answers of your questions\n> summarize the latest information.\n> \n\nThanks for the summary/responses. I still think it'd be better to post a\nsummary as a separate message, not as yet another post responding to\nsomeone else. If I was reading the thread, I would not have noticed this\nis meant to be a summary. I'd even consider putting a \"THREAD SUMMARY\"\ntitle on the first line, or something like that. Up to you, of course.\n\n\nAs for the patch / decisions, thanks for the responses and explanations.\nBut I still find it hard to review / make judgements about the approach\nbased on the current version of the patch :-( Yes, it's entirely\npossible earlier versions did something interesting - e.g. it might have\nimplemented the existing formats to the new approach. Or it might have a\nprivate pointer in v6. But how do I know why it was removed? Was it\nbecause it's unnecessary for the initial version? Or was it because it\nturned out to not work?\n\nAnd when reviewing a patch, I really don't want to scavenge through old\npatch versions, looking for random parts. Not only because I don't know\nwhat to look for, but also because it'll be harder and harder to make\nthose old versions work, as the patch moves evolves.\n\nMy suggestions would be to maintain this as a series of patches, making\nincremental changes, with the \"more complex\" or \"more experimental\"\nparts larger in the series. For example, I can imagine doing this:\n\n0001 - minimal version of the patch (e.g. current v17)\n0002 - switch existing formats to the new interface\n0003 - extend the interface to add bits needed for columnar formats\n0004 - add DML to create/alter/drop custom implementations\n0005 - minimal patch with extension adding support for Arrow\n\nOr something like that. The idea is that we still have a coherent story\nof what we're trying to do, and can discuss the incremental changes\n(easier than looking at a large patch). It's even possible to commit\nearlier parts before the later parts are quite cleanup up for commit.\nAnd some changes changes may not be even meant for commit (e.g. the\nextension) but as guidance / validation for the earlier parts.\n\n\nI do realize this might look like I'm requiring you to do more work.\nSorry about that. 
I'm just thinking about how to move the patch forward\nand convince myself the approach is OK. Also, it's what I think works\nquite well for other patches discussed on this mailing list (I do this\nfor various patches I submitted, for example). And I'm not even sure it\nactually is more work.\n\n\nAs for the performance / profiling issues, I've read the reports and I'm\nnot sure I see something tremendously wrong. Yes, there are differences,\nbut 5% change can easily be noise, shift in binary layout, etc.\n\nUnfortunately, there's not much information about what exactly the tests\ndid, context (hardware, ...). So I don't know, really. But if you share\nenough information on how to reproduce this, I'm willing to take a look\nand investigate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jul 2024 14:36:40 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 22 Jul 2024 14:36:40 +0200,\n Tomas Vondra <[email protected]> wrote:\n\n> Thanks for the summary/responses. I still think it'd be better to post a\n> summary as a separate message, not as yet another post responding to\n> someone else. If I was reading the thread, I would not have noticed this\n> is meant to be a summary. I'd even consider putting a \"THREAD SUMMARY\"\n> title on the first line, or something like that. Up to you, of course.\n\nIt makes sense. I'll do it as a separated e-mail.\n\n> My suggestions would be to maintain this as a series of patches, making\n> incremental changes, with the \"more complex\" or \"more experimental\"\n> parts larger in the series. For example, I can imagine doing this:\n> \n> 0001 - minimal version of the patch (e.g. current v17)\n> 0002 - switch existing formats to the new interface\n> 0003 - extend the interface to add bits needed for columnar formats\n> 0004 - add DML to create/alter/drop custom implementations\n> 0005 - minimal patch with extension adding support for Arrow\n> \n> Or something like that. The idea is that we still have a coherent story\n> of what we're trying to do, and can discuss the incremental changes\n> (easier than looking at a large patch). It's even possible to commit\n> earlier parts before the later parts are quite cleanup up for commit.\n> And some changes changes may not be even meant for commit (e.g. the\n> extension) but as guidance / validation for the earlier parts.\n\nOK. I attach the v18 patch set:\n\n0001: add a basic feature (Copy{From,To}Routine)\n (same as the v17 but it's based on the current master)\n0002: use Copy{From,To}Rountine for the existing formats\n (this may not be committed because there is a\n profiling related concern)\n0003: add support for specifying custom format by \"COPY\n ... 
WITH (format 'my-format')\"\n (this also has a test)\n0004: export Copy{From,To}StateData\n (but this isn't enough to implement custom COPY\n FROM/TO handlers as an extension)\n0005: add opaque member to Copy{From,To}StateData and export\n some functions to read the next data and flush the buffer\n (we can implement a PoC Apache Arrow COPY FROM/TO\n handler as an extension with this)\n\nhttps://github.com/kou/pg-copy-arrow is a PoC Apache Arrow\nCOPY FROM/TO handler as an extension.\n\n\nNotes:\n\n* 0002: We use \"static inline\" and \"constant argument\" for\n optimization.\n* 0002: This hides NextCopyFromRawFields() in a public\n header because it's not used in PostgreSQL and we want to\n use \"static inline\" for it. If it's a problem, we can keep\n it and create an internal function for \"static inline\".\n* 0003: We use \"CREATE FUNCTION\" to register a custom COPY\n FROM/TO handler. It's the same approach as tablesample.\n* 0004 and 0005: We can mix them but this patch set split\n them for easy to review. 0004 just moves the existing\n codes. It doesn't change the existing codes.\n* PoC: I provide it as a separated repository instead of a\n patch because an extension exists as a separated project\n in general. If it's a problem, I can provide it as a patch\n for contrib/.\n* This patch set still has minimal Copy{From,To}Routine. For\n example, custom COPY FROM/TO handlers can't process their\n own options with this patch set. We may add more callbacks\n to Copy{From,To}Routine later based on real world use-cases.\n\n> Unfortunately, there's not much information about what exactly the tests\n> did, context (hardware, ...). So I don't know, really. But if you share\n> enough information on how to reproduce this, I'm willing to take a look\n> and investigate.\n\nThanks. 
Here is related information based on the past\ne-mails from Michael:\n\n* Use -O2 for optimization build flag\n (\"meson setup --buildtype=release\" may be used)\n* Use tmpfs for PGDATA\n* Disable fsync\n* Run on scissors (what is \"scissors\" in this context...?)\n https://www.postgresql.org/message-id/flat/Zbr6piWuVHDtFFOl%40paquier.xyz#dbbec4d5c54ef2317be01a54abaf495c\n* Unlogged table may be used\n* Use a table that has 30 integer columns (*1)\n* Use 5M rows (*2)\n* Use '/dev/null' for COPY TO (*3)\n* Use blackhole_am for COPY FROM (*4)\n https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n* perf is used but used options are unknown (sorry)\n\n(*1) This SQL may be used to create the table:\n\nCREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' int default 1';\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\nSELECT create_table_cols ('to_tab_30', 30);\nSELECT create_table_cols ('from_tab_30', 30);\n\n(*2) This SQL may be used to insert 5M rows:\n\nINSERT INTO to_tab_30 SELECT FROM generate_series(1, 5000000);\n\n(*3) This SQL may be used for COPY TO:\n\nCOPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n\n(*4) This SQL may be used for COPY FROM:\n\nCREATE EXTENSION blackhole_am;\nALTER TABLE from_tab_30 SET ACCESS METHOD blackhole_am;\nCOPY to_tab_30 TO '/tmp/to_tab_30.txt' WITH (FORMAT text);\nCOPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n\n\nIf there is enough information, could you try?\n\n\nThanks,\n-- \nkou", "msg_date": "Wed, 24 Jul 2024 17:30:59 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nTHREAD SUMMARY:\n\nProposal:\n\nHow about making COPY format extendable?\n\n\nBackground:\n\nCurrently, COPY TO/FROM supports only \"text\", \"csv\" and\n\"binary\" formats. There are some requests to support more\nCOPY formats. For example:\n\n\n* 2023-11: JSON and JSON lines [1]\n* 2022-04: Apache Arrow [2]\n* 2018-02: Apache Avro, Apache Parquet and Apache ORC [3]\n\nThere were discussions how to add support for more formats. [3][4]\nIn these discussions, we got a consensus about making COPY\nformat extendable.\n\n[1]: https://www.postgresql.org/message-id/flat/24e3ee88-ec1e-421b-89ae-8a47ee0d2df1%40joeconway.com#a5e6b8829f9a74dfc835f6f29f2e44c5\n[2]: https://www.postgresql.org/message-id/flat/CAGrfaBVyfm0wPzXVqm0%3Dh5uArYh9N_ij%2BsVpUtDHqkB%3DVyB3jw%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/flat/20180210151304.fonjztsynewldfba%40gmail.com\n[4]: https://www.postgresql.org/message-id/flat/3741749.1655952719%40sss.pgh.pa.us#2bb7af4a3d2c7669f9a49808d777a20d\n\n\nConcerns:\n\n* Performance: If we make COPY format extendable, it will\n introduce some overheads. We don't want to loss our\n optimization efforts for the current implementations by\n this. [5]\n* Extendability: We don't know which API set is enough for\n custom COPY format implementations yet. We don't want to\n provide too much APIs to reduce maintenance cost.\n\n[5]: https://www.postgresql.org/message-id/3741749.1655952719%40sss.pgh.pa.us\n\n\nImplementation:\n\nThe v18 patch set is the latest patch set. 
[6]\nIt includes the following patches:\n\n0001: This adds a basic feature (Copy{From,To}Routine)\n (This isn't enough for extending COPY format.\n This just extracts minimal procedure sets to be\n extendable as callback sets.)\n0002: This uses Copy{From,To}Rountine for the existing\n formats (text, csv and binary)\n (This may not be committed because there is a\n profiling related concern. See the following section\n for details)\n0003: This adds support for specifying custom format by\n \"COPY ... WITH (format 'my-format')\"\n (This also adds a test for this feature.)\n0004: This exports Copy{From,To}StateData\n (But this isn't enough to implement custom COPY\n FROM/TO handlers as an extension.)\n0005: This adds opaque member to Copy{From,To}StateData and\n export some functions to read the next data and flush\n the buffer\n (We can implement a PoC Apache Arrow COPY FROM/TO\n handler as an extension with this. [7])\n\n[6]: https://www.postgresql.org/message-id/flat/20240724.173059.909782980111496972.kou%40clear-code.com\n[7]: https://github.com/kou/pg-copy-arrow\n\n\nImplementation notes:\n\n* 0002: We use \"static inline\" and \"constant argument\" for\n optimization.\n* 0002: This hides NextCopyFromRawFields() in a public\n header because it's not used in PostgreSQL and we want to\n use \"static inline\" for it. If it's a problem, we can keep\n it and create an internal function for \"static inline\".\n* 0003: We use \"CREATE FUNCTION\" to register a custom COPY\n FROM/TO handler. It's the same approach as tablesample.\n* 0004 and 0005: We can mix them but this patch set split\n them for easy to review. 0004 just moves the existing\n codes. It doesn't change the existing codes.\n* PoC: I provide it as a separated repository instead of a\n patch because an extension exists as a separated project\n in general. If it's a problem, I can provide it as a patch\n for contrib/.\n* This patch set still has minimal Copy{From,To}Routine. For\n example, custom COPY FROM/TO handlers can't process their\n own options with this patch set. We may add more callbacks\n to Copy{From,To}Routine later based on real world use-cases.\n\n\nPerformance concern:\n\nWe have a benchmark result and a profile for the change that\nuses Copy{From,To}Routine for the existing formats. [8] They\nare based on the v15 patch but there are no significant\ndifference between the v15 patch and v18 patch set.\n\nThese results show the followings:\n\n* Runtime: The patched version is faster than HEAD.\n * The patched version: 6232ms in average\n * HEAD: 6550ms in average\n* Profile: The patched version spends more percents than\n HEAD in a core function.\n * The patched version: 85.61% in CopyOneRowTo()\n * HEAD: 80.35% in CopyOneRowTo()\n\n[8]: https://www.postgresql.org/message-id/flat/ZdbtQJ-p5H1_EDwE%40paquier.xyz\n\n\nHere are related information for this benchmark/profile:\n\n* Use -O2 for optimization build flag\n (\"meson setup --buildtype=release\" may be used)\n* Use tmpfs for PGDATA\n* Disable fsync\n* Run on scissors (what is \"scissors\" in this context...?) 
[9]\n* Unlogged table may be used\n* Use a table that has 30 integer columns (*1)\n* Use 5M rows (*2)\n* Use '/dev/null' for COPY TO (*3)\n* Use blackhole_am for COPY FROM (*4)\n https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n* perf is used but used options are unknown (sorry)\n\n\n(*1) This SQL may be used to create the table:\n\nCREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' int default 1';\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\nSELECT create_table_cols ('to_tab_30', 30);\nSELECT create_table_cols ('from_tab_30', 30);\n\n\n(*2) This SQL may be used to insert 5M rows:\n\nINSERT INTO to_tab_30 SELECT FROM generate_series(1, 5000000);\n\n\n(*3) This SQL may be used for COPY TO:\n\nCOPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n\n\n(*4) This SQL may be used for COPY FROM:\n\nCREATE EXTENSION blackhole_am;\nALTER TABLE from_tab_30 SET ACCESS METHOD blackhole_am;\nCOPY to_tab_30 TO '/tmp/to_tab_30.txt' WITH (FORMAT text);\nCOPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n\n\n[9]: https://www.postgresql.org/message-id/flat/Zbr6piWuVHDtFFOl%40paquier.xyz#dbbec4d5c54ef2317be01a54abaf495c\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Thu, 25 Jul 2024 13:51:38 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "\r\n\r\n> On Jul 25, 2024, at 12:51, Sutou Kouhei <[email protected]> wrote:\r\n> \r\n> Hi,\r\n> \r\n> THREAD SUMMARY:\r\n\r\nVery nice summary.\r\n\r\n> \r\n> Implementation:\r\n> \r\n> The v18 patch set is the latest patch set. [6]\r\n> It includes the following patches:\r\n> \r\n> 0001: This adds a basic feature (Copy{From,To}Routine)\r\n> (This isn't enough for extending COPY format.\r\n> This just extracts minimal procedure sets to be\r\n> extendable as callback sets.)\r\n> 0002: This uses Copy{From,To}Rountine for the existing\r\n> formats (text, csv and binary)\r\n> (This may not be committed because there is a\r\n> profiling related concern. See the following section\r\n> for details)\r\n> 0003: This adds support for specifying custom format by\r\n> \"COPY ... WITH (format 'my-format')\"\r\n> (This also adds a test for this feature.)\r\n> 0004: This exports Copy{From,To}StateData\r\n> (But this isn't enough to implement custom COPY\r\n> FROM/TO handlers as an extension.)\r\n> 0005: This adds opaque member to Copy{From,To}StateData and\r\n> export some functions to read the next data and flush\r\n> the buffer\r\n> (We can implement a PoC Apache Arrow COPY FROM/TO\r\n> handler as an extension with this. [7])\r\n> \r\n> Thanks,\r\n> --\r\n> kou\r\n> \r\n\r\nThis review is for 0001 only because the other patches are not ready\r\nfor commit.\r\n\r\nThe v18-0001 patch applies cleanly to HEAD. “make check-world” also\r\nruns cleanly. 
The patch looks good for me.\r\n\r\n\r\nRegards,\r\nYong", "msg_date": "Fri, 26 Jul 2024 08:57:42 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi Sutou,\n\nOn Wed, Jul 24, 2024 at 4:31 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 22 Jul 2024 14:36:40 +0200,\n> Tomas Vondra <[email protected]> wrote:\n>\n> > Thanks for the summary/responses. I still think it'd be better to post a\n> > summary as a separate message, not as yet another post responding to\n> > someone else. If I was reading the thread, I would not have noticed this\n> > is meant to be a summary. I'd even consider putting a \"THREAD SUMMARY\"\n> > title on the first line, or something like that. Up to you, of course.\n>\n> It makes sense. I'll do it as a separated e-mail.\n>\n> > My suggestions would be to maintain this as a series of patches, making\n> > incremental changes, with the \"more complex\" or \"more experimental\"\n> > parts larger in the series. For example, I can imagine doing this:\n> >\n> > 0001 - minimal version of the patch (e.g. current v17)\n> > 0002 - switch existing formats to the new interface\n> > 0003 - extend the interface to add bits needed for columnar formats\n> > 0004 - add DML to create/alter/drop custom implementations\n> > 0005 - minimal patch with extension adding support for Arrow\n> >\n> > Or something like that. The idea is that we still have a coherent story\n> > of what we're trying to do, and can discuss the incremental changes\n> > (easier than looking at a large patch). It's even possible to commit\n> > earlier parts before the later parts are quite cleanup up for commit.\n> > And some changes changes may not be even meant for commit (e.g. the\n> > extension) but as guidance / validation for the earlier parts.\n>\n> OK. I attach the v18 patch set:\n>\n> 0001: add a basic feature (Copy{From,To}Routine)\n> (same as the v17 but it's based on the current master)\n> 0002: use Copy{From,To}Rountine for the existing formats\n> (this may not be committed because there is a\n> profiling related concern)\n> 0003: add support for specifying custom format by \"COPY\n> ... WITH (format 'my-format')\"\n> (this also has a test)\n> 0004: export Copy{From,To}StateData\n> (but this isn't enough to implement custom COPY\n> FROM/TO handlers as an extension)\n> 0005: add opaque member to Copy{From,To}StateData and export\n> some functions to read the next data and flush the buffer\n> (we can implement a PoC Apache Arrow COPY FROM/TO\n> handler as an extension with this)\n>\n> https://github.com/kou/pg-copy-arrow is a PoC Apache Arrow\n> COPY FROM/TO handler as an extension.\n>\n>\n> Notes:\n>\n> * 0002: We use \"static inline\" and \"constant argument\" for\n> optimization.\n> * 0002: This hides NextCopyFromRawFields() in a public\n> header because it's not used in PostgreSQL and we want to\n> use \"static inline\" for it. If it's a problem, we can keep\n> it and create an internal function for \"static inline\".\n> * 0003: We use \"CREATE FUNCTION\" to register a custom COPY\n> FROM/TO handler. It's the same approach as tablesample.\n> * 0004 and 0005: We can mix them but this patch set split\n> them for easy to review. 0004 just moves the existing\n> codes. 
It doesn't change the existing codes.\n> * PoC: I provide it as a separated repository instead of a\n> patch because an extension exists as a separated project\n> in general. If it's a problem, I can provide it as a patch\n> for contrib/.\n> * This patch set still has minimal Copy{From,To}Routine. For\n> example, custom COPY FROM/TO handlers can't process their\n> own options with this patch set. We may add more callbacks\n> to Copy{From,To}Routine later based on real world use-cases.\n>\n> > Unfortunately, there's not much information about what exactly the tests\n> > did, context (hardware, ...). So I don't know, really. But if you share\n> > enough information on how to reproduce this, I'm willing to take a look\n> > and investigate.\n>\n> Thanks. Here is related information based on the past\n> e-mails from Michael:\n>\n> * Use -O2 for optimization build flag\n> (\"meson setup --buildtype=release\" may be used)\n> * Use tmpfs for PGDATA\n> * Disable fsync\n> * Run on scissors (what is \"scissors\" in this context...?)\n> https://www.postgresql.org/message-id/flat/Zbr6piWuVHDtFFOl%40paquier.xyz#dbbec4d5c54ef2317be01a54abaf495c\n> * Unlogged table may be used\n> * Use a table that has 30 integer columns (*1)\n> * Use 5M rows (*2)\n> * Use '/dev/null' for COPY TO (*3)\n> * Use blackhole_am for COPY FROM (*4)\n> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n> * perf is used but used options are unknown (sorry)\n>\n> (*1) This SQL may be used to create the table:\n>\n> CREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\n> RETURNS VOID AS\n> $func$\n> DECLARE\n> query text;\n> BEGIN\n> query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n> FOR i IN 1..num_cols LOOP\n> query := query || 'a_' || i::text || ' int default 1';\n> IF i != num_cols THEN\n> query := query || ', ';\n> END IF;\n> END LOOP;\n> query := query || ')';\n> EXECUTE format(query);\n> END\n> $func$ LANGUAGE plpgsql;\n> SELECT create_table_cols ('to_tab_30', 30);\n> SELECT create_table_cols ('from_tab_30', 30);\n>\n> (*2) This SQL may be used to insert 5M rows:\n>\n> INSERT INTO to_tab_30 SELECT FROM generate_series(1, 5000000);\n>\n> (*3) This SQL may be used for COPY TO:\n>\n> COPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n>\n> (*4) This SQL may be used for COPY FROM:\n>\n> CREATE EXTENSION blackhole_am;\n> ALTER TABLE from_tab_30 SET ACCESS METHOD blackhole_am;\n> COPY to_tab_30 TO '/tmp/to_tab_30.txt' WITH (FORMAT text);\n> COPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n>\n>\n> If there is enough information, could you try?\n>\nThanks for updating the patches, I applied them and test\nin my local machine, I did not use tmpfs in my test, I guess\nif I run the tests enough rounds, the OS will cache the\npages, below is my numbers(I run each test 30 times, I\ncount for the last 10 ones):\n\nHEAD PATCHED\n\nCOPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n\n5628.280 ms 5679.860 ms\n5583.144 ms 5588.078 ms\n5604.444 ms 5628.029 ms\n5617.133 ms 5613.926 ms\n5575.570 ms 5601.045 ms\n5634.828 ms 5616.409 ms\n5693.489 ms 5637.434 ms\n5585.857 ms 5609.531 ms\n5613.948 ms 5643.629 ms\n5610.394 ms 5580.206 ms\n\nCOPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n\n3929.955 ms 4050.895 ms\n3909.061 ms 3890.156 ms\n3940.272 ms 3927.614 ms\n3907.535 ms 3925.560 ms\n3952.719 ms 3942.141 ms\n3933.751 ms 3904.250 ms\n3958.274 ms 4025.581 ms\n3937.065 ms 3894.149 ms\n3949.896 ms 3933.878 ms\n3925.399 ms 3936.170 ms\n\nI did not see obvious performance degradation, 
maybe it's\nbecause I did not use tmpfs, but I think this OTH means\nthat the *function call* and *if branch* added for each row\nis not the bottleneck of the whole execution path.\n\nIn 0001,\n\n+typedef struct CopyFromRoutine\n+{\n+ /*\n+ * Called when COPY FROM is started to set up the input functions\n+ * associated to the relation's attributes writing to. `finfo` can be\n+ * optionally filled to provide the catalog information of the input\n+ * function. `typioparam` can be optionally filled to define the OID of\n+ * the type to pass to the input function. `atttypid` is the OID of data\n+ * type used by the relation's attribute.\n\n+typedef struct CopyToRoutine\n+{\n+ /*\n+ * Called when COPY TO is started to set up the output functions\n+ * associated to the relation's attributes reading from. `finfo` can be\n+ * optionally filled. `atttypid` is the OID of data type used by the\n+ * relation's attribute.\n\nThe second comment has a simplified description for `finfo`, I think it\nshould match the first by:\n\n`finfo` can be optionally filled to provide the catalog information of the\noutput function.\n\nAfter I post the patch diffs, the gmail grammer shows some hints that\nit should be *associated with* rather than *associated to*, but I'm\nnot sure about this one.\n\nI think the patches are in good shape, I can help to do some\nfurther tests if needed, thanks for working on this.\n\n>\n> Thanks,\n> --\n> kou\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 28 Jul 2024 22:49:47 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On 7/25/24 06:51, Sutou Kouhei wrote:\n> Hi,\n> \n> ...\n>\n> Here are related information for this benchmark/profile:\n> \n> * Use -O2 for optimization build flag\n> (\"meson setup --buildtype=release\" may be used)\n> * Use tmpfs for PGDATA\n> * Disable fsync\n> * Run on scissors (what is \"scissors\" in this context...?) [9]\n> * Unlogged table may be used\n> * Use a table that has 30 integer columns (*1)\n> * Use 5M rows (*2)\n> * Use '/dev/null' for COPY TO (*3)\n> * Use blackhole_am for COPY FROM (*4)\n> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n> * perf is used but used options are unknown (sorry)\n> \n> \n> (*1) This SQL may be used to create the table:\n> \n> CREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\n> RETURNS VOID AS\n> $func$\n> DECLARE\n> query text;\n> BEGIN\n> query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n> FOR i IN 1..num_cols LOOP\n> query := query || 'a_' || i::text || ' int default 1';\n> IF i != num_cols THEN\n> query := query || ', ';\n> END IF;\n> END LOOP;\n> query := query || ')';\n> EXECUTE format(query);\n> END\n> $func$ LANGUAGE plpgsql;\n> SELECT create_table_cols ('to_tab_30', 30);\n> SELECT create_table_cols ('from_tab_30', 30);\n> \n> \n> (*2) This SQL may be used to insert 5M rows:\n> \n> INSERT INTO to_tab_30 SELECT FROM generate_series(1, 5000000);\n> \n> \n> (*3) This SQL may be used for COPY TO:\n> \n> COPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n> \n> \n> (*4) This SQL may be used for COPY FROM:\n> \n> CREATE EXTENSION blackhole_am;\n> ALTER TABLE from_tab_30 SET ACCESS METHOD blackhole_am;\n> COPY to_tab_30 TO '/tmp/to_tab_30.txt' WITH (FORMAT text);\n> COPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n> \n\nThanks for the benchmark instructions and updated patches. 
Very helpful!\n\nI wrote a simple script to automate the benchmark - it just runs these\ntests with different parameters (number of columns and number of\nimported/exported rows). See the run.sh attachment, along with two CSV\nresults from current master and with all patches applied.\n\nThe attached PDF has a simple summary, with a median duration for each\ncombination, and a comparison (patched/master). The results are from my\nlaptop, so it's probably noisy, and it would be good to test it on a\nmore realistic hardware (for perf-sensitive things).\n\n- For COPY FROM there is no difference - the results are within 1% of\nmaster, and there's no systemic difference.\n\n- For COPY TO it's a different story, though. There's a pretty clear\nregression, by ~5%. It's a bit interesting the correlation with the\nnumber of columns is not stronger ...\n\nI did do some basic profiling, and the perf diff looks like this:\n\n# Event 'task-clock:upppH'\n#\n# Baseline Delta Abs Shared Object Symbol\n\n# ........ ......... .............\n.........................................\n#\n 13.34% -12.94% postgres [.] CopyOneRowTo\n +10.75% postgres [.] CopyToTextOneRow\n 4.31% +2.84% postgres [.] pg_ltoa\n 10.96% +1.15% postgres [.] CopySendChar\n 8.68% +0.78% postgres [.] AllocSetAlloc\n 10.89% -0.70% postgres [.] CopyAttributeOutText\n 5.01% -0.47% postgres [.] enlargeStringInfo\n 4.95% -0.42% postgres [.] OutputFunctionCall\n 5.29% -0.37% postgres [.] int4out\n 5.90% -0.31% postgres [.] appendBinaryStringInfo\n +0.29% postgres [.] CopyToStateFlush\n 0.27% -0.27% postgres [.] memcpy@plt\n\nNot particularly surprising that CopyToTextOneRow has +11%, but that's\nbecause it's a new function. The perf difference is perhaps due to\npg_ltoa/CopySendChar, but not sure why.\n\nI also did some flamegraph - attached is for master, patched and diff.\n\nIt's interesting the main change in the flamegraphs is CopyToStateFlush\npops up on the left side. Because, what is that about? That is a thing\nintroduced in the 0005 patch, so maybe the regression is not strictly\nabout the existing formats moving to the new API, but due to something\nelse in a later version of the patch?\n\nIt would be good do run the tests for each patch in the series, and then\nsee when does the regression actually appear.\n\nFWIW None of this actually proves this is an issue in practice. 
No one\nwill be exporting into /dev/null or importing into blackhole, and I'd\nbet the difference gets way smaller for more realistic cases.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 29 Jul 2024 14:17:08 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAEG8a3+KN=uofw5ksnCwh5s3m_VcfFYd=jTzcpO5uVLBHwSQEg@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Sun, 28 Jul 2024 22:49:47 +0800,\n Junwang Zhao <[email protected]> wrote:\n\n> Thanks for updating the patches, I applied them and test\n> in my local machine, I did not use tmpfs in my test, I guess\n> if I run the tests enough rounds, the OS will cache the\n> pages, below is my numbers(I run each test 30 times, I\n> count for the last 10 ones):\n> \n> HEAD PATCHED\n> \n> COPY to_tab_30 TO '/dev/null' WITH (FORMAT text);\n> \n> 5628.280 ms 5679.860 ms\n> 5583.144 ms 5588.078 ms\n> 5604.444 ms 5628.029 ms\n> 5617.133 ms 5613.926 ms\n> 5575.570 ms 5601.045 ms\n> 5634.828 ms 5616.409 ms\n> 5693.489 ms 5637.434 ms\n> 5585.857 ms 5609.531 ms\n> 5613.948 ms 5643.629 ms\n> 5610.394 ms 5580.206 ms\n> \n> COPY from_tab_30 FROM '/tmp/to_tab_30.txt' WITH (FORMAT text);\n> \n> 3929.955 ms 4050.895 ms\n> 3909.061 ms 3890.156 ms\n> 3940.272 ms 3927.614 ms\n> 3907.535 ms 3925.560 ms\n> 3952.719 ms 3942.141 ms\n> 3933.751 ms 3904.250 ms\n> 3958.274 ms 4025.581 ms\n> 3937.065 ms 3894.149 ms\n> 3949.896 ms 3933.878 ms\n> 3925.399 ms 3936.170 ms\n> \n> I did not see obvious performance degradation, maybe it's\n> because I did not use tmpfs, but I think this OTH means\n> that the *function call* and *if branch* added for each row\n> is not the bottleneck of the whole execution path.\n\nThanks for sharing your numbers. I agree with there are no\nobvious performance degradation.\n\n\n> In 0001,\n> \n> +typedef struct CopyFromRoutine\n> +{\n> + /*\n> + * Called when COPY FROM is started to set up the input functions\n> + * associated to the relation's attributes writing to. `finfo` can be\n> + * optionally filled to provide the catalog information of the input\n> + * function. `typioparam` can be optionally filled to define the OID of\n> + * the type to pass to the input function. `atttypid` is the OID of data\n> + * type used by the relation's attribute.\n> \n> +typedef struct CopyToRoutine\n> +{\n> + /*\n> + * Called when COPY TO is started to set up the output functions\n> + * associated to the relation's attributes reading from. `finfo` can be\n> + * optionally filled. `atttypid` is the OID of data type used by the\n> + * relation's attribute.\n> \n> The second comment has a simplified description for `finfo`, I think it\n> should match the first by:\n> \n> `finfo` can be optionally filled to provide the catalog information of the\n> output function.\n\nGood catch. I'll update it as suggested in the next patch set.\n\n> After I post the patch diffs, the gmail grammer shows some hints that\n> it should be *associated with* rather than *associated to*, but I'm\n> not sure about this one.\n\nThanks. 
I'll use \"associated with\".\n\n> I think the patches are in good shape, I can help to do some\n> further tests if needed, thanks for working on this.\n\nThanks!\n\n-- \nkou\n\n\n", "msg_date": "Tue, 30 Jul 2024 11:58:24 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 29 Jul 2024 14:17:08 +0200,\n Tomas Vondra <[email protected]> wrote:\n\n> I wrote a simple script to automate the benchmark - it just runs these\n> tests with different parameters (number of columns and number of\n> imported/exported rows). See the run.sh attachment, along with two CSV\n> results from current master and with all patches applied.\n\nThanks. I also used the script with some modifications:\n\n1. Create a test database automatically\n2. Enable blackhole_am automatically\n3. Create create_table_cols() automatically\n\nI attach it. I also attach results of master and patched. My\nresults are from my desktop. So it's probably noisy.\n\n> - For COPY FROM there is no difference - the results are within 1% of\n> master, and there's no systemic difference.\n> \n> - For COPY TO it's a different story, though. There's a pretty clear\n> regression, by ~5%. It's a bit interesting the correlation with the\n> number of columns is not stronger ...\n\nMy results showed different trend:\n\n- COPY FROM: Patched is about 15-20% slower than master\n- COPY TO: Patched is a bit faster than master\n\nHere are some my numbers:\n\ntype\tn_cols\tn_rows\tdiff\tmaster\t\tpatched\n----------------------------------------------------------\nTO\t5\t1\t100.56%\t218.376000\t219.609000\nFROM\t5\t1\t113.33%\t168.493000\t190.954000\n...\nTO\t5\t5\t100.60%\t1037.773000\t1044.045000\nFROM\t5\t5\t116.46%\t767.966000\t894.377000\n...\nTO\t5\t10\t100.15%\t2092.245000\t2095.472000\nFROM\t5\t10\t115.91%\t1508.160000\t1748.130000\nTO\t10\t1\t98.62%\t353.087000\t348.214000\nFROM\t10\t1\t118.65%\t260.551000\t309.133000\n...\nTO\t10\t5\t96.89%\t1724.061000\t1670.427000\nFROM\t10\t5\t119.92%\t1224.098000\t1467.941000\n...\nTO\t10\t10\t98.70%\t3444.291000\t3399.538000\nFROM\t10\t10\t118.79%\t2462.314000\t2924.866000\nTO\t15\t1\t97.71%\t492.082000\t480.802000\nFROM\t15\t1\t115.59%\t347.820000\t402.033000\n...\nTO\t15\t5\t98.32%\t2402.419000\t2362.140000\nFROM\t15\t5\t115.48%\t1657.594000\t1914.245000\n...\nTO\t15\t10\t96.91%\t4830.319000\t4681.145000\nFROM\t15\t10\t115.09%\t3304.798000\t3803.542000\nTO\t20\t1\t96.05%\t629.828000\t604.939000\nFROM\t20\t1\t118.50%\t438.673000\t519.839000\n...\nTO\t20\t5\t97.15%\t3084.210000\t2996.331000\nFROM\t20\t5\t115.35%\t2110.909000\t2435.032000\n...\nTO\t25\t1\t98.29%\t764.779000\t751.684000\nFROM\t25\t1\t115.13%\t519.686000\t598.301000\n...\nTO\t25\t5\t94.08%\t3843.996000\t3616.614000\nFROM\t25\t5\t115.62%\t2554.008000\t2952.928000\n...\nTO\t25\t10\t97.41%\t7504.865000\t7310.549000\nFROM\t25\t10\t117.25%\t4994.463000\t5856.029000\nTO\t30\t1\t94.39%\t906.324000\t855.503000\nFROM\t30\t1\t119.60%\t604.110000\t722.491000\n...\nTO\t30\t5\t96.50%\t4419.907000\t4265.417000\nFROM\t30\t5\t116.97%\t2932.883000\t3430.556000\n...\nTO\t30\t10\t94.39%\t8974.878000\t8470.991000\nFROM\t30\t10\t117.84%\t5800.793000\t6835.900000\n----\n\nSee the attached diff.txt for full numbers.\nI also attach scripts to generate the diff.txt. 
Here is the\ncommand line I used:\n\n----\nruby diff.rb <(ruby aggregate.rb master.result) <(ruby aggregate.rb patched.result) | tee diff.txt\n----\n\nMy environment:\n\n* Debian GNU/Linux sid\n* gcc (Debian 13.3.0-2) 13.3.0\n* AMD Ryzen 9 3900X 12-Core Processor\n\nI'll look into this.\n\nIf someone is interested in this proposal, could you share\nyour numbers?\n\n> It's interesting the main change in the flamegraphs is CopyToStateFlush\n> pops up on the left side. Because, what is that about? That is a thing\n> introduced in the 0005 patch, so maybe the regression is not strictly\n> about the existing formats moving to the new API, but due to something\n> else in a later version of the patch?\n\nAh, making static CopySendEndOfRow() a to non-static function\n(CopyToStateFlush()) may be the reason of this. Could you\ntry the attached v19 patch? It changes the 0005 patch:\n\n* It reverts the static change\n* It adds a new non-static function that just exports\n CopySendEndOfRow()\n\n\nThanks,\n-- \nkou\n\n#!/usr/bin/bash\n\nDIR=${1:-$(pwd)}\n\npsql postgres > /dev/null 2>&1 <<EOF\nDROP DATABASE IF EXISTS test;\nCREATE DATABASE test;\nEOF\npsql test > /dev/null 2>&1 <<EOF\nCREATE EXTENSION blackhole_am;\nCREATE OR REPLACE FUNCTION create_table_cols(tabname text, num_cols int)\nRETURNS VOID AS\n\\$func\\$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE UNLOGGED TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' int default 1';\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n\\$func\\$ LANGUAGE plpgsql;\nEOF\n\nfor c in $(seq 5 5 30); do\n\n\tfor rows in $(seq 1 10); do\n\n\t\tpsql test > /dev/null 2>&1 <<EOF\nDROP TABLE IF EXISTS to_table;\nDROP TABLE IF EXISTS from_table;\n\nSELECT create_table_cols ('to_table', $c);\nSELECT create_table_cols ('from_table', $c);\n\nINSERT INTO to_table SELECT FROM generate_series(1, $rows * 1000000);\n\nCOPY to_table TO '$DIR/test.data' WITH (FORMAT text);\n\nALTER TABLE from_table SET ACCESS METHOD blackhole_am;\nEOF\n\n\t\tfor r in $(seq 1 10); do\n\n\t\t\ts=$(psql test -t -A -c \"SELECT EXTRACT(EPOCH FROM now())\")\n\t\t\tpsql test -c \"COPY to_table TO '/dev/null' WITH (FORMAT text)\" > /dev/null 2>&1\n\t\t\td=$(psql test -t -A -c \"SELECT 1000 * (EXTRACT(EPOCH FROM now()) - $s)\")\n\n\t\t\techo \"COPY_TO\" $c $rows $r $d\n\n\t\tdone\n\n\t\t# run COPY FROM 10x\n\t\tfor r in $(seq 1 10); do\n\n\t\t\ts=$(psql test -t -A -c \"SELECT EXTRACT(EPOCH FROM now())\")\n\t\t\tpsql test -c \"COPY from_table FROM '$DIR/test.data' WITH (FORMAT text)\" > /dev/null 2>&1\n\t\t\td=$(psql test -t -A -c \"SELECT 1000 * (EXTRACT(EPOCH FROM now()) - $s)\")\n\n\t\t\techo \"COPY_FROM\" $c $rows $r $d\n\n\t\tdone\n\n\tdone\n\ndone\n\nCOPY_TO 5 1 1 212.831000\nCOPY_TO 5 1 2 208.677000\nCOPY_TO 5 1 3 215.074000\nCOPY_TO 5 1 4 218.376000\nCOPY_TO 5 1 5 219.056000\nCOPY_TO 5 1 6 218.237000\nCOPY_TO 5 1 7 234.709000\nCOPY_TO 5 1 8 220.561000\nCOPY_TO 5 1 9 219.747000\nCOPY_TO 5 1 10 211.881000\nCOPY_FROM 5 1 1 166.336000\nCOPY_FROM 5 1 2 166.000000\nCOPY_FROM 5 1 3 166.776000\nCOPY_FROM 5 1 4 168.493000\nCOPY_FROM 5 1 5 169.632000\nCOPY_FROM 5 1 6 164.290000\nCOPY_FROM 5 1 7 167.841000\nCOPY_FROM 5 1 8 169.336000\nCOPY_FROM 5 1 9 172.948000\nCOPY_FROM 5 1 10 168.893000\nCOPY_TO 5 2 1 412.065000\nCOPY_TO 5 2 2 420.758000\nCOPY_TO 5 2 3 421.387000\nCOPY_TO 5 2 4 402.165000\nCOPY_TO 5 2 5 414.407000\nCOPY_TO 5 2 6 423.387000\nCOPY_TO 5 2 7 426.431000\nCOPY_TO 5 
2 8 424.798000\nCOPY_TO 5 2 9 419.588000\nCOPY_TO 5 2 10 425.688000\nCOPY_FROM 5 2 1 308.856000\nCOPY_FROM 5 2 2 319.487000\nCOPY_FROM 5 2 3 316.488000\nCOPY_FROM 5 2 4 315.212000\nCOPY_FROM 5 2 5 316.066000\nCOPY_FROM 5 2 6 310.381000\nCOPY_FROM 5 2 7 322.447000\nCOPY_FROM 5 2 8 318.206000\nCOPY_FROM 5 2 9 322.588000\nCOPY_FROM 5 2 10 317.101000\nCOPY_TO 5 3 1 633.255000\nCOPY_TO 5 3 2 616.202000\nCOPY_TO 5 3 3 610.864000\nCOPY_TO 5 3 4 628.803000\nCOPY_TO 5 3 5 638.041000\nCOPY_TO 5 3 6 647.732000\nCOPY_TO 5 3 7 624.457000\nCOPY_TO 5 3 8 624.007000\nCOPY_TO 5 3 9 616.109000\nCOPY_TO 5 3 10 624.354000\nCOPY_FROM 5 3 1 469.425000\nCOPY_FROM 5 3 2 471.284000\nCOPY_FROM 5 3 3 468.651000\nCOPY_FROM 5 3 4 465.177000\nCOPY_FROM 5 3 5 466.697000\nCOPY_FROM 5 3 6 463.886000\nCOPY_FROM 5 3 7 480.866000\nCOPY_FROM 5 3 8 465.048000\nCOPY_FROM 5 3 9 469.349000\nCOPY_FROM 5 3 10 467.342000\nCOPY_TO 5 4 1 837.447000\nCOPY_TO 5 4 2 848.536000\nCOPY_TO 5 4 3 867.580000\nCOPY_TO 5 4 4 831.669000\nCOPY_TO 5 4 5 839.633000\nCOPY_TO 5 4 6 846.060000\nCOPY_TO 5 4 7 824.590000\nCOPY_TO 5 4 8 836.084000\nCOPY_TO 5 4 9 845.936000\nCOPY_TO 5 4 10 851.128000\nCOPY_FROM 5 4 1 604.809000\nCOPY_FROM 5 4 2 617.653000\nCOPY_FROM 5 4 3 615.883000\nCOPY_FROM 5 4 4 616.633000\nCOPY_FROM 5 4 5 617.737000\nCOPY_FROM 5 4 6 617.361000\nCOPY_FROM 5 4 7 608.998000\nCOPY_FROM 5 4 8 621.576000\nCOPY_FROM 5 4 9 619.759000\nCOPY_FROM 5 4 10 625.312000\nCOPY_TO 5 5 1 1057.027000\nCOPY_TO 5 5 2 1038.905000\nCOPY_TO 5 5 3 1034.425000\nCOPY_TO 5 5 4 1048.834000\nCOPY_TO 5 5 5 1069.693000\nCOPY_TO 5 5 6 1019.558000\nCOPY_TO 5 5 7 1007.099000\nCOPY_TO 5 5 8 1021.759000\nCOPY_TO 5 5 9 1037.773000\nCOPY_TO 5 5 10 1008.977000\nCOPY_FROM 5 5 1 753.724000\nCOPY_FROM 5 5 2 769.060000\nCOPY_FROM 5 5 3 765.603000\nCOPY_FROM 5 5 4 769.101000\nCOPY_FROM 5 5 5 767.057000\nCOPY_FROM 5 5 6 767.966000\nCOPY_FROM 5 5 7 781.901000\nCOPY_FROM 5 5 8 772.262000\nCOPY_FROM 5 5 9 762.266000\nCOPY_FROM 5 5 10 767.036000\nCOPY_TO 5 6 1 1245.932000\nCOPY_TO 5 6 2 1254.330000\nCOPY_TO 5 6 3 1254.507000\nCOPY_TO 5 6 4 1255.708000\nCOPY_TO 5 6 5 1238.643000\nCOPY_TO 5 6 6 1259.656000\nCOPY_TO 5 6 7 1262.356000\nCOPY_TO 5 6 8 1253.554000\nCOPY_TO 5 6 9 1262.281000\nCOPY_TO 5 6 10 1253.491000\nCOPY_FROM 5 6 1 940.044000\nCOPY_FROM 5 6 2 938.479000\nCOPY_FROM 5 6 3 926.584000\nCOPY_FROM 5 6 4 920.494000\nCOPY_FROM 5 6 5 908.873000\nCOPY_FROM 5 6 6 917.936000\nCOPY_FROM 5 6 7 917.126000\nCOPY_FROM 5 6 8 921.488000\nCOPY_FROM 5 6 9 917.245000\nCOPY_FROM 5 6 10 916.243000\nCOPY_TO 5 7 1 1430.487000\nCOPY_TO 5 7 2 1427.373000\nCOPY_TO 5 7 3 1497.434000\nCOPY_TO 5 7 4 1463.688000\nCOPY_TO 5 7 5 1441.485000\nCOPY_TO 5 7 6 1474.119000\nCOPY_TO 5 7 7 1514.650000\nCOPY_TO 5 7 8 1478.208000\nCOPY_TO 5 7 9 1495.704000\nCOPY_TO 5 7 10 1459.739000\nCOPY_FROM 5 7 1 1077.841000\nCOPY_FROM 5 7 2 1084.081000\nCOPY_FROM 5 7 3 1093.168000\nCOPY_FROM 5 7 4 1078.736000\nCOPY_FROM 5 7 5 1076.685000\nCOPY_FROM 5 7 6 1110.902000\nCOPY_FROM 5 7 7 1079.210000\nCOPY_FROM 5 7 8 1067.793000\nCOPY_FROM 5 7 9 1079.762000\nCOPY_FROM 5 7 10 1084.935000\nCOPY_TO 5 8 1 1650.517000\nCOPY_TO 5 8 2 1697.566000\nCOPY_TO 5 8 3 1667.700000\nCOPY_TO 5 8 4 1656.847000\nCOPY_TO 5 8 5 1659.966000\nCOPY_TO 5 8 6 1702.676000\nCOPY_TO 5 8 7 1696.178000\nCOPY_TO 5 8 8 1682.269000\nCOPY_TO 5 8 9 1690.789000\nCOPY_TO 5 8 10 1703.132000\nCOPY_FROM 5 8 1 1253.866000\nCOPY_FROM 5 8 2 1245.838000\nCOPY_FROM 5 8 3 1231.959000\nCOPY_FROM 5 8 4 1216.493000\nCOPY_FROM 5 8 5 1213.282000\nCOPY_FROM 5 8 6 1213.838000\nCOPY_FROM 
5 8 7 1251.825000\nCOPY_FROM 5 8 8 1245.100000\nCOPY_FROM 5 8 9 1261.415000\nCOPY_FROM 5 8 10 1212.752000\nCOPY_TO 5 9 1 1850.090000\nCOPY_TO 5 9 2 1899.929000\nCOPY_TO 5 9 3 1860.290000\nCOPY_TO 5 9 4 1832.055000\nCOPY_TO 5 9 5 1857.414000\nCOPY_TO 5 9 6 1879.424000\nCOPY_TO 5 9 7 1875.373000\nCOPY_TO 5 9 8 1854.969000\nCOPY_TO 5 9 9 1915.033000\nCOPY_TO 5 9 10 1866.939000\nCOPY_FROM 5 9 1 1370.836000\nCOPY_FROM 5 9 2 1379.806000\nCOPY_FROM 5 9 3 1372.183000\nCOPY_FROM 5 9 4 1367.779000\nCOPY_FROM 5 9 5 1368.464000\nCOPY_FROM 5 9 6 1380.544000\nCOPY_FROM 5 9 7 1363.804000\nCOPY_FROM 5 9 8 1362.463000\nCOPY_FROM 5 9 9 1371.727000\nCOPY_FROM 5 9 10 1377.122000\nCOPY_TO 5 10 1 2058.078000\nCOPY_TO 5 10 2 2064.015000\nCOPY_TO 5 10 3 2120.218000\nCOPY_TO 5 10 4 2060.682000\nCOPY_TO 5 10 5 2105.438000\nCOPY_TO 5 10 6 2076.790000\nCOPY_TO 5 10 7 2095.560000\nCOPY_TO 5 10 8 2092.245000\nCOPY_TO 5 10 9 2034.601000\nCOPY_TO 5 10 10 2094.292000\nCOPY_FROM 5 10 1 1557.934000\nCOPY_FROM 5 10 2 1517.610000\nCOPY_FROM 5 10 3 1506.637000\nCOPY_FROM 5 10 4 1515.831000\nCOPY_FROM 5 10 5 1490.391000\nCOPY_FROM 5 10 6 1507.338000\nCOPY_FROM 5 10 7 1508.160000\nCOPY_FROM 5 10 8 1523.402000\nCOPY_FROM 5 10 9 1504.555000\nCOPY_FROM 5 10 10 1500.368000\nCOPY_TO 10 1 1 350.108000\nCOPY_TO 10 1 2 354.319000\nCOPY_TO 10 1 3 347.724000\nCOPY_TO 10 1 4 344.384000\nCOPY_TO 10 1 5 355.083000\nCOPY_TO 10 1 6 363.509000\nCOPY_TO 10 1 7 355.307000\nCOPY_TO 10 1 8 345.092000\nCOPY_TO 10 1 9 353.087000\nCOPY_TO 10 1 10 352.411000\nCOPY_FROM 10 1 1 259.050000\nCOPY_FROM 10 1 2 261.272000\nCOPY_FROM 10 1 3 258.407000\nCOPY_FROM 10 1 4 260.551000\nCOPY_FROM 10 1 5 260.306000\nCOPY_FROM 10 1 6 262.650000\nCOPY_FROM 10 1 7 259.448000\nCOPY_FROM 10 1 8 263.050000\nCOPY_FROM 10 1 9 259.594000\nCOPY_FROM 10 1 10 262.014000\nCOPY_TO 10 2 1 687.593000\nCOPY_TO 10 2 2 689.272000\nCOPY_TO 10 2 3 672.518000\nCOPY_TO 10 2 4 697.031000\nCOPY_TO 10 2 5 709.173000\nCOPY_TO 10 2 6 704.194000\nCOPY_TO 10 2 7 696.468000\nCOPY_TO 10 2 8 693.674000\nCOPY_TO 10 2 9 699.779000\nCOPY_TO 10 2 10 692.238000\nCOPY_FROM 10 2 1 497.979000\nCOPY_FROM 10 2 2 513.060000\nCOPY_FROM 10 2 3 502.765000\nCOPY_FROM 10 2 4 509.832000\nCOPY_FROM 10 2 5 507.076000\nCOPY_FROM 10 2 6 501.886000\nCOPY_FROM 10 2 7 503.953000\nCOPY_FROM 10 2 8 509.601000\nCOPY_FROM 10 2 9 508.680000\nCOPY_FROM 10 2 10 497.768000\nCOPY_TO 10 3 1 1036.252000\nCOPY_TO 10 3 2 1011.853000\nCOPY_TO 10 3 3 1022.256000\nCOPY_TO 10 3 4 1034.388000\nCOPY_TO 10 3 5 1011.247000\nCOPY_TO 10 3 6 1042.124000\nCOPY_TO 10 3 7 1040.866000\nCOPY_TO 10 3 8 1025.704000\nCOPY_TO 10 3 9 1023.673000\nCOPY_TO 10 3 10 1061.591000\nCOPY_FROM 10 3 1 733.149000\nCOPY_FROM 10 3 2 743.642000\nCOPY_FROM 10 3 3 752.712000\nCOPY_FROM 10 3 4 735.685000\nCOPY_FROM 10 3 5 743.496000\nCOPY_FROM 10 3 6 749.289000\nCOPY_FROM 10 3 7 747.307000\nCOPY_FROM 10 3 8 750.439000\nCOPY_FROM 10 3 9 747.840000\nCOPY_FROM 10 3 10 746.275000\nCOPY_TO 10 4 1 1339.158000\nCOPY_TO 10 4 2 1349.486000\nCOPY_TO 10 4 3 1391.879000\nCOPY_TO 10 4 4 1402.481000\nCOPY_TO 10 4 5 1402.359000\nCOPY_TO 10 4 6 1349.511000\nCOPY_TO 10 4 7 1395.431000\nCOPY_TO 10 4 8 1395.865000\nCOPY_TO 10 4 9 1352.711000\nCOPY_TO 10 4 10 1335.961000\nCOPY_FROM 10 4 1 988.278000\nCOPY_FROM 10 4 2 988.250000\nCOPY_FROM 10 4 3 986.391000\nCOPY_FROM 10 4 4 992.929000\nCOPY_FROM 10 4 5 984.924000\nCOPY_FROM 10 4 6 989.783000\nCOPY_FROM 10 4 7 984.885000\nCOPY_FROM 10 4 8 977.104000\nCOPY_FROM 10 4 9 991.286000\nCOPY_FROM 10 4 10 984.057000\nCOPY_TO 10 5 1 1714.117000\nCOPY_TO 
10 5 2 1737.382000\nCOPY_TO 10 5 3 1744.501000\nCOPY_TO 10 5 4 1705.814000\nCOPY_TO 10 5 5 1724.631000\nCOPY_TO 10 5 6 1670.422000\nCOPY_TO 10 5 7 1724.061000\nCOPY_TO 10 5 8 1741.960000\nCOPY_TO 10 5 9 1698.542000\nCOPY_TO 10 5 10 1703.680000\nCOPY_FROM 10 5 1 1236.786000\nCOPY_FROM 10 5 2 1228.271000\nCOPY_FROM 10 5 3 1233.229000\nCOPY_FROM 10 5 4 1223.438000\nCOPY_FROM 10 5 5 1218.269000\nCOPY_FROM 10 5 6 1215.843000\nCOPY_FROM 10 5 7 1218.998000\nCOPY_FROM 10 5 8 1223.761000\nCOPY_FROM 10 5 9 1237.311000\nCOPY_FROM 10 5 10 1224.098000\nCOPY_TO 10 6 1 2034.971000\nCOPY_TO 10 6 2 2086.575000\nCOPY_TO 10 6 3 2061.166000\nCOPY_TO 10 6 4 2028.774000\nCOPY_TO 10 6 5 1976.820000\nCOPY_TO 10 6 6 2048.341000\nCOPY_TO 10 6 7 2126.830000\nCOPY_TO 10 6 8 2113.916000\nCOPY_TO 10 6 9 2044.993000\nCOPY_TO 10 6 10 2059.930000\nCOPY_FROM 10 6 1 1460.496000\nCOPY_FROM 10 6 2 1455.160000\nCOPY_FROM 10 6 3 1472.230000\nCOPY_FROM 10 6 4 1466.294000\nCOPY_FROM 10 6 5 1470.005000\nCOPY_FROM 10 6 6 1460.124000\nCOPY_FROM 10 6 7 1484.157000\nCOPY_FROM 10 6 8 1498.308000\nCOPY_FROM 10 6 9 1472.033000\nCOPY_FROM 10 6 10 1464.715000\nCOPY_TO 10 7 1 2392.091000\nCOPY_TO 10 7 2 2376.371000\nCOPY_TO 10 7 3 2409.333000\nCOPY_TO 10 7 4 2436.201000\nCOPY_TO 10 7 5 2406.606000\nCOPY_TO 10 7 6 2415.106000\nCOPY_TO 10 7 7 2460.604000\nCOPY_TO 10 7 8 2407.684000\nCOPY_TO 10 7 9 2352.239000\nCOPY_TO 10 7 10 2453.835000\nCOPY_FROM 10 7 1 1720.389000\nCOPY_FROM 10 7 2 1716.569000\nCOPY_FROM 10 7 3 1724.858000\nCOPY_FROM 10 7 4 1714.529000\nCOPY_FROM 10 7 5 1704.039000\nCOPY_FROM 10 7 6 1723.536000\nCOPY_FROM 10 7 7 1725.329000\nCOPY_FROM 10 7 8 1690.714000\nCOPY_FROM 10 7 9 1726.614000\nCOPY_FROM 10 7 10 1740.956000\nCOPY_TO 10 8 1 2796.200000\nCOPY_TO 10 8 2 2761.445000\nCOPY_TO 10 8 3 2753.313000\nCOPY_TO 10 8 4 2767.549000\nCOPY_TO 10 8 5 2759.920000\nCOPY_TO 10 8 6 2753.090000\nCOPY_TO 10 8 7 2766.374000\nCOPY_TO 10 8 8 2758.385000\nCOPY_TO 10 8 9 2822.724000\nCOPY_TO 10 8 10 2746.903000\nCOPY_FROM 10 8 1 1963.436000\nCOPY_FROM 10 8 2 1965.409000\nCOPY_FROM 10 8 3 1978.345000\nCOPY_FROM 10 8 4 1957.258000\nCOPY_FROM 10 8 5 1948.144000\nCOPY_FROM 10 8 6 1960.546000\nCOPY_FROM 10 8 7 1985.631000\nCOPY_FROM 10 8 8 1928.848000\nCOPY_FROM 10 8 9 1932.803000\nCOPY_FROM 10 8 10 1950.939000\nCOPY_TO 10 9 1 3101.821000\nCOPY_TO 10 9 2 3119.955000\nCOPY_TO 10 9 3 3071.974000\nCOPY_TO 10 9 4 3058.962000\nCOPY_TO 10 9 5 3100.206000\nCOPY_TO 10 9 6 3085.071000\nCOPY_TO 10 9 7 3099.553000\nCOPY_TO 10 9 8 3133.255000\nCOPY_TO 10 9 9 3112.448000\nCOPY_TO 10 9 10 3078.218000\nCOPY_FROM 10 9 1 2231.402000\nCOPY_FROM 10 9 2 2270.319000\nCOPY_FROM 10 9 3 2212.585000\nCOPY_FROM 10 9 4 2214.820000\nCOPY_FROM 10 9 5 2182.791000\nCOPY_FROM 10 9 6 2297.706000\nCOPY_FROM 10 9 7 2203.188000\nCOPY_FROM 10 9 8 2275.390000\nCOPY_FROM 10 9 9 2191.008000\nCOPY_FROM 10 9 10 2202.305000\nCOPY_TO 10 10 1 3451.928000\nCOPY_TO 10 10 2 3434.760000\nCOPY_TO 10 10 3 3473.176000\nCOPY_TO 10 10 4 3483.251000\nCOPY_TO 10 10 5 3432.006000\nCOPY_TO 10 10 6 3423.612000\nCOPY_TO 10 10 7 3444.291000\nCOPY_TO 10 10 8 3425.827000\nCOPY_TO 10 10 9 3411.279000\nCOPY_TO 10 10 10 3454.997000\nCOPY_FROM 10 10 1 2447.611000\nCOPY_FROM 10 10 2 2462.314000\nCOPY_FROM 10 10 3 2435.629000\nCOPY_FROM 10 10 4 2536.588000\nCOPY_FROM 10 10 5 2472.294000\nCOPY_FROM 10 10 6 2461.858000\nCOPY_FROM 10 10 7 2451.032000\nCOPY_FROM 10 10 8 2430.200000\nCOPY_FROM 10 10 9 2493.472000\nCOPY_FROM 10 10 10 2473.227000\nCOPY_TO 15 1 1 484.336000\nCOPY_TO 15 1 2 479.033000\nCOPY_TO 15 1 3 
479.735000\nCOPY_TO 15 1 4 485.839000\nCOPY_TO 15 1 5 478.669000\nCOPY_TO 15 1 6 503.063000\nCOPY_TO 15 1 7 497.116000\nCOPY_TO 15 1 8 496.381000\nCOPY_TO 15 1 9 492.082000\nCOPY_TO 15 1 10 503.341000\nCOPY_FROM 15 1 1 344.944000\nCOPY_FROM 15 1 2 347.820000\nCOPY_FROM 15 1 3 350.856000\nCOPY_FROM 15 1 4 343.687000\nCOPY_FROM 15 1 5 348.509000\nCOPY_FROM 15 1 6 352.488000\nCOPY_FROM 15 1 7 347.183000\nCOPY_FROM 15 1 8 346.262000\nCOPY_FROM 15 1 9 346.139000\nCOPY_FROM 15 1 10 349.911000\nCOPY_TO 15 2 1 962.932000\nCOPY_TO 15 2 2 963.658000\nCOPY_TO 15 2 3 962.137000\nCOPY_TO 15 2 4 962.108000\nCOPY_TO 15 2 5 970.632000\nCOPY_TO 15 2 6 953.700000\nCOPY_TO 15 2 7 981.138000\nCOPY_TO 15 2 8 973.898000\nCOPY_TO 15 2 9 970.741000\nCOPY_TO 15 2 10 948.693000\nCOPY_FROM 15 2 1 665.328000\nCOPY_FROM 15 2 2 676.310000\nCOPY_FROM 15 2 3 671.458000\nCOPY_FROM 15 2 4 670.664000\nCOPY_FROM 15 2 5 679.016000\nCOPY_FROM 15 2 6 669.444000\nCOPY_FROM 15 2 7 667.946000\nCOPY_FROM 15 2 8 667.764000\nCOPY_FROM 15 2 9 674.499000\nCOPY_FROM 15 2 10 671.073000\nCOPY_TO 15 3 1 1420.086000\nCOPY_TO 15 3 2 1448.308000\nCOPY_TO 15 3 3 1454.913000\nCOPY_TO 15 3 4 1439.704000\nCOPY_TO 15 3 5 1446.298000\nCOPY_TO 15 3 6 1470.852000\nCOPY_TO 15 3 7 1456.382000\nCOPY_TO 15 3 8 1458.444000\nCOPY_TO 15 3 9 1463.900000\nCOPY_TO 15 3 10 1461.039000\nCOPY_FROM 15 3 1 1007.392000\nCOPY_FROM 15 3 2 1001.408000\nCOPY_FROM 15 3 3 1010.905000\nCOPY_FROM 15 3 4 1004.810000\nCOPY_FROM 15 3 5 1000.949000\nCOPY_FROM 15 3 6 1011.789000\nCOPY_FROM 15 3 7 1009.358000\nCOPY_FROM 15 3 8 1010.479000\nCOPY_FROM 15 3 9 999.429000\nCOPY_FROM 15 3 10 1021.780000\nCOPY_TO 15 4 1 1933.560000\nCOPY_TO 15 4 2 1903.286000\nCOPY_TO 15 4 3 1931.942000\nCOPY_TO 15 4 4 1963.633000\nCOPY_TO 15 4 5 1936.343000\nCOPY_TO 15 4 6 1944.860000\nCOPY_TO 15 4 7 1924.043000\nCOPY_TO 15 4 8 1923.471000\nCOPY_TO 15 4 9 1890.255000\nCOPY_TO 15 4 10 1960.720000\nCOPY_FROM 15 4 1 1368.884000\nCOPY_FROM 15 4 2 1339.628000\nCOPY_FROM 15 4 3 1311.420000\nCOPY_FROM 15 4 4 1334.597000\nCOPY_FROM 15 4 5 1351.018000\nCOPY_FROM 15 4 6 1344.878000\nCOPY_FROM 15 4 7 1355.443000\nCOPY_FROM 15 4 8 1339.170000\nCOPY_FROM 15 4 9 1340.700000\nCOPY_FROM 15 4 10 1326.241000\nCOPY_TO 15 5 1 2318.844000\nCOPY_TO 15 5 2 2402.419000\nCOPY_TO 15 5 3 2396.781000\nCOPY_TO 15 5 4 2423.735000\nCOPY_TO 15 5 5 2409.499000\nCOPY_TO 15 5 6 2428.851000\nCOPY_TO 15 5 7 2348.598000\nCOPY_TO 15 5 8 2322.848000\nCOPY_TO 15 5 9 2352.590000\nCOPY_TO 15 5 10 2445.076000\nCOPY_FROM 15 5 1 1648.351000\nCOPY_FROM 15 5 2 1651.210000\nCOPY_FROM 15 5 3 1652.014000\nCOPY_FROM 15 5 4 1657.594000\nCOPY_FROM 15 5 5 1664.718000\nCOPY_FROM 15 5 6 1659.842000\nCOPY_FROM 15 5 7 1624.535000\nCOPY_FROM 15 5 8 1674.149000\nCOPY_FROM 15 5 9 1647.591000\nCOPY_FROM 15 5 10 1669.124000\nCOPY_TO 15 6 1 2881.853000\nCOPY_TO 15 6 2 2868.673000\nCOPY_TO 15 6 3 2965.197000\nCOPY_TO 15 6 4 2884.662000\nCOPY_TO 15 6 5 2838.135000\nCOPY_TO 15 6 6 2916.165000\nCOPY_TO 15 6 7 2886.197000\nCOPY_TO 15 6 8 2933.154000\nCOPY_TO 15 6 9 2928.349000\nCOPY_TO 15 6 10 2901.545000\nCOPY_FROM 15 6 1 2013.017000\nCOPY_FROM 15 6 2 1978.835000\nCOPY_FROM 15 6 3 2004.485000\nCOPY_FROM 15 6 4 1987.586000\nCOPY_FROM 15 6 5 1975.135000\nCOPY_FROM 15 6 6 1989.522000\nCOPY_FROM 15 6 7 1988.856000\nCOPY_FROM 15 6 8 1983.815000\nCOPY_FROM 15 6 9 2013.150000\nCOPY_FROM 15 6 10 1997.074000\nCOPY_TO 15 7 1 3335.493000\nCOPY_TO 15 7 2 3357.154000\nCOPY_TO 15 7 3 3347.085000\nCOPY_TO 15 7 4 3296.994000\nCOPY_TO 15 7 5 3376.383000\nCOPY_TO 15 7 6 
3368.554000\nCOPY_TO 15 7 7 3401.287000\nCOPY_TO 15 7 8 3359.792000\nCOPY_TO 15 7 9 3351.542000\nCOPY_TO 15 7 10 3359.085000\nCOPY_FROM 15 7 1 2341.669000\nCOPY_FROM 15 7 2 2318.762000\nCOPY_FROM 15 7 3 2302.094000\nCOPY_FROM 15 7 4 2295.824000\nCOPY_FROM 15 7 5 2282.052000\nCOPY_FROM 15 7 6 2285.734000\nCOPY_FROM 15 7 7 2286.871000\nCOPY_FROM 15 7 8 2301.570000\nCOPY_FROM 15 7 9 2294.122000\nCOPY_FROM 15 7 10 2318.100000\nCOPY_TO 15 8 1 3838.944000\nCOPY_TO 15 8 2 3832.013000\nCOPY_TO 15 8 3 3794.855000\nCOPY_TO 15 8 4 3829.692000\nCOPY_TO 15 8 5 3902.267000\nCOPY_TO 15 8 6 3876.061000\nCOPY_TO 15 8 7 3844.652000\nCOPY_TO 15 8 8 3819.619000\nCOPY_TO 15 8 9 3891.511000\nCOPY_TO 15 8 10 3902.708000\nCOPY_FROM 15 8 1 2665.396000\nCOPY_FROM 15 8 2 2677.914000\nCOPY_FROM 15 8 3 2666.726000\nCOPY_FROM 15 8 4 2633.747000\nCOPY_FROM 15 8 5 2632.702000\nCOPY_FROM 15 8 6 2664.116000\nCOPY_FROM 15 8 7 2614.453000\nCOPY_FROM 15 8 8 2662.111000\nCOPY_FROM 15 8 9 2660.616000\nCOPY_FROM 15 8 10 2695.048000\nCOPY_TO 15 9 1 4341.815000\nCOPY_TO 15 9 2 4302.586000\nCOPY_TO 15 9 3 4281.296000\nCOPY_TO 15 9 4 4260.384000\nCOPY_TO 15 9 5 4354.295000\nCOPY_TO 15 9 6 4395.239000\nCOPY_TO 15 9 7 4294.927000\nCOPY_TO 15 9 8 4299.131000\nCOPY_TO 15 9 9 4324.381000\nCOPY_TO 15 9 10 4308.416000\nCOPY_FROM 15 9 1 2952.762000\nCOPY_FROM 15 9 2 2976.541000\nCOPY_FROM 15 9 3 2980.895000\nCOPY_FROM 15 9 4 2988.607000\nCOPY_FROM 15 9 5 2931.639000\nCOPY_FROM 15 9 6 2980.360000\nCOPY_FROM 15 9 7 2987.142000\nCOPY_FROM 15 9 8 2942.020000\nCOPY_FROM 15 9 9 2956.429000\nCOPY_FROM 15 9 10 2976.833000\nCOPY_TO 15 10 1 4908.128000\nCOPY_TO 15 10 2 4808.306000\nCOPY_TO 15 10 3 4884.962000\nCOPY_TO 15 10 4 4871.861000\nCOPY_TO 15 10 5 4793.649000\nCOPY_TO 15 10 6 4783.691000\nCOPY_TO 15 10 7 4953.107000\nCOPY_TO 15 10 8 4770.645000\nCOPY_TO 15 10 9 4830.319000\nCOPY_TO 15 10 10 4817.374000\nCOPY_FROM 15 10 1 3316.914000\nCOPY_FROM 15 10 2 3317.386000\nCOPY_FROM 15 10 3 3304.798000\nCOPY_FROM 15 10 4 3260.573000\nCOPY_FROM 15 10 5 3275.390000\nCOPY_FROM 15 10 6 3298.207000\nCOPY_FROM 15 10 7 3286.026000\nCOPY_FROM 15 10 8 3363.954000\nCOPY_FROM 15 10 9 3294.820000\nCOPY_FROM 15 10 10 3306.407000\nCOPY_TO 20 1 1 619.998000\nCOPY_TO 20 1 2 616.942000\nCOPY_TO 20 1 3 624.587000\nCOPY_TO 20 1 4 633.838000\nCOPY_TO 20 1 5 651.659000\nCOPY_TO 20 1 6 638.405000\nCOPY_TO 20 1 7 629.828000\nCOPY_TO 20 1 8 621.210000\nCOPY_TO 20 1 9 635.503000\nCOPY_TO 20 1 10 629.262000\nCOPY_FROM 20 1 1 433.467000\nCOPY_FROM 20 1 2 431.611000\nCOPY_FROM 20 1 3 438.673000\nCOPY_FROM 20 1 4 439.864000\nCOPY_FROM 20 1 5 436.883000\nCOPY_FROM 20 1 6 436.025000\nCOPY_FROM 20 1 7 447.105000\nCOPY_FROM 20 1 8 452.754000\nCOPY_FROM 20 1 9 434.757000\nCOPY_FROM 20 1 10 439.372000\nCOPY_TO 20 2 1 1215.557000\nCOPY_TO 20 2 2 1198.834000\nCOPY_TO 20 2 3 1248.734000\nCOPY_TO 20 2 4 1224.716000\nCOPY_TO 20 2 5 1221.355000\nCOPY_TO 20 2 6 1235.157000\nCOPY_TO 20 2 7 1213.212000\nCOPY_TO 20 2 8 1251.544000\nCOPY_TO 20 2 9 1211.466000\nCOPY_TO 20 2 10 1232.067000\nCOPY_FROM 20 2 1 853.265000\nCOPY_FROM 20 2 2 861.634000\nCOPY_FROM 20 2 3 875.109000\nCOPY_FROM 20 2 4 866.576000\nCOPY_FROM 20 2 5 869.608000\nCOPY_FROM 20 2 6 867.634000\nCOPY_FROM 20 2 7 868.359000\nCOPY_FROM 20 2 8 879.867000\nCOPY_FROM 20 2 9 856.513000\nCOPY_FROM 20 2 10 846.929000\nCOPY_TO 20 3 1 1853.167000\nCOPY_TO 20 3 2 1908.958000\nCOPY_TO 20 3 3 1854.300000\nCOPY_TO 20 3 4 1854.920000\nCOPY_TO 20 3 5 1908.171000\nCOPY_TO 20 3 6 1875.182000\nCOPY_TO 20 3 7 1858.945000\nCOPY_TO 20 3 8 
1836.676000\nCOPY_TO 20 3 9 1892.760000\nCOPY_TO 20 3 10 1832.188000\nCOPY_FROM 20 3 1 1269.621000\nCOPY_FROM 20 3 2 1268.794000\nCOPY_FROM 20 3 3 1306.010000\nCOPY_FROM 20 3 4 1268.746000\nCOPY_FROM 20 3 5 1285.443000\nCOPY_FROM 20 3 6 1272.459000\nCOPY_FROM 20 3 7 1284.552000\nCOPY_FROM 20 3 8 1277.634000\nCOPY_FROM 20 3 9 1283.592000\nCOPY_FROM 20 3 10 1277.291000\nCOPY_TO 20 4 1 2366.791000\nCOPY_TO 20 4 2 2467.617000\nCOPY_TO 20 4 3 2503.922000\nCOPY_TO 20 4 4 2419.396000\nCOPY_TO 20 4 5 2362.517000\nCOPY_TO 20 4 6 2436.106000\nCOPY_TO 20 4 7 2515.537000\nCOPY_TO 20 4 8 2444.051000\nCOPY_TO 20 4 9 2368.470000\nCOPY_TO 20 4 10 2476.241000\nCOPY_FROM 20 4 1 1686.377000\nCOPY_FROM 20 4 2 1766.247000\nCOPY_FROM 20 4 3 1765.013000\nCOPY_FROM 20 4 4 1710.638000\nCOPY_FROM 20 4 5 1681.944000\nCOPY_FROM 20 4 6 1672.305000\nCOPY_FROM 20 4 7 1680.594000\nCOPY_FROM 20 4 8 1692.007000\nCOPY_FROM 20 4 9 1696.334000\nCOPY_FROM 20 4 10 1673.502000\nCOPY_TO 20 5 1 3044.926000\nCOPY_TO 20 5 2 2999.139000\nCOPY_TO 20 5 3 3012.201000\nCOPY_TO 20 5 4 3079.507000\nCOPY_TO 20 5 5 3084.210000\nCOPY_TO 20 5 6 3106.328000\nCOPY_TO 20 5 7 3107.643000\nCOPY_TO 20 5 8 3103.127000\nCOPY_TO 20 5 9 3098.074000\nCOPY_TO 20 5 10 3071.407000\nCOPY_FROM 20 5 1 2110.909000\nCOPY_FROM 20 5 2 2119.924000\nCOPY_FROM 20 5 3 2094.429000\nCOPY_FROM 20 5 4 2113.787000\nCOPY_FROM 20 5 5 2093.251000\nCOPY_FROM 20 5 6 2103.724000\nCOPY_FROM 20 5 7 2163.264000\nCOPY_FROM 20 5 8 2110.832000\nCOPY_FROM 20 5 9 2120.593000\nCOPY_FROM 20 5 10 2108.865000\nCOPY_TO 20 6 1 3778.026000\nCOPY_TO 20 6 2 3660.842000\nCOPY_TO 20 6 3 3586.255000\nCOPY_TO 20 6 4 3621.287000\nCOPY_TO 20 6 5 3765.054000\nCOPY_TO 20 6 6 3730.942000\nCOPY_TO 20 6 7 3700.704000\nCOPY_TO 20 6 8 3683.990000\nCOPY_TO 20 6 9 3654.364000\nCOPY_TO 20 6 10 3711.707000\nCOPY_FROM 20 6 1 2512.796000\nCOPY_FROM 20 6 2 2499.849000\nCOPY_FROM 20 6 3 2581.643000\nCOPY_FROM 20 6 4 2540.972000\nCOPY_FROM 20 6 5 2522.357000\nCOPY_FROM 20 6 6 2519.327000\nCOPY_FROM 20 6 7 2539.536000\nCOPY_FROM 20 6 8 2529.492000\nCOPY_FROM 20 6 9 2527.186000\nCOPY_FROM 20 6 10 2537.575000\nCOPY_TO 20 7 1 4302.273000\nCOPY_TO 20 7 2 4320.033000\nCOPY_TO 20 7 3 4234.169000\nCOPY_TO 20 7 4 4347.949000\nCOPY_TO 20 7 5 4297.509000\nCOPY_TO 20 7 6 4348.086000\nCOPY_TO 20 7 7 4302.051000\nCOPY_TO 20 7 8 4325.364000\nCOPY_TO 20 7 9 4322.654000\nCOPY_TO 20 7 10 4271.526000\nCOPY_FROM 20 7 1 2911.560000\nCOPY_FROM 20 7 2 2940.254000\nCOPY_FROM 20 7 3 2980.597000\nCOPY_FROM 20 7 4 2973.070000\nCOPY_FROM 20 7 5 2933.554000\nCOPY_FROM 20 7 6 2953.611000\nCOPY_FROM 20 7 7 2922.042000\nCOPY_FROM 20 7 8 2906.997000\nCOPY_FROM 20 7 9 2904.686000\nCOPY_FROM 20 7 10 2941.453000\nCOPY_TO 20 8 1 4764.222000\nCOPY_TO 20 8 2 4728.320000\nCOPY_TO 20 8 3 4795.743000\nCOPY_TO 20 8 4 4882.833000\nCOPY_TO 20 8 5 4815.518000\nCOPY_TO 20 8 6 4886.483000\nCOPY_TO 20 8 7 4924.319000\nCOPY_TO 20 8 8 4838.255000\nCOPY_TO 20 8 9 4863.534000\nCOPY_TO 20 8 10 4925.173000\nCOPY_FROM 20 8 1 3377.310000\nCOPY_FROM 20 8 2 3374.520000\nCOPY_FROM 20 8 3 3415.924000\nCOPY_FROM 20 8 4 3359.085000\nCOPY_FROM 20 8 5 3354.984000\nCOPY_FROM 20 8 6 3314.657000\nCOPY_FROM 20 8 7 3315.929000\nCOPY_FROM 20 8 8 3446.995000\nCOPY_FROM 20 8 9 3368.091000\nCOPY_FROM 20 8 10 3390.674000\nCOPY_TO 20 9 1 5463.960000\nCOPY_TO 20 9 2 5463.921000\nCOPY_TO 20 9 3 5378.138000\nCOPY_TO 20 9 4 5535.958000\nCOPY_TO 20 9 5 5503.000000\nCOPY_TO 20 9 6 5457.850000\nCOPY_TO 20 9 7 5435.157000\nCOPY_TO 20 9 8 5422.457000\nCOPY_TO 20 9 9 5482.427000\nCOPY_TO 20 9 10 
5495.809000\nCOPY_FROM 20 9 1 3876.496000\nCOPY_FROM 20 9 2 3770.921000\nCOPY_FROM 20 9 3 3729.432000\nCOPY_FROM 20 9 4 3739.708000\nCOPY_FROM 20 9 5 3787.856000\nCOPY_FROM 20 9 6 3757.324000\nCOPY_FROM 20 9 7 3793.676000\nCOPY_FROM 20 9 8 3840.151000\nCOPY_FROM 20 9 9 3721.829000\nCOPY_FROM 20 9 10 3769.584000\nCOPY_TO 20 10 1 6021.466000\nCOPY_TO 20 10 2 6050.644000\nCOPY_TO 20 10 3 6035.796000\nCOPY_TO 20 10 4 5991.765000\nCOPY_TO 20 10 5 6095.925000\nCOPY_TO 20 10 6 6006.453000\nCOPY_TO 20 10 7 6043.915000\nCOPY_TO 20 10 8 6184.330000\nCOPY_TO 20 10 9 5997.352000\nCOPY_TO 20 10 10 6142.882000\nCOPY_FROM 20 10 1 4220.218000\nCOPY_FROM 20 10 2 4160.915000\nCOPY_FROM 20 10 3 4172.628000\nCOPY_FROM 20 10 4 4183.532000\nCOPY_FROM 20 10 5 4208.204000\nCOPY_FROM 20 10 6 4232.293000\nCOPY_FROM 20 10 7 4188.968000\nCOPY_FROM 20 10 8 4191.494000\nCOPY_FROM 20 10 9 4196.841000\nCOPY_FROM 20 10 10 4172.418000\nCOPY_TO 25 1 1 774.678000\nCOPY_TO 25 1 2 787.791000\nCOPY_TO 25 1 3 773.815000\nCOPY_TO 25 1 4 744.220000\nCOPY_TO 25 1 5 763.742000\nCOPY_TO 25 1 6 764.779000\nCOPY_TO 25 1 7 763.397000\nCOPY_TO 25 1 8 750.529000\nCOPY_TO 25 1 9 775.028000\nCOPY_TO 25 1 10 763.085000\nCOPY_FROM 25 1 1 524.445000\nCOPY_FROM 25 1 2 519.951000\nCOPY_FROM 25 1 3 516.212000\nCOPY_FROM 25 1 4 516.155000\nCOPY_FROM 25 1 5 519.686000\nCOPY_FROM 25 1 6 524.260000\nCOPY_FROM 25 1 7 521.384000\nCOPY_FROM 25 1 8 516.947000\nCOPY_FROM 25 1 9 516.268000\nCOPY_FROM 25 1 10 513.815000\nCOPY_TO 25 2 1 1513.097000\nCOPY_TO 25 2 2 1516.435000\nCOPY_TO 25 2 3 1514.322000\nCOPY_TO 25 2 4 1515.332000\nCOPY_TO 25 2 5 1539.159000\nCOPY_TO 25 2 6 1504.517000\nCOPY_TO 25 2 7 1551.701000\nCOPY_TO 25 2 8 1536.408000\nCOPY_TO 25 2 9 1506.469000\nCOPY_TO 25 2 10 1507.693000\nCOPY_FROM 25 2 1 1031.906000\nCOPY_FROM 25 2 2 1011.518000\nCOPY_FROM 25 2 3 1015.601000\nCOPY_FROM 25 2 4 1022.738000\nCOPY_FROM 25 2 5 1024.219000\nCOPY_FROM 25 2 6 1018.943000\nCOPY_FROM 25 2 7 1008.076000\nCOPY_FROM 25 2 8 1008.687000\nCOPY_FROM 25 2 9 1019.874000\nCOPY_FROM 25 2 10 1010.362000\nCOPY_TO 25 3 1 2275.840000\nCOPY_TO 25 3 2 2292.456000\nCOPY_TO 25 3 3 2304.261000\nCOPY_TO 25 3 4 2260.663000\nCOPY_TO 25 3 5 2274.911000\nCOPY_TO 25 3 6 2307.456000\nCOPY_TO 25 3 7 2304.885000\nCOPY_TO 25 3 8 2328.952000\nCOPY_TO 25 3 9 2205.891000\nCOPY_TO 25 3 10 2252.140000\nCOPY_FROM 25 3 1 1491.799000\nCOPY_FROM 25 3 2 1508.012000\nCOPY_FROM 25 3 3 1507.554000\nCOPY_FROM 25 3 4 1540.556000\nCOPY_FROM 25 3 5 1538.755000\nCOPY_FROM 25 3 6 1524.962000\nCOPY_FROM 25 3 7 1519.040000\nCOPY_FROM 25 3 8 1527.385000\nCOPY_FROM 25 3 9 1542.953000\nCOPY_FROM 25 3 10 1523.412000\nCOPY_TO 25 4 1 3052.605000\nCOPY_TO 25 4 2 2998.820000\nCOPY_TO 25 4 3 2984.156000\nCOPY_TO 25 4 4 3034.054000\nCOPY_TO 25 4 5 3035.638000\nCOPY_TO 25 4 6 3021.914000\nCOPY_TO 25 4 7 3086.029000\nCOPY_TO 25 4 8 3104.967000\nCOPY_TO 25 4 9 3084.419000\nCOPY_TO 25 4 10 3052.696000\nCOPY_FROM 25 4 1 2019.843000\nCOPY_FROM 25 4 2 2010.303000\nCOPY_FROM 25 4 3 2008.544000\nCOPY_FROM 25 4 4 2017.551000\nCOPY_FROM 25 4 5 1983.106000\nCOPY_FROM 25 4 6 1972.640000\nCOPY_FROM 25 4 7 1998.370000\nCOPY_FROM 25 4 8 1972.399000\nCOPY_FROM 25 4 9 2014.721000\nCOPY_FROM 25 4 10 1990.860000\nCOPY_TO 25 5 1 3803.703000\nCOPY_TO 25 5 2 3801.972000\nCOPY_TO 25 5 3 3732.563000\nCOPY_TO 25 5 4 3844.295000\nCOPY_TO 25 5 5 3843.996000\nCOPY_TO 25 5 6 3860.533000\nCOPY_TO 25 5 7 3885.893000\nCOPY_TO 25 5 8 3901.853000\nCOPY_TO 25 5 9 3811.751000\nCOPY_TO 25 5 10 3830.153000\nCOPY_FROM 25 5 1 2512.122000\nCOPY_FROM 25 5 2 
2485.190000\nCOPY_FROM 25 5 3 2514.064000\nCOPY_FROM 25 5 4 2629.482000\nCOPY_FROM 25 5 5 2574.073000\nCOPY_FROM 25 5 6 2554.008000\nCOPY_FROM 25 5 7 2554.302000\nCOPY_FROM 25 5 8 2538.815000\nCOPY_FROM 25 5 9 2557.007000\nCOPY_FROM 25 5 10 2498.580000\nCOPY_TO 25 6 1 4623.929000\nCOPY_TO 25 6 2 4565.644000\nCOPY_TO 25 6 3 4579.721000\nCOPY_TO 25 6 4 4524.352000\nCOPY_TO 25 6 5 4470.642000\nCOPY_TO 25 6 6 4563.316000\nCOPY_TO 25 6 7 4576.716000\nCOPY_TO 25 6 8 4491.117000\nCOPY_TO 25 6 9 4544.761000\nCOPY_TO 25 6 10 4424.612000\nCOPY_FROM 25 6 1 3018.827000\nCOPY_FROM 25 6 2 2978.490000\nCOPY_FROM 25 6 3 2995.232000\nCOPY_FROM 25 6 4 2967.654000\nCOPY_FROM 25 6 5 3029.289000\nCOPY_FROM 25 6 6 2956.739000\nCOPY_FROM 25 6 7 2964.034000\nCOPY_FROM 25 6 8 2969.406000\nCOPY_FROM 25 6 9 2990.859000\nCOPY_FROM 25 6 10 3004.016000\nCOPY_TO 25 7 1 5388.767000\nCOPY_TO 25 7 2 5261.497000\nCOPY_TO 25 7 3 5266.503000\nCOPY_TO 25 7 4 5328.781000\nCOPY_TO 25 7 5 5331.428000\nCOPY_TO 25 7 6 5342.277000\nCOPY_TO 25 7 7 5309.748000\nCOPY_TO 25 7 8 5396.271000\nCOPY_TO 25 7 9 5242.006000\nCOPY_TO 25 7 10 5204.319000\nCOPY_FROM 25 7 1 3526.509000\nCOPY_FROM 25 7 2 3533.526000\nCOPY_FROM 25 7 3 3574.351000\nCOPY_FROM 25 7 4 3550.997000\nCOPY_FROM 25 7 5 3519.623000\nCOPY_FROM 25 7 6 3462.743000\nCOPY_FROM 25 7 7 3504.243000\nCOPY_FROM 25 7 8 3521.010000\nCOPY_FROM 25 7 9 3431.482000\nCOPY_FROM 25 7 10 3419.169000\nCOPY_TO 25 8 1 6097.554000\nCOPY_TO 25 8 2 5984.897000\nCOPY_TO 25 8 3 6040.903000\nCOPY_TO 25 8 4 6147.806000\nCOPY_TO 25 8 5 6037.164000\nCOPY_TO 25 8 6 5987.661000\nCOPY_TO 25 8 7 6096.899000\nCOPY_TO 25 8 8 6073.973000\nCOPY_TO 25 8 9 6105.735000\nCOPY_TO 25 8 10 5974.114000\nCOPY_FROM 25 8 1 3988.738000\nCOPY_FROM 25 8 2 4009.777000\nCOPY_FROM 25 8 3 4027.431000\nCOPY_FROM 25 8 4 3976.333000\nCOPY_FROM 25 8 5 3961.928000\nCOPY_FROM 25 8 6 3974.345000\nCOPY_FROM 25 8 7 4029.581000\nCOPY_FROM 25 8 8 4025.947000\nCOPY_FROM 25 8 9 3977.926000\nCOPY_FROM 25 8 10 4035.786000\nCOPY_TO 25 9 1 6753.774000\nCOPY_TO 25 9 2 6700.288000\nCOPY_TO 25 9 3 6880.717000\nCOPY_TO 25 9 4 6825.173000\nCOPY_TO 25 9 5 6697.153000\nCOPY_TO 25 9 6 6785.494000\nCOPY_TO 25 9 7 6879.979000\nCOPY_TO 25 9 8 6743.111000\nCOPY_TO 25 9 9 6850.346000\nCOPY_TO 25 9 10 6787.185000\nCOPY_FROM 25 9 1 4517.219000\nCOPY_FROM 25 9 2 4531.329000\nCOPY_FROM 25 9 3 4529.439000\nCOPY_FROM 25 9 4 4481.905000\nCOPY_FROM 25 9 5 4518.109000\nCOPY_FROM 25 9 6 4502.731000\nCOPY_FROM 25 9 7 4473.914000\nCOPY_FROM 25 9 8 4471.436000\nCOPY_FROM 25 9 9 4500.187000\nCOPY_FROM 25 9 10 4479.554000\nCOPY_TO 25 10 1 7557.810000\nCOPY_TO 25 10 2 7559.711000\nCOPY_TO 25 10 3 7542.392000\nCOPY_TO 25 10 4 7291.018000\nCOPY_TO 25 10 5 7504.865000\nCOPY_TO 25 10 6 7432.488000\nCOPY_TO 25 10 7 7432.530000\nCOPY_TO 25 10 8 7474.229000\nCOPY_TO 25 10 9 7384.188000\nCOPY_TO 25 10 10 7551.992000\nCOPY_FROM 25 10 1 4964.734000\nCOPY_FROM 25 10 2 5042.329000\nCOPY_FROM 25 10 3 5013.357000\nCOPY_FROM 25 10 4 4986.712000\nCOPY_FROM 25 10 5 4996.862000\nCOPY_FROM 25 10 6 4945.983000\nCOPY_FROM 25 10 7 4994.463000\nCOPY_FROM 25 10 8 4944.533000\nCOPY_FROM 25 10 9 5018.457000\nCOPY_FROM 25 10 10 4967.123000\nCOPY_TO 30 1 1 905.785000\nCOPY_TO 30 1 2 919.553000\nCOPY_TO 30 1 3 891.263000\nCOPY_TO 30 1 4 923.963000\nCOPY_TO 30 1 5 901.843000\nCOPY_TO 30 1 6 915.491000\nCOPY_TO 30 1 7 896.540000\nCOPY_TO 30 1 8 906.324000\nCOPY_TO 30 1 9 892.686000\nCOPY_TO 30 1 10 924.998000\nCOPY_FROM 30 1 1 587.472000\nCOPY_FROM 30 1 2 605.176000\nCOPY_FROM 30 1 3 591.641000\nCOPY_FROM 30 
1 4 622.076000\nCOPY_FROM 30 1 5 604.110000\nCOPY_FROM 30 1 6 619.221000\nCOPY_FROM 30 1 7 612.524000\nCOPY_FROM 30 1 8 603.729000\nCOPY_FROM 30 1 9 595.670000\nCOPY_FROM 30 1 10 598.395000\nCOPY_TO 30 2 1 1799.114000\nCOPY_TO 30 2 2 1802.407000\nCOPY_TO 30 2 3 1813.957000\nCOPY_TO 30 2 4 1765.727000\nCOPY_TO 30 2 5 1798.418000\nCOPY_TO 30 2 6 1817.917000\nCOPY_TO 30 2 7 1780.496000\nCOPY_TO 30 2 8 1772.734000\nCOPY_TO 30 2 9 1771.637000\nCOPY_TO 30 2 10 1837.537000\nCOPY_FROM 30 2 1 1186.556000\nCOPY_FROM 30 2 2 1189.396000\nCOPY_FROM 30 2 3 1188.794000\nCOPY_FROM 30 2 4 1196.751000\nCOPY_FROM 30 2 5 1208.097000\nCOPY_FROM 30 2 6 1195.639000\nCOPY_FROM 30 2 7 1181.028000\nCOPY_FROM 30 2 8 1177.701000\nCOPY_FROM 30 2 9 1181.959000\nCOPY_FROM 30 2 10 1171.377000\nCOPY_TO 30 3 1 2668.510000\nCOPY_TO 30 3 2 2662.493000\nCOPY_TO 30 3 3 2659.467000\nCOPY_TO 30 3 4 2629.276000\nCOPY_TO 30 3 5 2630.829000\nCOPY_TO 30 3 6 2632.760000\nCOPY_TO 30 3 7 2642.559000\nCOPY_TO 30 3 8 2675.854000\nCOPY_TO 30 3 9 2686.168000\nCOPY_TO 30 3 10 2703.022000\nCOPY_FROM 30 3 1 1749.300000\nCOPY_FROM 30 3 2 1732.106000\nCOPY_FROM 30 3 3 1744.452000\nCOPY_FROM 30 3 4 1762.979000\nCOPY_FROM 30 3 5 1758.033000\nCOPY_FROM 30 3 6 1772.605000\nCOPY_FROM 30 3 7 1754.809000\nCOPY_FROM 30 3 8 1751.785000\nCOPY_FROM 30 3 9 1762.331000\nCOPY_FROM 30 3 10 1745.872000\nCOPY_TO 30 4 1 3575.638000\nCOPY_TO 30 4 2 3540.611000\nCOPY_TO 30 4 3 3555.631000\nCOPY_TO 30 4 4 3508.023000\nCOPY_TO 30 4 5 3548.267000\nCOPY_TO 30 4 6 3530.229000\nCOPY_TO 30 4 7 3624.151000\nCOPY_TO 30 4 8 3549.913000\nCOPY_TO 30 4 9 3579.071000\nCOPY_TO 30 4 10 3548.049000\nCOPY_FROM 30 4 1 2333.686000\nCOPY_FROM 30 4 2 2354.055000\nCOPY_FROM 30 4 3 2329.804000\nCOPY_FROM 30 4 4 2393.154000\nCOPY_FROM 30 4 5 2357.848000\nCOPY_FROM 30 4 6 2351.915000\nCOPY_FROM 30 4 7 2340.428000\nCOPY_FROM 30 4 8 2364.307000\nCOPY_FROM 30 4 9 2353.620000\nCOPY_FROM 30 4 10 2363.992000\nCOPY_TO 30 5 1 4324.843000\nCOPY_TO 30 5 2 4387.595000\nCOPY_TO 30 5 3 4416.761000\nCOPY_TO 30 5 4 4406.291000\nCOPY_TO 30 5 5 4418.657000\nCOPY_TO 30 5 6 4432.811000\nCOPY_TO 30 5 7 4422.989000\nCOPY_TO 30 5 8 4467.277000\nCOPY_TO 30 5 9 4474.720000\nCOPY_TO 30 5 10 4419.907000\nCOPY_FROM 30 5 1 2911.757000\nCOPY_FROM 30 5 2 2921.622000\nCOPY_FROM 30 5 3 2863.662000\nCOPY_FROM 30 5 4 3017.345000\nCOPY_FROM 30 5 5 2904.579000\nCOPY_FROM 30 5 6 2954.328000\nCOPY_FROM 30 5 7 2965.111000\nCOPY_FROM 30 5 8 2962.503000\nCOPY_FROM 30 5 9 2881.468000\nCOPY_FROM 30 5 10 2932.883000\nCOPY_TO 30 6 1 5324.111000\nCOPY_TO 30 6 2 5273.693000\nCOPY_TO 30 6 3 5477.630000\nCOPY_TO 30 6 4 5470.590000\nCOPY_TO 30 6 5 5330.046000\nCOPY_TO 30 6 6 5314.785000\nCOPY_TO 30 6 7 5280.238000\nCOPY_TO 30 6 8 5447.156000\nCOPY_TO 30 6 9 5470.025000\nCOPY_TO 30 6 10 5382.615000\nCOPY_FROM 30 6 1 3519.835000\nCOPY_FROM 30 6 2 3495.999000\nCOPY_FROM 30 6 3 3447.579000\nCOPY_FROM 30 6 4 3503.293000\nCOPY_FROM 30 6 5 3467.442000\nCOPY_FROM 30 6 6 3502.490000\nCOPY_FROM 30 6 7 3539.083000\nCOPY_FROM 30 6 8 3514.108000\nCOPY_FROM 30 6 9 3558.769000\nCOPY_FROM 30 6 10 3557.883000\nCOPY_TO 30 7 1 6270.765000\nCOPY_TO 30 7 2 6250.630000\nCOPY_TO 30 7 3 6291.501000\nCOPY_TO 30 7 4 6277.021000\nCOPY_TO 30 7 5 6197.067000\nCOPY_TO 30 7 6 6204.168000\nCOPY_TO 30 7 7 6326.866000\nCOPY_TO 30 7 8 6219.435000\nCOPY_TO 30 7 9 6229.165000\nCOPY_TO 30 7 10 6182.055000\nCOPY_FROM 30 7 1 4064.754000\nCOPY_FROM 30 7 2 4161.991000\nCOPY_FROM 30 7 3 4099.098000\nCOPY_FROM 30 7 4 4098.243000\nCOPY_FROM 30 7 5 4094.954000\nCOPY_FROM 30 7 6 
4113.331000\nCOPY_FROM 30 7 7 4162.527000\nCOPY_FROM 30 7 8 4117.655000\nCOPY_FROM 30 7 9 4038.147000\nCOPY_FROM 30 7 10 4247.750000\nCOPY_TO 30 8 1 7036.335000\nCOPY_TO 30 8 2 7161.077000\nCOPY_TO 30 8 3 7198.475000\nCOPY_TO 30 8 4 7057.568000\nCOPY_TO 30 8 5 7068.777000\nCOPY_TO 30 8 6 7145.575000\nCOPY_TO 30 8 7 7164.393000\nCOPY_TO 30 8 8 7146.893000\nCOPY_TO 30 8 9 7263.004000\nCOPY_TO 30 8 10 7258.462000\nCOPY_FROM 30 8 1 4709.346000\nCOPY_FROM 30 8 2 4727.176000\nCOPY_FROM 30 8 3 4643.916000\nCOPY_FROM 30 8 4 4646.425000\nCOPY_FROM 30 8 5 4714.948000\nCOPY_FROM 30 8 6 4669.370000\nCOPY_FROM 30 8 7 4649.179000\nCOPY_FROM 30 8 8 4604.831000\nCOPY_FROM 30 8 9 4657.557000\nCOPY_FROM 30 8 10 4672.892000\nCOPY_TO 30 9 1 7908.138000\nCOPY_TO 30 9 2 8046.895000\nCOPY_TO 30 9 3 8140.333000\nCOPY_TO 30 9 4 8103.733000\nCOPY_TO 30 9 5 8007.650000\nCOPY_TO 30 9 6 7955.601000\nCOPY_TO 30 9 7 8044.544000\nCOPY_TO 30 9 8 8086.140000\nCOPY_TO 30 9 9 8062.369000\nCOPY_TO 30 9 10 7827.011000\nCOPY_FROM 30 9 1 5204.533000\nCOPY_FROM 30 9 2 5201.463000\nCOPY_FROM 30 9 3 5234.632000\nCOPY_FROM 30 9 4 5236.902000\nCOPY_FROM 30 9 5 5269.275000\nCOPY_FROM 30 9 6 5263.596000\nCOPY_FROM 30 9 7 5192.508000\nCOPY_FROM 30 9 8 5234.723000\nCOPY_FROM 30 9 9 5188.671000\nCOPY_FROM 30 9 10 5160.328000\nCOPY_TO 30 10 1 8859.946000\nCOPY_TO 30 10 2 8904.060000\nCOPY_TO 30 10 3 9075.677000\nCOPY_TO 30 10 4 8911.511000\nCOPY_TO 30 10 5 8923.505000\nCOPY_TO 30 10 6 8955.312000\nCOPY_TO 30 10 7 9014.532000\nCOPY_TO 30 10 8 9100.991000\nCOPY_TO 30 10 9 8978.536000\nCOPY_TO 30 10 10 8974.878000\nCOPY_FROM 30 10 1 5820.402000\nCOPY_FROM 30 10 2 5800.793000\nCOPY_FROM 30 10 3 5817.289000\nCOPY_FROM 30 10 4 5766.636000\nCOPY_FROM 30 10 5 5947.599000\nCOPY_FROM 30 10 6 5756.134000\nCOPY_FROM 30 10 7 5764.180000\nCOPY_FROM 30 10 8 5796.569000\nCOPY_FROM 30 10 9 5796.612000\nCOPY_FROM 30 10 10 5849.049000\n\nCOPY_TO 5 1 1 226.623000\nCOPY_TO 5 1 2 227.444000\nCOPY_TO 5 1 3 214.579000\nCOPY_TO 5 1 4 218.737000\nCOPY_TO 5 1 5 218.708000\nCOPY_TO 5 1 6 221.763000\nCOPY_TO 5 1 7 212.154000\nCOPY_TO 5 1 8 219.050000\nCOPY_TO 5 1 9 225.217000\nCOPY_TO 5 1 10 219.609000\nCOPY_FROM 5 1 1 190.806000\nCOPY_FROM 5 1 2 192.065000\nCOPY_FROM 5 1 3 192.423000\nCOPY_FROM 5 1 4 200.560000\nCOPY_FROM 5 1 5 190.027000\nCOPY_FROM 5 1 6 190.954000\nCOPY_FROM 5 1 7 190.775000\nCOPY_FROM 5 1 8 187.590000\nCOPY_FROM 5 1 9 194.545000\nCOPY_FROM 5 1 10 190.831000\nCOPY_TO 5 2 1 419.239000\nCOPY_TO 5 2 2 428.527000\nCOPY_TO 5 2 3 424.408000\nCOPY_TO 5 2 4 428.882000\nCOPY_TO 5 2 5 419.476000\nCOPY_TO 5 2 6 422.894000\nCOPY_TO 5 2 7 418.744000\nCOPY_TO 5 2 8 425.265000\nCOPY_TO 5 2 9 428.402000\nCOPY_TO 5 2 10 425.687000\nCOPY_FROM 5 2 1 368.581000\nCOPY_FROM 5 2 2 368.334000\nCOPY_FROM 5 2 3 379.807000\nCOPY_FROM 5 2 4 367.980000\nCOPY_FROM 5 2 5 364.015000\nCOPY_FROM 5 2 6 366.088000\nCOPY_FROM 5 2 7 360.102000\nCOPY_FROM 5 2 8 366.403000\nCOPY_FROM 5 2 9 363.637000\nCOPY_FROM 5 2 10 362.853000\nCOPY_TO 5 3 1 636.678000\nCOPY_TO 5 3 2 635.709000\nCOPY_TO 5 3 3 631.716000\nCOPY_TO 5 3 4 608.849000\nCOPY_TO 5 3 5 625.253000\nCOPY_TO 5 3 6 630.432000\nCOPY_TO 5 3 7 636.818000\nCOPY_TO 5 3 8 640.687000\nCOPY_TO 5 3 9 651.740000\nCOPY_TO 5 3 10 622.738000\nCOPY_FROM 5 3 1 541.713000\nCOPY_FROM 5 3 2 532.056000\nCOPY_FROM 5 3 3 539.630000\nCOPY_FROM 5 3 4 549.629000\nCOPY_FROM 5 3 5 548.109000\nCOPY_FROM 5 3 6 533.228000\nCOPY_FROM 5 3 7 532.981000\nCOPY_FROM 5 3 8 527.524000\nCOPY_FROM 5 3 9 566.548000\nCOPY_FROM 5 3 10 531.553000\nCOPY_TO 5 4 1 
823.149000\nCOPY_TO 5 4 2 842.084000\nCOPY_TO 5 4 3 841.990000\nCOPY_TO 5 4 4 834.844000\nCOPY_TO 5 4 5 847.631000\nCOPY_TO 5 4 6 852.530000\nCOPY_TO 5 4 7 822.453000\nCOPY_TO 5 4 8 851.579000\nCOPY_TO 5 4 9 841.356000\nCOPY_TO 5 4 10 840.655000\nCOPY_FROM 5 4 1 715.727000\nCOPY_FROM 5 4 2 700.656000\nCOPY_FROM 5 4 3 714.135000\nCOPY_FROM 5 4 4 711.922000\nCOPY_FROM 5 4 5 703.007000\nCOPY_FROM 5 4 6 700.765000\nCOPY_FROM 5 4 7 705.071000\nCOPY_FROM 5 4 8 716.543000\nCOPY_FROM 5 4 9 702.448000\nCOPY_FROM 5 4 10 716.714000\nCOPY_TO 5 5 1 1044.045000\nCOPY_TO 5 5 2 1039.683000\nCOPY_TO 5 5 3 1010.508000\nCOPY_TO 5 5 4 1032.182000\nCOPY_TO 5 5 5 1056.995000\nCOPY_TO 5 5 6 1028.120000\nCOPY_TO 5 5 7 1035.610000\nCOPY_TO 5 5 8 1047.220000\nCOPY_TO 5 5 9 1056.572000\nCOPY_TO 5 5 10 1052.532000\nCOPY_FROM 5 5 1 880.451000\nCOPY_FROM 5 5 2 892.421000\nCOPY_FROM 5 5 3 926.924000\nCOPY_FROM 5 5 4 891.630000\nCOPY_FROM 5 5 5 931.319000\nCOPY_FROM 5 5 6 900.775000\nCOPY_FROM 5 5 7 894.377000\nCOPY_FROM 5 5 8 892.984000\nCOPY_FROM 5 5 9 882.452000\nCOPY_FROM 5 5 10 941.360000\nCOPY_TO 5 6 1 1258.759000\nCOPY_TO 5 6 2 1259.336000\nCOPY_TO 5 6 3 1268.761000\nCOPY_TO 5 6 4 1234.730000\nCOPY_TO 5 6 5 1272.013000\nCOPY_TO 5 6 6 1233.970000\nCOPY_TO 5 6 7 1281.098000\nCOPY_TO 5 6 8 1267.348000\nCOPY_TO 5 6 9 1259.674000\nCOPY_TO 5 6 10 1266.219000\nCOPY_FROM 5 6 1 1052.524000\nCOPY_FROM 5 6 2 1067.610000\nCOPY_FROM 5 6 3 1057.225000\nCOPY_FROM 5 6 4 1053.887000\nCOPY_FROM 5 6 5 1066.923000\nCOPY_FROM 5 6 6 1066.930000\nCOPY_FROM 5 6 7 1064.119000\nCOPY_FROM 5 6 8 1103.817000\nCOPY_FROM 5 6 9 1040.265000\nCOPY_FROM 5 6 10 1049.068000\nCOPY_TO 5 7 1 1492.215000\nCOPY_TO 5 7 2 1488.576000\nCOPY_TO 5 7 3 1467.710000\nCOPY_TO 5 7 4 1478.339000\nCOPY_TO 5 7 5 1501.272000\nCOPY_TO 5 7 6 1483.944000\nCOPY_TO 5 7 7 1479.922000\nCOPY_TO 5 7 8 1476.075000\nCOPY_TO 5 7 9 1470.403000\nCOPY_TO 5 7 10 1504.996000\nCOPY_FROM 5 7 1 1231.400000\nCOPY_FROM 5 7 2 1207.745000\nCOPY_FROM 5 7 3 1238.918000\nCOPY_FROM 5 7 4 1228.868000\nCOPY_FROM 5 7 5 1239.988000\nCOPY_FROM 5 7 6 1230.274000\nCOPY_FROM 5 7 7 1236.876000\nCOPY_FROM 5 7 8 1227.257000\nCOPY_FROM 5 7 9 1230.378000\nCOPY_FROM 5 7 10 1286.864000\nCOPY_TO 5 8 1 1739.946000\nCOPY_TO 5 8 2 1699.952000\nCOPY_TO 5 8 3 1679.076000\nCOPY_TO 5 8 4 1686.910000\nCOPY_TO 5 8 5 1688.083000\nCOPY_TO 5 8 6 1694.051000\nCOPY_TO 5 8 7 1678.831000\nCOPY_TO 5 8 8 1659.907000\nCOPY_TO 5 8 9 1641.518000\nCOPY_TO 5 8 10 1679.057000\nCOPY_FROM 5 8 1 1437.100000\nCOPY_FROM 5 8 2 1424.070000\nCOPY_FROM 5 8 3 1473.867000\nCOPY_FROM 5 8 4 1405.431000\nCOPY_FROM 5 8 5 1406.246000\nCOPY_FROM 5 8 6 1419.742000\nCOPY_FROM 5 8 7 1387.097000\nCOPY_FROM 5 8 8 1396.140000\nCOPY_FROM 5 8 9 1420.520000\nCOPY_FROM 5 8 10 1412.001000\nCOPY_TO 5 9 1 1858.925000\nCOPY_TO 5 9 2 1850.901000\nCOPY_TO 5 9 3 1873.408000\nCOPY_TO 5 9 4 1905.935000\nCOPY_TO 5 9 5 1910.295000\nCOPY_TO 5 9 6 1889.258000\nCOPY_TO 5 9 7 1865.899000\nCOPY_TO 5 9 8 1874.485000\nCOPY_TO 5 9 9 1906.459000\nCOPY_TO 5 9 10 1844.316000\nCOPY_FROM 5 9 1 1584.546000\nCOPY_FROM 5 9 2 1586.177000\nCOPY_FROM 5 9 3 1578.157000\nCOPY_FROM 5 9 4 1553.313000\nCOPY_FROM 5 9 5 1547.309000\nCOPY_FROM 5 9 6 1588.149000\nCOPY_FROM 5 9 7 1569.061000\nCOPY_FROM 5 9 8 1579.066000\nCOPY_FROM 5 9 9 1570.615000\nCOPY_FROM 5 9 10 1592.860000\nCOPY_TO 5 10 1 2095.472000\nCOPY_TO 5 10 2 2077.086000\nCOPY_TO 5 10 3 2082.011000\nCOPY_TO 5 10 4 2118.808000\nCOPY_TO 5 10 5 2122.738000\nCOPY_TO 5 10 6 2116.635000\nCOPY_TO 5 10 7 2065.169000\nCOPY_TO 5 10 8 
2071.043000\nCOPY_TO 5 10 9 2104.322000\nCOPY_TO 5 10 10 2094.018000\nCOPY_FROM 5 10 1 1746.916000\nCOPY_FROM 5 10 2 1748.130000\nCOPY_FROM 5 10 3 1742.154000\nCOPY_FROM 5 10 4 1822.064000\nCOPY_FROM 5 10 5 1739.668000\nCOPY_FROM 5 10 6 1736.219000\nCOPY_FROM 5 10 7 1828.462000\nCOPY_FROM 5 10 8 1741.385000\nCOPY_FROM 5 10 9 1749.339000\nCOPY_FROM 5 10 10 1749.511000\nCOPY_TO 10 1 1 348.214000\nCOPY_TO 10 1 2 344.409000\nCOPY_TO 10 1 3 348.118000\nCOPY_TO 10 1 4 351.250000\nCOPY_TO 10 1 5 345.164000\nCOPY_TO 10 1 6 348.686000\nCOPY_TO 10 1 7 347.033000\nCOPY_TO 10 1 8 356.881000\nCOPY_TO 10 1 9 363.224000\nCOPY_TO 10 1 10 344.265000\nCOPY_FROM 10 1 1 309.644000\nCOPY_FROM 10 1 2 315.303000\nCOPY_FROM 10 1 3 309.133000\nCOPY_FROM 10 1 4 308.645000\nCOPY_FROM 10 1 5 307.657000\nCOPY_FROM 10 1 6 308.826000\nCOPY_FROM 10 1 7 306.510000\nCOPY_FROM 10 1 8 306.906000\nCOPY_FROM 10 1 9 312.354000\nCOPY_FROM 10 1 10 309.572000\nCOPY_TO 10 2 1 680.964000\nCOPY_TO 10 2 2 686.492000\nCOPY_TO 10 2 3 673.836000\nCOPY_TO 10 2 4 693.009000\nCOPY_TO 10 2 5 679.089000\nCOPY_TO 10 2 6 675.584000\nCOPY_TO 10 2 7 682.799000\nCOPY_TO 10 2 8 692.569000\nCOPY_TO 10 2 9 671.136000\nCOPY_TO 10 2 10 658.264000\nCOPY_FROM 10 2 1 596.359000\nCOPY_FROM 10 2 2 592.091000\nCOPY_FROM 10 2 3 593.862000\nCOPY_FROM 10 2 4 595.358000\nCOPY_FROM 10 2 5 595.974000\nCOPY_FROM 10 2 6 620.312000\nCOPY_FROM 10 2 7 596.066000\nCOPY_FROM 10 2 8 600.032000\nCOPY_FROM 10 2 9 600.454000\nCOPY_FROM 10 2 10 596.003000\nCOPY_TO 10 3 1 1019.610000\nCOPY_TO 10 3 2 1007.821000\nCOPY_TO 10 3 3 1014.551000\nCOPY_TO 10 3 4 1004.209000\nCOPY_TO 10 3 5 1037.550000\nCOPY_TO 10 3 6 1006.828000\nCOPY_TO 10 3 7 1018.162000\nCOPY_TO 10 3 8 992.985000\nCOPY_TO 10 3 9 1025.867000\nCOPY_TO 10 3 10 1028.286000\nCOPY_FROM 10 3 1 892.287000\nCOPY_FROM 10 3 2 887.084000\nCOPY_FROM 10 3 3 892.174000\nCOPY_FROM 10 3 4 899.172000\nCOPY_FROM 10 3 5 880.837000\nCOPY_FROM 10 3 6 885.155000\nCOPY_FROM 10 3 7 893.880000\nCOPY_FROM 10 3 8 870.693000\nCOPY_FROM 10 3 9 882.712000\nCOPY_FROM 10 3 10 878.129000\nCOPY_TO 10 4 1 1358.053000\nCOPY_TO 10 4 2 1360.787000\nCOPY_TO 10 4 3 1322.403000\nCOPY_TO 10 4 4 1388.729000\nCOPY_TO 10 4 5 1371.818000\nCOPY_TO 10 4 6 1349.647000\nCOPY_TO 10 4 7 1373.746000\nCOPY_TO 10 4 8 1426.870000\nCOPY_TO 10 4 9 1347.131000\nCOPY_TO 10 4 10 1336.103000\nCOPY_FROM 10 4 1 1187.230000\nCOPY_FROM 10 4 2 1175.994000\nCOPY_FROM 10 4 3 1186.190000\nCOPY_FROM 10 4 4 1186.168000\nCOPY_FROM 10 4 5 1182.343000\nCOPY_FROM 10 4 6 1179.842000\nCOPY_FROM 10 4 7 1175.598000\nCOPY_FROM 10 4 8 1189.885000\nCOPY_FROM 10 4 9 1166.391000\nCOPY_FROM 10 4 10 1175.445000\nCOPY_TO 10 5 1 1715.079000\nCOPY_TO 10 5 2 1685.962000\nCOPY_TO 10 5 3 1670.427000\nCOPY_TO 10 5 4 1662.272000\nCOPY_TO 10 5 5 1683.975000\nCOPY_TO 10 5 6 1662.505000\nCOPY_TO 10 5 7 1678.846000\nCOPY_TO 10 5 8 1659.495000\nCOPY_TO 10 5 9 1640.480000\nCOPY_TO 10 5 10 1666.186000\nCOPY_FROM 10 5 1 1469.628000\nCOPY_FROM 10 5 2 1482.367000\nCOPY_FROM 10 5 3 1467.941000\nCOPY_FROM 10 5 4 1443.202000\nCOPY_FROM 10 5 5 1438.004000\nCOPY_FROM 10 5 6 1435.447000\nCOPY_FROM 10 5 7 1440.746000\nCOPY_FROM 10 5 8 1440.996000\nCOPY_FROM 10 5 9 1516.147000\nCOPY_FROM 10 5 10 1523.081000\nCOPY_TO 10 6 1 2008.597000\nCOPY_TO 10 6 2 2002.489000\nCOPY_TO 10 6 3 2054.108000\nCOPY_TO 10 6 4 2020.325000\nCOPY_TO 10 6 5 2074.237000\nCOPY_TO 10 6 6 2029.803000\nCOPY_TO 10 6 7 2004.565000\nCOPY_TO 10 6 8 2027.488000\nCOPY_TO 10 6 9 2018.207000\nCOPY_TO 10 6 10 2043.407000\nCOPY_FROM 10 6 1 1732.059000\nCOPY_FROM 10 
6 2 1698.740000\nCOPY_FROM 10 6 3 1744.397000\nCOPY_FROM 10 6 4 1752.396000\nCOPY_FROM 10 6 5 1738.473000\nCOPY_FROM 10 6 6 1750.763000\nCOPY_FROM 10 6 7 1760.260000\nCOPY_FROM 10 6 8 1734.506000\nCOPY_FROM 10 6 9 1752.343000\nCOPY_FROM 10 6 10 1800.136000\nCOPY_TO 10 7 1 2373.267000\nCOPY_TO 10 7 2 2359.455000\nCOPY_TO 10 7 3 2376.194000\nCOPY_TO 10 7 4 2369.713000\nCOPY_TO 10 7 5 2380.525000\nCOPY_TO 10 7 6 2355.680000\nCOPY_TO 10 7 7 2365.292000\nCOPY_TO 10 7 8 2387.594000\nCOPY_TO 10 7 9 2361.091000\nCOPY_TO 10 7 10 2399.029000\nCOPY_FROM 10 7 1 2040.625000\nCOPY_FROM 10 7 2 2031.799000\nCOPY_FROM 10 7 3 2054.883000\nCOPY_FROM 10 7 4 2020.964000\nCOPY_FROM 10 7 5 2085.711000\nCOPY_FROM 10 7 6 2056.172000\nCOPY_FROM 10 7 7 2053.141000\nCOPY_FROM 10 7 8 2017.080000\nCOPY_FROM 10 7 9 2036.249000\nCOPY_FROM 10 7 10 2055.574000\nCOPY_TO 10 8 1 2708.496000\nCOPY_TO 10 8 2 2670.277000\nCOPY_TO 10 8 3 2739.491000\nCOPY_TO 10 8 4 2670.203000\nCOPY_TO 10 8 5 2686.905000\nCOPY_TO 10 8 6 2715.423000\nCOPY_TO 10 8 7 2661.954000\nCOPY_TO 10 8 8 2679.533000\nCOPY_TO 10 8 9 2700.084000\nCOPY_TO 10 8 10 2692.732000\nCOPY_FROM 10 8 1 2332.678000\nCOPY_FROM 10 8 2 2327.148000\nCOPY_FROM 10 8 3 2365.272000\nCOPY_FROM 10 8 4 2323.775000\nCOPY_FROM 10 8 5 2327.727000\nCOPY_FROM 10 8 6 2328.340000\nCOPY_FROM 10 8 7 2351.656000\nCOPY_FROM 10 8 8 2359.587000\nCOPY_FROM 10 8 9 2315.807000\nCOPY_FROM 10 8 10 2323.951000\nCOPY_TO 10 9 1 3048.751000\nCOPY_TO 10 9 2 3047.431000\nCOPY_TO 10 9 3 3033.034000\nCOPY_TO 10 9 4 3024.685000\nCOPY_TO 10 9 5 3033.612000\nCOPY_TO 10 9 6 3071.925000\nCOPY_TO 10 9 7 3066.067000\nCOPY_TO 10 9 8 3061.065000\nCOPY_TO 10 9 9 3033.557000\nCOPY_TO 10 9 10 3139.233000\nCOPY_FROM 10 9 1 2637.134000\nCOPY_FROM 10 9 2 2648.296000\nCOPY_FROM 10 9 3 2595.698000\nCOPY_FROM 10 9 4 2684.115000\nCOPY_FROM 10 9 5 2640.266000\nCOPY_FROM 10 9 6 2647.282000\nCOPY_FROM 10 9 7 2626.573000\nCOPY_FROM 10 9 8 2597.198000\nCOPY_FROM 10 9 9 2590.305000\nCOPY_FROM 10 9 10 2607.834000\nCOPY_TO 10 10 1 3399.538000\nCOPY_TO 10 10 2 3395.112000\nCOPY_TO 10 10 3 3379.849000\nCOPY_TO 10 10 4 3447.512000\nCOPY_TO 10 10 5 3395.209000\nCOPY_TO 10 10 6 3372.455000\nCOPY_TO 10 10 7 3426.450000\nCOPY_TO 10 10 8 3406.147000\nCOPY_TO 10 10 9 3401.163000\nCOPY_TO 10 10 10 3398.863000\nCOPY_FROM 10 10 1 2918.524000\nCOPY_FROM 10 10 2 2946.519000\nCOPY_FROM 10 10 3 2897.459000\nCOPY_FROM 10 10 4 2949.553000\nCOPY_FROM 10 10 5 2924.340000\nCOPY_FROM 10 10 6 2880.430000\nCOPY_FROM 10 10 7 2943.481000\nCOPY_FROM 10 10 8 2924.866000\nCOPY_FROM 10 10 9 2882.415000\nCOPY_FROM 10 10 10 2939.448000\nCOPY_TO 15 1 1 481.490000\nCOPY_TO 15 1 2 480.802000\nCOPY_TO 15 1 3 505.153000\nCOPY_TO 15 1 4 480.755000\nCOPY_TO 15 1 5 487.445000\nCOPY_TO 15 1 6 478.630000\nCOPY_TO 15 1 7 471.924000\nCOPY_TO 15 1 8 484.494000\nCOPY_TO 15 1 9 475.958000\nCOPY_TO 15 1 10 476.259000\nCOPY_FROM 15 1 1 404.762000\nCOPY_FROM 15 1 2 411.539000\nCOPY_FROM 15 1 3 396.594000\nCOPY_FROM 15 1 4 402.033000\nCOPY_FROM 15 1 5 399.084000\nCOPY_FROM 15 1 6 402.425000\nCOPY_FROM 15 1 7 399.751000\nCOPY_FROM 15 1 8 396.732000\nCOPY_FROM 15 1 9 408.485000\nCOPY_FROM 15 1 10 401.768000\nCOPY_TO 15 2 1 948.346000\nCOPY_TO 15 2 2 960.359000\nCOPY_TO 15 2 3 945.425000\nCOPY_TO 15 2 4 942.055000\nCOPY_TO 15 2 5 946.342000\nCOPY_TO 15 2 6 974.876000\nCOPY_TO 15 2 7 935.041000\nCOPY_TO 15 2 8 962.795000\nCOPY_TO 15 2 9 934.524000\nCOPY_TO 15 2 10 944.476000\nCOPY_FROM 15 2 1 794.640000\nCOPY_FROM 15 2 2 764.601000\nCOPY_FROM 15 2 3 785.607000\nCOPY_FROM 15 2 4 
768.691000\nCOPY_FROM 15 2 5 789.261000\nCOPY_FROM 15 2 6 766.484000\nCOPY_FROM 15 2 7 762.206000\nCOPY_FROM 15 2 8 777.008000\nCOPY_FROM 15 2 9 777.736000\nCOPY_FROM 15 2 10 768.562000\nCOPY_TO 15 3 1 1471.715000\nCOPY_TO 15 3 2 1425.784000\nCOPY_TO 15 3 3 1430.887000\nCOPY_TO 15 3 4 1411.350000\nCOPY_TO 15 3 5 1399.500000\nCOPY_TO 15 3 6 1414.848000\nCOPY_TO 15 3 7 1471.325000\nCOPY_TO 15 3 8 1424.225000\nCOPY_TO 15 3 9 1438.927000\nCOPY_TO 15 3 10 1383.432000\nCOPY_FROM 15 3 1 1157.842000\nCOPY_FROM 15 3 2 1148.168000\nCOPY_FROM 15 3 3 1170.290000\nCOPY_FROM 15 3 4 1163.281000\nCOPY_FROM 15 3 5 1164.792000\nCOPY_FROM 15 3 6 1170.901000\nCOPY_FROM 15 3 7 1167.411000\nCOPY_FROM 15 3 8 1136.925000\nCOPY_FROM 15 3 9 1163.268000\nCOPY_FROM 15 3 10 1167.786000\nCOPY_TO 15 4 1 1879.456000\nCOPY_TO 15 4 2 1851.491000\nCOPY_TO 15 4 3 1834.399000\nCOPY_TO 15 4 4 1909.106000\nCOPY_TO 15 4 5 1939.416000\nCOPY_TO 15 4 6 1856.175000\nCOPY_TO 15 4 7 1936.540000\nCOPY_TO 15 4 8 1872.650000\nCOPY_TO 15 4 9 1846.497000\nCOPY_TO 15 4 10 1851.336000\nCOPY_FROM 15 4 1 1527.390000\nCOPY_FROM 15 4 2 1559.869000\nCOPY_FROM 15 4 3 1549.983000\nCOPY_FROM 15 4 4 1519.352000\nCOPY_FROM 15 4 5 1534.720000\nCOPY_FROM 15 4 6 1531.672000\nCOPY_FROM 15 4 7 1514.365000\nCOPY_FROM 15 4 8 1524.385000\nCOPY_FROM 15 4 9 1519.783000\nCOPY_FROM 15 4 10 1518.088000\nCOPY_TO 15 5 1 2315.126000\nCOPY_TO 15 5 2 2403.590000\nCOPY_TO 15 5 3 2353.186000\nCOPY_TO 15 5 4 2362.140000\nCOPY_TO 15 5 5 2337.372000\nCOPY_TO 15 5 6 2369.436000\nCOPY_TO 15 5 7 2344.194000\nCOPY_TO 15 5 8 2345.627000\nCOPY_TO 15 5 9 2393.136000\nCOPY_TO 15 5 10 2390.355000\nCOPY_FROM 15 5 1 1904.628000\nCOPY_FROM 15 5 2 1910.340000\nCOPY_FROM 15 5 3 1918.427000\nCOPY_FROM 15 5 4 1912.737000\nCOPY_FROM 15 5 5 1955.806000\nCOPY_FROM 15 5 6 1892.326000\nCOPY_FROM 15 5 7 1915.079000\nCOPY_FROM 15 5 8 1920.116000\nCOPY_FROM 15 5 9 1914.245000\nCOPY_FROM 15 5 10 1887.371000\nCOPY_TO 15 6 1 2825.865000\nCOPY_TO 15 6 2 2834.549000\nCOPY_TO 15 6 3 2838.698000\nCOPY_TO 15 6 4 2769.660000\nCOPY_TO 15 6 5 2771.549000\nCOPY_TO 15 6 6 2824.433000\nCOPY_TO 15 6 7 2850.494000\nCOPY_TO 15 6 8 2873.406000\nCOPY_TO 15 6 9 2819.338000\nCOPY_TO 15 6 10 2800.095000\nCOPY_FROM 15 6 1 2312.919000\nCOPY_FROM 15 6 2 2280.861000\nCOPY_FROM 15 6 3 2276.382000\nCOPY_FROM 15 6 4 2328.440000\nCOPY_FROM 15 6 5 2306.146000\nCOPY_FROM 15 6 6 2290.642000\nCOPY_FROM 15 6 7 2318.425000\nCOPY_FROM 15 6 8 2319.431000\nCOPY_FROM 15 6 9 2271.906000\nCOPY_FROM 15 6 10 2307.933000\nCOPY_TO 15 7 1 3276.071000\nCOPY_TO 15 7 2 3302.277000\nCOPY_TO 15 7 3 3227.547000\nCOPY_TO 15 7 4 3205.014000\nCOPY_TO 15 7 5 3216.083000\nCOPY_TO 15 7 6 3288.328000\nCOPY_TO 15 7 7 3273.990000\nCOPY_TO 15 7 8 3269.459000\nCOPY_TO 15 7 9 3276.029000\nCOPY_TO 15 7 10 3257.944000\nCOPY_FROM 15 7 1 2650.110000\nCOPY_FROM 15 7 2 2687.294000\nCOPY_FROM 15 7 3 2679.115000\nCOPY_FROM 15 7 4 2642.092000\nCOPY_FROM 15 7 5 2754.050000\nCOPY_FROM 15 7 6 2670.190000\nCOPY_FROM 15 7 7 2659.509000\nCOPY_FROM 15 7 8 2680.944000\nCOPY_FROM 15 7 9 2702.110000\nCOPY_FROM 15 7 10 2714.737000\nCOPY_TO 15 8 1 3743.204000\nCOPY_TO 15 8 2 3728.396000\nCOPY_TO 15 8 3 3694.741000\nCOPY_TO 15 8 4 3792.445000\nCOPY_TO 15 8 5 3774.482000\nCOPY_TO 15 8 6 3741.767000\nCOPY_TO 15 8 7 3763.394000\nCOPY_TO 15 8 8 3760.802000\nCOPY_TO 15 8 9 3778.522000\nCOPY_TO 15 8 10 3719.006000\nCOPY_FROM 15 8 1 3036.831000\nCOPY_FROM 15 8 2 3053.770000\nCOPY_FROM 15 8 3 3084.446000\nCOPY_FROM 15 8 4 3083.344000\nCOPY_FROM 15 8 5 3063.590000\nCOPY_FROM 15 8 6 
2994.047000\nCOPY_FROM 15 8 7 2997.051000\nCOPY_FROM 15 8 8 3028.378000\nCOPY_FROM 15 8 9 2985.766000\nCOPY_FROM 15 8 10 3048.673000\nCOPY_TO 15 9 1 4209.894000\nCOPY_TO 15 9 2 4166.767000\nCOPY_TO 15 9 3 4381.260000\nCOPY_TO 15 9 4 4166.933000\nCOPY_TO 15 9 5 4208.765000\nCOPY_TO 15 9 6 4221.622000\nCOPY_TO 15 9 7 4243.140000\nCOPY_TO 15 9 8 4221.371000\nCOPY_TO 15 9 9 4206.701000\nCOPY_TO 15 9 10 4173.130000\nCOPY_FROM 15 9 1 3509.903000\nCOPY_FROM 15 9 2 3429.411000\nCOPY_FROM 15 9 3 3476.601000\nCOPY_FROM 15 9 4 3532.142000\nCOPY_FROM 15 9 5 3482.214000\nCOPY_FROM 15 9 6 3481.428000\nCOPY_FROM 15 9 7 3512.648000\nCOPY_FROM 15 9 8 3429.037000\nCOPY_FROM 15 9 9 3481.796000\nCOPY_FROM 15 9 10 3405.251000\nCOPY_TO 15 10 1 4783.435000\nCOPY_TO 15 10 2 4644.036000\nCOPY_TO 15 10 3 4681.145000\nCOPY_TO 15 10 4 4649.832000\nCOPY_TO 15 10 5 4695.900000\nCOPY_TO 15 10 6 4741.715000\nCOPY_TO 15 10 7 4645.073000\nCOPY_TO 15 10 8 4685.227000\nCOPY_TO 15 10 9 4667.401000\nCOPY_TO 15 10 10 4674.533000\nCOPY_FROM 15 10 1 3791.541000\nCOPY_FROM 15 10 2 3769.122000\nCOPY_FROM 15 10 3 3875.967000\nCOPY_FROM 15 10 4 3898.442000\nCOPY_FROM 15 10 5 3757.463000\nCOPY_FROM 15 10 6 3803.542000\nCOPY_FROM 15 10 7 3898.392000\nCOPY_FROM 15 10 8 3853.286000\nCOPY_FROM 15 10 9 3796.088000\nCOPY_FROM 15 10 10 3775.505000\nCOPY_TO 20 1 1 588.371000\nCOPY_TO 20 1 2 588.534000\nCOPY_TO 20 1 3 604.939000\nCOPY_TO 20 1 4 609.174000\nCOPY_TO 20 1 5 599.870000\nCOPY_TO 20 1 6 606.180000\nCOPY_TO 20 1 7 593.885000\nCOPY_TO 20 1 8 619.903000\nCOPY_TO 20 1 9 613.359000\nCOPY_TO 20 1 10 579.570000\nCOPY_FROM 20 1 1 512.783000\nCOPY_FROM 20 1 2 520.086000\nCOPY_FROM 20 1 3 526.413000\nCOPY_FROM 20 1 4 503.333000\nCOPY_FROM 20 1 5 499.622000\nCOPY_FROM 20 1 6 504.114000\nCOPY_FROM 20 1 7 522.611000\nCOPY_FROM 20 1 8 504.078000\nCOPY_FROM 20 1 9 519.839000\nCOPY_FROM 20 1 10 521.364000\nCOPY_TO 20 2 1 1192.504000\nCOPY_TO 20 2 2 1204.486000\nCOPY_TO 20 2 3 1184.451000\nCOPY_TO 20 2 4 1224.046000\nCOPY_TO 20 2 5 1168.863000\nCOPY_TO 20 2 6 1185.104000\nCOPY_TO 20 2 7 1213.106000\nCOPY_TO 20 2 8 1219.240000\nCOPY_TO 20 2 9 1239.428000\nCOPY_TO 20 2 10 1202.900000\nCOPY_FROM 20 2 1 970.190000\nCOPY_FROM 20 2 2 968.745000\nCOPY_FROM 20 2 3 968.316000\nCOPY_FROM 20 2 4 963.391000\nCOPY_FROM 20 2 5 977.050000\nCOPY_FROM 20 2 6 977.689000\nCOPY_FROM 20 2 7 986.514000\nCOPY_FROM 20 2 8 996.876000\nCOPY_FROM 20 2 9 988.527000\nCOPY_FROM 20 2 10 973.651000\nCOPY_TO 20 3 1 1830.128000\nCOPY_TO 20 3 2 1783.896000\nCOPY_TO 20 3 3 1824.977000\nCOPY_TO 20 3 4 1808.527000\nCOPY_TO 20 3 5 1831.361000\nCOPY_TO 20 3 6 1793.910000\nCOPY_TO 20 3 7 1790.156000\nCOPY_TO 20 3 8 1901.378000\nCOPY_TO 20 3 9 1808.887000\nCOPY_TO 20 3 10 1820.944000\nCOPY_FROM 20 3 1 1463.492000\nCOPY_FROM 20 3 2 1473.649000\nCOPY_FROM 20 3 3 1470.260000\nCOPY_FROM 20 3 4 1460.868000\nCOPY_FROM 20 3 5 1472.357000\nCOPY_FROM 20 3 6 1467.554000\nCOPY_FROM 20 3 7 1447.413000\nCOPY_FROM 20 3 8 1477.952000\nCOPY_FROM 20 3 9 1428.962000\nCOPY_FROM 20 3 10 1488.391000\nCOPY_TO 20 4 1 2425.563000\nCOPY_TO 20 4 2 2389.778000\nCOPY_TO 20 4 3 2423.539000\nCOPY_TO 20 4 4 2402.541000\nCOPY_TO 20 4 5 2435.064000\nCOPY_TO 20 4 6 2375.980000\nCOPY_TO 20 4 7 2405.196000\nCOPY_TO 20 4 8 2516.577000\nCOPY_TO 20 4 9 2420.785000\nCOPY_TO 20 4 10 2396.486000\nCOPY_FROM 20 4 1 1982.045000\nCOPY_FROM 20 4 2 1936.064000\nCOPY_FROM 20 4 3 1949.428000\nCOPY_FROM 20 4 4 1962.934000\nCOPY_FROM 20 4 5 1970.196000\nCOPY_FROM 20 4 6 2011.610000\nCOPY_FROM 20 4 7 1990.147000\nCOPY_FROM 20 4 8 
1932.480000\nCOPY_FROM 20 4 9 1988.822000\nCOPY_FROM 20 4 10 1971.847000\nCOPY_TO 20 5 1 2981.284000\nCOPY_TO 20 5 2 2976.269000\nCOPY_TO 20 5 3 3098.910000\nCOPY_TO 20 5 4 2959.358000\nCOPY_TO 20 5 5 3077.661000\nCOPY_TO 20 5 6 2964.573000\nCOPY_TO 20 5 7 3000.287000\nCOPY_TO 20 5 8 3028.061000\nCOPY_TO 20 5 9 2981.529000\nCOPY_TO 20 5 10 2996.331000\nCOPY_FROM 20 5 1 2432.749000\nCOPY_FROM 20 5 2 2431.906000\nCOPY_FROM 20 5 3 2439.713000\nCOPY_FROM 20 5 4 2412.802000\nCOPY_FROM 20 5 5 2473.731000\nCOPY_FROM 20 5 6 2481.380000\nCOPY_FROM 20 5 7 2434.936000\nCOPY_FROM 20 5 8 2435.429000\nCOPY_FROM 20 5 9 2424.148000\nCOPY_FROM 20 5 10 2435.032000\nCOPY_TO 20 6 1 3515.496000\nCOPY_TO 20 6 2 3600.580000\nCOPY_TO 20 6 3 3661.207000\nCOPY_TO 20 6 4 3560.876000\nCOPY_TO 20 6 5 3560.392000\nCOPY_TO 20 6 6 3610.652000\nCOPY_TO 20 6 7 3592.674000\nCOPY_TO 20 6 8 3572.048000\nCOPY_TO 20 6 9 3552.225000\nCOPY_TO 20 6 10 3544.525000\nCOPY_FROM 20 6 1 3029.925000\nCOPY_FROM 20 6 2 2868.143000\nCOPY_FROM 20 6 3 3003.149000\nCOPY_FROM 20 6 4 3004.859000\nCOPY_FROM 20 6 5 2975.037000\nCOPY_FROM 20 6 6 2941.359000\nCOPY_FROM 20 6 7 2958.822000\nCOPY_FROM 20 6 8 2929.398000\nCOPY_FROM 20 6 9 2942.849000\nCOPY_FROM 20 6 10 2980.994000\nCOPY_TO 20 7 1 4116.153000\nCOPY_TO 20 7 2 4151.977000\nCOPY_TO 20 7 3 4191.156000\nCOPY_TO 20 7 4 4122.951000\nCOPY_TO 20 7 5 4132.277000\nCOPY_TO 20 7 6 4117.195000\nCOPY_TO 20 7 7 4102.429000\nCOPY_TO 20 7 8 4167.083000\nCOPY_TO 20 7 9 4186.748000\nCOPY_TO 20 7 10 4176.378000\nCOPY_FROM 20 7 1 3385.308000\nCOPY_FROM 20 7 2 3424.239000\nCOPY_FROM 20 7 3 3469.448000\nCOPY_FROM 20 7 4 3427.053000\nCOPY_FROM 20 7 5 3393.027000\nCOPY_FROM 20 7 6 3390.030000\nCOPY_FROM 20 7 7 3412.631000\nCOPY_FROM 20 7 8 3416.462000\nCOPY_FROM 20 7 9 3440.032000\nCOPY_FROM 20 7 10 3406.263000\nCOPY_TO 20 8 1 4763.503000\nCOPY_TO 20 8 2 4747.748000\nCOPY_TO 20 8 3 4701.871000\nCOPY_TO 20 8 4 4792.958000\nCOPY_TO 20 8 5 4802.671000\nCOPY_TO 20 8 6 4700.079000\nCOPY_TO 20 8 7 4735.823000\nCOPY_TO 20 8 8 4697.905000\nCOPY_TO 20 8 9 4792.274000\nCOPY_TO 20 8 10 4824.296000\nCOPY_FROM 20 8 1 3911.984000\nCOPY_FROM 20 8 2 3964.766000\nCOPY_FROM 20 8 3 3868.331000\nCOPY_FROM 20 8 4 3864.961000\nCOPY_FROM 20 8 5 3901.993000\nCOPY_FROM 20 8 6 3913.048000\nCOPY_FROM 20 8 7 3913.909000\nCOPY_FROM 20 8 8 3911.791000\nCOPY_FROM 20 8 9 3922.104000\nCOPY_FROM 20 8 10 3865.939000\nCOPY_TO 20 9 1 5345.060000\nCOPY_TO 20 9 2 5332.009000\nCOPY_TO 20 9 3 5396.687000\nCOPY_TO 20 9 4 5465.947000\nCOPY_TO 20 9 5 5338.527000\nCOPY_TO 20 9 6 5336.702000\nCOPY_TO 20 9 7 5312.314000\nCOPY_TO 20 9 8 5355.565000\nCOPY_TO 20 9 9 5299.061000\nCOPY_TO 20 9 10 5444.467000\nCOPY_FROM 20 9 1 4348.639000\nCOPY_FROM 20 9 2 4324.543000\nCOPY_FROM 20 9 3 4405.188000\nCOPY_FROM 20 9 4 4371.563000\nCOPY_FROM 20 9 5 4362.497000\nCOPY_FROM 20 9 6 4399.904000\nCOPY_FROM 20 9 7 4360.325000\nCOPY_FROM 20 9 8 4319.934000\nCOPY_FROM 20 9 9 4374.741000\nCOPY_FROM 20 9 10 4312.770000\nCOPY_TO 20 10 1 5898.919000\nCOPY_TO 20 10 2 5937.106000\nCOPY_TO 20 10 3 5982.895000\nCOPY_TO 20 10 4 6006.050000\nCOPY_TO 20 10 5 6017.783000\nCOPY_TO 20 10 6 6027.734000\nCOPY_TO 20 10 7 5980.353000\nCOPY_TO 20 10 8 5880.622000\nCOPY_TO 20 10 9 6004.364000\nCOPY_TO 20 10 10 5991.804000\nCOPY_FROM 20 10 1 4898.844000\nCOPY_FROM 20 10 2 4803.747000\nCOPY_FROM 20 10 3 4811.804000\nCOPY_FROM 20 10 4 4876.269000\nCOPY_FROM 20 10 5 4880.505000\nCOPY_FROM 20 10 6 4911.774000\nCOPY_FROM 20 10 7 4798.363000\nCOPY_FROM 20 10 8 4992.114000\nCOPY_FROM 20 10 9 
4979.897000\nCOPY_FROM 20 10 10 4991.956000\nCOPY_TO 25 1 1 734.355000\nCOPY_TO 25 1 2 751.684000\nCOPY_TO 25 1 3 741.651000\nCOPY_TO 25 1 4 741.016000\nCOPY_TO 25 1 5 758.336000\nCOPY_TO 25 1 6 742.133000\nCOPY_TO 25 1 7 754.167000\nCOPY_TO 25 1 8 756.050000\nCOPY_TO 25 1 9 734.336000\nCOPY_TO 25 1 10 755.085000\nCOPY_FROM 25 1 1 596.991000\nCOPY_FROM 25 1 2 595.280000\nCOPY_FROM 25 1 3 592.821000\nCOPY_FROM 25 1 4 598.301000\nCOPY_FROM 25 1 5 592.075000\nCOPY_FROM 25 1 6 598.580000\nCOPY_FROM 25 1 7 612.489000\nCOPY_FROM 25 1 8 594.139000\nCOPY_FROM 25 1 9 606.733000\nCOPY_FROM 25 1 10 599.884000\nCOPY_TO 25 2 1 1454.990000\nCOPY_TO 25 2 2 1464.926000\nCOPY_TO 25 2 3 1442.608000\nCOPY_TO 25 2 4 1467.204000\nCOPY_TO 25 2 5 1469.535000\nCOPY_TO 25 2 6 1441.349000\nCOPY_TO 25 2 7 1465.709000\nCOPY_TO 25 2 8 1477.000000\nCOPY_TO 25 2 9 1466.388000\nCOPY_TO 25 2 10 1491.142000\nCOPY_FROM 25 2 1 1186.171000\nCOPY_FROM 25 2 2 1197.788000\nCOPY_FROM 25 2 3 1189.075000\nCOPY_FROM 25 2 4 1168.622000\nCOPY_FROM 25 2 5 1194.059000\nCOPY_FROM 25 2 6 1175.976000\nCOPY_FROM 25 2 7 1180.774000\nCOPY_FROM 25 2 8 1240.536000\nCOPY_FROM 25 2 9 1216.618000\nCOPY_FROM 25 2 10 1162.956000\nCOPY_TO 25 3 1 2124.055000\nCOPY_TO 25 3 2 2208.083000\nCOPY_TO 25 3 3 2171.897000\nCOPY_TO 25 3 4 2200.009000\nCOPY_TO 25 3 5 2157.130000\nCOPY_TO 25 3 6 2178.217000\nCOPY_TO 25 3 7 2170.618000\nCOPY_TO 25 3 8 2159.246000\nCOPY_TO 25 3 9 2180.852000\nCOPY_TO 25 3 10 2162.987000\nCOPY_FROM 25 3 1 1814.018000\nCOPY_FROM 25 3 2 1777.425000\nCOPY_FROM 25 3 3 1804.940000\nCOPY_FROM 25 3 4 1777.249000\nCOPY_FROM 25 3 5 1755.835000\nCOPY_FROM 25 3 6 1762.743000\nCOPY_FROM 25 3 7 1791.698000\nCOPY_FROM 25 3 8 1812.374000\nCOPY_FROM 25 3 9 1807.720000\nCOPY_FROM 25 3 10 1779.357000\nCOPY_TO 25 4 1 2880.647000\nCOPY_TO 25 4 2 2914.773000\nCOPY_TO 25 4 3 2935.268000\nCOPY_TO 25 4 4 2899.907000\nCOPY_TO 25 4 5 2973.967000\nCOPY_TO 25 4 6 2957.416000\nCOPY_TO 25 4 7 2988.516000\nCOPY_TO 25 4 8 2960.672000\nCOPY_TO 25 4 9 2995.815000\nCOPY_TO 25 4 10 2938.285000\nCOPY_FROM 25 4 1 2390.532000\nCOPY_FROM 25 4 2 2402.782000\nCOPY_FROM 25 4 3 2380.246000\nCOPY_FROM 25 4 4 2313.968000\nCOPY_FROM 25 4 5 2336.368000\nCOPY_FROM 25 4 6 2333.098000\nCOPY_FROM 25 4 7 2341.465000\nCOPY_FROM 25 4 8 2364.110000\nCOPY_FROM 25 4 9 2347.709000\nCOPY_FROM 25 4 10 2440.437000\nCOPY_TO 25 5 1 3615.292000\nCOPY_TO 25 5 2 3706.462000\nCOPY_TO 25 5 3 3629.459000\nCOPY_TO 25 5 4 3614.923000\nCOPY_TO 25 5 5 3646.021000\nCOPY_TO 25 5 6 3622.527000\nCOPY_TO 25 5 7 3614.309000\nCOPY_TO 25 5 8 3590.665000\nCOPY_TO 25 5 9 3570.947000\nCOPY_TO 25 5 10 3616.614000\nCOPY_FROM 25 5 1 3025.729000\nCOPY_FROM 25 5 2 2890.083000\nCOPY_FROM 25 5 3 2832.641000\nCOPY_FROM 25 5 4 2896.295000\nCOPY_FROM 25 5 5 2906.869000\nCOPY_FROM 25 5 6 2964.634000\nCOPY_FROM 25 5 7 2981.976000\nCOPY_FROM 25 5 8 2899.424000\nCOPY_FROM 25 5 9 2952.928000\nCOPY_FROM 25 5 10 2956.315000\nCOPY_TO 25 6 1 4335.410000\nCOPY_TO 25 6 2 4352.629000\nCOPY_TO 25 6 3 4328.721000\nCOPY_TO 25 6 4 4329.229000\nCOPY_TO 25 6 5 4326.110000\nCOPY_TO 25 6 6 4350.785000\nCOPY_TO 25 6 7 4422.414000\nCOPY_TO 25 6 8 4327.600000\nCOPY_TO 25 6 9 4399.015000\nCOPY_TO 25 6 10 4344.686000\nCOPY_FROM 25 6 1 3546.817000\nCOPY_FROM 25 6 2 3583.911000\nCOPY_FROM 25 6 3 3577.654000\nCOPY_FROM 25 6 4 3545.870000\nCOPY_FROM 25 6 5 3517.331000\nCOPY_FROM 25 6 6 3621.318000\nCOPY_FROM 25 6 7 3518.093000\nCOPY_FROM 25 6 8 3487.595000\nCOPY_FROM 25 6 9 3502.635000\nCOPY_FROM 25 6 10 3442.832000\nCOPY_TO 25 7 1 5127.114000\nCOPY_TO 
25 7 2 5147.491000\nCOPY_TO 25 7 3 5030.220000\nCOPY_TO 25 7 4 5039.242000\nCOPY_TO 25 7 5 5024.293000\nCOPY_TO 25 7 6 5177.402000\nCOPY_TO 25 7 7 5091.543000\nCOPY_TO 25 7 8 5047.738000\nCOPY_TO 25 7 9 5032.130000\nCOPY_TO 25 7 10 5125.969000\nCOPY_FROM 25 7 1 4236.840000\nCOPY_FROM 25 7 2 4166.021000\nCOPY_FROM 25 7 3 4072.998000\nCOPY_FROM 25 7 4 4044.735000\nCOPY_FROM 25 7 5 4095.923000\nCOPY_FROM 25 7 6 4100.569000\nCOPY_FROM 25 7 7 4065.397000\nCOPY_FROM 25 7 8 4038.183000\nCOPY_FROM 25 7 9 4051.760000\nCOPY_FROM 25 7 10 4100.604000\nCOPY_TO 25 8 1 5755.047000\nCOPY_TO 25 8 2 5882.932000\nCOPY_TO 25 8 3 5711.378000\nCOPY_TO 25 8 4 5750.234000\nCOPY_TO 25 8 5 5813.714000\nCOPY_TO 25 8 6 5818.114000\nCOPY_TO 25 8 7 5869.900000\nCOPY_TO 25 8 8 5792.470000\nCOPY_TO 25 8 9 5842.988000\nCOPY_TO 25 8 10 5826.206000\nCOPY_FROM 25 8 1 4633.711000\nCOPY_FROM 25 8 2 4604.926000\nCOPY_FROM 25 8 3 4843.423000\nCOPY_FROM 25 8 4 4623.654000\nCOPY_FROM 25 8 5 4764.668000\nCOPY_FROM 25 8 6 4665.590000\nCOPY_FROM 25 8 7 4743.428000\nCOPY_FROM 25 8 8 4684.806000\nCOPY_FROM 25 8 9 4625.929000\nCOPY_FROM 25 8 10 4796.581000\nCOPY_TO 25 9 1 6537.881000\nCOPY_TO 25 9 2 6467.843000\nCOPY_TO 25 9 3 6485.727000\nCOPY_TO 25 9 4 6419.503000\nCOPY_TO 25 9 5 6547.430000\nCOPY_TO 25 9 6 6647.516000\nCOPY_TO 25 9 7 6545.266000\nCOPY_TO 25 9 8 6483.089000\nCOPY_TO 25 9 9 6488.061000\nCOPY_TO 25 9 10 6501.837000\nCOPY_FROM 25 9 1 5218.724000\nCOPY_FROM 25 9 2 5322.326000\nCOPY_FROM 25 9 3 5134.734000\nCOPY_FROM 25 9 4 5181.265000\nCOPY_FROM 25 9 5 5236.810000\nCOPY_FROM 25 9 6 5394.804000\nCOPY_FROM 25 9 7 5216.023000\nCOPY_FROM 25 9 8 5228.834000\nCOPY_FROM 25 9 9 5233.673000\nCOPY_FROM 25 9 10 5370.429000\nCOPY_TO 25 10 1 7187.403000\nCOPY_TO 25 10 2 7237.522000\nCOPY_TO 25 10 3 7215.015000\nCOPY_TO 25 10 4 7310.549000\nCOPY_TO 25 10 5 7204.966000\nCOPY_TO 25 10 6 7395.831000\nCOPY_TO 25 10 7 7235.840000\nCOPY_TO 25 10 8 7335.155000\nCOPY_TO 25 10 9 7320.366000\nCOPY_TO 25 10 10 7399.998000\nCOPY_FROM 25 10 1 5766.539000\nCOPY_FROM 25 10 2 5868.594000\nCOPY_FROM 25 10 3 5717.698000\nCOPY_FROM 25 10 4 5824.276000\nCOPY_FROM 25 10 5 5865.970000\nCOPY_FROM 25 10 6 5943.953000\nCOPY_FROM 25 10 7 5730.136000\nCOPY_FROM 25 10 8 5856.029000\nCOPY_FROM 25 10 9 5782.006000\nCOPY_FROM 25 10 10 5872.458000\nCOPY_TO 30 1 1 855.345000\nCOPY_TO 30 1 2 871.867000\nCOPY_TO 30 1 3 855.503000\nCOPY_TO 30 1 4 852.658000\nCOPY_TO 30 1 5 872.763000\nCOPY_TO 30 1 6 840.381000\nCOPY_TO 30 1 7 858.728000\nCOPY_TO 30 1 8 854.579000\nCOPY_TO 30 1 9 849.983000\nCOPY_TO 30 1 10 865.011000\nCOPY_FROM 30 1 1 722.491000\nCOPY_FROM 30 1 2 697.416000\nCOPY_FROM 30 1 3 724.112000\nCOPY_FROM 30 1 4 723.154000\nCOPY_FROM 30 1 5 742.389000\nCOPY_FROM 30 1 6 736.040000\nCOPY_FROM 30 1 7 706.634000\nCOPY_FROM 30 1 8 703.661000\nCOPY_FROM 30 1 9 711.113000\nCOPY_FROM 30 1 10 698.277000\nCOPY_TO 30 2 1 1724.559000\nCOPY_TO 30 2 2 1719.146000\nCOPY_TO 30 2 3 1713.379000\nCOPY_TO 30 2 4 1715.448000\nCOPY_TO 30 2 5 1731.645000\nCOPY_TO 30 2 6 1715.003000\nCOPY_TO 30 2 7 1694.419000\nCOPY_TO 30 2 8 1692.365000\nCOPY_TO 30 2 9 1734.901000\nCOPY_TO 30 2 10 1753.928000\nCOPY_FROM 30 2 1 1381.412000\nCOPY_FROM 30 2 2 1393.055000\nCOPY_FROM 30 2 3 1429.064000\nCOPY_FROM 30 2 4 1400.549000\nCOPY_FROM 30 2 5 1390.625000\nCOPY_FROM 30 2 6 1399.524000\nCOPY_FROM 30 2 7 1428.245000\nCOPY_FROM 30 2 8 1396.228000\nCOPY_FROM 30 2 9 1394.769000\nCOPY_FROM 30 2 10 1376.140000\nCOPY_TO 30 3 1 2549.615000\nCOPY_TO 30 3 2 2549.549000\nCOPY_TO 30 3 3 2554.699000\nCOPY_TO 30 3 4 
2620.901000\nCOPY_TO 30 3 5 2542.416000\nCOPY_TO 30 3 6 2463.919000\nCOPY_TO 30 3 7 2514.404000\nCOPY_TO 30 3 8 2606.338000\nCOPY_TO 30 3 9 2549.300000\nCOPY_TO 30 3 10 2614.069000\nCOPY_FROM 30 3 1 2061.723000\nCOPY_FROM 30 3 2 2054.595000\nCOPY_FROM 30 3 3 2064.499000\nCOPY_FROM 30 3 4 2029.387000\nCOPY_FROM 30 3 5 2060.673000\nCOPY_FROM 30 3 6 2071.234000\nCOPY_FROM 30 3 7 2039.847000\nCOPY_FROM 30 3 8 2034.512000\nCOPY_FROM 30 3 9 2048.970000\nCOPY_FROM 30 3 10 2070.192000\nCOPY_TO 30 4 1 3439.779000\nCOPY_TO 30 4 2 3415.562000\nCOPY_TO 30 4 3 3472.690000\nCOPY_TO 30 4 4 3426.406000\nCOPY_TO 30 4 5 3417.655000\nCOPY_TO 30 4 6 3420.833000\nCOPY_TO 30 4 7 3380.506000\nCOPY_TO 30 4 8 3462.000000\nCOPY_TO 30 4 9 3402.428000\nCOPY_TO 30 4 10 3428.111000\nCOPY_FROM 30 4 1 2733.262000\nCOPY_FROM 30 4 2 2683.878000\nCOPY_FROM 30 4 3 2821.240000\nCOPY_FROM 30 4 4 2768.113000\nCOPY_FROM 30 4 5 2867.414000\nCOPY_FROM 30 4 6 2759.740000\nCOPY_FROM 30 4 7 2796.335000\nCOPY_FROM 30 4 8 2688.241000\nCOPY_FROM 30 4 9 2693.820000\nCOPY_FROM 30 4 10 2731.140000\nCOPY_TO 30 5 1 4242.226000\nCOPY_TO 30 5 2 4337.764000\nCOPY_TO 30 5 3 4201.378000\nCOPY_TO 30 5 4 4276.924000\nCOPY_TO 30 5 5 4195.586000\nCOPY_TO 30 5 6 4147.869000\nCOPY_TO 30 5 7 4262.615000\nCOPY_TO 30 5 8 4283.672000\nCOPY_TO 30 5 9 4316.076000\nCOPY_TO 30 5 10 4265.417000\nCOPY_FROM 30 5 1 3414.952000\nCOPY_FROM 30 5 2 3484.110000\nCOPY_FROM 30 5 3 3410.230000\nCOPY_FROM 30 5 4 3456.846000\nCOPY_FROM 30 5 5 3383.937000\nCOPY_FROM 30 5 6 3430.556000\nCOPY_FROM 30 5 7 3430.628000\nCOPY_FROM 30 5 8 3428.378000\nCOPY_FROM 30 5 9 3396.417000\nCOPY_FROM 30 5 10 3432.408000\nCOPY_TO 30 6 1 5074.778000\nCOPY_TO 30 6 2 5101.994000\nCOPY_TO 30 6 3 5069.600000\nCOPY_TO 30 6 4 5222.574000\nCOPY_TO 30 6 5 5071.946000\nCOPY_TO 30 6 6 5076.127000\nCOPY_TO 30 6 7 5080.155000\nCOPY_TO 30 6 8 5189.124000\nCOPY_TO 30 6 9 5172.174000\nCOPY_TO 30 6 10 5100.780000\nCOPY_FROM 30 6 1 4211.710000\nCOPY_FROM 30 6 2 4088.827000\nCOPY_FROM 30 6 3 4140.018000\nCOPY_FROM 30 6 4 4200.005000\nCOPY_FROM 30 6 5 4083.156000\nCOPY_FROM 30 6 6 4142.306000\nCOPY_FROM 30 6 7 4302.596000\nCOPY_FROM 30 6 8 4166.638000\nCOPY_FROM 30 6 9 4063.275000\nCOPY_FROM 30 6 10 3989.077000\nCOPY_TO 30 7 1 5985.682000\nCOPY_TO 30 7 2 5944.822000\nCOPY_TO 30 7 3 5909.677000\nCOPY_TO 30 7 4 5959.397000\nCOPY_TO 30 7 5 5973.909000\nCOPY_TO 30 7 6 5971.125000\nCOPY_TO 30 7 7 5970.800000\nCOPY_TO 30 7 8 5928.120000\nCOPY_TO 30 7 9 6065.392000\nCOPY_TO 30 7 10 5967.311000\nCOPY_FROM 30 7 1 4832.597000\nCOPY_FROM 30 7 2 4763.587000\nCOPY_FROM 30 7 3 5007.212000\nCOPY_FROM 30 7 4 4831.589000\nCOPY_FROM 30 7 5 4761.464000\nCOPY_FROM 30 7 6 4964.790000\nCOPY_FROM 30 7 7 4911.089000\nCOPY_FROM 30 7 8 4804.915000\nCOPY_FROM 30 7 9 4830.199000\nCOPY_FROM 30 7 10 4821.159000\nCOPY_TO 30 8 1 6780.338000\nCOPY_TO 30 8 2 6780.465000\nCOPY_TO 30 8 3 6891.504000\nCOPY_TO 30 8 4 6924.545000\nCOPY_TO 30 8 5 6887.753000\nCOPY_TO 30 8 6 6667.140000\nCOPY_TO 30 8 7 6766.440000\nCOPY_TO 30 8 8 6847.607000\nCOPY_TO 30 8 9 6949.330000\nCOPY_TO 30 8 10 6807.099000\nCOPY_FROM 30 8 1 5408.566000\nCOPY_FROM 30 8 2 5430.909000\nCOPY_FROM 30 8 3 5413.220000\nCOPY_FROM 30 8 4 5426.873000\nCOPY_FROM 30 8 5 5471.004000\nCOPY_FROM 30 8 6 5454.879000\nCOPY_FROM 30 8 7 5467.374000\nCOPY_FROM 30 8 8 5463.669000\nCOPY_FROM 30 8 9 5382.302000\nCOPY_FROM 30 8 10 5430.827000\nCOPY_TO 30 9 1 7646.096000\nCOPY_TO 30 9 2 7663.106000\nCOPY_TO 30 9 3 7649.568000\nCOPY_TO 30 9 4 7582.509000\nCOPY_TO 30 9 5 7677.910000\nCOPY_TO 30 9 6 
7649.933000\nCOPY_TO 30 9 7 7639.381000\nCOPY_TO 30 9 8 7628.082000\nCOPY_TO 30 9 9 7742.443000\nCOPY_TO 30 9 10 7749.198000\nCOPY_FROM 30 9 1 6254.021000\nCOPY_FROM 30 9 2 6189.310000\nCOPY_FROM 30 9 3 6080.114000\nCOPY_FROM 30 9 4 6117.857000\nCOPY_FROM 30 9 5 6120.318000\nCOPY_FROM 30 9 6 6131.465000\nCOPY_FROM 30 9 7 6119.603000\nCOPY_FROM 30 9 8 6132.356000\nCOPY_FROM 30 9 9 6217.884000\nCOPY_FROM 30 9 10 6169.986000\nCOPY_TO 30 10 1 8511.796000\nCOPY_TO 30 10 2 8541.021000\nCOPY_TO 30 10 3 8470.991000\nCOPY_TO 30 10 4 8429.901000\nCOPY_TO 30 10 5 8399.581000\nCOPY_TO 30 10 6 8449.127000\nCOPY_TO 30 10 7 8421.535000\nCOPY_TO 30 10 8 8409.578000\nCOPY_TO 30 10 9 8588.901000\nCOPY_TO 30 10 10 8615.748000\nCOPY_FROM 30 10 1 6838.794000\nCOPY_FROM 30 10 2 6835.900000\nCOPY_FROM 30 10 3 6685.443000\nCOPY_FROM 30 10 4 6878.933000\nCOPY_FROM 30 10 5 6862.674000\nCOPY_FROM 30 10 6 6709.240000\nCOPY_FROM 30 10 7 6805.730000\nCOPY_FROM 30 10 8 6793.489000\nCOPY_FROM 30 10 9 6638.819000\nCOPY_FROM 30 10 10 6852.015000\n\nTO\t5\t1\t100.56%\t218.376000\t219.609000\nFROM\t5\t1\t113.33%\t168.493000\t190.954000\nTO\t5\t2\t100.92%\t421.387000\t425.265000\nFROM\t5\t2\t115.55%\t317.101000\t366.403000\nTO\t5\t3\t101.80%\t624.457000\t635.709000\nFROM\t5\t3\t115.15%\t468.651000\t539.630000\nTO\t5\t4\t99.53%\t845.936000\t841.990000\nFROM\t5\t4\t115.26%\t617.653000\t711.922000\nTO\t5\t5\t100.60%\t1037.773000\t1044.045000\nFROM\t5\t5\t116.46%\t767.966000\t894.377000\nTO\t5\t6\t100.93%\t1254.507000\t1266.219000\nFROM\t5\t6\t115.60%\t920.494000\t1064.119000\nTO\t5\t7\t100.67%\t1474.119000\t1483.944000\nFROM\t5\t7\t114.04%\t1079.762000\t1231.400000\nTO\t5\t8\t99.77%\t1690.789000\t1686.910000\nFROM\t5\t8\t114.03%\t1245.100000\t1419.742000\nTO\t5\t9\t100.40%\t1866.939000\t1874.485000\nFROM\t5\t9\t115.12%\t1371.727000\t1579.066000\nTO\t5\t10\t100.15%\t2092.245000\t2095.472000\nFROM\t5\t10\t115.91%\t1508.160000\t1748.130000\nTO\t10\t1\t98.62%\t353.087000\t348.214000\nFROM\t10\t1\t118.65%\t260.551000\t309.133000\nTO\t10\t2\t97.77%\t696.468000\t680.964000\nFROM\t10\t2\t117.55%\t507.076000\t596.066000\nTO\t10\t3\t98.57%\t1034.388000\t1019.610000\nFROM\t10\t3\t118.70%\t747.307000\t887.084000\nTO\t10\t4\t97.77%\t1391.879000\t1360.787000\nFROM\t10\t4\t119.64%\t988.250000\t1182.343000\nTO\t10\t5\t96.89%\t1724.061000\t1670.427000\nFROM\t10\t5\t119.92%\t1224.098000\t1467.941000\nTO\t10\t6\t98.43%\t2059.930000\t2027.488000\nFROM\t10\t6\t119.10%\t1470.005000\t1750.763000\nTO\t10\t7\t98.50%\t2409.333000\t2373.267000\nFROM\t10\t7\t119.12%\t1723.536000\t2053.141000\nTO\t10\t8\t97.51%\t2761.445000\t2692.732000\nFROM\t10\t8\t118.76%\t1960.546000\t2328.340000\nTO\t10\t9\t98.34%\t3100.206000\t3048.751000\nFROM\t10\t9\t119.07%\t2214.820000\t2637.134000\nTO\t10\t10\t98.70%\t3444.291000\t3399.538000\nFROM\t10\t10\t118.79%\t2462.314000\t2924.866000\nTO\t15\t1\t97.71%\t492.082000\t480.802000\nFROM\t15\t1\t115.59%\t347.820000\t402.033000\nTO\t15\t2\t98.20%\t963.658000\t946.342000\nFROM\t15\t2\t115.79%\t671.073000\t777.008000\nTO\t15\t3\t97.90%\t1456.382000\t1425.784000\nFROM\t15\t3\t115.27%\t1010.479000\t1164.792000\nTO\t15\t4\t96.85%\t1933.560000\t1872.650000\nFROM\t15\t4\t113.92%\t1340.700000\t1527.390000\nTO\t15\t5\t98.32%\t2402.419000\t2362.140000\nFROM\t15\t5\t115.48%\t1657.594000\t1914.245000\nTO\t15\t6\t97.39%\t2901.545000\t2825.865000\nFROM\t15\t6\t116.00%\t1989.522000\t2307.933000\nTO\t15\t7\t97.47%\t3359.085000\t3273.990000\nFROM\t15\t7\t116.48%\t2301.570000\t2680.944000\nTO\t15\t8\t97.82%\t3844.652000\t3760.802000\nFROM\t15\t8\t1
14.43%\t2664.116000\t3048.673000\nTO\t15\t9\t97.71%\t4308.416000\t4209.894000\nFROM\t15\t9\t116.96%\t2976.833000\t3481.796000\nTO\t15\t10\t96.91%\t4830.319000\t4681.145000\nFROM\t15\t10\t115.09%\t3304.798000\t3803.542000\nTO\t20\t1\t96.05%\t629.828000\t604.939000\nFROM\t20\t1\t118.50%\t438.673000\t519.839000\nTO\t20\t2\t98.35%\t1224.716000\t1204.486000\nFROM\t20\t2\t112.61%\t867.634000\t977.050000\nTO\t20\t3\t97.96%\t1858.945000\t1820.944000\nFROM\t20\t3\t115.08%\t1277.634000\t1470.260000\nTO\t20\t4\t99.05%\t2444.051000\t2420.785000\nFROM\t20\t4\t116.54%\t1692.007000\t1971.847000\nTO\t20\t5\t97.15%\t3084.210000\t2996.331000\nFROM\t20\t5\t115.35%\t2110.909000\t2435.032000\nTO\t20\t6\t96.52%\t3700.704000\t3572.048000\nFROM\t20\t6\t117.61%\t2529.492000\t2975.037000\nTO\t20\t7\t96.11%\t4320.033000\t4151.977000\nFROM\t20\t7\t116.20%\t2940.254000\t3416.462000\nTO\t20\t8\t97.94%\t4863.534000\t4763.503000\nFROM\t20\t8\t115.93%\t3374.520000\t3911.984000\nTO\t20\t9\t97.82%\t5463.960000\t5345.060000\nFROM\t20\t9\t115.69%\t3770.921000\t4362.497000\nTO\t20\t10\t99.14%\t6043.915000\t5991.804000\nFROM\t20\t10\t116.88%\t4191.494000\t4898.844000\nTO\t25\t1\t98.29%\t764.779000\t751.684000\nFROM\t25\t1\t115.13%\t519.686000\t598.301000\nTO\t25\t2\t96.77%\t1515.332000\t1466.388000\nFROM\t25\t2\t116.70%\t1018.943000\t1189.075000\nTO\t25\t3\t94.74%\t2292.456000\t2171.897000\nFROM\t25\t3\t117.49%\t1524.962000\t1791.698000\nTO\t25\t4\t96.88%\t3052.605000\t2957.416000\nFROM\t25\t4\t117.70%\t2008.544000\t2364.110000\nTO\t25\t5\t94.08%\t3843.996000\t3616.614000\nFROM\t25\t5\t115.62%\t2554.008000\t2952.928000\nTO\t25\t6\t95.21%\t4563.316000\t4344.686000\nFROM\t25\t6\t118.56%\t2990.859000\t3545.870000\nTO\t25\t7\t95.55%\t5328.781000\t5091.543000\nFROM\t25\t7\t116.33%\t3521.010000\t4095.923000\nTO\t25\t8\t95.79%\t6073.973000\t5818.114000\nFROM\t25\t8\t116.83%\t4009.777000\t4684.806000\nTO\t25\t9\t95.80%\t6787.185000\t6501.837000\nFROM\t25\t9\t116.23%\t4502.731000\t5233.673000\nTO\t25\t10\t97.41%\t7504.865000\t7310.549000\nFROM\t25\t10\t117.25%\t4994.463000\t5856.029000\nTO\t30\t1\t94.39%\t906.324000\t855.503000\nFROM\t30\t1\t119.60%\t604.110000\t722.491000\nTO\t30\t2\t95.56%\t1799.114000\t1719.146000\nFROM\t30\t2\t117.45%\t1188.794000\t1396.228000\nTO\t30\t3\t95.76%\t2662.493000\t2549.615000\nFROM\t30\t3\t117.43%\t1754.809000\t2060.673000\nTO\t30\t4\t96.52%\t3549.913000\t3426.406000\nFROM\t30\t4\t117.23%\t2354.055000\t2759.740000\nTO\t30\t5\t96.50%\t4419.907000\t4265.417000\nFROM\t30\t5\t116.97%\t2932.883000\t3430.556000\nTO\t30\t6\t94.76%\t5382.615000\t5100.780000\nFROM\t30\t6\t117.88%\t3514.108000\t4142.306000\nTO\t30\t7\t95.52%\t6250.630000\t5970.800000\nFROM\t30\t7\t117.46%\t4113.331000\t4831.589000\nTO\t30\t8\t95.62%\t7161.077000\t6847.607000\nFROM\t30\t8\t116.31%\t4669.370000\t5430.909000\nTO\t30\t9\t95.07%\t8046.895000\t7649.933000\nFROM\t30\t9\t117.15%\t5234.632000\t6132.356000\nTO\t30\t10\t94.39%\t8974.878000\t8470.991000\nFROM\t30\t10\t117.84%\t5800.793000\t6835.900000\n\n#!/usr/bin/env ruby\n\nfrom = File.open(ARGV[0])\nto = File.open(ARGV[1])\nloop do\n from_line = from.gets\n to_line = to.gets\n break if from_line.nil?\n break if to_line.nil?\n from_type, from_n_columns, from_n_rows, from_runtime = from_line.split\n to_type, to_n_columns, to_n_rows, to_runtime = to_line.split\n break if from_type != to_type\n break if from_n_columns != to_n_columns\n break if from_n_rows != to_n_rows\n runtime_diff = to_runtime.to_f/ from_runtime.to_f\n puts(\"%s\\t%s\\t%s\\t%5.2f%%\\t%s\\t%s\" % [\n from_type[5..-1],\n 
from_n_columns,\n from_n_rows,\n runtime_diff * 100,\n from_runtime,\n to_runtime,\n ])\nend\n\n#!/usr/bin/env ruby\n\nstatistics = {}\nARGF.each_line do |line|\n type, n_columns, n_rows, round, runtime = line.split\n statistics[[type, n_columns, n_rows]] ||= []\n statistics[[type, n_columns, n_rows]] << runtime\nend\n\nrequire \"pp\"\nstatistics.each do |(type, n_columns, n_rows), runtimes|\n runtime_median = runtimes.sort[runtimes.size / 2]\n puts(\"#{type}\\t#{n_columns}\\t#{n_rows}\\t#{runtime_median}\")\nend", "msg_date": "Tue, 30 Jul 2024 16:13:06 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "On 7/30/24 09:13, Sutou Kouhei wrote:\n> Hi,\n> \n> In <[email protected]>\n> \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Mon, 29 Jul 2024 14:17:08 +0200,\n> Tomas Vondra <[email protected]> wrote:\n> \n>> I wrote a simple script to automate the benchmark - it just runs these\n>> tests with different parameters (number of columns and number of\n>> imported/exported rows). See the run.sh attachment, along with two CSV\n>> results from current master and with all patches applied.\n> \n> Thanks. I also used the script with some modifications:\n> \n> 1. Create a test database automatically\n> 2. Enable blackhole_am automatically\n> 3. Create create_table_cols() automatically\n> \n> I attach it. I also attach results of master and patched. My\n> results are from my desktop. So it's probably noisy.\n> \n>> - For COPY FROM there is no difference - the results are within 1% of\n>> master, and there's no systemic difference.\n>>\n>> - For COPY TO it's a different story, though. There's a pretty clear\n>> regression, by ~5%. 
It's a bit interesting the correlation with the\n>> number of columns is not stronger ...\n> \n> My results showed different trend:\n> \n> - COPY FROM: Patched is about 15-20% slower than master\n> - COPY TO: Patched is a bit faster than master\n> \n> Here are some my numbers:\n> \n> type\tn_cols\tn_rows\tdiff\tmaster\t\tpatched\n> ----------------------------------------------------------\n> TO\t5\t1\t100.56%\t218.376000\t219.609000\n> FROM\t5\t1\t113.33%\t168.493000\t190.954000\n> ...\n> TO\t5\t5\t100.60%\t1037.773000\t1044.045000\n> FROM\t5\t5\t116.46%\t767.966000\t894.377000\n> ...\n> TO\t5\t10\t100.15%\t2092.245000\t2095.472000\n> FROM\t5\t10\t115.91%\t1508.160000\t1748.130000\n> TO\t10\t1\t98.62%\t353.087000\t348.214000\n> FROM\t10\t1\t118.65%\t260.551000\t309.133000\n> ...\n> TO\t10\t5\t96.89%\t1724.061000\t1670.427000\n> FROM\t10\t5\t119.92%\t1224.098000\t1467.941000\n> ...\n> TO\t10\t10\t98.70%\t3444.291000\t3399.538000\n> FROM\t10\t10\t118.79%\t2462.314000\t2924.866000\n> TO\t15\t1\t97.71%\t492.082000\t480.802000\n> FROM\t15\t1\t115.59%\t347.820000\t402.033000\n> ...\n> TO\t15\t5\t98.32%\t2402.419000\t2362.140000\n> FROM\t15\t5\t115.48%\t1657.594000\t1914.245000\n> ...\n> TO\t15\t10\t96.91%\t4830.319000\t4681.145000\n> FROM\t15\t10\t115.09%\t3304.798000\t3803.542000\n> TO\t20\t1\t96.05%\t629.828000\t604.939000\n> FROM\t20\t1\t118.50%\t438.673000\t519.839000\n> ...\n> TO\t20\t5\t97.15%\t3084.210000\t2996.331000\n> FROM\t20\t5\t115.35%\t2110.909000\t2435.032000\n> ...\n> TO\t25\t1\t98.29%\t764.779000\t751.684000\n> FROM\t25\t1\t115.13%\t519.686000\t598.301000\n> ...\n> TO\t25\t5\t94.08%\t3843.996000\t3616.614000\n> FROM\t25\t5\t115.62%\t2554.008000\t2952.928000\n> ...\n> TO\t25\t10\t97.41%\t7504.865000\t7310.549000\n> FROM\t25\t10\t117.25%\t4994.463000\t5856.029000\n> TO\t30\t1\t94.39%\t906.324000\t855.503000\n> FROM\t30\t1\t119.60%\t604.110000\t722.491000\n> ...\n> TO\t30\t5\t96.50%\t4419.907000\t4265.417000\n> FROM\t30\t5\t116.97%\t2932.883000\t3430.556000\n> ...\n> TO\t30\t10\t94.39%\t8974.878000\t8470.991000\n> FROM\t30\t10\t117.84%\t5800.793000\t6835.900000\n> ----\n> \n> See the attached diff.txt for full numbers.\n> I also attach scripts to generate the diff.txt. Here is the\n> command line I used:\n> \n> ----\n> ruby diff.rb <(ruby aggregate.rb master.result) <(ruby aggregate.rb patched.result) | tee diff.txt\n> ----\n> \n> My environment:\n> \n> * Debian GNU/Linux sid\n> * gcc (Debian 13.3.0-2) 13.3.0\n> * AMD Ryzen 9 3900X 12-Core Processor\n> \n> I'll look into this.\n> \n> If someone is interested in this proposal, could you share\n> your numbers?\n> \n\nI'm on Fedora 40 with gcc 14.1, on Intel i7-9750H. But it's running on\nQubes OS, so it's really in a VM which makes it noisier. I'll try to do\nmore benchmarks on a regular hw, but that may take a couple days.\n\nI decided to do the benchmark for individual parts of the patch series.\nThe attached PDF shows results for master (label 0000) and the 0001-0005\npatches, along with relative performance difference between the patches.\nThe color scale is the same as before - red = bad, green = good.\n\nThere are pretty clear differences between the patches and \"direction\"\nof the COPY. I'm sure it does depend on the hardware - I tried running\nthis on rpi5 (with 32-bits), and it looks very different. 
There might be\na similar behavior difference between Intel and Ryzen, but my point is\nthat when looking for regressions, looking at these \"per patch\" charts\ncan be very useful (as it reduces the scope of changes that might have\ncaused the regression).\n\n>> It's interesting the main change in the flamegraphs is CopyToStateFlush\n>> pops up on the left side. Because, what is that about? That is a thing\n>> introduced in the 0005 patch, so maybe the regression is not strictly\n>> about the existing formats moving to the new API, but due to something\n>> else in a later version of the patch?\n> \n> Ah, making static CopySendEndOfRow() a to non-static function\n> (CopyToStateFlush()) may be the reason of this. Could you\n> try the attached v19 patch? It changes the 0005 patch:\n> \n\nPerhaps, that's possible.\n\n> * It reverts the static change\n> * It adds a new non-static function that just exports\n> CopySendEndOfRow()\n> \n\nI'll try to benchmark this later, when the other machine is available.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 30 Jul 2024 11:51:37 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Tue, 30 Jul 2024 11:51:37 +0200,\n Tomas Vondra <[email protected]> wrote:\n\n> I decided to do the benchmark for individual parts of the patch series.\n> The attached PDF shows results for master (label 0000) and the 0001-0005\n> patches, along with relative performance difference between the patches.\n> The color scale is the same as before - red = bad, green = good.\n> \n> There are pretty clear differences between the patches and \"direction\"\n> of the COPY. I'm sure it does depend on the hardware - I tried running\n> this on rpi5 (with 32-bits), and it looks very different. There might be\n> a similar behavior difference between Intel and Ryzen, but my point is\n> that when looking for regressions, looking at these \"per patch\" charts\n> can be very useful (as it reduces the scope of changes that might have\n> caused the regression).\n\nThanks.\nThe numbers on your environment shows that there are\nperformance problems in the following cases in the v18 patch\nset:\n\n1. 0001 + TO\n2. 0005 + TO\n\nThere are +-~3% differences in FROM cases. They may be noise.\n+~6% differences in TO cases may not be noise.\n\nI also tried another benchmark with the v19 (not v18) patch\nset with \"Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz\" not \"AMD\nRyzen 9 3900X 12-Core Processor\".\n\nThe attached PDF visualized my numbers like your PDF but red\n= bad, green = good. -30 (blue) means 70% (faster) and 30\n(red) means 130% (slower).\n\n0001 + TO is a bit slower like your numbers. Other TO cases\nare a bit faster.\n0002 + FROM is very slower. 
Other FROM cases are slower with\nless records but a bit faster with many records.\n\nI'll re-run it with \"AMD Ryzen 9 3900X 12-Core Processor\".\n\n\nFYI: I've created a repository to push benchmark scripts:\nhttps://gitlab.com/ktou/pg-bench\n\n\nThanks,\n-- \nkou", "msg_date": "Thu, 01 Aug 2024 19:54:12 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nI re-ran the benchmark(*) with the v19 patch set and the\nfollowing CPUs:\n\n1. AMD Ryzen 9 3900X 12-Core Processor\n2. Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz\n\n(*)\n* Use tables that have {5,10,15,20,25,30} integer columns\n* Use tables that have {1,2,3,4,5,6,7,8,9,10}M rows\n* Use '/dev/null' for COPY TO\n* Use blackhole_am for COPY FROM\n\nSee the attached graphs for details.\n\nNotes:\n* X-axis is the number of columns\n* Y-axis is the number of M rows\n* Z-axis is the elapsed time percent (smaller is faster,\n e.g. 99% is a bit faster than the HEAD and 101% is a bit\n slower than the HEAD)\n* Z-ranges aren't same (The Ryzen case uses about 79%-121%\n but the Intel case uses about 91%-111%)\n* Red means the patch is slower than HEAD\n* Blue means the patch is faster than HEAD\n* The upper row shows FROM results\n* The lower row shows TO results\n\nHere are summaries based on the results:\n\nFor FROM:\n* With Ryzen: It shows that negative performance impact\n* With Intel: It shows that negative performance impact with\n 1-5M rows and positive performance impact with 6M-10M rows\nFor TO:\n* With Ryzen: It shows that positive performance impact\n* With Intel: It shows that positive performance impact\n\nHere are insights based on the results:\n\n* 0001 (that introduces Copy{From,To}Routine} and adds some\n \"if () {...}\" for them but the existing formats still\n doesn't use them) has a bit negative performance impact\n* 0002 (that migrates the existing codes to\n Copy{From,To}Routine} based implementations) has positive\n performance impact\n * For FROM: Negative impact by 0001 and positive impact by\n 0002 almost balanced\n * We should use both of 0001 and 0002 than only 0001\n * With Ryzon: It's a bit slower than HEAD. So we may not\n want to reject this propose for FROM\n * With Intel:\n * With 1-5M rows: It's a bit slower than HEAD\n * With 6-10M rows: It's a bit faster than HEAD\n * For TO: Positive impact by 0002 is larger than negative\n impact by 0002\n * We should use both of 0001 and 0002 than only 0001\n* 0003 (that makes Copy{From,To}Routine Node) has a bit\n negative performance impact\n * But I don't know why. This doesn't change per row\n related codes. Increasing Copy{From,To}Routine size\n (NodeTag is added) may be related.\n* 0004 (that moves Copy{From,To}StateData to copyapi.h)\n doesn't have impact\n * It makes sense because this doesn't change any\n implementations.\n* 0005 (that add \"void *opaque\" to Copy{From,To}StateData)\n has a bit negative impact for FROM and a bit positive\n impact for TO\n * But I don't know why. This doesn't change per row\n related codes. Increasing Copy{From,To}StateData size\n (\"void *opaque\" is added) may be related.\n\n\nHow to proceed this proposal?\n\n* Do we need more numbers to judge this proposal?\n * If so, could someone help us?\n* There is no negative performance impact for TO with both\n of Ryzen and Intel based on my results. Can we merge only\n the TO part?\n * Can we defer the FROM part? 
Should we proceed this\n proposal with both of the FROM and TO part?\n* Could someone provide a hint why the FROM part is more\n slower with Ryzen?\n\n(If nobody responds to this, this proposal will get stuck\nagain. If you're interested in this proposal, could you help\nus?)\n\n\nHow to run this benchmark on your machine:\n\n$ cd your-postgres\n$ git switch -c copy-format-extendable\n$ git am v19-*.patch\n$ git clone https://gitlab.com/ktou/pg-bench.git ../pg-bench\n$ ../pg-bench/bench.sh copy-format-extendable ../pg-bench/copy-format-extendable/run.sh\n(This will take about 5 hours...)\n\nIf you want to visualize your results on your machine:\n\n$ sudo gem install ruby-gr\n$ ../pg-bench/visualize.rb 5\n\nIf you share your results to me, I can visualize it and\nshare.\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 05 Aug 2024 07:20:12 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nOn Sun, Aug 4, 2024 at 3:20 PM Sutou Kouhei <[email protected]> wrote:\n>\n> Hi,\n>\n> I re-ran the benchmark(*) with the v19 patch set and the\n> following CPUs:\n>\n> 1. AMD Ryzen 9 3900X 12-Core Processor\n> 2. Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz\n>\n> (*)\n> * Use tables that have {5,10,15,20,25,30} integer columns\n> * Use tables that have {1,2,3,4,5,6,7,8,9,10}M rows\n> * Use '/dev/null' for COPY TO\n> * Use blackhole_am for COPY FROM\n>\n> See the attached graphs for details.\n>\n> Notes:\n> * X-axis is the number of columns\n> * Y-axis is the number of M rows\n> * Z-axis is the elapsed time percent (smaller is faster,\n> e.g. 99% is a bit faster than the HEAD and 101% is a bit\n> slower than the HEAD)\n> * Z-ranges aren't same (The Ryzen case uses about 79%-121%\n> but the Intel case uses about 91%-111%)\n> * Red means the patch is slower than HEAD\n> * Blue means the patch is faster than HEAD\n> * The upper row shows FROM results\n> * The lower row shows TO results\n>\n> Here are summaries based on the results:\n>\n> For FROM:\n> * With Ryzen: It shows that negative performance impact\n> * With Intel: It shows that negative performance impact with\n> 1-5M rows and positive performance impact with 6M-10M rows\n> For TO:\n> * With Ryzen: It shows that positive performance impact\n> * With Intel: It shows that positive performance impact\n>\n> Here are insights based on the results:\n>\n> * 0001 (that introduces Copy{From,To}Routine} and adds some\n> \"if () {...}\" for them but the existing formats still\n> doesn't use them) has a bit negative performance impact\n> * 0002 (that migrates the existing codes to\n> Copy{From,To}Routine} based implementations) has positive\n> performance impact\n> * For FROM: Negative impact by 0001 and positive impact by\n> 0002 almost balanced\n> * We should use both of 0001 and 0002 than only 0001\n> * With Ryzon: It's a bit slower than HEAD. So we may not\n> want to reject this propose for FROM\n> * With Intel:\n> * With 1-5M rows: It's a bit slower than HEAD\n> * With 6-10M rows: It's a bit faster than HEAD\n> * For TO: Positive impact by 0002 is larger than negative\n> impact by 0002\n> * We should use both of 0001 and 0002 than only 0001\n> * 0003 (that makes Copy{From,To}Routine Node) has a bit\n> negative performance impact\n> * But I don't know why. This doesn't change per row\n> related codes. 
Increasing Copy{From,To}Routine size\n> (NodeTag is added) may be related.\n> * 0004 (that moves Copy{From,To}StateData to copyapi.h)\n> doesn't have impact\n> * It makes sense because this doesn't change any\n> implementations.\n> * 0005 (that add \"void *opaque\" to Copy{From,To}StateData)\n> has a bit negative impact for FROM and a bit positive\n> impact for TO\n> * But I don't know why. This doesn't change per row\n> related codes. Increasing Copy{From,To}StateData size\n> (\"void *opaque\" is added) may be related.\n\nI was surprised that the 0005 patch made COPY FROM slower (with fewer\nrows) and COPY TO faster overall in spite of just adding one struct\nfield and some functions.\n\nI'm interested in why the performance trends of COPY FROM are\ndifferent between fewer than 6M rows and more than 6M rows.\n\n>\n> How to proceed this proposal?\n>\n> * Do we need more numbers to judge this proposal?\n> * If so, could someone help us?\n> * There is no negative performance impact for TO with both\n> of Ryzen and Intel based on my results. Can we merge only\n> the TO part?\n> * Can we defer the FROM part? Should we proceed this\n> proposal with both of the FROM and TO part?\n> * Could someone provide a hint why the FROM part is more\n> slower with Ryzen?\n>\n\nSeparating the patches into two parts (one is for COPY TO and another\none is for COPY FROM) could be a good idea. It would help reviews and\ninvestigate performance regression in COPY FROM cases. And I think we\ncan commit them separately.\n\nAlso, could you please rebase the patches as they conflict with the\ncurrent HEAD? I'll run some benchmarks on my environment as well.\n\nRegards,\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 16:33:13 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" }, { "msg_contents": "Hi,\n\nIn <CAD21AoCwMmwLJ8PQLnZu0MbB4gDJiMvWrHREQD4xRp3-F2RU2Q@mail.gmail.com>\n \"Re: Make COPY format extendable: Extract COPY TO format implementations\" on Fri, 27 Sep 2024 16:33:13 -0700,\n Masahiko Sawada <[email protected]> wrote:\n\n>> * 0005 (that add \"void *opaque\" to Copy{From,To}StateData)\n>> has a bit negative impact for FROM and a bit positive\n>> impact for TO\n>> * But I don't know why. This doesn't change per row\n>> related codes. Increasing Copy{From,To}StateData size\n>> (\"void *opaque\" is added) may be related.\n> \n> I was surprised that the 0005 patch made COPY FROM slower (with fewer\n> rows) and COPY TO faster overall in spite of just adding one struct\n> field and some functions.\n\nMe too...\n\n> I'm interested in why the performance trends of COPY FROM are\n> different between fewer than 6M rows and more than 6M rows.\n\nMy hypothesis:\n\nWith this patch set:\n 1. One row processing is faster than master.\n 2. Non row related processing is slower than master.\n\nIf we have many rows, 1. impact is greater than 2. impact.\n\n\n> Separating the patches into two parts (one is for COPY TO and another\n> one is for COPY FROM) could be a good idea. It would help reviews and\n> investigate performance regression in COPY FROM cases. And I think we\n> can commit them separately.\n> \n> Also, could you please rebase the patches as they conflict with the\n> current HEAD?\n\nOK. I've prepared 2 patch sets:\n\nv20: It just rebased on master. 
It still mixes COPY TO and\nCOPY FROM implementations.\n\nv21: It's based on v20 but splits COPY TO implementations\nand COPY FROM implementations.\n0001-0005 includes only COPY TO related changes.\n0006-0010 includes only COPY FROM related changes.\n\n(v21 0001 + 0006) == (v20 v0001),\n(v21 0002 + 0007) == (v20 v0002) and so on.\n\n> I'll run some benchmarks on my environment as well.\n\nThanks. It's very helpful.\n\n\nThanks,\n-- \nkou", "msg_date": "Sun, 29 Sep 2024 00:56:45 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make COPY format extendable: Extract COPY TO format\n implementations" } ]
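For reference, the timings in this thread come from driving plain COPY against throw-away targets: '/dev/null' on the output side and a blackhole table access method on the input side, with the per-round numbers reduced to medians by the aggregate.rb script above. A minimal sketch of one cell of that grid (5 integer columns, 1M rows) could look as follows; the table and file names are illustrative rather than taken from the author's run.sh, and it assumes an external blackhole_am table access method extension is installed:

-- COPY TO: stream 1M rows of a 5-integer-column table to /dev/null
CREATE TABLE data_5_1m (c1 int, c2 int, c3 int, c4 int, c5 int);
INSERT INTO data_5_1m
  SELECT i, i, i, i, i FROM generate_series(1, 1000000) AS i;
\timing on
COPY data_5_1m TO '/dev/null';

-- COPY FROM: load a dump of the same data into a table whose access
-- method discards the tuples, so only the COPY input path is measured
COPY data_5_1m TO '/tmp/data_5_1m.copy';
CREATE EXTENSION blackhole_am;
CREATE TABLE sink_5 (LIKE data_5_1m) USING blackhole_am;
COPY sink_5 FROM '/tmp/data_5_1m.copy';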
[ { "msg_contents": "Currently our code can do lazily detoast by design, for example:\n\nSELECT toast_col FROM t; \nSELECT toast_col FROM t ORDER BY b;\nSELECT toast_col FROM t join t2 using(c); \n\nit is only detoast at {type}_out function. The benefits includes:\n1. The life time of detoast datum is pretty short which is good for\n general memory usage.\n2. In the order by / hash case, the less memory usage can let the\n work_mem hold more tuples so it is good for performance aspect.\n\nRecently I run into a user case like this:\n\ncreate table b(big jsonb);\n...\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b;\n\nIn the above query, we can see the 'big' datum is detoasted 5 times, and\nif the toast value is huge, it causes a pretty bad performance. jsonb\nwill be a common case to access the toast value multi times, but it\nis possible for other data type as well. for example:\n\nSELECT f1(big_toast_col), f2(big_toast_col) FROM t;\n\nI attached a POC patch which eagerly detoast the datum during\nEEOP_INNER/OUTER/SCAN_VAR step and store the detoast value back to the\noriginal slot->tts_values, so the later call of slot->tts_values[n] will\nuse the detoast value automatically. With the attached setup.sql and\nthe patch, the performance is easy to reduced to 310ms from 1600ms.\n\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b; \n QUERY PLAN \n---------------------------------------------------------------\n Seq Scan on b (actual time=1.731..1577.911 rows=1001 loops=1)\n Planning Time: 0.099 ms\n Execution Time: 1578.411 ms\n(3 rows) \n\nset jit to off;\n\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b; \n QUERY PLAN \n--------------------------------------------------------------\n Seq Scan on b (actual time=0.417..309.937 rows=1001 loops=1)\n Planning Time: 0.097 ms\n Execution Time: 310.255 m\n\n(I used 'jit=off' to turn on this feature just because I'm still not\nready for JIT code.)\n\nHowever this patch just throws away almost all the benefits of toast, so\nhow can we draw a line between should vs should not do this code path?\nIMO, we should only run the 'eagerly detoast' when we know that we will\nhave a FuncCall against the toast_col on the current plan node. I think\nthis information can be get from Qual and TargetList. If so, we can set\nthe slot->detoast_attrs accordingly.\n\nif we code like this: \n\nSELECT f1(toast_col) FROM t join t2 using(c);\n\nWe only apply the code path on the join plan node, so even the join method\nis hash / sort merge, the benefit of toast is still there.\n\n'SELECT f1(toast_col) FROM t;' will apply this code path, but nothing\ngain and nothing lost. Applying this code path only when the toast\ndatum is accessed 1+ times needs some extra run-time effort. I don't\nimplement this so far, I'd like to see if I miss some obvious points.\nAny feedback is welcome.\n\n\n\n\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 04 Dec 2023 14:37:02 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Avoid detoast overhead when possible" }, { "msg_contents": "Hi!\n\nThere's a view from the other angle - detoast just attributes that are\nneeded\n(partial detoast), with optimized storage mechanics for JSONb. 
I'm preparing\na patch for it, so maybe the best results could be acquired by combining\nthese\ntwo techniques.\n\nWhat do you think?\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!There's a view from the other angle - detoast just attributes that are needed(partial detoast), with optimized storage mechanics for JSONb. I'm preparinga patch for it, so maybe the best results could be acquired by combining thesetwo techniques.What do you think?--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Mon, 4 Dec 2023 11:40:51 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "\n\nNikita Malakhov <[email protected]> writes:\n\nHi!\n>\n> There's a view from the other angle - detoast just attributes that are needed\n> (partial detoast), with optimized storage mechanics for JSONb.\n\nVery glad to know that, looking forward your design & patch!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 04 Dec 2023 17:31:39 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "On Mon, 4 Dec 2023 at 07:56, <[email protected]> wrote:\n> 'SELECT f1(toast_col) FROM t;' will apply this code path, but nothing\n> gain and nothing lost. Applying this code path only when the toast\n> datum is accessed 1+ times needs some extra run-time effort. I don't\n> implement this so far, I'd like to see if I miss some obvious points.\n> Any feedback is welcome.\n\nThis does add some measurable memory overhead to query execution where\nthe produced derivative of the large toasted field is small (e.g. 1MB\ntoast value -> 2x BIGINT), and when the toasted value is deep in the\nquery tree (e.g. 3 nested loops deep). It would also add overhead when\nwe write results to disk, such as spilling merge sorts, hash join\nspills, or CTE materializations.\n\nCould you find a way to reduce this memory and IO usage when the value\nis not going to be used immediately? Using the toast pointer at such\npoints surely will be cheaper than storing the full value again and\nagain.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 4 Dec 2023 13:10:36 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "\nHi,\n\nMatthias van de Meent <[email protected]> writes:\n\n> On Mon, 4 Dec 2023 at 07:56, <[email protected]> wrote:\n\n> ..It would also add overhead when\n> we write results to disk, such as spilling merge sorts, hash join\n> spills, or CTE materializations.\n>\n> Could you find a way to reduce this memory and IO usage when the value\n> is not going to be used immediately? Using the toast pointer at such\n> points surely will be cheaper than storing the full value again and\n> again.\n\nI'm not sure I understand you correctly, I think the issue you raised\nhere is covered by the below design (not implemented in the patch).\n\n\"\nHowever this patch just throws away almost all the benefits of toast, so\nhow can we draw a line between should vs should not do this code path?\nIMO, we should only run the 'eagerly detoast' when we know that we will\nhave a FuncCall against the toast_col on **the current plan node**. 
I\nthink this information can be get from Qual and TargetList. If so, we\ncan set the slot->detoast_attrs accordingly.\n\"\n\nLet's see an example of this:\n\nSELECT f(t1.toastable_col) FROM t1 join t2 using(c);\n\nSuppose it is using hash join and t1 should be hashed. With the above\ndesign, we will NOT detoast toastable_col at the scan of t1 or hash t1\nsince there is no one \"funcall\" access it in either SeqScan of t1 or\nhash (t1). But when we do the projection on the joinrel, the detoast\nwould happen.\n\nI'm still working on how to know if a toast_col will be detoast for a\ngiven PlanState. If there is no design error, I think I can work out a\nversion tomorrow.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 04 Dec 2023 20:55:05 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "On Mon, 4 Dec 2023 at 14:23, <[email protected]> wrote:\n>\n>\n> Hi,\n>\n> Matthias van de Meent <[email protected]> writes:\n>\n> > On Mon, 4 Dec 2023 at 07:56, <[email protected]> wrote:\n>\n> > ..It would also add overhead when\n> > we write results to disk, such as spilling merge sorts, hash join\n> > spills, or CTE materializations.\n> >\n> > Could you find a way to reduce this memory and IO usage when the value\n> > is not going to be used immediately? Using the toast pointer at such\n> > points surely will be cheaper than storing the full value again and\n> > again.\n>\n> I'm not sure I understand you correctly, I think the issue you raised\n> here is covered by the below design (not implemented in the patch).\n>\n> \"\n> However this patch just throws away almost all the benefits of toast, so\n> how can we draw a line between should vs should not do this code path?\n> IMO, we should only run the 'eagerly detoast' when we know that we will\n> have a FuncCall against the toast_col on **the current plan node**. I\n> think this information can be get from Qual and TargetList. If so, we\n> can set the slot->detoast_attrs accordingly.\n> \"\n>\n> Let's see an example of this:\n>\n> SELECT f(t1.toastable_col) FROM t1 join t2 using(c);\n>\n> Suppose it is using hash join and t1 should be hashed. With the above\n> design, we will NOT detoast toastable_col at the scan of t1 or hash t1\n> since there is no one \"funcall\" access it in either SeqScan of t1 or\n> hash (t1). But when we do the projection on the joinrel, the detoast\n> would happen.\n\nI assume that you detoast the column only once, and not in a separate\nper-node context? 
This would indicate to me that a query like the\nfollowing would detoast toastable_col and never \"retoast\" it.\n\nSELECT toastable_col FROM t1\nWHERE f(t1.toastable_col)\nORDER BY nonindexed;\n\nor the equivalent in current PG catalogs:\n\nSELECT ev_class\nFROM pg_rewrite\nWHERE octet_length(ev_action) > 1\nORDER BY ev_class;\n\nwhose plan is\n\n Sort\n Sort Key: ev_class\n -> Seq Scan on pg_rewrite\n Filter: (octet_length((ev_action)::text) > 1)\n\nThis would first apply the condition (because sort-then-filter is\ngenerally more expensive than filter-then-sort), and thus permanently\ndetoast the column, which is thus detoasted when it is fed into the\nsort, which made the sort much more expensive than without the\naggressive detoasting.\n\nOr do I still misunderstand something here?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 4 Dec 2023 15:41:24 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "\nHi,\n\nMatthias van de Meent <[email protected]> writes:\n\n> SELECT toastable_col FROM t1\n> WHERE f(t1.toastable_col)\n> ORDER BY nonindexed;\n\nThanks for this example! it's true that the current design requires more\nmemory to sort since toastable_col is detoasted at the scan stage and it\nis output to the sort node. It should be avoided.\n\n> SELECT ev_class\n> FROM pg_rewrite\n> WHERE octet_length(ev_action) > 1\n> ORDER BY ev_class;\n\nThis one is different I think, since the ev_action (the toastable_col) is\n*NOT* output to sort node, so no extra memory is required IIUC. \n\n * CP_SMALL_TLIST specifies that a narrower tlist is preferred. This is\n * passed down by parent nodes such as Sort and Hash, which will have to\n * store the returned tuples.\n\nWe can also verify this by\n\nexplain (costs off, verbose) SELECT ev_class\nFROM pg_rewrite\nWHERE octet_length(ev_action) > 1\nORDER BY ev_class;\n QUERY PLAN \n------------------------------------------------------------------\n Sort\n Output: ev_class\n Sort Key: pg_rewrite.ev_class\n -> Seq Scan on pg_catalog.pg_rewrite\n Output: ev_class\n Filter: (octet_length((pg_rewrite.ev_action)::text) > 1)\n(6 rows)\n\nOnly ev_class is output to Sort node.\n\nSo if we want to make sure there is performance regression for all the\nexisting queries in any case, we can add 1 more restriction into the\nsaved-detoast-value logic. It must be (NOT under CP_SMALL_TLIST) OR (the\ntoastable_col is not in the output list). It can be a planner decision.\n\nIf we code like this, the result will be we need to dotoast N times\nfor toastable_col in qual for the below query.\n\nSELECT toastable_col FROM t\nWHERE f1(toastable_col)\nAND f2(toastable_col)\n..\nAND fn(toastable_col)\nORDER BY any-target-entry;\n\nHowever\n\nSELECT\n f1(toastable_col),\n f2(toastable_col),\n ..\n fn(toastable_col)\nFROM t\nORDER BY any-target-entry;\n\nthe current path still works for it.\n\nThis one is my favorite one so far. Another option is saving the\ndetoast-value in some other memory or existing-slot-in-place for\ndifferent sistuation, that would requires more expr expression changes\nand planner changes. I just checked all the queries in my hand, the\ncurrent design can cover all of them. 
\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 05 Dec 2023 08:28:21 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "Hi,\n\nHmmm, I've checked this patch and can't see performance difference on a\nlarge\n(20000 key-value pairs) json, using toasted json column several times makes\nno\ndifference between current implementation on master (like queries mentioned\nabove).\n\nMaybe I'm doing something wrong?\n\nOn Tue, Dec 5, 2023 at 4:16 AM <[email protected]> wrote:\n\n>\n> Hi,\n>\n> Matthias van de Meent <[email protected]> writes:\n>\n> > SELECT toastable_col FROM t1\n> > WHERE f(t1.toastable_col)\n> > ORDER BY nonindexed;\n>\n> Thanks for this example! it's true that the current design requires more\n> memory to sort since toastable_col is detoasted at the scan stage and it\n> is output to the sort node. It should be avoided.\n>\n> > SELECT ev_class\n> > FROM pg_rewrite\n> > WHERE octet_length(ev_action) > 1\n> > ORDER BY ev_class;\n>\n> This one is different I think, since the ev_action (the toastable_col) is\n> *NOT* output to sort node, so no extra memory is required IIUC.\n>\n> * CP_SMALL_TLIST specifies that a narrower tlist is preferred. This is\n> * passed down by parent nodes such as Sort and Hash, which will have to\n> * store the returned tuples.\n>\n> We can also verify this by\n>\n> explain (costs off, verbose) SELECT ev_class\n> FROM pg_rewrite\n> WHERE octet_length(ev_action) > 1\n> ORDER BY ev_class;\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Sort\n> Output: ev_class\n> Sort Key: pg_rewrite.ev_class\n> -> Seq Scan on pg_catalog.pg_rewrite\n> Output: ev_class\n> Filter: (octet_length((pg_rewrite.ev_action)::text) > 1)\n> (6 rows)\n>\n> Only ev_class is output to Sort node.\n>\n> So if we want to make sure there is performance regression for all the\n> existing queries in any case, we can add 1 more restriction into the\n> saved-detoast-value logic. It must be (NOT under CP_SMALL_TLIST) OR (the\n> toastable_col is not in the output list). It can be a planner decision.\n>\n> If we code like this, the result will be we need to dotoast N times\n> for toastable_col in qual for the below query.\n>\n> SELECT toastable_col FROM t\n> WHERE f1(toastable_col)\n> AND f2(toastable_col)\n> ..\n> AND fn(toastable_col)\n> ORDER BY any-target-entry;\n>\n> However\n>\n> SELECT\n> f1(toastable_col),\n> f2(toastable_col),\n> ..\n> fn(toastable_col)\n> FROM t\n> ORDER BY any-target-entry;\n>\n> the current path still works for it.\n>\n> This one is my favorite one so far. Another option is saving the\n> detoast-value in some other memory or existing-slot-in-place for\n> different sistuation, that would requires more expr expression changes\n> and planner changes. 
I just checked all the queries in my hand, the\n> current design can cover all of them.\n>\n> --\n> Best Regards\n> Andy Fan\n>\n>\n>\n>\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Hmmm, I've checked this patch and can't see performance difference on a large(20000 key-value pairs) json, using toasted json column several times makes nodifference between current implementation on master (like queries mentioned above).Maybe I'm doing something wrong?On Tue, Dec 5, 2023 at 4:16 AM <[email protected]> wrote:\nHi,\n\nMatthias van de Meent <[email protected]> writes:\n\n> SELECT toastable_col FROM t1\n> WHERE f(t1.toastable_col)\n> ORDER BY nonindexed;\n\nThanks for this example! it's true that the current design requires more\nmemory to sort since toastable_col is detoasted at the scan stage and it\nis output to the sort node. It should be avoided.\n\n> SELECT ev_class\n> FROM pg_rewrite\n> WHERE octet_length(ev_action) > 1\n> ORDER BY ev_class;\n\nThis one is different I think, since the ev_action (the toastable_col) is\n*NOT* output to sort node, so no extra memory is required IIUC. \n\n * CP_SMALL_TLIST specifies that a narrower tlist is preferred.  This is\n * passed down by parent nodes such as Sort and Hash, which will have to\n * store the returned tuples.\n\nWe can also verify this by\n\nexplain (costs off, verbose) SELECT ev_class\nFROM pg_rewrite\nWHERE octet_length(ev_action) > 1\nORDER BY ev_class;\n                            QUERY PLAN                            \n------------------------------------------------------------------\n Sort\n   Output: ev_class\n   Sort Key: pg_rewrite.ev_class\n   ->  Seq Scan on pg_catalog.pg_rewrite\n         Output: ev_class\n         Filter: (octet_length((pg_rewrite.ev_action)::text) > 1)\n(6 rows)\n\nOnly ev_class is output to Sort node.\n\nSo if we want to make sure there is performance regression for all the\nexisting queries in any case, we can add 1 more restriction into the\nsaved-detoast-value logic. It must be (NOT under CP_SMALL_TLIST) OR (the\ntoastable_col is not in the output list). It can be a planner decision.\n\nIf we code like this, the result will be we need to dotoast N times\nfor toastable_col in qual for the below query.\n\nSELECT toastable_col FROM t\nWHERE f1(toastable_col)\nAND f2(toastable_col)\n..\nAND fn(toastable_col)\nORDER BY any-target-entry;\n\nHowever\n\nSELECT\n  f1(toastable_col),\n  f2(toastable_col),\n  ..\n  fn(toastable_col)\nFROM t\nORDER BY any-target-entry;\n\nthe current path still works for it.\n\nThis one is my favorite one so far. Another option is saving the\ndetoast-value in some other memory or existing-slot-in-place for\ndifferent sistuation, that would requires more expr expression changes\nand planner changes. I just checked all the queries in my hand, the\ncurrent design can cover all of them. 
\n\n-- \nBest Regards\nAndy Fan\n\n\n\n-- Regards,--Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Tue, 5 Dec 2023 11:38:58 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "\nNikita Malakhov <[email protected]> writes:\n\n> Hi,\n>\n> Hmmm, I've checked this patch and can't see performance difference on a large\n> (20000 key-value pairs) json, using toasted json column several times makes no\n> difference between current implementation on master (like queries mentioned above).\n>\n> Maybe I'm doing something wrong?\n\nCould you try something like below? (set jit to off to turn on this\nfeature). Or could you tell me the steps you used? I also attached the\nsetup.sql at the begining of this thread.\n\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b; \n QUERY PLAN \n---------------------------------------------------------------\n Seq Scan on b (actual time=1.731..1577.911 rows=1001 loops=1)\n Planning Time: 0.099 ms\n Execution Time: 1578.411 ms\n(3 rows) \n\nset jit to off;\n\nselect big->'1', big->'2', big->'3', big->'5', big->'10' from b; \n QUERY PLAN \n--------------------------------------------------------------\n Seq Scan on b (actual time=0.417..309.937 rows=1001 loops=1)\n Planning Time: 0.097 ms\n Execution Time: 310.255 m\n\n(I used 'jit=off' to turn on this feature just because I'm still not\nready for JIT code.)\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 05 Dec 2023 16:54:59 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "Hi,\n\nWith your setup (table created with setup.sql):\npostgres@postgres=# explain analyze select big->'1', big->'2', big->'3',\nbig->'5', big->'10' from b;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Seq Scan on b (cost=0.00..29.52 rows=1001 width=160) (actual\ntime=0.656..359.964 rows=1001 loops=1)\n Planning Time: 0.042 ms\n Execution Time: 360.177 ms\n(3 rows)\n\nTime: 361.054 ms\npostgres@postgres=# explain analyze select big->'1' from b;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on b (cost=0.00..19.51 rows=1001 width=32) (actual\ntime=0.170..63.996 rows=1001 loops=1)\n Planning Time: 0.042 ms\n Execution Time: 64.063 ms\n(3 rows)\n\nTime: 64.626 ms\n\nWithout patch, the same table and queries:\npostgres@postgres=# explain analyze select big->'1', big->'2', big->'3',\nbig->'5', big->'10' from b;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Seq Scan on b (cost=0.00..29.52 rows=1001 width=160) (actual\ntime=0.665..326.399 rows=1001 loops=1)\n Planning Time: 0.035 ms\n Execution Time: 326.508 ms\n(3 rows)\n\nTime: 327.132 ms\npostgres@postgres=# explain analyze select big->'1' from b;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on b (cost=0.00..19.51 rows=1001 width=32) (actual\ntime=0.159..62.807 rows=1001 loops=1)\n Planning Time: 0.033 ms\n Execution Time: 62.879 ms\n(3 rows)\n\nTime: 63.504 ms\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,With your setup (table created with 
setup.sql):postgres@postgres=# explain analyze select big->'1', big->'2', big->'3', big->'5', big->'10' from b;                                              QUERY PLAN------------------------------------------------------------------------------------------------------ Seq Scan on b  (cost=0.00..29.52 rows=1001 width=160) (actual time=0.656..359.964 rows=1001 loops=1) Planning Time: 0.042 ms Execution Time: 360.177 ms(3 rows)Time: 361.054 mspostgres@postgres=# explain analyze select big->'1' from b;                                             QUERY PLAN---------------------------------------------------------------------------------------------------- Seq Scan on b  (cost=0.00..19.51 rows=1001 width=32) (actual time=0.170..63.996 rows=1001 loops=1) Planning Time: 0.042 ms Execution Time: 64.063 ms(3 rows)Time: 64.626 msWithout patch, the same table and queries:postgres@postgres=# explain analyze select big->'1', big->'2', big->'3', big->'5', big->'10' from b;                                              QUERY PLAN------------------------------------------------------------------------------------------------------ Seq Scan on b  (cost=0.00..29.52 rows=1001 width=160) (actual time=0.665..326.399 rows=1001 loops=1) Planning Time: 0.035 ms Execution Time: 326.508 ms(3 rows)Time: 327.132 mspostgres@postgres=# explain analyze select big->'1' from b;                                             QUERY PLAN---------------------------------------------------------------------------------------------------- Seq Scan on b  (cost=0.00..19.51 rows=1001 width=32) (actual time=0.159..62.807 rows=1001 loops=1) Planning Time: 0.033 ms Execution Time: 62.879 ms(3 rows)Time: 63.504 ms--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Tue, 5 Dec 2023 13:09:20 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" }, { "msg_contents": "\nHi\n\nNikita Malakhov <[email protected]> writes:\n>\n> With your setup (table created with setup.sql):\n\nYou need to \"set jit to off\" to turn on this feature, as I state in [1]\n[2]. \n\n[1] https://www.postgresql.org/message-id/87ttoyihgm.fsf%40163.com\n[2] https://www.postgresql.org/message-id/877cltvxgt.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 05 Dec 2023 20:24:04 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid detoast overhead when possible" } ]
[ { "msg_contents": "Hi\n\nThe lifecycle of cursors in plpgsql is not strictly joined with the life\ncycle of related cursor's variables. Without breaking compatibility it is\nnot possible to change this behaviour. Usually it doesn't cause problems,\nbut in some cases very big numbers or unclosed cursors can force memory\nissues that are not simple to investigate and are not too simple (in a\nbigger project) to fix.\n\nI think we can reduce this issue by enhancing the syntax of the OPEN\nstatement. New syntax can looks like\n\nCurrent syntax (still will be supported)\n\nOPEN cursorvar ...\n\nNew syntax\n\nOPEN LOCAL cursorvar ...\n\nWith the clause LOCAL the opened cursor (and related portal) will be surely\nclosed immediately after function exit.\n\nProbably we can enhance the syntax of DECLARE section too, so should be\npossible to write\n\nDECLARE cursorvar LOCAL CURSOR ...\n\nWhat do you think about this proposal?\n\nRegards\n\nPavel\n\nHiThe lifecycle of cursors in plpgsql is not strictly joined with the life cycle of related cursor's variables. Without breaking compatibility it is not possible to change this behaviour. Usually it doesn't cause problems, but in some cases very big numbers or unclosed cursors can force memory issues that are not simple to investigate and are not too simple (in a bigger project) to fix.I think we can reduce this issue by enhancing the syntax of the OPEN statement. New syntax can looks likeCurrent syntax (still will be supported)OPEN cursorvar ...New syntaxOPEN LOCAL cursorvar ...With the clause LOCAL the opened cursor (and related portal) will be surely closed immediately after function exit. Probably we can enhance the syntax of DECLARE section too, so should be possible to writeDECLARE cursorvar LOCAL CURSOR ...What do you think about this proposal?RegardsPavel", "msg_date": "Mon, 4 Dec 2023 07:41:58 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "proposal: plpgsql - OPEN LOCAL statement" } ]
[ { "msg_contents": "I think that cost_incremental_sort() does not account for the limit_tuples\nargument properly. Attached is my proposal to fix the problem.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Mon, 04 Dec 2023 08:46:37 +0100", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": true, "msg_subject": "cost_incremental_sort() and limit_tuples" } ]
[ { "msg_contents": "Hi all,\n(Alvaro in CC.)\n\nWhile running some tests for 8984480b545d, I have noticed that the TAP\ntests of pgbench fail when ~16 is compiled with\n--disable-thread-safety:\n[16:51:10.467](0.004s) not ok 227 - working \\startpipeline with serializable status (got 1 vs expected 0)\n[16:51:10.467](0.000s) \n[16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable status (got 1 vs expected 0)'\n# at t/001_pgbench_with_server.pl line 845.\n[16:51:10.467](0.000s) not ok 228 - working \\startpipeline with serializable stdout /(?^:type: .*/001_pgbench_pipeline_serializable)/\n[16:51:10.467](0.000s) \n[16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable stdout /(?^:type: .*/001_pgbench_pipeline_serializable)/'\n# at t/001_pgbench_with_server.pl line 845.\n[16:51:10.467](0.000s) # ''\n# doesn't match '(?^:type: .*/001_pgbench_pipeline_serializable)'\n[16:51:10.467](0.000s) not ok 229 - working \\startpipeline with serializable stdout /(?^:actually processed: (\\d+)/\\1)/\n[16:51:10.467](0.000s) \n[16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable stdout /(?^:actually processed: (\\d+)/\\1)/'\n# at t/001_pgbench_with_server.pl line 845.\n[16:51:10.467](0.000s) # ''\n# doesn't match '(?^:actually processed: (\\d+)/\\1)'\n\nThis ./configure switch has been removed in 17~, and, while I've not\nanalyzed the problem in details, I am wondering if this points to an\nactual bug with \\startpipeline in all branches.\n\nThanks,\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 16:59:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Failure with pgbench and --disable-thread-safety in ~v16" }, { "msg_contents": "On 2023-Dec-04, Michael Paquier wrote:\n\n> While running some tests for 8984480b545d, I have noticed that the TAP\n> tests of pgbench fail when ~16 is compiled with\n> --disable-thread-safety:\n> [16:51:10.467](0.004s) not ok 227 - working \\startpipeline with serializable status (got 1 vs expected 0)\n> [16:51:10.467](0.000s) \n> [16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable status (got 1 vs expected 0)'\n> # at t/001_pgbench_with_server.pl line 845.\n> [16:51:10.467](0.000s) not ok 228 - working \\startpipeline with serializable stdout /(?^:type: .*/001_pgbench_pipeline_serializable)/\n> [16:51:10.467](0.000s) \n> [16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable stdout /(?^:type: .*/001_pgbench_pipeline_serializable)/'\n> # at t/001_pgbench_with_server.pl line 845.\n> [16:51:10.467](0.000s) # ''\n> # doesn't match '(?^:type: .*/001_pgbench_pipeline_serializable)'\n> [16:51:10.467](0.000s) not ok 229 - working \\startpipeline with serializable stdout /(?^:actually processed: (\\d+)/\\1)/\n> [16:51:10.467](0.000s) \n> [16:51:10.467](0.000s) # Failed test 'working \\startpipeline with serializable stdout /(?^:actually processed: (\\d+)/\\1)/'\n> # at t/001_pgbench_with_server.pl line 845.\n> [16:51:10.467](0.000s) # ''\n> # doesn't match '(?^:actually processed: (\\d+)/\\1)'\n> \n> This ./configure switch has been removed in 17~, and, while I've not\n> analyzed the problem in details, I am wondering if this points to an\n> actual bug with \\startpipeline in all branches.\n\nThanks, I'll have a look. 
I'm sure I didn't test any of this code with\nthreading disabled :-)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n", "msg_date": "Mon, 4 Dec 2023 12:09:32 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failure with pgbench and --disable-thread-safety in ~v16" }, { "msg_contents": "On 2023-Dec-04, Michael Paquier wrote:\n\n> While running some tests for 8984480b545d, I have noticed that the TAP\n> tests of pgbench fail when ~16 is compiled with\n> --disable-thread-safety:\n> [16:51:10.467](0.004s) not ok 227 - working \\startpipeline with serializable status (got 1 vs expected 0)\n\nSo the problem is that we do this:\n\n./pgbench -c4 -j2 -t 10 -n -M prepared -f /home/alvherre/Code/pgsql-build/REL_16_STABLE/src/bin/pgbench/tmp_check/t_001_pgbench_with_server_main_data/001_pgbench_pipeline\n\nand get this error:\npgbench: error: threads are not supported on this platform; use -j1\n\nSo, the fix is just to remove the -j2 in the command line. I'll do that\nin a jiffy. There's no other test that uses -j, except the one\nspecifically designed to test threads.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n", "msg_date": "Mon, 4 Dec 2023 12:17:37 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failure with pgbench and --disable-thread-safety in ~v16" }, { "msg_contents": "On Mon, Dec 04, 2023 at 12:17:37PM +0100, Alvaro Herrera wrote:\n> So, the fix is just to remove the -j2 in the command line. I'll do that\n> in a jiffy. There's no other test that uses -j, except the one\n> specifically designed to test threads.\n\nThanks for the quick fix.\n--\nMichael", "msg_date": "Tue, 5 Dec 2023 07:22:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Failure with pgbench and --disable-thread-safety in ~v16" } ]
[ { "msg_contents": "Hi,\n\nAs per discussion in [1] splitting the patch. Part1 moves replacement\nlogic in initdb of NAMEDATALEN, FLOAT8PASSBYVAL, SIZEOF_VOID_P,\nALIGNOF_POINTER to compile time via genbki.pl.\n\n--\nThanks and regards,\nKrishnakumar (KK).\n[Microsoft]\n\n[1] https://www.postgresql.org/message-id/flat/CAPMWgZ9TCByVjpfdsgyte4rx%3DYsrAttYay2xDK4UN4Lm%3D-wJMQ%40mail.gmail.com", "msg_date": "Mon, 4 Dec 2023 02:03:12 -0800", "msg_from": "Krishnakumar R <[email protected]>", "msg_from_op": true, "msg_subject": "Move bki file pre-processing from initdb - part 1 - initdb->genbki.pl" }, { "msg_contents": "On Mon, Dec 4, 2023 at 5:03 PM Krishnakumar R <[email protected]> wrote:\n>\n> Hi,\n>\n> As per discussion in [1] splitting the patch. Part1 moves replacement\n> logic in initdb of NAMEDATALEN, FLOAT8PASSBYVAL, SIZEOF_VOID_P,\n> ALIGNOF_POINTER to compile time via genbki.pl.\n\nHi Krishnakumar,\n\nNote this comment in genbki.pl:\n\n# Fetch some special data that we will substitute into the output file.\n# CAUTION: be wary about what symbols you substitute into the .bki file here!\n# It's okay to substitute things that are expected to be really constant\n# within a given Postgres release, such as fixed OIDs. Do not substitute\n# anything that could depend on platform or configuration. (The right place\n# to handle those sorts of things is in initdb.c's bootstrap_template1().)\n\nThe premise of this patch is to invalidate this comment, so we need to\nshow that it's not needed any more. With commit 721856ff24 to remove\ndistprep, the biggest obstacle is out of the way, I think. If all else\ngoes well, this comment will need to be removed.\n\nAlso some cosmetic review:\n\n+my $include_conf;\n\nThis is referring to a path, a different one than the include path. I\nsuggest \"config_path\" or something like that. Since the script now\nuses both paths, I wonder if we can just call them \"source\" and\n\"build\" paths...\n\n+ if ($row{attnotnull} eq 't' && ($row{attlen} eq 'NAMEDATALEN'))\n+ {\n+ $row{attlen} = $NameDataLen;\n+ }\n\nThe check for $row{attnotnull} must be a copy-paste-o.\n\n+my $Float8PassByVal=$SizeOfPointer >= 8 ? \"true\": \"false\";\n\nThis is copied from the C source, but it's spelled 't' / 'f' in the\nbki file, so I'm mildly astonished it still seems to work.\n\n\n", "msg_date": "Mon, 4 Dec 2023 20:38:53 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Move bki file pre-processing from initdb - part 1 -\n initdb->genbki.pl" } ]
[ { "msg_contents": "Thank you to all who participated!\n\nHere are the stats:\n\nAt start:\nNeeds review: 210. Waiting on Author: 42. Ready for Committer: 29.\nCommitted: 55. Withdrawn: 10. Returned with Feedback: 1. Total: 347.\n\nToday:\nCommitted: 87. Moved to next CF: 234. Withdrawn: 14. Returned with\nFeedback: 9. Rejected: 3. Total: 347.\n\nAlso, a few minutes ago I marked committed one that had moved over, so\n88 is the more accurate figure, and the most since March. January 2024\nnow has 261 pending patches.\n\nPrevious November CFs:\n2022: 94 committed\n2021: 58\n2020: 69\n2019: 35\n\nThe January 2024 CF didn't start with very many patches, but after\nmoving November over, there are plenty that have no reviewer of\nrecord.\n\n--\nJohn Naylor\n\n\n", "msg_date": "Mon, 4 Dec 2023 17:52:19 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2023-11 is now closed" } ]
[ { "msg_contents": "I found the way that MergeAttributes() handles column compression and \nstorage settings very weird.\n\nFor example, I don't understand the purpose of this error:\n\ncreate table t1 (a int, b text compression pglz);\n\ncreate table t1a (b text compression lz4) inherits (t1);\n...\nERROR: 42804: column \"b\" has a compression method conflict\nDETAIL: pglz versus lz4\n\nor this:\n\ncreate table t2 (a int, b text compression lz4);\n\ncreate table t12 () inherits (t1, t2);\n...\nERROR: column \"b\" has a compression method conflict\nDETAIL: pglz versus lz4\n\nAnd we can't override it in the child, per the first example.\n\nBut this works:\n\ncreate table t1a (a int, b text compression lz4);\nalter table t1a inherit t1;\n\nAlso, you can change the settings in the child using ALTER TABLE ... SET \nCOMPRESSION (which is also how pg_dump will represent the above \nconstructions), so the restrictions at CREATE TABLE time don't seem to \nmake much sense.\n\nLooking at the code, I suspect these rules were just sort of \ncopy-and-pasted from the nearby rules for types and collations. The \nlatter are needed so that a table with inheritance children can present \na logically consistent view of the data. But compression and storage \nare physical properties that are not logically visible, so every table \nin an inheritance hierarchy can have their own setting.\n\nI think the rules should be approximately like this (both for \ncompression and storage):\n\n- A newly created child inherits the settings from the parent.\n- A newly created child can override the settings.\n- Attaching a child changes nothing.\n- When inheriting from multiple parents with different settings, an \nexplicit setting in the child is required.\n\nThoughts?\n\n\n", "msg_date": "Mon, 4 Dec 2023 11:52:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "table inheritance versus column compression and storage settings" }, { "msg_contents": "On Mon, Dec 4, 2023 at 4:23 PM Peter Eisentraut <[email protected]> wrote:\n\n>\n> Looking at the code, I suspect these rules were just sort of\n> copy-and-pasted from the nearby rules for types and collations. The\n> latter are needed so that a table with inheritance children can present\n> a logically consistent view of the data. But compression and storage\n> are physical properties that are not logically visible, so every table\n> in an inheritance hierarchy can have their own setting.\n\nIncidentally I was looking at that code yesterday and had the same thoughts.\n\n>\n> I think the rules should be approximately like this (both for\n> compression and storage):\n>\n> - A newly created child inherits the settings from the parent.\n> - A newly created child can override the settings.\n> - Attaching a child changes nothing.\n\nLooks fine to me.\n\n> - When inheriting from multiple parents with different settings, an\n> explicit setting in the child is required.\n\nWhen no explicit setting for child is specified, it will throw an\nerror as it does today. 
Right?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 5 Dec 2023 09:56:27 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On 05.12.23 05:26, Ashutosh Bapat wrote:\n>> - When inheriting from multiple parents with different settings, an\n>> explicit setting in the child is required.\n> When no explicit setting for child is specified, it will throw an\n> error as it does today. Right?\n\nYes, it would throw an error, but a different error than today, saying \nsomething like \"the settings in the parents conflict, so you need to \nspecify one here to override the conflict\".\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 13:58:10 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Tue, Dec 5, 2023 at 6:28 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 05.12.23 05:26, Ashutosh Bapat wrote:\n> >> - When inheriting from multiple parents with different settings, an\n> >> explicit setting in the child is required.\n> > When no explicit setting for child is specified, it will throw an\n> > error as it does today. Right?\n>\n> Yes, it would throw an error, but a different error than today, saying\n> something like \"the settings in the parents conflict, so you need to\n> specify one here to override the conflict\".\n>\n\nPFA patch fixing inheritance and compression. It also fixes a crash\nreported in [1].\n\nThe storage looks more involved. The way it has been coded, the child\nalways inherits the parent's storage properties.\n#create table t1 (a text storage plain);\nCREATE TABLE\n#create table c1 (b text storage main) inherits(t1);\nCREATE TABLE\n#create table c1 (a text storage main) inherits(t1);\nNOTICE: merging column \"a\" with inherited definition\nCREATE TABLE\n#\\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | text | | | | plain |\n | |\nChild tables: c1\nAccess method: heap\n#\\d+ c1\n Table \"public.c1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | text | | | | plain |\n | |\nInherits: t1\nAccess method: heap\n\nObserve that c1.a did not have storage \"main\" but instead inherits\n\"plain\" from t1.\n\nAccording to the code at\nhttps://github.com/postgres/postgres/blob/6a1ea02c491d16474a6214603dce40b5b122d4d1/src/backend/commands/tablecmds.c#L3253,\nthere is supposed to be a conflict error. But that does not happen\nsince child's storage specification is in ColumnDef::storage_name\nwhich is never consulted. The ColumnDef::storage_name is converted to\nColumnDef::storage only in BuildDescForRelation(), after\nMergeAttribute() has been finished. There are two ways to fix this\n1. In MergeChildAttribute() resolve ColumnDef::storage_name to\nColumnDef::storage before comparing it against inherited property. I\ndon't like this approach since a. we duplicate the conversion logic in\nMergeChildAttribute() and BuildDescForRelation(), b. the conversion\nhappens only for the attributes which are inherited.\n\n2. Deal with it the same way as compression. 
Get rid of\nColumnDef::storage altogether. Instead set ColumnDef::storage_name at\nhttps://github.com/postgres/postgres/blob/6a1ea02c491d16474a6214603dce40b5b122d4d1/src/backend/commands/tablecmds.c#L2723.\n\nI am inclined to take the second approach. Let me know if you feel otherwise.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 31 Jan 2024 13:29:26 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Wed, Jan 31, 2024 at 1:29 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Dec 5, 2023 at 6:28 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 05.12.23 05:26, Ashutosh Bapat wrote:\n> > >> - When inheriting from multiple parents with different settings, an\n> > >> explicit setting in the child is required.\n> > > When no explicit setting for child is specified, it will throw an\n> > > error as it does today. Right?\n> >\n> > Yes, it would throw an error, but a different error than today, saying\n> > something like \"the settings in the parents conflict, so you need to\n> > specify one here to override the conflict\".\n> >\n>\n> PFA patch fixing inheritance and compression. It also fixes a crash\n> reported in [1].\n>\n> The storage looks more involved. The way it has been coded, the child\n> always inherits the parent's storage properties.\n> #create table t1 (a text storage plain);\n> CREATE TABLE\n> #create table c1 (b text storage main) inherits(t1);\n> CREATE TABLE\n> #create table c1 (a text storage main) inherits(t1);\n> NOTICE: merging column \"a\" with inherited definition\n> CREATE TABLE\n> #\\d+ t1\n> Table \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | text | | | | plain |\n> | |\n> Child tables: c1\n> Access method: heap\n> #\\d+ c1\n> Table \"public.c1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | text | | | | plain |\n> | |\n> Inherits: t1\n> Access method: heap\n>\n> Observe that c1.a did not have storage \"main\" but instead inherits\n> \"plain\" from t1.\n>\n> According to the code at\n> https://github.com/postgres/postgres/blob/6a1ea02c491d16474a6214603dce40b5b122d4d1/src/backend/commands/tablecmds.c#L3253,\n> there is supposed to be a conflict error. But that does not happen\n> since child's storage specification is in ColumnDef::storage_name\n> which is never consulted. The ColumnDef::storage_name is converted to\n> ColumnDef::storage only in BuildDescForRelation(), after\n> MergeAttribute() has been finished. There are two ways to fix this\n> 1. In MergeChildAttribute() resolve ColumnDef::storage_name to\n> ColumnDef::storage before comparing it against inherited property. I\n> don't like this approach since a. we duplicate the conversion logic in\n> MergeChildAttribute() and BuildDescForRelation(), b. the conversion\n> happens only for the attributes which are inherited.\n>\n> 2. Deal with it the same way as compression. Get rid of\n> ColumnDef::storage altogether. 
Instead set ColumnDef::storage_name at\n> https://github.com/postgres/postgres/blob/6a1ea02c491d16474a6214603dce40b5b122d4d1/src/backend/commands/tablecmds.c#L2723.\n>\n> I am inclined to take the second approach. Let me know if you feel otherwise.\n\nTook the second approach. PFA patches\n0001 fixes compression inheritance\n0002 fixes storage inheritance\n\nThe patches may be committed separately or as a single patch. Keeping\nthem separate in case we decide to commit one but not the other.\n\nWe always set storage even if it's not specified, in which case the\ncolumn type's default storage is used. This is slightly different from\ncompression which defaults to the GUC's value if not set. This has led\nto slight difference in the tests.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 7 Feb 2024 12:47:04 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Wed, Feb 7, 2024 at 12:47 PM Ashutosh Bapat\n<[email protected]> wrote:\n\n> 0001 fixes compression inheritance\n> 0002 fixes storage inheritance\n>\n\nThe first patch does not update compression_1.out which makes CI\nunhappy. Here's patchset fixing that.\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 8 Feb 2024 12:50:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On 08.02.24 08:20, Ashutosh Bapat wrote:\n> On Wed, Feb 7, 2024 at 12:47 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> \n>> 0001 fixes compression inheritance\n>> 0002 fixes storage inheritance\n>>\n> \n> The first patch does not update compression_1.out which makes CI\n> unhappy. Here's patchset fixing that.\n\nThe changed behavior looks good to me. The tests are good, the code \nchanges are pretty straightforward.\n\nDid you by any change check that pg_dump dumps the resulting structures \ncorrectly? I notice in tablecmds.c that ALTER COLUMN SET STORAGE \nrecurses but ALTER COLUMN SET COMPRESSION does not. I don't understand \nwhy that is, and I wonder whether it affects pg_dump.\n\n\n\n", "msg_date": "Mon, 12 Feb 2024 16:18:12 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Mon, Feb 12, 2024 at 8:48 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 08.02.24 08:20, Ashutosh Bapat wrote:\n> > On Wed, Feb 7, 2024 at 12:47 PM Ashutosh Bapat\n> > <[email protected]> wrote:\n> >\n> >> 0001 fixes compression inheritance\n> >> 0002 fixes storage inheritance\n> >>\n> >\n> > The first patch does not update compression_1.out which makes CI\n> > unhappy. Here's patchset fixing that.\n>\n> The changed behavior looks good to me. The tests are good, the code\n> changes are pretty straightforward.\n>\n> Did you by any change check that pg_dump dumps the resulting structures\n> correctly? I notice in tablecmds.c that ALTER COLUMN SET STORAGE\n> recurses but ALTER COLUMN SET COMPRESSION does not. I don't understand\n> why that is, and I wonder whether it affects pg_dump.\n>\n\nI used src/bin/pg_upgrade/t/002_pg_upgrade.pl to test dump and restore\nby leaving back the new objects created in compression.sql and\ninherit.sql.\n\nCOMPRESSION is set using ALTER TABLE ONLY so it affects only the\nparent and should not propagate to children. 
A child inherits the\nparent first and then changes compression property. For example\n```\nCREATE TABLE public.cmparent1 (\n f1 text\n);\nALTER TABLE ONLY public.cmparent1 ALTER COLUMN f1 SET COMPRESSION pglz;\n\nCREATE TABLE public.cminh1 (\n f1 text\n)\nINHERITS (public.cmparent1);\nALTER TABLE ONLY public.cminh1 ALTER COLUMN f1 SET COMPRESSION lz4;\n```\n\nSame is true with the STORAGE parameter. Example\n```\nCREATE TABLE public.stparent1 (\n a text\n);\nALTER TABLE ONLY public.stparent1 ALTER COLUMN a SET STORAGE PLAIN;\n\nCREATE TABLE public.stchild1 (\n a text\n)\nINHERITS (public.stparent1);\nALTER TABLE ONLY public.stchild1 ALTER COLUMN a SET STORAGE PLAIN;\n```\n\nI don't think pg_dump would be affected by the difference you noted.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 13 Feb 2024 18:19:24 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "I have committed this. It is great to get this behavior fixed and also \nto get the internals more consistent. Thanks.\n\n\n", "msg_date": "Fri, 16 Feb 2024 14:18:08 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I have committed this. It is great to get this behavior fixed and also \n> to get the internals more consistent. Thanks.\n\nI find it surprising that the committed patch does not touch\npg_dump. Is it really true that pg_dump dumps situations with\ndiffering compression/storage settings accurately already?\n\n(Note that it proves little that the pg_upgrade test passes,\nsince if pg_dump were blind to the settings applicable to a\nchild table, the second dump would still be blind to them.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Feb 2024 09:58:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "I wrote:\n> I find it surprising that the committed patch does not touch\n> pg_dump. Is it really true that pg_dump dumps situations with\n> differing compression/storage settings accurately already?\n\nIt's worse than I thought. 
Run \"make installcheck\" with\ntoday's HEAD, then:\n\n$ pg_dump -Fc regression >r.dump\n$ createdb r2\n$ pg_restore -d r2 r.dump \npg_restore: error: could not execute query: ERROR: column \"a\" inherits conflicting storage methods\nHINT: To resolve the conflict, specify a storage method explicitly.\nCommand was: CREATE TABLE public.stchild4 (\n a text\n)\nINHERITS (public.stparent1, public.stparent2);\nALTER TABLE ONLY public.stchild4 ALTER COLUMN a SET STORAGE MAIN;\n\n\npg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\nCommand was: ALTER TABLE public.stchild4 OWNER TO postgres;\n\npg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\nCommand was: COPY public.stchild4 (a) FROM stdin;\npg_restore: warning: errors ignored on restore: 3\n\n\nWhat I'd intended to compare was the results of the query added to the\nregression tests:\n\nregression=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\nWHERE (attrelid::regclass::name like 'stparent%'\nOR attrelid::regclass::name like 'stchild%')\nand attname = 'a'\nORDER BY 1, 2;\n attrelid | attname | attstorage \n-----------+---------+------------\n stparent1 | a | p\n stparent2 | a | x\n stchild1 | a | p\n stchild3 | a | m\n stchild4 | a | m\n stchild5 | a | x\n stchild6 | a | m\n(7 rows)\n\nr2=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\nWHERE (attrelid::regclass::name like 'stparent%'\nOR attrelid::regclass::name like 'stchild%')\nand attname = 'a'\nORDER BY 1, 2;\n attrelid | attname | attstorage \n-----------+---------+------------\n stparent1 | a | p\n stchild1 | a | p\n stchild3 | a | m\n stparent2 | a | x\n stchild5 | a | p\n stchild6 | a | m\n(6 rows)\n\nSo not only does stchild4 fail to restore altogether, but stchild5\nends with the wrong attstorage.\n\nThis patch definitely needs more work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Feb 2024 13:24:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Fri, Feb 16, 2024 at 11:54 PM Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > I find it surprising that the committed patch does not touch\n> > pg_dump. Is it really true that pg_dump dumps situations with\n> > differing compression/storage settings accurately already?\n>\n> It's worse than I thought. Run \"make installcheck\" with\n> today's HEAD, then:\n>\n> $ pg_dump -Fc regression >r.dump\n> $ createdb r2\n> $ pg_restore -d r2 r.dump\n> pg_restore: error: could not execute query: ERROR: column \"a\" inherits conflicting storage methods\n> HINT: To resolve the conflict, specify a storage method explicitly.\n> Command was: CREATE TABLE public.stchild4 (\n> a text\n> )\n> INHERITS (public.stparent1, public.stparent2);\n> ALTER TABLE ONLY public.stchild4 ALTER COLUMN a SET STORAGE MAIN;\n>\n>\n> pg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\n> Command was: ALTER TABLE public.stchild4 OWNER TO postgres;\n>\n> pg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\n> Command was: COPY public.stchild4 (a) FROM stdin;\n> pg_restore: warning: errors ignored on restore: 3\n\nThanks for the test. Let's call this Problem1. I expected\nsrc/bin/pg_upgrade/t/002_pg_upgrade.pl to fail in this case since it\nwill execute similar steps as you did. 
And it actually does, except\nthat it uses binary-upgrade mode. In that mode, INHERITed tables are\ndumped in a different manner\n-- For binary upgrade, set up inheritance this way.\nALTER TABLE ONLY \"public\".\"stchild4\" INHERIT \"public\".\"stparent1\";\nALTER TABLE ONLY \"public\".\"stchild4\" INHERIT \"public\".\"stparent2\";\n... snip ...\nALTER TABLE ONLY \"public\".\"stchild4\" ALTER COLUMN \"a\" SET STORAGE MAIN;\n\nthat does not lead to the conflict and pg_upgrade does not fail.\n\n>\n>\n> What I'd intended to compare was the results of the query added to the\n> regression tests:\n>\n> regression=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n> WHERE (attrelid::regclass::name like 'stparent%'\n> OR attrelid::regclass::name like 'stchild%')\n> and attname = 'a'\n> ORDER BY 1, 2;\n> attrelid | attname | attstorage\n> -----------+---------+------------\n> stparent1 | a | p\n> stparent2 | a | x\n> stchild1 | a | p\n> stchild3 | a | m\n> stchild4 | a | m\n> stchild5 | a | x\n> stchild6 | a | m\n> (7 rows)\n>\n> r2=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n> WHERE (attrelid::regclass::name like 'stparent%'\n> OR attrelid::regclass::name like 'stchild%')\n> and attname = 'a'\n> ORDER BY 1, 2;\n> attrelid | attname | attstorage\n> -----------+---------+------------\n> stparent1 | a | p\n> stchild1 | a | p\n> stchild3 | a | m\n> stparent2 | a | x\n> stchild5 | a | p\n> stchild6 | a | m\n> (6 rows)\n>\n> So not only does stchild4 fail to restore altogether, but stchild5\n> ends with the wrong attstorage.\n\nWith binary-upgrade dump and restore stchild5 gets the correct storage value.\n\nLooks like we need a test which pg_dump s regression database and\nrestores it without going through pg_upgrade.\n\nI think the fix is easy one. Dump the STORAGE and COMPRESSION clauses\nwith CREATE TABLE for local attributes. Those for inherited attributes\nwill be dumped separately.\n\nBut that will not fix an existing problem described below. Let's call\nit Problem2. 
With HEAD at commit\n57f59396bb51953bb7b957780c7f1b7f67602125 (almost a month back)\n$ createdb regression\n$ psql -d regression\n#create table par1 (a text storage plain);\n#create table par2 (a text storage plain);\n#create table chld (a text) inherits (par1, par2);\nNOTICE: merging multiple inherited definitions of column \"a\"\nNOTICE: merging column \"a\" with inherited definition\n-- parent storages conflict after child creation\n#alter table par1 alter column a set storage extended;\n#SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n WHERE (attrelid::regclass::name like 'par%'\n OR attrelid::regclass::name like 'chld%')\n and attname = 'a'\n ORDER BY 1, 2;\n attrelid | attname | attstorage\n----------+---------+------------\n par1 | a | x\n par2 | a | p\n chld | a | x\n(3 rows)\n\n$ createdb r2\n$ pg_dump -Fc regression > /tmp/r.dump\n$ pg_restore -d r2 /tmp/r.dump\npg_restore: error: could not execute query: ERROR: inherited column\n\"a\" has a storage parameter conflict\nDETAIL: EXTENDED versus PLAIN\nCommand was: CREATE TABLE public.chld (\n a text\n)\nINHERITS (public.par1, public.par2);\n\npg_restore: error: could not execute query: ERROR: relation\n\"public.chld\" does not exist\nCommand was: ALTER TABLE public.chld OWNER TO ashutosh;\n\npg_restore: error: could not execute query: ERROR: relation\n\"public.chld\" does not exist\nCommand was: COPY public.chld (a) FROM stdin;\npg_restore: warning: errors ignored on restore: 3\n\nFixing this requires that we dump ALTER TABLE ... ALTER COLUMN SET\nSTORAGE and COMPRESSION commands after all the tables (at least\nchildren) have been created. That seems to break the way we dump the\nwhole table together right now. OR dump inherited tables like binary\nupgrade mode.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Feb 2024 17:04:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "I have reverted the patch for now (and re-opened the commitfest entry). \nWe should continue to work on this and see if we can at least try to get \nthe pg_dump test coverage suitable.\n\n\nOn 19.02.24 12:34, Ashutosh Bapat wrote:\n> On Fri, Feb 16, 2024 at 11:54 PM Tom Lane <[email protected]> wrote:\n>>\n>> I wrote:\n>>> I find it surprising that the committed patch does not touch\n>>> pg_dump. Is it really true that pg_dump dumps situations with\n>>> differing compression/storage settings accurately already?\n>>\n>> It's worse than I thought. Run \"make installcheck\" with\n>> today's HEAD, then:\n>>\n>> $ pg_dump -Fc regression >r.dump\n>> $ createdb r2\n>> $ pg_restore -d r2 r.dump\n>> pg_restore: error: could not execute query: ERROR: column \"a\" inherits conflicting storage methods\n>> HINT: To resolve the conflict, specify a storage method explicitly.\n>> Command was: CREATE TABLE public.stchild4 (\n>> a text\n>> )\n>> INHERITS (public.stparent1, public.stparent2);\n>> ALTER TABLE ONLY public.stchild4 ALTER COLUMN a SET STORAGE MAIN;\n>>\n>>\n>> pg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\n>> Command was: ALTER TABLE public.stchild4 OWNER TO postgres;\n>>\n>> pg_restore: error: could not execute query: ERROR: relation \"public.stchild4\" does not exist\n>> Command was: COPY public.stchild4 (a) FROM stdin;\n>> pg_restore: warning: errors ignored on restore: 3\n> \n> Thanks for the test. Let's call this Problem1. 
I expected\n> src/bin/pg_upgrade/t/002_pg_upgrade.pl to fail in this case since it\n> will execute similar steps as you did. And it actually does, except\n> that it uses binary-upgrade mode. In that mode, INHERITed tables are\n> dumped in a different manner\n> -- For binary upgrade, set up inheritance this way.\n> ALTER TABLE ONLY \"public\".\"stchild4\" INHERIT \"public\".\"stparent1\";\n> ALTER TABLE ONLY \"public\".\"stchild4\" INHERIT \"public\".\"stparent2\";\n> ... snip ...\n> ALTER TABLE ONLY \"public\".\"stchild4\" ALTER COLUMN \"a\" SET STORAGE MAIN;\n> \n> that does not lead to the conflict and pg_upgrade does not fail.\n> \n>>\n>>\n>> What I'd intended to compare was the results of the query added to the\n>> regression tests:\n>>\n>> regression=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n>> WHERE (attrelid::regclass::name like 'stparent%'\n>> OR attrelid::regclass::name like 'stchild%')\n>> and attname = 'a'\n>> ORDER BY 1, 2;\n>> attrelid | attname | attstorage\n>> -----------+---------+------------\n>> stparent1 | a | p\n>> stparent2 | a | x\n>> stchild1 | a | p\n>> stchild3 | a | m\n>> stchild4 | a | m\n>> stchild5 | a | x\n>> stchild6 | a | m\n>> (7 rows)\n>>\n>> r2=# SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n>> WHERE (attrelid::regclass::name like 'stparent%'\n>> OR attrelid::regclass::name like 'stchild%')\n>> and attname = 'a'\n>> ORDER BY 1, 2;\n>> attrelid | attname | attstorage\n>> -----------+---------+------------\n>> stparent1 | a | p\n>> stchild1 | a | p\n>> stchild3 | a | m\n>> stparent2 | a | x\n>> stchild5 | a | p\n>> stchild6 | a | m\n>> (6 rows)\n>>\n>> So not only does stchild4 fail to restore altogether, but stchild5\n>> ends with the wrong attstorage.\n> \n> With binary-upgrade dump and restore stchild5 gets the correct storage value.\n> \n> Looks like we need a test which pg_dump s regression database and\n> restores it without going through pg_upgrade.\n> \n> I think the fix is easy one. Dump the STORAGE and COMPRESSION clauses\n> with CREATE TABLE for local attributes. Those for inherited attributes\n> will be dumped separately.\n> \n> But that will not fix an existing problem described below. Let's call\n> it Problem2. 
With HEAD at commit\n> 57f59396bb51953bb7b957780c7f1b7f67602125 (almost a month back)\n> $ createdb regression\n> $ psql -d regression\n> #create table par1 (a text storage plain);\n> #create table par2 (a text storage plain);\n> #create table chld (a text) inherits (par1, par2);\n> NOTICE: merging multiple inherited definitions of column \"a\"\n> NOTICE: merging column \"a\" with inherited definition\n> -- parent storages conflict after child creation\n> #alter table par1 alter column a set storage extended;\n> #SELECT attrelid::regclass, attname, attstorage FROM pg_attribute\n> WHERE (attrelid::regclass::name like 'par%'\n> OR attrelid::regclass::name like 'chld%')\n> and attname = 'a'\n> ORDER BY 1, 2;\n> attrelid | attname | attstorage\n> ----------+---------+------------\n> par1 | a | x\n> par2 | a | p\n> chld | a | x\n> (3 rows)\n> \n> $ createdb r2\n> $ pg_dump -Fc regression > /tmp/r.dump\n> $ pg_restore -d r2 /tmp/r.dump\n> pg_restore: error: could not execute query: ERROR: inherited column\n> \"a\" has a storage parameter conflict\n> DETAIL: EXTENDED versus PLAIN\n> Command was: CREATE TABLE public.chld (\n> a text\n> )\n> INHERITS (public.par1, public.par2);\n> \n> pg_restore: error: could not execute query: ERROR: relation\n> \"public.chld\" does not exist\n> Command was: ALTER TABLE public.chld OWNER TO ashutosh;\n> \n> pg_restore: error: could not execute query: ERROR: relation\n> \"public.chld\" does not exist\n> Command was: COPY public.chld (a) FROM stdin;\n> pg_restore: warning: errors ignored on restore: 3\n> \n> Fixing this requires that we dump ALTER TABLE ... ALTER COLUMN SET\n> STORAGE and COMPRESSION commands after all the tables (at least\n> children) have been created. That seems to break the way we dump the\n> whole table together right now. OR dump inherited tables like binary\n> upgrade mode.\n> \n\n\n\n", "msg_date": "Tue, 20 Feb 2024 11:21:12 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Tue, Feb 20, 2024 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n>\n> I have reverted the patch for now (and re-opened the commitfest entry).\n> We should continue to work on this and see if we can at least try to get\n> the pg_dump test coverage suitable.\n>\n\nI have started a separate thread for dump/restore test. [1].\n\nUsing that test, I found an existing bug:\nConsider\nCREATE TABLE cminh6 (f1 TEXT);\nALTER TABLE cminh6 INHERIT cmparent1;\nf1 remains without compression even after inherit per the current code.\nBut pg_dump dumps it out as\nCREATE TABLE cminh6 (f1 TEXT) INHERIT(cmparent1)\nBecause of this after restoring cminh6::f1 inherits compression of\ncmparent1. So before dump cminh6::f1 has no compression and after\nrestore it has compression.\n\nI am not sure how to fix this. We want inheritance children to have\ntheir on compression. So ALTER TABLE ... INHERIT ... no passing a\ncompression onto child is fine. CREATE TABLE .... INHERIT ... passing\ncompression onto the child being created also looks fine since that's\nwhat we do with other attributes. Only solution I see is to introduce\n\"none\" as a special compression method to indicate \"no compression\"\nand store that instead of NULL in attcompression column. 
That looks\nugly.\n\nSimilar is the case with storage.\n\n[1] https://www.postgresql.org/message-id/CAExHW5uF5V=Cjecx3_Z=7xfh4rg2Wf61PT+hfquzjBqouRzQJQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 23 Feb 2024 18:05:38 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Fri, Feb 23, 2024 at 6:05 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Feb 20, 2024 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > I have reverted the patch for now (and re-opened the commitfest entry).\n> > We should continue to work on this and see if we can at least try to get\n> > the pg_dump test coverage suitable.\n> >\n>\n> I have started a separate thread for dump/restore test. [1].\n>\n> Using that test, I found an existing bug:\n> Consider\n> CREATE TABLE cminh6 (f1 TEXT);\n> ALTER TABLE cminh6 INHERIT cmparent1;\n> f1 remains without compression even after inherit per the current code.\n> But pg_dump dumps it out as\n> CREATE TABLE cminh6 (f1 TEXT) INHERIT(cmparent1)\n> Because of this after restoring cminh6::f1 inherits compression of\n> cmparent1. So before dump cminh6::f1 has no compression and after\n> restore it has compression.\n>\n> I am not sure how to fix this. We want inheritance children to have\n> their on compression. So ALTER TABLE ... INHERIT ... no passing a\n> compression onto child is fine. CREATE TABLE .... INHERIT ... passing\n> compression onto the child being created also looks fine since that's\n> what we do with other attributes. Only solution I see is to introduce\n> \"none\" as a special compression method to indicate \"no compression\"\n> and store that instead of NULL in attcompression column. That looks\n> ugly.\n\nSpecifying DEFAULT as COMPRESSION method instead of inventing \"none\"\nworks. We should do it only for INHERITed tables.\n\n>\n> Similar is the case with storage.\n>\n\nSimilar to compression, for inherited tables we have to output STORAGE\nclause even if it's default.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 27 Feb 2024 15:53:53 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "Hi Peter and Tom,\n\n> On Tue, Feb 20, 2024 at 3:51 PM Peter Eisentraut <[email protected]>\n> wrote:\n> > >\n> > > I have reverted the patch for now (and re-opened the commitfest entry).\n> > > We should continue to work on this and see if we can at least try to\n> get\n> > > the pg_dump test coverage suitable.\n> > >\n> >\n>\n\nThe pg_dump problems arise because we throw an error when parents have\nconflicting compression and storage properties. The patch that got\nreverted, changed this slightly by allowing a child to override parent's\nproperties even when they conflict. It still threw an error when child\ndidn't override and parents conflicted. I guess, MergeAttributes() raises\nerror when it encounters parents with conflicting properties because it can\nnot decide which of the conflicting properties the child should inherit.\nInstead it could just set the DEFAULT properties when parent properties\nconflict but child doesn't override. Thus when compression conflicts,\nchild's compression would be set to default and when storage conflicts it\nwill be set to the type's default storage. 
Child's properties when\nspecified explicitly would override always. This will solve all the pg_dump\nbugs we saw with the reverted patch and also existing bug I reported\nearlier.\n\nThis change would break backward compatibility but I don't think anybody\nwould rely on error being thrown when parent properties conflict.\n\nWhat do you think?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi Peter and Tom,\n> On Tue, Feb 20, 2024 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > I have reverted the patch for now (and re-opened the commitfest entry).\n> > We should continue to work on this and see if we can at least try to get\n> > the pg_dump test coverage suitable.\n> >\n>\nThe pg_dump problems arise because we throw an error when parents have conflicting compression and storage properties. The patch that got reverted, changed this slightly by allowing a child to override parent's properties even when they conflict. It still threw an error when child didn't override and parents conflicted. I guess, MergeAttributes() raises error when it encounters parents with conflicting properties because it can not decide which of the conflicting properties the child should inherit. Instead it could just set the DEFAULT properties when parent properties conflict but child doesn't override. Thus when compression conflicts, child's compression would be set to default and when storage conflicts it will be set to the type's default storage. Child's properties when specified explicitly would override always. This will solve all the pg_dump bugs we saw with the reverted patch and also existing bug I reported earlier.This change would break backward compatibility but I don't think anybody would rely on error being thrown when parent properties conflict.What do you think?-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 7 Mar 2024 22:24:26 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On 07.03.24 17:54, Ashutosh Bapat wrote:\n> The pg_dump problems arise because we throw an error when parents have \n> conflicting compression and storage properties. The patch that got \n> reverted, changed this slightly by allowing a child to override parent's \n> properties even when they conflict. It still threw an error when child \n> didn't override and parents conflicted. I guess, MergeAttributes() \n> raises error when it encounters parents with conflicting properties \n> because it can not decide which of the conflicting properties the child \n> should inherit. Instead it could just set the DEFAULT properties when \n> parent properties conflict but child doesn't override. Thus when \n> compression conflicts, child's compression would be set to default and \n> when storage conflicts it will be set to the type's default storage. \n> Child's properties when specified explicitly would override always. This \n> will solve all the pg_dump bugs we saw with the reverted patch and also \n> existing bug I reported earlier.\n> \n> This change would break backward compatibility but I don't think anybody \n> would rely on error being thrown when parent properties conflict.\n> \n> What do you think?\n\nAt this point in the development cycle, I would rather not undertake \nsuch changes. We have already discovered with the previous attempt that \nthere are unexpected pitfalls and lacking test coverage. Also, there \nisn't even a patch yet. 
I suggest we drop this for now, or reconsider \nit for PG18, as you wish.\n\n\n\n", "msg_date": "Thu, 21 Mar 2024 10:49:47 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table inheritance versus column compression and storage settings" }, { "msg_contents": "On Thu, Mar 21, 2024 at 3:19 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 07.03.24 17:54, Ashutosh Bapat wrote:\n> > The pg_dump problems arise because we throw an error when parents have\n> > conflicting compression and storage properties. The patch that got\n> > reverted, changed this slightly by allowing a child to override parent's\n> > properties even when they conflict. It still threw an error when child\n> > didn't override and parents conflicted. I guess, MergeAttributes()\n> > raises error when it encounters parents with conflicting properties\n> > because it can not decide which of the conflicting properties the child\n> > should inherit. Instead it could just set the DEFAULT properties when\n> > parent properties conflict but child doesn't override. Thus when\n> > compression conflicts, child's compression would be set to default and\n> > when storage conflicts it will be set to the type's default storage.\n> > Child's properties when specified explicitly would override always. This\n> > will solve all the pg_dump bugs we saw with the reverted patch and also\n> > existing bug I reported earlier.\n> >\n> > This change would break backward compatibility but I don't think anybody\n> > would rely on error being thrown when parent properties conflict.\n> >\n> > What do you think?\n>\n> At this point in the development cycle, I would rather not undertake\n> such changes. We have already discovered with the previous attempt that\n> there are unexpected pitfalls and lacking test coverage. Also, there\n> isn't even a patch yet. I suggest we drop this for now, or reconsider\n> it for PG18, as you wish.\n>\n>\nI am fine with this. Should I mark the CF entry as RWF?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Mar 21, 2024 at 3:19 PM Peter Eisentraut <[email protected]> wrote:On 07.03.24 17:54, Ashutosh Bapat wrote:\n> The pg_dump problems arise because we throw an error when parents have \n> conflicting compression and storage properties. The patch that got \n> reverted, changed this slightly by allowing a child to override parent's \n> properties even when they conflict. It still threw an error when child \n> didn't override and parents conflicted. I guess, MergeAttributes() \n> raises error when it encounters parents with conflicting properties \n> because it can not decide which of the conflicting properties the child \n> should inherit. Instead it could just set the DEFAULT properties when \n> parent properties conflict but child doesn't override. Thus when \n> compression conflicts, child's compression would be set to default and \n> when storage conflicts it will be set to the type's default storage. \n> Child's properties when specified explicitly would override always. This \n> will solve all the pg_dump bugs we saw with the reverted patch and also \n> existing bug I reported earlier.\n> \n> This change would break backward compatibility but I don't think anybody \n> would rely on error being thrown when parent properties conflict.\n> \n> What do you think?\n\nAt this point in the development cycle, I would rather not undertake \nsuch changes.  We have already discovered with the previous attempt that \nthere are unexpected pitfalls and lacking test coverage.  
Also, there \nisn't even a patch yet.  I suggest we drop this for now, or reconsider \nit for PG18, as you wish.\n\nI am fine with this. Should I mark the CF entry as RWF?-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 21 Mar 2024 16:46:29 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table inheritance versus column compression and storage settings" } ]
[ { "msg_contents": "Attached is the v12 patch. Below are the summary of the changes from\nprevious version.\n\n- Rebase. CFbot says v11 patch needs rebase since Nov 30, 2023.\n \n- Apply preprocess_expression() to DEFINE clause in the planning\n phase. This is necessary to simply const expressions like:\n\n DEFINE A price < (99 + 1)\n to:\n DEFINE A price < 100\n\n- Re-allow to use WinSetMarkPosition() in eval_windowaggregates().\n\n- FYI here is the list to explain what were changed in each patch file.\n\n0001-Row-pattern-recognition-patch-for-raw-parser.patch\n- Fix conflict.\n\n0002-Row-pattern-recognition-patch-parse-analysis.patch\n- Same as before.\n\n0003-Row-pattern-recognition-patch-planner.patch\n- Call preprocess_expression() for DEFINE clause in subquery_planner().\n\n0004-Row-pattern-recognition-patch-executor.patch\n- Re-allow to use WinSetMarkPosition() in eval_windowaggregates().\n\n0005-Row-pattern-recognition-patch-docs.patch\n- Same as before.\n\n0006-Row-pattern-recognition-patch-tests.patch\n- Same as before.\n\n0007-Allow-to-print-raw-parse-tree.patch\n- Same as before.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Mon, 04 Dec 2023 20:40:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Row pattern recognition" }, { "msg_contents": "On 04.12.23 12:40, Tatsuo Ishii wrote:\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index d631ac89a9..5a77fca17f 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -251,6 +251,8 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tDefElem\t *defelt;\n> \tSortBy\t *sortby;\n> \tWindowDef *windef;\n> +\tRPCommonSyntax\t*rpcom;\n> +\tRPSubsetItem\t*rpsubset;\n> \tJoinExpr *jexpr;\n> \tIndexElem *ielem;\n> \tStatsElem *selem;\n> @@ -278,6 +280,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> \tMergeWhenClause *mergewhen;\n> \tstruct KeyActions *keyactions;\n> \tstruct KeyAction *keyaction;\n> +\tRPSkipTo\tskipto;\n> }\n> \n> %type <node>\tstmt toplevel_stmt schema_stmt routine_body_stmt\n\nIt is usually not the style to add an entry for every node type to the \n%union. Otherwise, we'd have hundreds of entries in there.\n\n> @@ -866,6 +878,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n> %nonassoc\tUNBOUNDED\t\t/* ideally would have same precedence as IDENT */\n> %nonassoc\tIDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n> \t\t\tSET KEYS OBJECT_P SCALAR VALUE_P WITH WITHOUT\n> +%nonassoc\tMEASURES AFTER INITIAL SEEK PATTERN_P\n> %left\t\tOp OPERATOR\t\t/* multi-character ops and user-defined operators */\n> %left\t\t'+' '-'\n> %left\t\t'*' '/' '%'\n\nIt was recently discussed that these %nonassoc should ideally all have \nthe same precedence. Did you consider that here?\n\n\n\n", "msg_date": "Wed, 6 Dec 2023 12:05:33 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Row pattern recognition" } ]
[ { "msg_contents": "I have been playing around with the idea of adding support for OLD/NEW\nto RETURNING, partly motivated by the discussion on the MERGE\nRETURNING thread [1], but also because I think it would be a very\nuseful addition for other commands (UPDATE in particular).\n\nThis was discussed a long time ago [2], but that previous discussion\ndidn't lead to a workable patch, and so I have taken a different\napproach here.\n\nMy first thought was that this would only really make sense for UPDATE\nand MERGE, since OLD/NEW are pretty pointless for INSERT/DELETE\nrespectively. However...\n\n1. For an INSERT with an ON CONFLICT ... DO UPDATE clause, returning\nOLD might be very useful, since it provides a way to see which rows\nconflicted, and return the old conflicting values.\n\n2. If a DELETE is turned into an UPDATE by a rule (e.g., to mark rows\nas deleted, rather than actually deleting them), then returning NEW\ncan also be useful. (I admit, this is a somewhat obscure use case, but\nit's still possible.)\n\n3. In a MERGE, we need to be able to handle all 3 command types anyway.\n\n4. It really isn't any extra effort to support INSERT and DELETE.\n\nSo in the attached very rough patch (no docs, minimal testing) I have\njust allowed OLD/NEW in RETURNING for all command types (except, I\nhaven't done MERGE here - I think that's best kept as a separate\npatch). If there is no OLD/NEW row in a particular context, it just\nreturns NULLs. The regression tests contain examples of 1 & 2 above.\n\n\nBased on Robert Haas' suggestion in [2], the patch works by adding a\nnew \"varreturningtype\" field to Var nodes. This field is set during\nparse analysis of the returning clause, which adds new namespace\naliases for OLD and NEW, if tables with those names/aliases are not\nalready present. So the resulting Var nodes have the same\nvarno/varattno as they would normally have had, but a different\nvarreturningtype.\n\nFor the most part, the rewriter and parser are then untouched, except\nfor a couple of places necessary to ensure that the new field makes it\nthrough correctly. In particular, none of this affects the shape of\nthe final plan produced. All of the work to support the new Var\nreturning type is done in the executor.\n\nThis turns out to be relatively straightforward, except for\ncross-partition updates, which was a little trickier since the tuple\nformat of the old row isn't necessarily compatible with the new row,\nwhich is in a different partition table and so might have a different\ncolumn order.\n\nOne thing that I've explicitly disallowed is returning OLD/NEW for\nupdates to foreign tables. It's possible that could be added in a\nlater patch, but I have no plans to support that right now.\n\n\nOne difficult question is what names to use for the new aliases. I\nthink OLD and NEW are the most obvious and natural choices. However,\nthere is a problem - if they are used in a trigger function, they will\nconflict. In PL/pgSQL, this leads to an error like the following:\n\nERROR: column reference \"new.f1\" is ambiguous\nLINE 3: RETURNING new.f1, new.f4\n ^\nDETAIL: It could refer to either a PL/pgSQL variable or a table column.\n\nThat's the same error that you'd get if a different alias name had\nbeen chosen, and it happened to conflict with a user-defined PL/pgSQL\nvariable, except that in that case, the user could just change their\nvariable name to fix the problem, which is not possible with the\nautomatically-added OLD/NEW trigger variables. 
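To make the clash concrete, here is a hypothetical example (the table,\nfunction and trigger names are invented purely for illustration; they\nare not taken from the patch or its tests):\n\nCREATE TABLE foo (f1 int, f4 text);\nCREATE TABLE bar (f1 int, f4 text);  -- illustrative tables only\n\nCREATE FUNCTION bar_sync() RETURNS trigger AS $$\nDECLARE r record;\nBEGIN\n    -- Inside the function, OLD and NEW are already the trigger's row\n    -- variables, so \"new\" in this RETURNING list becomes ambiguous.\n    UPDATE foo SET f4 = NEW.f4 WHERE f1 = NEW.f1\n        RETURNING new.f1, new.f4 INTO r;\n    RETURN NEW;\nEND\n$$ LANGUAGE plpgsql;\n\nCREATE TRIGGER bar_sync_trg AFTER UPDATE ON bar\n    FOR EACH ROW EXECUTE FUNCTION bar_sync();\n\nUpdating bar then fires the trigger, and the UPDATE inside it hits the\nambiguity error quoted above.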
As a way round that, I\nadded a way to optionally change the alias used in the RETURNING list,\nusing the following syntax:\n\n RETURNING [ WITH ( { OLD | NEW } AS output_alias [, ...] ) ]\n * | output_expression [ [ AS ] output_name ] [, ...]\n\nfor example:\n\n RETURNING WITH (OLD AS o) o.id, o.val, ...\n\nI'm not sure how good a solution that is, but the syntax doesn't look\ntoo bad to me (somewhat reminiscent of a WITH-query), and it's only\nnecessary in cases where there is a name conflict.\n\nThe simpler solution would be to just pick different alias names to\nstart with. The previous thread seemed to settle on BEFORE/AFTER, but\nI don't find those names particularly intuitive or appealing. Over on\n[1], PREVIOUS/CURRENT was suggested, which I prefer, but they still\ndon't seem as natural as OLD/NEW.\n\nSo, as is often the case, naming things turns out to be the hardest\nproblem, which is why I quite like the idea of letting the user pick\ntheir own name, if they need to. In most contexts, OLD and NEW will\nwork, so they won't need to.\n\nThoughts?\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/51822C0F.5030807%40gmail.com", "msg_date": "Mon, 4 Dec 2023 12:14:42 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, Dec 4, 2023 at 8:15 PM Dean Rasheed <[email protected]> wrote:\n>\n> I have been playing around with the idea of adding support for OLD/NEW\n> to RETURNING, partly motivated by the discussion on the MERGE\n> RETURNING thread [1], but also because I think it would be a very\n> useful addition for other commands (UPDATE in particular).\n>\n> This was discussed a long time ago [2], but that previous discussion\n> didn't lead to a workable patch, and so I have taken a different\n> approach here.\n>\n> Thoughts?\n>\n\n\n /* get the tuple from the relation being scanned */\n- scratch.opcode = EEOP_ASSIGN_SCAN_VAR;\n+ switch (variable->varreturningtype)\n+ {\n+ case VAR_RETURNING_OLD:\n+ scratch.opcode = EEOP_ASSIGN_OLD_VAR;\n+ break;\n+ case VAR_RETURNING_NEW:\n+ scratch.opcode = EEOP_ASSIGN_NEW_VAR;\n+ break;\n+ default:\n+ scratch.opcode = EEOP_ASSIGN_SCAN_VAR;\n+ break;\n+ }\nI have roughly an idea of what this code is doing. but do you need to\nrefactor the above comment?\n\n\n/* for EEOP_INNER/OUTER/SCAN_FETCHSOME */\nin src/backend/executor/execExpr.c, do you need to update the comment?\n\ncreate temp table foo (f1 int, f2 int);\ninsert into foo values (1,2), (3,4);\nINSERT INTO foo select 11, 22 RETURNING WITH (old AS new, new AS old)\nnew.*, old.*;\n--this works. 
which is fine.\n\ncreate or replace function stricttest1() returns void as $$\ndeclare x record;\nbegin\n insert into foo values(5,6) returning new.* into x;\n raise notice 'x.f1 = % x.f2 %', x.f1, x.f2;\nend$$ language plpgsql;\nselect * from stricttest1();\n--this works.\n\ncreate or replace function stricttest2() returns void as $$\ndeclare x record; y record;\nbegin\n INSERT INTO foo select 11, 22 RETURNING WITH (old AS o, new AS n)\no into x, n into y;\n raise notice 'x.f1: % x.f2 % y.f1 % y.f2 %', x.f1,x.f2, y.f1, y.f2;\nend$$ language plpgsql;\n--this does not work.\n--because https://www.postgresql.org/message-id/flat/CAFj8pRB76FE2MVxJYPc1RvXmsf2upoTgoPCC9GsvSAssCM2APQ%40mail.gmail.com\n\ncreate or replace function stricttest3() returns void as $$\ndeclare x record; y record;\nbegin\n INSERT INTO foo select 11, 22 RETURNING WITH (old AS o, new AS n) o.*,n.*\n into x;\n raise notice 'x.f1 % x.f2 %, % %', x.f1, x.f2, x.f1,x.f2;\nend$$ language plpgsql;\nselect * from stricttest3();\n--this is not what we want. because old and new share the same column name\n--so here you cannot get the \"new\" content.\n\ncreate or replace function stricttest4() returns void as $$\ndeclare x record; y record;\nbegin\n INSERT INTO foo select 11, 22\n RETURNING WITH (old AS o, new AS n)\n o.f1 as of1,o.f2 as of2,n.f1 as nf1, n.f2 as nf2\n into x;\n raise notice 'x.0f1 % x.of2 % nf1 % nf2 %', x.of1, x.of2, x.nf1, x.nf2;\nend$$ language plpgsql;\n--kind of verbose, but works, which is fine.\n\ncreate or replace function stricttest5() returns void as $$\ndeclare x record; y record;\n a foo%ROWTYPE; b foo%ROWTYPE;\nbegin\n INSERT INTO foo select 11, 22\n RETURNING WITH (old AS o, new AS n) o into a, n into b;\nend$$ language plpgsql;\n-- expect this to work.\n\n\n", "msg_date": "Sat, 16 Dec 2023 21:03:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sat, 16 Dec 2023 at 13:04, jian he <[email protected]> wrote:\n>\n> /* get the tuple from the relation being scanned */\n> I have roughly an idea of what this code is doing. but do you need to\n> refactor the above comment?\n>\n> /* for EEOP_INNER/OUTER/SCAN_FETCHSOME */\n> in src/backend/executor/execExpr.c, do you need to update the comment?\n>\n\nThanks for looking at this.\n\nAttached is a new version with some updated comments. In addition, I\nfixed a couple of issues:\n\nIn raw_expression_tree_walker(), I had missed one of the new node types.\n\nWhen \"old\" or \"new\" are specified by themselves in the RETURNING list\nto return the whole old/new row, the parser was generating a RowExpr\nnode, which appeared to work OK, but failed if there were any dropped\ncolumns in the relation. I have changed this to generate a wholerow\nVar instead, which deals with that issue, and seems better for\nefficiency and consistency with existing code.\n\nIn addition, I have added code during executor startup to record\nwhether or not the RETURNING list actually has any references to\nOLD/NEW values. 
This allows the building of old/new tuple slots to be\nskipped when they're not actually needed, reducing per-row overheads.\n\nI still haven't written any docs yet.\n\n\n> create or replace function stricttest2() returns void as $$\n> declare x record; y record;\n> begin\n> INSERT INTO foo select 11, 22 RETURNING WITH (old AS o, new AS n)\n> o into x, n into y;\n> raise notice 'x.f1: % x.f2 % y.f1 % y.f2 %', x.f1,x.f2, y.f1, y.f2;\n> end$$ language plpgsql;\n> --this does not work.\n> --because https://www.postgresql.org/message-id/flat/CAFj8pRB76FE2MVxJYPc1RvXmsf2upoTgoPCC9GsvSAssCM2APQ%40mail.gmail.com\n>\n> create or replace function stricttest5() returns void as $$\n> declare x record; y record;\n> a foo%ROWTYPE; b foo%ROWTYPE;\n> begin\n> INSERT INTO foo select 11, 22\n> RETURNING WITH (old AS o, new AS n) o into a, n into b;\n> end$$ language plpgsql;\n> -- expect this to work.\n\nYeah, but note that multiple INTO clauses aren't allowed. An\nalternative is to create a custom type to hold the old and new\nrecords, e.g.:\n\nCREATE TYPE foo_delta AS (old foo, new foo);\n\nthen you can just do \"RETURNING old, new INTO delta\" where delta is a\nvariable of type foo_delta, and you can extract individual fields\nusing expressions like \"(delta.old).f1\".\n\nRegards,\nDean", "msg_date": "Wed, 3 Jan 2024 10:22:07 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On 12/4/23 13:14, Dean Rasheed wrote:\n> I have been playing around with the idea of adding support for OLD/NEW\n> to RETURNING, partly motivated by the discussion on the MERGE\n> RETURNING thread [1], but also because I think it would be a very\n> useful addition for other commands (UPDATE in particular).\n> \n\nSounds reasonable ...\n\n> This was discussed a long time ago [2], but that previous discussion\n> didn't lead to a workable patch, and so I have taken a different\n> approach here.\n> \n\nPresumably the 2013 thread went nowhere because of some implementation\nproblems, not simply because the author lost interest and disappeared?\nWould it be helpful for this new patch to briefly summarize what the\nmain issues were and how this new approach deals with that? (It's hard\nto say if reading the old thread is necessary/helpful for understanding\nthis new patch, and time is a scarce resource.)\n\n> My first thought was that this would only really make sense for UPDATE\n> and MERGE, since OLD/NEW are pretty pointless for INSERT/DELETE\n> respectively. However...\n> \n> 1. For an INSERT with an ON CONFLICT ... DO UPDATE clause, returning\n> OLD might be very useful, since it provides a way to see which rows\n> conflicted, and return the old conflicting values.\n> \n> 2. If a DELETE is turned into an UPDATE by a rule (e.g., to mark rows\n> as deleted, rather than actually deleting them), then returning NEW\n> can also be useful. (I admit, this is a somewhat obscure use case, but\n> it's still possible.)\n> \n> 3. In a MERGE, we need to be able to handle all 3 command types anyway.\n> \n> 4. It really isn't any extra effort to support INSERT and DELETE.\n> \n> So in the attached very rough patch (no docs, minimal testing) I have\n> just allowed OLD/NEW in RETURNING for all command types (except, I\n> haven't done MERGE here - I think that's best kept as a separate\n> patch). If there is no OLD/NEW row in a particular context, it just\n> returns NULLs. 
The regression tests contain examples of 1 & 2 above.\n> \n> \n> Based on Robert Haas' suggestion in [2], the patch works by adding a\n> new \"varreturningtype\" field to Var nodes. This field is set during\n> parse analysis of the returning clause, which adds new namespace\n> aliases for OLD and NEW, if tables with those names/aliases are not\n> already present. So the resulting Var nodes have the same\n> varno/varattno as they would normally have had, but a different\n> varreturningtype.\n> \n\nNo opinion on whether varreturningtype is the right approach - it sounds\nlike it's working better than the 2013 patch, but I won't pretend my\nknowledge of this code is sufficient to make judgments beyond that.\n\n> For the most part, the rewriter and parser are then untouched, except\n> for a couple of places necessary to ensure that the new field makes it\n> through correctly. In particular, none of this affects the shape of\n> the final plan produced. All of the work to support the new Var\n> returning type is done in the executor.\n> \n> This turns out to be relatively straightforward, except for\n> cross-partition updates, which was a little trickier since the tuple\n> format of the old row isn't necessarily compatible with the new row,\n> which is in a different partition table and so might have a different\n> column order.\n> \n\nSo we just \"remap\" the attributes, right?\n\n> One thing that I've explicitly disallowed is returning OLD/NEW for\n> updates to foreign tables. It's possible that could be added in a\n> later patch, but I have no plans to support that right now.\n> \n\nSounds like an acceptable restriction, as long as it's documented.\n\nWhat are the challenges for supporting OLD/NEW for foreign tables? I\nguess we'd need to ask the FDW handler to tell us if it can support\nOLD/NEW for this table (and only allow it for postgres_fdw with\nsufficiently new server version), and then deparse the SQL.\n\nI'm asking because this seems like a nice first patch idea, but if I\ndon't see some major obstacle that I don't see ...\n\n> \n> One difficult question is what names to use for the new aliases. I\n> think OLD and NEW are the most obvious and natural choices. However,\n> there is a problem - if they are used in a trigger function, they will\n> conflict. In PL/pgSQL, this leads to an error like the following:\n> \n> ERROR: column reference \"new.f1\" is ambiguous\n> LINE 3: RETURNING new.f1, new.f4\n> ^\n> DETAIL: It could refer to either a PL/pgSQL variable or a table column.\n> \n> That's the same error that you'd get if a different alias name had\n> been chosen, and it happened to conflict with a user-defined PL/pgSQL\n> variable, except that in that case, the user could just change their\n> variable name to fix the problem, which is not possible with the\n> automatically-added OLD/NEW trigger variables. As a way round that, I\n> added a way to optionally change the alias used in the RETURNING list,\n> using the following syntax:\n> \n> RETURNING [ WITH ( { OLD | NEW } AS output_alias [, ...] ) ]\n> * | output_expression [ [ AS ] output_name ] [, ...]\n> \n> for example:\n> \n> RETURNING WITH (OLD AS o) o.id, o.val, ...\n> \n> I'm not sure how good a solution that is, but the syntax doesn't look\n> too bad to me (somewhat reminiscent of a WITH-query), and it's only\n> necessary in cases where there is a name conflict.\n> \n> The simpler solution would be to just pick different alias names to\n> start with. 
The previous thread seemed to settle on BEFORE/AFTER, but\n> I don't find those names particularly intuitive or appealing. Over on\n> [1], PREVIOUS/CURRENT was suggested, which I prefer, but they still\n> don't seem as natural as OLD/NEW.\n> \n> So, as is often the case, naming things turns out to be the hardest\n> problem, which is why I quite like the idea of letting the user pick\n> their own name, if they need to. In most contexts, OLD and NEW will\n> work, so they won't need to.\n> \n\nI think OLD/NEW with a way to define a custom alias when needed seems\nacceptable. Or at least I can't think of a clearly better solution. Yes,\nusing some other name might not have this problem, but I guess we'd have\nto pick an existing keyword or add one. And Tom didn't seem thrilled\nwith reserving a keyword in 2013 ...\n\nPlus I think there's value in consistency, and OLD/NEW seems way more\nnatural that BEFORE/AFTER.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 24 Feb 2024 18:52:36 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sat, 24 Feb 2024 at 17:52, Tomas Vondra\n<[email protected]> wrote:\n>\n> Presumably the 2013 thread went nowhere because of some implementation\n> problems, not simply because the author lost interest and disappeared?\n> Would it be helpful for this new patch to briefly summarize what the\n> main issues were and how this new approach deals with that? (It's hard\n> to say if reading the old thread is necessary/helpful for understanding\n> this new patch, and time is a scarce resource.)\n\nThanks for looking!\n\nThe 2013 patch got fairly far down a particular implementation path\n(adding a new kind of RTE called RTE_ALIAS) before Robert reviewed it\n[1]. He pointed out various specific issues, as well as questioning\nthe overall approach, and suggesting a different approach that would\nhave involved significant rewriting (this is essentially the approach\nthat I have taken, adding a new field to Var nodes).\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoY5EXE-YKMV7CsdSFj-noyZz%3D2z45sgyJX5Y84rO3RnWQ%40mail.gmail.com\n\nThe thread kind-of petered out shortly after that, with the conclusion\nthat the patch needed a pretty significant redesign and rewrite.\n\n\n> No opinion on whether varreturningtype is the right approach - it sounds\n> like it's working better than the 2013 patch, but I won't pretend my\n> knowledge of this code is sufficient to make judgments beyond that.\n>\n> > For the most part, the rewriter and parser are then untouched, except\n> > for a couple of places necessary to ensure that the new field makes it\n> > through correctly. In particular, none of this affects the shape of\n> > the final plan produced. All of the work to support the new Var\n> > returning type is done in the executor.\n\n(Of course, I meant the rewriter and the *planner* are largely untouched.)\n\nI think this is one of the main advantages of this approach. 
The 2013\ndesign, adding a new RTE kind, required changes all over the place,\nincluding lots of hacking in the planner.\n\n\n> > This turns out to be relatively straightforward, except for\n> > cross-partition updates, which was a little trickier since the tuple\n> > format of the old row isn't necessarily compatible with the new row,\n> > which is in a different partition table and so might have a different\n> > column order.\n>\n> So we just \"remap\" the attributes, right?\n\nRight. That's what the majority of the new code in ExecDelete() and\nExecInsert() is for. It's not that complicated, but it did require a\nbit of care.\n\n\n> What are the challenges for supporting OLD/NEW for foreign tables?\n\nI didn't really look at that in any detail, but I don't think it\nshould be too hard. It's not something I want to tackle now though,\nbecause the patch is big enough already.\n\n\n> I think OLD/NEW with a way to define a custom alias when needed seems\n> acceptable. Or at least I can't think of a clearly better solution. Yes,\n> using some other name might not have this problem, but I guess we'd have\n> to pick an existing keyword or add one. And Tom didn't seem thrilled\n> with reserving a keyword in 2013 ...\n>\n> Plus I think there's value in consistency, and OLD/NEW seems way more\n> natural that BEFORE/AFTER.\n\nYes, I think OLD/NEW are much nicer too.\n\nAttached is a new patch, now with docs (no other code changes).\n\nRegards,\nDean", "msg_date": "Fri, 8 Mar 2024 19:53:20 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sat, Mar 9, 2024 at 3:53 AM Dean Rasheed <[email protected]> wrote:\n>\n>\n> Attached is a new patch, now with docs (no other code changes).\n>\n\nHi,\nsome issues I found, while playing around with\nsupport-returning-old-new-v2.patch\n\ndoc/src/sgml/ref/update.sgml:\n [ RETURNING [ WITH ( { OLD | NEW } AS <replaceable\nclass=\"parameter\">output_alias</replaceable> [, ...] ) ]\n * | <replaceable\nclass=\"parameter\">output_expression</replaceable> [ [ AS ]\n<replaceable class=\"parameter\">output_name</replaceable> ] [, ...] ]\n</synopsis>\n\nThere is no parameter explanation for `*`.\nso, I think the synopsis may not cover cases like:\n`\nupdate foo set f3 = 443 RETURNING new.*;\n`\nI saw the explanation at output_alias, though.\n\n-----------------------------------------------------------------------------\ninsert into foo select 1, 2 RETURNING old.*, new.f2, old.f1();\nERROR: function old.f1() does not exist\nLINE 1: ...sert into foo select 1, 2 RETURNING old.*, new.f2, old.f1();\n ^\nHINT: No function matches the given name and argument types. You\nmight need to add explicit type casts.\n\nI guess that's ok, slightly different context evaluation. 
if you say\n\"old.f1\", old refers to the virtual table \"old\",\nbut \"old.f1()\", the \"old\" , reevaluate to the schema \"old\".\nyou need privilege to schema \"old\", you also need execution privilege\nto function \"old.f1()\" to execute the above query.\nso seems no security issue after all.\n-----------------------------------------------------------------------------\nI found a fancy expression:\n`\nCREATE TABLE foo (f1 serial, f2 text, f3 int default 42);\ninsert into foo select 1, 2 union select 11, 22 RETURNING old.*,\nnew.f2, (select sum(new.f1) over());\n`\nis this ok?\n\nalso the following works on PG16, not sure it's a bug.\n`\n insert into foo select 1, 2 union select 11, 22 RETURNING (select count(*));\n`\nbut not these\n`\ninsert into foo select 1, 2 union select 11, 22 RETURNING (select\ncount(old.*));\ninsert into foo select 1, 2 union select 11, 22 RETURNING (select sum(f1));\n`\n-----------------------------------------------------------------------------\nI found another interesting case, while trying to add some tests on\nfor new code in createplan.c.\nin postgres_fdw.sql, right after line `MERGE ought to fail cleanly`\n\n--this will work\ninsert into itrtest select 1, 'foo' returning new.*,old.*;\n--these two will fail\ninsert into remp1 select 1, 'foo' returning new.*;\ninsert into remp1 select 1, 'foo' returning old.*;\n\nitrtest is the partitioned non-foreign table.\nremp1 is the partition of itrtest, foreign table.\n\n------------------------------------------------------------------------------------------\nI did find a segment fault bug.\ninsert into foo select 1, 2 RETURNING (select sum(old.f1) over());\n\nThis information is set in a subplan node.\n/* update the ExprState's flags if Var refers to OLD/NEW */\nif (variable->varreturningtype == VAR_RETURNING_OLD)\nstate->flags |= EEO_FLAG_HAS_OLD;\nelse if (variable->varreturningtype == VAR_RETURNING_NEW)\nstate->flags |= EEO_FLAG_HAS_NEW;\n\nbut in ExecInsert:\n`\nelse if (resultRelInfo->ri_projectReturning->pi_state.flags & EEO_FLAG_HAS_OLD)\n{\noldSlot = ExecGetReturningSlot(estate, resultRelInfo);\nExecStoreAllNullTuple(oldSlot);\noldSlot->tts_tableOid = RelationGetRelid(resultRelInfo->ri_RelationDesc);\n}\n`\nit didn't use subplan node state->flags information. 
so the ExecInsert\nabove code, never called, and should be executed.\nhowever\n`\ninsert into foo select 1, 2 RETURNING (select sum(new.f1)over());`\nworks\n\nSimilarly this\n `\ndelete from foo RETURNING (select sum(new.f1) over());\n`\nalso causes segmentation fault.\n------------------------------------------------------------------------------------------\ndiff --git a/src/include/executor/tuptable.h b/src/include/executor/tuptable.h\nnew file mode 100644\nindex 6133dbc..c9d3661\n--- a/src/include/executor/tuptable.h\n+++ b/src/include/executor/tuptable.h\n@@ -411,12 +411,21 @@ slot_getsysattr(TupleTableSlot *slot, in\n {\n Assert(attnum < 0); /* caller error */\n\n+ /*\n+ * If the tid is not valid, there is no physical row, and all system\n+ * attributes are deemed to be NULL, except for the tableoid.\n+ */\n if (attnum == TableOidAttributeNumber)\n {\n *isnull = false;\n return ObjectIdGetDatum(slot->tts_tableOid);\n }\n- else if (attnum == SelfItemPointerAttributeNumber)\n+ if (!ItemPointerIsValid(&slot->tts_tid))\n+ {\n+ *isnull = true;\n+ return PointerGetDatum(NULL);\n+ }\n+ if (attnum == SelfItemPointerAttributeNumber)\n {\n *isnull = false;\n return PointerGetDatum(&slot->tts_tid);\n\nThese changes is slot_getsysattr is somehow independ of this feature?\n\n\n", "msg_date": "Mon, 11 Mar 2024 07:40:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sun, 10 Mar 2024 at 23:41, jian he <[email protected]> wrote:\n>\n> Hi,\n> some issues I found, while playing around with\n> support-returning-old-new-v2.patch\n>\n\nThanks for testing. This is very useful.\n\n\n> doc/src/sgml/ref/update.sgml:\n>\n> There is no parameter explanation for `*`.\n> so, I think the synopsis may not cover cases like:\n> `\n> update foo set f3 = 443 RETURNING new.*;\n> `\n> I saw the explanation at output_alias, though.\n\n\"*\" is documented under output_alias and output_expression. I'm not\nsure that it makes sense to have a separate top-level parameter\nsection for it, because \"*\" is also something that can appear after\ntable_name, meaning something completely different, so it might get\nconfusing. Perhaps the explanation under output_expression can be\nexpanded a bit. I'll think about it some more.\n\n\n> insert into foo select 1, 2 RETURNING old.*, new.f2, old.f1();\n> ERROR: function old.f1() does not exist\n> LINE 1: ...sert into foo select 1, 2 RETURNING old.*, new.f2, old.f1();\n> ^\n> HINT: No function matches the given name and argument types. You\n> might need to add explicit type casts.\n\nYes, that's consistent with current behaviour. You can also write\nfoo.f1() or something_else.f1(). Anything of that form, with\nparentheses, is interpreted as schema_name.function_name(), not as a\ncolumn reference.\n\n\n> I found a fancy expression:\n> `\n> CREATE TABLE foo (f1 serial, f2 text, f3 int default 42);\n> insert into foo select 1, 2 union select 11, 22 RETURNING old.*,\n> new.f2, (select sum(new.f1) over());\n> `\n> is this ok?\n\nYes, I guess it's OK, though not really useful in practice.\n\n\"new.f1\" is 1 for the first row and 11 for the second. 
When you write\n\"(select sum(new.f1) over())\", with no FROM clause, you're implicitly\nevaluating over a table with 1 row in the subquery, so it just returns\nnew.f1.\n\nThis is the same as the standalone query\n\nSELECT sum(11) OVER();\n sum\n-----\n 11\n(1 row)\n\nSo it's likely that any window function can be used in a FROM-less\nsubquery inside a RETURNING expression. I can't think of any practical\nuse for it though. In any case, this isn't something new to this\npatch.\n\n\n> also the following works on PG16, not sure it's a bug.\n> `\n> insert into foo select 1, 2 union select 11, 22 RETURNING (select count(*));\n> `\n\nThis is OK, because that subquery is an uncorrelated aggregate query\nthat doesn't reference the outer query. In this case, it's not very\ninteresting, because it lacks a FROM clause, so it just returns 1. But\nyou could also write \"(SELECT count(*) FROM some_other_table WHERE\n...)\", and it would work because the aggregate function is evaluated\nover the rows of the table in the subquery. That's more useful if the\nsubquery is made into a correlated subquery by referring to columns\nfrom the outer query. The rules for that are documented here:\n\nhttps://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES:~:text=When%20an%20aggregate%20expression%20appears%20in%20a%20subquery\n\n\n> but not these\n> `\n> insert into foo select 1, 2 union select 11, 22 RETURNING (select\n> count(old.*));\n> insert into foo select 1, 2 union select 11, 22 RETURNING (select sum(f1));\n> `\n\nIn these cases, since the aggregate's arguments are all outer-level\nvariables, it is associated with the outer query, so it is rejected on\nthe grounds that aggregate functions aren't allowed in RETURNING.\n\nIt is allowed if that subquery has a FROM clause, since the aggregated\narguments are then treated as constants over the rows in the subquery,\nso arguably the same could be made to happen without a FROM clause,\nbut there really is no practical use case for allowing that. Again,\nthis isn't something new to this patch.\n\n\n> I found another interesting case, while trying to add some tests on\n> for new code in createplan.c.\n> in postgres_fdw.sql, right after line `MERGE ought to fail cleanly`\n>\n> --this will work\n> insert into itrtest select 1, 'foo' returning new.*,old.*;\n> --these two will fail\n> insert into remp1 select 1, 'foo' returning new.*;\n> insert into remp1 select 1, 'foo' returning old.*;\n>\n> itrtest is the partitioned non-foreign table.\n> remp1 is the partition of itrtest, foreign table.\n\nHmm, I was a little surprised that that first example worked, but I\ncan see why now.\n\nI was content to just say that RETURNING old/new wasn't supported for\nforeign tables in this first version, but looking at it more closely,\nthe only tricky part is direct-modify updates. So if we just disable\ndirect-modify when there are OLD/NEW variables in the the RETURNING\nlist, then it \"just works\".\n\nSo I've done that, and added a few additional tests to\npostgres_fdw.sql, and removed the doc notes about foreign tables not\nbeing supported. 
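The sort of thing the new tests exercise looks roughly like this (the\nforeign table and column names here are only illustrative, not\nnecessarily the ones postgres_fdw.sql actually uses):\n\n-- ft_foo is assumed to be a postgres_fdw foreign table\nUPDATE ft_foo SET val = 'new value' WHERE id = 1\n    RETURNING old.val AS old_val, new.val AS new_val;\n\nWith direct-modify disabled for such queries, the update goes through\nthe normal (non-direct) path, so the old and new values can be computed\nlocally.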
I really thought that there would be more to it than\nthat, but it seems to work fine.\n\n\n> I did find a segment fault bug.\n> insert into foo select 1, 2 RETURNING (select sum(old.f1) over());\n>\n> This information is set in a subplan node.\n> /* update the ExprState's flags if Var refers to OLD/NEW */\n> if (variable->varreturningtype == VAR_RETURNING_OLD)\n> state->flags |= EEO_FLAG_HAS_OLD;\n> else if (variable->varreturningtype == VAR_RETURNING_NEW)\n> state->flags |= EEO_FLAG_HAS_NEW;\n>\n> but in ExecInsert it didn't use subplan node state->flags information\n\nAh, good catch!\n\nWhen recursively initialising a SubPlan, if any of its expressions is\nfound to contain OLD/NEW Vars, it needs to update the flags on the\nparent ExprState. Fixed in the new version.\n\n\n> @@ -411,12 +411,21 @@ slot_getsysattr(TupleTableSlot *slot, in\n> {\n> Assert(attnum < 0); /* caller error */\n>\n> + /*\n> + * If the tid is not valid, there is no physical row, and all system\n> + * attributes are deemed to be NULL, except for the tableoid.\n> + */\n> if (attnum == TableOidAttributeNumber)\n> {\n> *isnull = false;\n> return ObjectIdGetDatum(slot->tts_tableOid);\n> }\n> - else if (attnum == SelfItemPointerAttributeNumber)\n> + if (!ItemPointerIsValid(&slot->tts_tid))\n> + {\n> + *isnull = true;\n> + return PointerGetDatum(NULL);\n> + }\n> + if (attnum == SelfItemPointerAttributeNumber)\n> {\n> *isnull = false;\n> return PointerGetDatum(&slot->tts_tid);\n>\n> These changes is slot_getsysattr is somehow independ of this feature?\n\nThis is necessary because under some circumstances, when returning\nold/new, the corresponding table slot may contain all NULLs and an\ninvalid ctid. For example, the old slot in an INSERT which didn't do\nan ON CONFLICT UPDATE. So we need to guard against that, in case the\nuser tries to return old.ctid, for example. It's useful to always\nreturn a non-NULL tableoid though, because that's a property of the\ntable, rather than the row.\n\nThanks for testing.\n\nAttached is an updated patch, fixing the seg-fault and now with\nsupport for foreign tables.\n\nRegards,\nDean", "msg_date": "Mon, 11 Mar 2024 14:03:35 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 11 Mar 2024 at 14:03, Dean Rasheed <[email protected]> wrote:\n>\n> Attached is an updated patch, fixing the seg-fault and now with\n> support for foreign tables.\n>\n\nUpdated version attached tidying up a couple of things and fixing another bug:\n\n1). Tidied up the code in createplan.c that was testing for old/new\nVars in the returning list, by adding a separate function --\ncontain_vars_returning_old_or_new() -- making it more reusable and\nefficient.\n\n2). Updated the deparsing code for EXPLAIN so that old/new Vars are\nalways prefixed with the alias, so that it's possible to tell them\napart in the EXPLAIN output.\n\n3). Updated rewriteRuleAction() to preserve the old/new alias names in\nthe rewritten query. I think this was only relevant to the EXPLAIN\noutput.\n\n4). Fixed a bug in assign_param_for_var() -- this needs to compare the\nvarreturningtype of the Vars, otherwise 2 different Vars could get\nassigned the same Param. As the comment said, this needs to compare\neverything that _equalVar() compares, except for the specific fields\nlisted. 
Otherwise a subquery like (select old.a = new.a) in the\nreturning list would only generate one Param for the two up-level\nVars, and produce the wrong result.\n\n5). Removed the ParseState fields p_returning_old and p_returning_new\nthat weren't being used anymore.\n\nRegards,\nDean", "msg_date": "Tue, 12 Mar 2024 18:21:14 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Tue, 12 Mar 2024 at 18:21, Dean Rasheed <[email protected]> wrote:\n>\n> Updated version attached tidying up a couple of things and fixing another bug:\n>\n\nRebased version attached, on top of c649fa24a4 (MERGE ... RETURNING support).\n\nThis just extends the previous version to work with MERGE, adding a\nfew extra tests, which is all fairly straightforward.\n\nRegards,\nDean", "msg_date": "Mon, 18 Mar 2024 10:48:43 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, Mar 18, 2024 at 6:48 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Tue, 12 Mar 2024 at 18:21, Dean Rasheed <[email protected]> wrote:\n> >\n> > Updated version attached tidying up a couple of things and fixing another bug:\n> >\n>\n> Rebased version attached, on top of c649fa24a4 (MERGE ... RETURNING support).\n>\n\n\nhi, some minor issues I found out.\n\n+/*\n+ * ReplaceReturningVarsFromTargetList -\n+ * replace RETURNING list Vars with items from a targetlist\n+ *\n+ * This is equivalent to calling ReplaceVarsFromTargetList() with a\n+ * nomatch_option of REPLACEVARS_REPORT_ERROR, but with the added effect of\n+ * copying varreturningtype onto any Vars referring to new_result_relation,\n+ * allowing RETURNING OLD/NEW to work in the rewritten query.\n+ */\n+\n+typedef struct\n+{\n+ ReplaceVarsFromTargetList_context rv_con;\n+ int new_result_relation;\n+} ReplaceReturningVarsFromTargetList_context;\n+\n+static Node *\n+ReplaceReturningVarsFromTargetList_callback(Var *var,\n+ replace_rte_variables_context *context)\n+{\n+ ReplaceReturningVarsFromTargetList_context *rcon =\n(ReplaceReturningVarsFromTargetList_context *) context->callback_arg;\n+ Node *newnode;\n+\n+ newnode = ReplaceVarsFromTargetList_callback(var, context);\n+\n+ if (var->varreturningtype != VAR_RETURNING_DEFAULT)\n+ SetVarReturningType((Node *) newnode, rcon->new_result_relation,\n+ var->varlevelsup, var->varreturningtype);\n+\n+ return newnode;\n+}\n+\n+Node *\n+ReplaceReturningVarsFromTargetList(Node *node,\n+ int target_varno, int sublevels_up,\n+ RangeTblEntry *target_rte,\n+ List *targetlist,\n+ int new_result_relation,\n+ bool *outer_hasSubLinks)\n+{\n+ ReplaceReturningVarsFromTargetList_context context;\n+\n+ context.rv_con.target_rte = target_rte;\n+ context.rv_con.targetlist = targetlist;\n+ context.rv_con.nomatch_option = REPLACEVARS_REPORT_ERROR;\n+ context.rv_con.nomatch_varno = 0;\n+ context.new_result_relation = new_result_relation;\n+\n+ return replace_rte_variables(node, target_varno, sublevels_up,\n+ ReplaceReturningVarsFromTargetList_callback,\n+ (void *) &context,\n+ outer_hasSubLinks);\n+}\n\nthe ReplaceReturningVarsFromTargetList related comment\nshould be placed right above the function ReplaceReturningVarsFromTargetList,\nnot above ReplaceReturningVarsFromTargetList_context?\n\nstruct ReplaceReturningVarsFromTargetList_context adds some comments\nabout new_result_relation would be great.\n\n\n/* INDEX_VAR is handled by default case */\nthis 
comment appears in execExpr.c and execExprInterp.c.\nneed to move to default case's switch default case?\n\n\n/*\n * set_deparse_context_plan - Specify Plan node containing expression\n *\n * When deparsing an expression in a Plan tree, we might have to resolve\n * OUTER_VAR, INNER_VAR, or INDEX_VAR references. To do this, the caller must\n * provide the parent Plan node.\n...\n*/\ndoes the comment in set_deparse_context_plan need to be updated?\n\n+ * buildNSItemForReturning -\n+ * add a ParseNamespaceItem for the OLD or NEW alias in RETURNING.\n+ */\n+static void\n+addNSItemForReturning(ParseState *pstate, const char *aliasname,\n+ VarReturningType returning_type)\ncomment \"buildNSItemForReturning\" should be \"addNSItemForReturning\"?\n\n\n * results. If include_dropped is true then empty strings and NULL constants\n * (not Vars!) are returned for dropped columns.\n *\n- * rtindex, sublevels_up, and location are the varno, varlevelsup, and location\n- * values to use in the created Vars. Ordinarily rtindex should match the\n- * actual position of the RTE in its rangetable.\n+ * rtindex, sublevels_up, returning_type, and location are the varno,\n+ * varlevelsup, varreturningtype, and location values to use in the created\n+ * Vars. Ordinarily rtindex should match the actual position of the RTE in\n+ * its rangetable.\nwe already updated the comment in expandRTE.\nbut it seems we only do RTE_RELATION, some part of RTE_FUNCTION.\ndo we need\n`\nvarnode->varreturningtype = returning_type;\n`\nfor other `rte->rtekind` when there is a makeVar?\n\n(I don't understand this part, in the case where rte->rtekind is\nRTE_SUBQUERY, if I add `varnode->varreturningtype = returning_type;`\nthe tests still pass.\n\n\n", "msg_date": "Mon, 25 Mar 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 18 Mar 2024 at 10:48, Dean Rasheed <[email protected]> wrote:\n>\n> Rebased version attached, on top of c649fa24a4 (MERGE ... RETURNING support).\n>\n\nI have been doing more testing of this and I realised that there was a\nproblem -- the previous patch worked fine when updating a regular\ntable, so that old/new.colname is just a Var, but when updating an\nauto-updatable view, \"colname\" could end up being replaced by an\narbitrary expression. In the cases I had tested before, that appeared\nto work OK, but actually it wasn't right in all cases where the result\nshould have been NULL, due to the old/new row being absent (e.g., the\nold row in an INSERT).\n\nAfter thinking about that for a while, the best solution seemed to be\nto add a new executable node, which I've called ReturningExpr. This\nevaluates the old/new expression if the old/new row exists, but skips\nit and returns NULL if the old/new row doesn't exist. 
The simplest\nexample is a query like this, which now returns what I would expect:\n\nDROP TABLE IF EXISTS tt CASCADE;\nCREATE TABLE tt (a int PRIMARY KEY, b text);\nINSERT INTO tt VALUES (1, 'R1');\nCREATE VIEW tv AS SELECT a, b, 'Const' c FROM tt;\n\nINSERT INTO tv VALUES (1, 'Row 1'), (2, 'Row 2')\n ON CONFLICT (a) DO UPDATE SET b = excluded.b\n RETURNING old.*, new.*;\n\n a | b | c | a | b | c\n---+----+-------+---+-------+-------\n 1 | R1 | Const | 1 | Row 1 | Const\n | | | 2 | Row 2 | Const\n(2 rows)\n\n(Previously that was returning old.c = 'Const' in both rows, because\nthe Const node has no old/new qualification.)\n\nIn EXPLAIN, I opted to display this as \"old/new.(expression)\", to make\nit clear that the expression is being evaluated in the context of the\nold/new row, even if it doesn't directly refer to old/new values from\nthe table. So, for example, the plan for the above query looks like\nthis:\n\n QUERY PLAN\n--------------------------------------------------------------------------------\n Insert on public.tt\n Output: old.a, old.b, old.('Const'::text), new.a, new.b, new.('Const'::text)\n Conflict Resolution: UPDATE\n Conflict Arbiter Indexes: tt_pkey\n -> Values Scan on \"*VALUES*\"\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2\n\n(It can't output \"old.c\" or \"new.c\" because all knowledge of the view\ncolumn \"c\" is gone by the time it has been through the rewriter, and\nin any case, the details of the expression being evaluated are likely\nto be useful in general.)\n\nThings get more complicated when subqueries are involved. For example,\ngiven this view definition:\n\nCREATE VIEW tv AS SELECT a, b, (SELECT concat('b=',b)) c FROM tt;\n\nthe INSERT above produces this:\n\n a | b | c | a | b | c\n---+----+------+---+-------+---------\n 1 | R1 | b=R1 | 1 | Row 1 | b=Row 1\n | | | 2 | Row 2 | b=Row 2\n(2 rows)\n\nwhich is as expected. This uses the following query plan:\n\n QUERY PLAN\n----------------------------------------------------------------------------\n Insert on public.tt\n Output: old.a, old.b, old.((SubPlan 1)), new.a, new.b, new.((SubPlan 2))\n Conflict Resolution: UPDATE\n Conflict Arbiter Indexes: tt_pkey\n -> Values Scan on \"*VALUES*\"\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2\n SubPlan 1\n -> Result\n Output: concat('b=', old.b)\n SubPlan 2\n -> Result\n Output: concat('b=', new.b)\n\nIn this case \"b\" in the view subquery becomes \"old.b\" in SubPlan 1 and\n\"new.b\" in SubPlan 2 (each with varlevelsup = 1, and therefore\nevaluated as input params to the subplans). The concat() result would\nnormally always be non-NULL, but it (or rather the SubLink subquery\ncontaining it) is wrapped in a ReturningExpr. As a result, SubPlan 1\nis skipped in the second row, for which old does not exist, and ends\nup only being executed once in that query, whereas SubPlan 2 is\nexecuted twice.\n\nThings get even more fiddly when the old/new expression itself appears\nin a subquery. 
For example, given the following query:\n\nINSERT INTO tv VALUES (1, 'Row 1'), (2, 'Row 2')\n ON CONFLICT (a) DO UPDATE SET b = excluded.b\n RETURNING old.a, old.b, (SELECT old.c), new.*;\n\nthe result is the same, but the query plan is now\n\n QUERY PLAN\n----------------------------------------------------------------------\n Insert on public.tt\n Output: old.a, old.b, (SubPlan 2), new.a, new.b, new.((SubPlan 3))\n Conflict Resolution: UPDATE\n Conflict Arbiter Indexes: tt_pkey\n -> Values Scan on \"*VALUES*\"\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2\n SubPlan 1\n -> Result\n Output: concat('b=', old.b)\n SubPlan 2\n -> Result\n Output: (old.((SubPlan 1)))\n SubPlan 3\n -> Result\n Output: concat('b=', new.b)\n\nThe ReturningExpr nodes belong to the query level containing the\nRETURNING list (hence they have a \"levelsup\" field, like Var,\nPlaceHolderVar, etc.). So in this example, one of the ReturningExpr\nnodes is in SubPlan 2, with \"levelsup\" = 1, wrapping SubPlan 1, i.e.,\nit only executes SubPlan 1 if the old row exists.\n\nAlthough that all sounds quite complicated, all the individual pieces\nare quite simple.\n\nAttached is an updated patch in which I have also tidied up a few\nother things, but I haven't read your latest review comments yet. I'll\nrespond to those separately.\n\nRegards,\nDean", "msg_date": "Mon, 25 Mar 2024 07:54:44 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 25 Mar 2024 at 00:00, jian he <[email protected]> wrote:\n>\n> hi, some minor issues I found out.\n>\n> the ReplaceReturningVarsFromTargetList related comment\n> should be placed right above the function ReplaceReturningVarsFromTargetList,\n> not above ReplaceReturningVarsFromTargetList_context?\n\nHmm, well there are a mix of possible styles for this kind of\nfunction. Sometimes the outer function comes first, immediately after\nthe function comment, and then the callback function comes after that.\nThat has the advantage that all documentation comments related to the\ntop-level input arguments are next to the function that takes them.\nAlso, this ordering means that you naturally read it in the order in\nwhich it is initially executed.\n\nThe other style, putting the callback function first has the advantage\nthat you can more immediately see what the function does, since it's\nusually the callback that contains the interesting logic.\n\nrewriteManip.c has examples of both styles, but in this case, since\nReplaceReturningVarsFromTargetList() is similar to\nReplaceVarsFromTargetList(), I opted to copy its style.\n\n> struct ReplaceReturningVarsFromTargetList_context adds some comments\n> about new_result_relation would be great.\n\nI substantially rewrote that function in the v6 patch. As part of\nthat, I renamed \"new_result_relation\" to \"new_target_varno\", which\nmore closely matches the existing \"target_varno\" argument, and I added\ncomments about what it's for to the top-level function comment block.\n\n> /* INDEX_VAR is handled by default case */\n> this comment appears in execExpr.c and execExprInterp.c.\n> need to move to default case's switch default case?\n\nNo, I think it's fine as it is. 
Its current placement is where you\nmight otherwise expect to find a \"case INDEX_VAR:\" block of code, and\nit's explaining why there isn't one there, and where to look instead.\n\nMoving it into the switch default case would lose that effect, and I\nthink it would reduce the code's readability.\n\n> /*\n> * set_deparse_context_plan - Specify Plan node containing expression\n> *\n> * When deparsing an expression in a Plan tree, we might have to resolve\n> * OUTER_VAR, INNER_VAR, or INDEX_VAR references. To do this, the caller must\n> * provide the parent Plan node.\n> ...\n> */\n> does the comment in set_deparse_context_plan need to be updated?\n\nIn the v6 patch, I moved the code change from\nset_deparse_context_plan() down into set_deparse_plan(), because I\nthought that would catch more cases, but thinking about it some more,\nthat wasn't necessary, since it won't change when moving up and down\nthe ancestor tree. So in v7, I've moved it back and updated the\ncomment.\n\n> + * buildNSItemForReturning -\n> + * add a ParseNamespaceItem for the OLD or NEW alias in RETURNING.\n> + */\n> +static void\n> +addNSItemForReturning(ParseState *pstate, const char *aliasname,\n> + VarReturningType returning_type)\n> comment \"buildNSItemForReturning\" should be \"addNSItemForReturning\"?\n\nYes, well spotted. Fixed in v7.\n\n> [in expandRTE()]\n>\n> - * rtindex, sublevels_up, and location are the varno, varlevelsup, and location\n> - * values to use in the created Vars. Ordinarily rtindex should match the\n> - * actual position of the RTE in its rangetable.\n> + * rtindex, sublevels_up, returning_type, and location are the varno,\n> + * varlevelsup, varreturningtype, and location values to use in the created\n> + * Vars. Ordinarily rtindex should match the actual position of the RTE in\n> + * its rangetable.\n> we already updated the comment in expandRTE.\n> but it seems we only do RTE_RELATION, some part of RTE_FUNCTION.\n> do we need\n> `\n> varnode->varreturningtype = returning_type;\n> `\n> for other `rte->rtekind` when there is a makeVar?\n>\n> (I don't understand this part, in the case where rte->rtekind is\n> RTE_SUBQUERY, if I add `varnode->varreturningtype = returning_type;`\n> the tests still pass.\n\nIn the v6 patch, I already added code to ensure that it's set in all\ncases, though I don't think it's strictly necessary. returning_type\ncan only have a non-default value for the target RTE, which can't\ncurrently be any of those other RTE kinds, but nonetheless it seemed\nbetter from a consistency point-of-view, and to make it more\nfuture-proof.\n\nv7 patch attached, with those updates.\n\nRegards,\nDean", "msg_date": "Mon, 25 Mar 2024 14:04:30 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 25 Mar 2024 at 14:04, Dean Rasheed <[email protected]> wrote:\n>\n> v7 patch attached, with those updates.\n>\n\nRebased version attached, forced by 87985cc925.\n\nThe changes made in that commit didn't entirely make sense to me, but\nthe ExecDelete() change, copying data between slots, broke this patch,\nbecause it wasn't setting the slot's tableoid. That copying seemed to\nbe unnecessary anyway, so I got rid of it, and it works fine. While at\nit, I also removed the extra \"oldslot\" argument added to ExecDelete(),\nwhich didn't seem necessary, and wasn't documented clearly. 
Those\nchanges could perhaps be extracted and applied separately.\n\nRegards,\nDean", "msg_date": "Tue, 26 Mar 2024 18:49:38 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Tue, 2024-03-26 at 18:49 +0000, Dean Rasheed wrote:\n> On Mon, 25 Mar 2024 at 14:04, Dean Rasheed <[email protected]>\n> wrote:\n> > \n> > v7 patch attached, with those updates.\n> > \n> \n> Rebased version attached, forced by 87985cc925.\n\nThis isn't a complete review, but I spent a while looking at this, and\nit looks like it's in good shape.\n\nI like the syntax, and I think the solution for renaming the alias\n(\"RETURNING WITH (new as n, old as o)\") is a good one.\n\nThe implementation touches quite a few areas. How did you identify all\nof the potential problem areas? It seems the primary sources of\ncomplexity came from rules, partitioning, and updatable views, is that\nright?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 00:47:10 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Wed, 27 Mar 2024 at 07:47, Jeff Davis <[email protected]> wrote:\n>\n> This isn't a complete review, but I spent a while looking at this, and\n> it looks like it's in good shape.\n\nThanks for looking.\n\n> I like the syntax, and I think the solution for renaming the alias\n> (\"RETURNING WITH (new as n, old as o)\") is a good one.\n\nThanks, that's good to know. Settling on a good syntax can be\ndifficult, so it's good to know that people are generally supportive\nof this.\n\n> The implementation touches quite a few areas. How did you identify all\n> of the potential problem areas?\n\nHmm, well that's one of the hardest parts, and it's really difficult\nto be sure that I have.\n\nInitially, when I was just adding a new field to Var, I just tried to\nlook at all the existing code that made Vars, or copied other\nnon-default fields like varnullingrels around. I still managed to miss\nthe necessary change in assign_param_for_var() on my first attempt,\nbut fortunately that was an easy fix.\n\nMore worrying was the fact that I managed to completely overlook the\nfact that I needed to worry about non-updatable columns in\nauto-updatable views until v6, which added the ReturningExpr node.\nOnce I realised that I needed that, and that it needed to be tied to a\nparticular query level, and so needed a \"levelsup\" field, I just\nlooked at GroupingFunc to identify the places in code that needed to\nbe updated to do the right thing for a query-level-aware node.\n\nWhat I'm most worried about now is that there are other areas of\nfunctionality like that, that I'm overlooking, and which will interact\nwith this feature in non-trivial ways.\n\n> It seems the primary sources of\n> complexity came from rules, partitioning, and updatable views, is that\n> right?\n\nForeign tables looked like it would be tricky at first, but then\nturned out to be trivial, after disallowing direct-modify when\nreturning old/new.\n\nRules are a whole area that I wish I didn't have to worry about (I\nwish we had deprecated them a long time ago). In practice though, I\nhaven't done much beyond what seemed like the most obvious (and\nsimplest) thing.\n\nNonetheless, there are some interesting interactions that probably\nneed more careful examination. 
For example, the fact that the\nRETURNING clause in a RULE already has its own \"special table names\"\nOLD and NEW, which are actually references to different RTEs, unlike\nthe OLD and NEW that this patch introduces, which are references to\nthe result relation. This leads to a number of different cases:\n\nCase 1\n======\n\nIn the simplest case, the rule can simply contain \"RETURNING *\". This\nleads to what I think is the most obvious and intuitive behaviour:\n\nDROP TABLE IF EXISTS t1, t2 CASCADE;\nCREATE TABLE t1 (val1 text);\nINSERT INTO t1 VALUES ('Old value 1');\nCREATE TABLE t2 (val2 text);\nINSERT INTO t2 VALUES ('Old value 2');\n\nCREATE RULE r2 AS ON UPDATE TO t2\n DO INSTEAD UPDATE t1 SET val1 = NEW.val2\n RETURNING *;\n\nUPDATE t2 SET val2 = 'New value 2'\n RETURNING old.val2 AS old_val2, new.val2 AS new_val2,\n t2.val2 AS t2_val2, val2;\n\n old_val2 | new_val2 | t2_val2 | val2\n-------------+-------------+-------------+-------------\n Old value 1 | New value 2 | New value 2 | New value 2\n(1 row)\n\nSo someone using the table with the rule can access old and new values\nin the obvious way, and they will get new values by default for an\nUPDATE.\n\nThe query plan for this is pretty-much what you'd expect:\n\n QUERY PLAN\n-------------------------------------------------------\n Update on public.t1\n Output: old.val1, new.val1, t1.val1, t1.val1\n -> Nested Loop\n Output: 'New value 2'::text, t1.ctid, t2.ctid\n -> Seq Scan on public.t1\n Output: t1.ctid\n -> Materialize\n Output: t2.ctid\n -> Seq Scan on public.t2\n Output: t2.ctid\n\nCase 2\n======\n\nIf the rule contains \"RETURNING OLD.*\", it means that the RETURNING\nlist of the rewritten query contains Vars that no longer refer to the\nresult relation, but instead refer to the old data in t2. This leads\nthe the following behaviour:\n\nDROP TABLE IF EXISTS t1, t2 CASCADE;\nCREATE TABLE t1 (val1 text);\nINSERT INTO t1 VALUES ('Old value 1');\nCREATE TABLE t2 (val2 text);\nINSERT INTO t2 VALUES ('Old value 2');\n\nCREATE RULE r2 AS ON UPDATE TO t2\n DO INSTEAD UPDATE t1 SET val1 = NEW.val2\n RETURNING OLD.*;\n\nUPDATE t2 SET val2 = 'New value 2'\n RETURNING old.val2 AS old_val2, new.val2 AS new_val2,\n t2.val2 AS t2_val2, val2;\n\n old_val2 | new_val2 | t2_val2 | val2\n-------------+-------------+-------------+-------------\n Old value 2 | Old value 2 | Old value 2 | Old value 2\n(1 row)\n\nThe reason this happens is that the Vars in the returning list don't\nrefer to the result relation, and so setting varreturningtype on them\nhas no effect, and is simply ignored. 
This can be seen by looking at\nthe query plan:\n\n QUERY PLAN\n----------------------------------------------------------------\n Update on public.t1\n Output: old.(t2.val2), new.(t2.val2), t2.val2, t2.val2\n -> Nested Loop\n Output: 'New value 2'::text, t1.ctid, t2.ctid, t2.val2\n -> Seq Scan on public.t1\n Output: t1.ctid\n -> Materialize\n Output: t2.ctid, t2.val2\n -> Seq Scan on public.t2\n Output: t2.ctid, t2.val2\n\nSo all the final output values come from t2, not the result relation t1.\n\nCase 3\n======\n\nSimilarly, if the rule contains \"RETURNING NEW.*\", the effect is\nsimilar, because again, the Vars in the RETURNING list don't refer to\nthe result relation in the rewritten query:\n\nDROP TABLE IF EXISTS t1, t2 CASCADE;\nCREATE TABLE t1 (val1 text);\nINSERT INTO t1 VALUES ('Old value 1');\nCREATE TABLE t2 (val2 text);\nINSERT INTO t2 VALUES ('Old value 2');\n\nCREATE RULE r2 AS ON UPDATE TO t2\n DO INSTEAD UPDATE t1 SET val1 = NEW.val2\n RETURNING NEW.*;\n\nUPDATE t2 SET val2 = 'New value 2'\n RETURNING old.val2 AS old_val2, new.val2 AS new_val2,\n t2.val2 AS t2_val2, val2;\n\n old_val2 | new_val2 | t2_val2 | val2\n-------------+-------------+-------------+-------------\n New value 2 | New value 2 | New value 2 | New value 2\n(1 row)\n\nThis time, the query plan shows that the result values are coming from\nthe new source values:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Update on public.t1\n Output: old.('New value 2'::text), new.('New value 2'::text), 'New\nvalue 2'::text, 'New value 2'::text\n -> Nested Loop\n Output: 'New value 2'::text, t1.ctid, t2.ctid\n -> Seq Scan on public.t1\n Output: t1.ctid\n -> Materialize\n Output: t2.ctid\n -> Seq Scan on public.t2\n Output: t2.ctid\n\nCase 4\n======\n\nIt's also possible to use the new returning old/new syntax in the\nrule, by defining custom aliases. This has a subtly different meaning,\nbecause it indicates that Vars in the rewritten query should refer to\nthe result relation, with varreturningtype set accordingly. 
So, for\nexample, returning old in the rule using this technique leads to the\nfollowing behaviour:\n\nDROP TABLE IF EXISTS t1, t2 CASCADE;\nCREATE TABLE t1 (val1 text);\nINSERT INTO t1 VALUES ('Old value 1');\nCREATE TABLE t2 (val2 text);\nINSERT INTO t2 VALUES ('Old value 2');\n\nCREATE RULE r2 AS ON UPDATE TO t2\n DO INSTEAD UPDATE t1 SET val1 = NEW.val2\n RETURNING WITH (OLD AS o) o.*;\n\nUPDATE t2 SET val2 = 'New value 2'\n RETURNING old.val2 AS old_val2, new.val2 AS new_val2,\n t2.val2 AS t2_val2, val2;\n\n old_val2 | new_val2 | t2_val2 | val2\n-------------+-------------+-------------+-------------\n Old value 1 | New value 2 | Old value 1 | Old value 1\n(1 row)\n\nThe query plan for this indicates that all returned values now come\nfrom the result relation, but the default is to return old values\nrather than new values, and it now allows that default to be\noverridden:\n\n QUERY PLAN\n-------------------------------------------------------\n Update on public.t1\n Output: old.val1, new.val1, old.val1, old.val1\n -> Nested Loop\n Output: 'New value 2'::text, t1.ctid, t2.ctid\n -> Seq Scan on public.t1\n Output: t1.ctid\n -> Materialize\n Output: t2.ctid\n -> Seq Scan on public.t2\n Output: t2.ctid\n\nCase 5\n======\n\nSimilarly, the rule can use the new syntax to return new values:\n\nDROP TABLE IF EXISTS t1, t2 CASCADE;\nCREATE TABLE t1 (val1 text);\nINSERT INTO t1 VALUES ('Old value 1');\nCREATE TABLE t2 (val2 text);\nINSERT INTO t2 VALUES ('Old value 2');\n\nCREATE RULE r2 AS ON UPDATE TO t2\n DO INSTEAD UPDATE t1 SET val1 = NEW.val2\n RETURNING WITH (NEW AS n) n.*;\n\nUPDATE t2 SET val2 = 'New value 2'\n RETURNING old.val2 AS old_val2, new.val2 AS new_val2,\n t2.val2 AS t2_val2, val2;\n\n old_val2 | new_val2 | t2_val2 | val2\n-------------+-------------+-------------+-------------\n Old value 1 | New value 2 | New value 2 | New value 2\n(1 row)\n\nwhich is the same result as case 1, but with a slightly different query plan:\n\n QUERY PLAN\n-------------------------------------------------------\n Update on public.t1\n Output: old.val1, new.val1, new.val1, new.val1\n -> Nested Loop\n Output: 'New value 2'::text, t1.ctid, t2.ctid\n -> Seq Scan on public.t1\n Output: t1.ctid\n -> Materialize\n Output: t2.ctid\n -> Seq Scan on public.t2\n Output: t2.ctid\n\nThis explicitly sets the defaults for \"t2.val2\" and \"val2\"\nunqualified, whereas in case 1 they were the implicit defaults for an\nUPDATE command.\n\nI think that all that is probably reasonable, but it definitely needs\ndocumenting, which I haven't attempted yet.\n\nOverall, I'm pretty hesitant to try to commit this to v17. Aside from\nthe fact that there's a lot of new code that hasn't had much in the\nway of review or discussion, I also feel that I probably haven't fully\nconsidered all areas where additional complexity might arise. 
It\ndoesn't seem like that long ago that this was just a prototype, and\nit's certainly not that long ago that I had to add a substantial\namount of new code to deal with the auto-updatable view case that I\nhad completely overlooked.\n\nSo on reflection, rather than trying to rush to get this into v17, I\nthink it would be better to leave it to v18.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 27 Mar 2024 13:19:31 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Wed, 2024-03-27 at 13:19 +0000, Dean Rasheed wrote:\n\n> What I'm most worried about now is that there are other areas of\n> functionality like that, that I'm overlooking, and which will\n> interact\n> with this feature in non-trivial ways.\n\nAgreed. I'm not sure exactly how we'd find those other areas (if they\nexist) aside from just having more eyes on the code.\n\n> \n> So on reflection, rather than trying to rush to get this into v17, I\n> think it would be better to leave it to v18.\n\nThank you for letting me know. That allows some time for others to have\na look.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 08:28:22 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "Rebased version attached, on top of 0294df2f1f (MERGE .. WHEN NOT\nMATCHED BY SOURCE), with a few additional tests. No code changes, just\nkeeping it up to date.\n\nRegards,\nDean", "msg_date": "Sat, 30 Mar 2024 15:31:47 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sat, 30 Mar 2024 at 15:31, Dean Rasheed <[email protected]> wrote:\n>\n> Rebased version attached, on top of 0294df2f1f (MERGE .. WHEN NOT\n> MATCHED BY SOURCE), with a few additional tests. No code changes, just\n> keeping it up to date.\n>\n\nNew version attached, rebased following the revert of 87985cc925, but\nalso with a few other changes:\n\nI've added a note to rules.sgml explaining how this interacts with rules.\n\nI've redone the way old/new system attributes are evaluated -- the\nprevious code changed slot_getsysattr() to try to decide when to\nreturn NULL, but that didn't work correctly if the CTID was invalid\nbut non-NULL, something I hadn't anticipated, but which shows up in\nthe new tests added by 6572bd55b0. Instead, ExecEvalSysVar() now\nchecks if the OLD/NEW row exists, so there's no need to change\nslot_getsysattr(), which seems much better.\n\nI've added a new elog() error check to\nadjust_appendrel_attrs_mutator(), similar to the existing one for\nvarnullingrels, to report if we ever attempt to apply a non-default\nvarreturningtype to a non-Var, which should never be possible, but\nseems worth checking. 
(non-Var expressions should only occur if we've\nflattened a UNION ALL query, so shouldn't apply to the target relation\nof a data-modifying query with RETURNING.)\n\nThe previous patch added a new rewriter function\nReplaceReturningVarsFromTargetList() to rewrite the RETURNING list,\nbut that duplicated a lot of code from ReplaceVarsFromTargetList(), so\nI've now just merged them together, which looks a lot neater.\n\nRegards,\nDean", "msg_date": "Wed, 26 Jun 2024 12:18:16 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Wed, 26 Jun 2024 at 12:18, Dean Rasheed <[email protected]> wrote:\n>\n> I've added a new elog() error check to\n> adjust_appendrel_attrs_mutator(), similar to the existing one for\n> varnullingrels, to report if we ever attempt to apply a non-default\n> varreturningtype to a non-Var, which should never be possible, but\n> seems worth checking. (non-Var expressions should only occur if we've\n> flattened a UNION ALL query, so shouldn't apply to the target relation\n> of a data-modifying query with RETURNING.)\n>\n\nNew version attached, updating an earlier comment in\nadjust_appendrel_attrs_mutator() that I had missed.\n\nRegards,\nDean", "msg_date": "Fri, 12 Jul 2024 18:22:09 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Sat, Jul 13, 2024 at 1:22 AM Dean Rasheed <[email protected]> wrote:\n>\n> On Wed, 26 Jun 2024 at 12:18, Dean Rasheed <[email protected]> wrote:\n> >\n> > I've added a new elog() error check to\n> > adjust_appendrel_attrs_mutator(), similar to the existing one for\n> > varnullingrels, to report if we ever attempt to apply a non-default\n> > varreturningtype to a non-Var, which should never be possible, but\n> > seems worth checking. (non-Var expressions should only occur if we've\n> > flattened a UNION ALL query, so shouldn't apply to the target relation\n> > of a data-modifying query with RETURNING.)\n> >\n>\n> New version attached, updating an earlier comment in\n> adjust_appendrel_attrs_mutator() that I had missed.\n>\n\n\nhi.\nI have some minor questions, but overall it just works.\n\n@@ -4884,6 +5167,18 @@ ExecEvalSysVar(ExprState *state, ExprEva\n {\n Datum d;\n\n+ /* if OLD/NEW row doesn't exist, OLD/NEW system attribute is NULL */\n+ if ((op->d.var.varreturningtype == VAR_RETURNING_OLD &&\n+ state->flags & EEO_FLAG_OLD_IS_NULL) ||\n+ (op->d.var.varreturningtype == VAR_RETURNING_NEW &&\n+ state->flags & EEO_FLAG_NEW_IS_NULL))\n+ {\n+ *op->resvalue = (Datum) 0;\n+ *op->resnull = true;\n+\n+ return;\n+ }\n+\nin ExecEvalSysVar, we can add Asserts\nAssert(state->flags & EEO_FLAG_HAS_OLD || state->flags & EEO_FLAG_HAS_NEW);\nif I understand it correctly.\n\n\nin make_modifytable,\ncontain_vars_returning_old_or_new((Node *) root->parse->returningList))\nthis don't need to go through the loop\n```\nforeach(lc, resultRelations)\n```\n\n\n+ * In addition, the caller must provide result_relation, the index of the\n+ * target relation for an INSERT/UPDATE/DELETE/MERGE. This is needed to\n+ * handle any OLD/NEW RETURNING list Vars referencing target_varno. When such\n+ * Vars are expanded, varreturningtype is copied onto any replacement Vars\n+ * that reference result_relation. 
In addition, if the replacement expression\n+ * from the targetlist is not simply a Var referencing result_relation, we\n+ * wrap it in a ReturningExpr node, to force it to be NULL if the OLD/NEW row\n+ * doesn't exist.\n+ *\n * outer_hasSubLinks works the same as for replace_rte_variables().\n */\n@@ -1657,6 +1736,7 @@ typedef struct\n {\n RangeTblEntry *target_rte;\n List *targetlist;\n+ int result_relation;\n ReplaceVarsNoMatchOption nomatch_option;\n int nomatch_varno;\n } ReplaceVarsFromTargetList_context;\n\n\"to force it to be NULL if the OLD/NEW row doesn't exist.\"\nI am slightly confused.\ni think you mean: \"to force it to be NULL if the OLD/NEW row will be\nresulting null.\"\nFor INSERT, the old row is all null, for DELETE, the new row is all null.\n\n\n\nin sql-update.html\n\"An unqualified column name or * causes new values to be returned. The\nsame applies to columns qualified using the target table name or\nalias. \"\n\"The same\", I think, refers \"causes new values to be returned\", but I\ni am not so sure.\n(apply to sql-insert.sql-delete, sql-merge).\n\n\n", "msg_date": "Fri, 19 Jul 2024 08:11:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Fri, 19 Jul 2024 at 01:11, jian he <[email protected]> wrote:\n>\n> hi.\n> I have some minor questions, but overall it just works.\n\nThanks for the review!\n\n> in ExecEvalSysVar, we can add Asserts\n> Assert(state->flags & EEO_FLAG_HAS_OLD || state->flags & EEO_FLAG_HAS_NEW);\n> if I understand it correctly.\n\nOK. I think it's probably worth coding defensively here, so I have\nadded more specific Asserts, based on the actual varreturningtype (and\nI didn't really like that old \"if\" condition anyway, so I've rewritten\nit as a switch).\n\n> in make_modifytable,\n> contain_vars_returning_old_or_new((Node *) root->parse->returningList))\n> this don't need to go through the loop\n> ```\n> foreach(lc, resultRelations)\n> ```\n\nGood point. I agree, it's worth ensuring that we don't call\ncontain_vars_returning_old_or_new() multiple times (or at all, if we\ndon't need to).\n\n> + * In addition, the caller must provide result_relation, the index of the\n> + * target relation for an INSERT/UPDATE/DELETE/MERGE. This is needed to\n> + * handle any OLD/NEW RETURNING list Vars referencing target_varno. When such\n> + * Vars are expanded, varreturningtype is copied onto any replacement Vars\n> + * that reference result_relation. In addition, if the replacement expression\n> + * from the targetlist is not simply a Var referencing result_relation, we\n> + * wrap it in a ReturningExpr node, to force it to be NULL if the OLD/NEW row\n> + * doesn't exist.\n> + *\n> I am slightly confused.\n> i think you mean: \"to force it to be NULL if the OLD/NEW row will be\n> resulting null.\"\n> For INSERT, the old row is all null, for DELETE, the new row is all null.\n\nNo, I think it's slightly more accurate to say that the old row\ndoesn't exist for INSERT and the new row doesn't exist for DELETE. The\nend result is that all the values will be NULL, so in that sense it's\nthe same as the old/new row being NULL, or being an all-NULL tuple.\n\n> in sql-update.html\n> \"An unqualified column name or * causes new values to be returned. The\n> same applies to columns qualified using the target table name or\n> alias. 
\"\n> \"The same\", I think, refers \"causes new values to be returned\", but I\n> i am not so sure.\n> (apply to sql-insert.sql-delete, sql-merge).\n\nOK, I have rewritten and expanded upon that a bit to try to make it\nclearer. I also decided that this discussion really belongs in the\noutput_expression description, rather than under output_alias.\n\nThanks again for the review. Updated patch attached.\n\nRegards,\nDean", "msg_date": "Fri, 19 Jul 2024 12:55:28 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Fri, 19 Jul 2024 at 12:55, Dean Rasheed <[email protected]> wrote:\n>\n> Thanks again for the review. Updated patch attached.\n>\n\nTrivial rebase, following c7301c3b6f.\n\nRegards,\nDean", "msg_date": "Mon, 29 Jul 2024 11:22:33 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 29 Jul 2024 at 11:22, Dean Rasheed <[email protected]> wrote:\n>\n> Trivial rebase, following c7301c3b6f.\n>\n\nRebased version, forced by a7f107df2b. Evaluating the input parameters\nof correlated SubPlans in the referencing ExprState simplifies this\npatch in a couple of places, since it no longer has to worry about\ncopying ExprState flags to a new ExprState.\n\nRegards,\nDean", "msg_date": "Thu, 1 Aug 2024 12:33:11 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Thu, Aug 1, 2024 at 7:33 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Mon, 29 Jul 2024 at 11:22, Dean Rasheed <[email protected]> wrote:\n> >\n> > Trivial rebase, following c7301c3b6f.\n> >\n>\n> Rebased version, forced by a7f107df2b. Evaluating the input parameters\n> of correlated SubPlans in the referencing ExprState simplifies this\n> patch in a couple of places, since it no longer has to worry about\n> copying ExprState flags to a new ExprState.\n>\n\nhi. 
some minor issues.\n\n saveOld = changingPart && resultRelInfo->ri_projectReturning &&\n resultRelInfo->ri_projectReturning->pi_state.flags & EEO_FLAG_HAS_OLD;\n if (resultRelInfo->ri_projectReturning && (processReturning || saveOld))\n {\n }\n\n\"saveOld\" imply \"resultRelInfo->ri_projectReturning\"\nwe can simplified it as\n\n if (processReturning || saveOld))\n {\n }\n\n\n\nfor projectReturning->pi_state.flags,\nwe don't use EEO_FLAG_OLD_IS_NULL, EEO_FLAG_NEW_IS_NULL\nin ExecProcessReturning, we can do the following way.\n\n\n /* Make old/new tuples available to ExecProject, if required */\n if (oldSlot)\n econtext->ecxt_oldtuple = oldSlot;\n else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_OLD)\n econtext->ecxt_oldtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n else\n econtext->ecxt_oldtuple = NULL; /* No references to OLD columns */\n\n if (newSlot)\n econtext->ecxt_newtuple = newSlot;\n else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_NEW)\n econtext->ecxt_newtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n else\n econtext->ecxt_newtuple = NULL; /* No references to NEW columns */\n\n /*\n * Tell ExecProject whether or not the OLD/NEW rows exist (needed for any\n * ReturningExpr nodes).\n */\n if (oldSlot == NULL)\n projectReturning->pi_state.flags |= EEO_FLAG_OLD_IS_NULL;\n else\n projectReturning->pi_state.flags &= ~EEO_FLAG_OLD_IS_NULL;\n\n if (newSlot == NULL)\n projectReturning->pi_state.flags |= EEO_FLAG_NEW_IS_NULL;\n else\n projectReturning->pi_state.flags &= ~EEO_FLAG_NEW_IS_NULL;\n\n\n@@ -2620,6 +2620,13 @@ transformWholeRowRef(ParseState *pstate,\n * point, there seems no harm in expanding it now rather than during\n * planning.\n *\n+ * Note that if the nsitem is an OLD/NEW alias for the target RTE (as can\n+ * appear in a RETURNING list), its alias won't match the target RTE's\n+ * alias, but we still want to make a whole-row Var here rather than a\n+ * RowExpr, for consistency with direct references to the target RTE, and\n+ * so that any dropped columns are handled correctly. Thus we also check\n+ * p_returning_type here.\n+ *\nmakeWholeRowVar and subroutines only related to pg_type, but dropped\ncolumn info is in pg_attribute.\nI don't understand \"so that any dropped columns are handled correctly\".\n\n\nExecEvalSysVar, slot_getsysattr we have \"Assert(attnum < 0);\"\nbut\nExecEvalSysVar, while rowIsNull is true, we didn't do \"Assert(attnum < 0);\"\n\n\n", "msg_date": "Fri, 2 Aug 2024 15:25:07 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Fri, 2 Aug 2024 at 08:25, jian he <[email protected]> wrote:\n>\n> if (resultRelInfo->ri_projectReturning && (processReturning || saveOld))\n> {\n> }\n>\n> \"saveOld\" imply \"resultRelInfo->ri_projectReturning\"\n> we can simplified it as\n>\n> if (processReturning || saveOld))\n> {\n> }\n>\n\nNo, because processReturning can be true when\nresultRelInfo->ri_projectReturning is NULL (no RETURNING list). 
So we\ndo still need to check that resultRelInfo->ri_projectReturning is\nnon-NULL.\n\n> for projectReturning->pi_state.flags,\n> we don't use EEO_FLAG_OLD_IS_NULL, EEO_FLAG_NEW_IS_NULL\n> in ExecProcessReturning, we can do the following way.\n>\n> /* Make old/new tuples available to ExecProject, if required */\n> if (oldSlot)\n> econtext->ecxt_oldtuple = oldSlot;\n> else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_OLD)\n> econtext->ecxt_oldtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n> else\n> econtext->ecxt_oldtuple = NULL; /* No references to OLD columns */\n>\n> if (newSlot)\n> econtext->ecxt_newtuple = newSlot;\n> else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_NEW)\n> econtext->ecxt_newtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n> else\n> econtext->ecxt_newtuple = NULL; /* No references to NEW columns */\n>\n> /*\n> * Tell ExecProject whether or not the OLD/NEW rows exist (needed for any\n> * ReturningExpr nodes).\n> */\n> if (oldSlot == NULL)\n> projectReturning->pi_state.flags |= EEO_FLAG_OLD_IS_NULL;\n> else\n> projectReturning->pi_state.flags &= ~EEO_FLAG_OLD_IS_NULL;\n>\n> if (newSlot == NULL)\n> projectReturning->pi_state.flags |= EEO_FLAG_NEW_IS_NULL;\n> else\n> projectReturning->pi_state.flags &= ~EEO_FLAG_NEW_IS_NULL;\n>\n\nI'm not sure I understand your point. It's true that\nEEO_FLAG_OLD_IS_NULL and EEO_FLAG_NEW_IS_NULL aren't used directly in\nExecProcessReturning(), but they are used in stuff called from\nExecProject().\n\nIf the point was just to swap those 2 code blocks round, then OK, I\nguess maybe it reads a little better that way round, though it doesn't\nreally make any difference either way.\n\nI did notice that that comment should mention that ExecEvalSysVar()\nalso uses these flags, so I've updated it to do so.\n\n> @@ -2620,6 +2620,13 @@ transformWholeRowRef(ParseState *pstate,\n> * point, there seems no harm in expanding it now rather than during\n> * planning.\n> *\n> + * Note that if the nsitem is an OLD/NEW alias for the target RTE (as can\n> + * appear in a RETURNING list), its alias won't match the target RTE's\n> + * alias, but we still want to make a whole-row Var here rather than a\n> + * RowExpr, for consistency with direct references to the target RTE, and\n> + * so that any dropped columns are handled correctly. Thus we also check\n> + * p_returning_type here.\n> + *\n> makeWholeRowVar and subroutines only related to pg_type, but dropped\n> column info is in pg_attribute.\n> I don't understand \"so that any dropped columns are handled correctly\".\n>\n\nThe nsitem contains references to dropped columns, so if you expanded\nit as a RowExpr, you'd end up with mismatched columns and it would\nfail (somewhere under ParseFuncOrColumn(), from transformColumnRef(),\nI think). 
There's a regression test case in returning.sql that covers\nthat.\n\n> ExecEvalSysVar, slot_getsysattr we have \"Assert(attnum < 0);\"\n> but\n> ExecEvalSysVar, while rowIsNull is true, we didn't do \"Assert(attnum < 0);\"\n\nI don't see much value in that, since we aren't going to evaluate the\nattribute if the old/new row is null.\n\nRegards,\nDean", "msg_date": "Fri, 2 Aug 2024 11:12:52 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Fri, Aug 2, 2024 at 6:13 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Fri, 2 Aug 2024 at 08:25, jian he <[email protected]> wrote:\n> >\n> > if (resultRelInfo->ri_projectReturning && (processReturning || saveOld))\n> > {\n> > }\n> >\n> > \"saveOld\" imply \"resultRelInfo->ri_projectReturning\"\n> > we can simplified it as\n> >\n> > if (processReturning || saveOld))\n> > {\n> > }\n> >\n>\n> No, because processReturning can be true when\n> resultRelInfo->ri_projectReturning is NULL (no RETURNING list). So we\n> do still need to check that resultRelInfo->ri_projectReturning is\n> non-NULL.\n>\n> > for projectReturning->pi_state.flags,\n> > we don't use EEO_FLAG_OLD_IS_NULL, EEO_FLAG_NEW_IS_NULL\n> > in ExecProcessReturning, we can do the following way.\n> >\n> > /* Make old/new tuples available to ExecProject, if required */\n> > if (oldSlot)\n> > econtext->ecxt_oldtuple = oldSlot;\n> > else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_OLD)\n> > econtext->ecxt_oldtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n> > else\n> > econtext->ecxt_oldtuple = NULL; /* No references to OLD columns */\n> >\n> > if (newSlot)\n> > econtext->ecxt_newtuple = newSlot;\n> > else if (projectReturning->pi_state.flags & EEO_FLAG_HAS_NEW)\n> > econtext->ecxt_newtuple = ExecGetAllNullSlot(estate, resultRelInfo);\n> > else\n> > econtext->ecxt_newtuple = NULL; /* No references to NEW columns */\n> >\n> > /*\n> > * Tell ExecProject whether or not the OLD/NEW rows exist (needed for any\n> > * ReturningExpr nodes).\n> > */\n> > if (oldSlot == NULL)\n> > projectReturning->pi_state.flags |= EEO_FLAG_OLD_IS_NULL;\n> > else\n> > projectReturning->pi_state.flags &= ~EEO_FLAG_OLD_IS_NULL;\n> >\n> > if (newSlot == NULL)\n> > projectReturning->pi_state.flags |= EEO_FLAG_NEW_IS_NULL;\n> > else\n> > projectReturning->pi_state.flags &= ~EEO_FLAG_NEW_IS_NULL;\n> >\n>\n> I'm not sure I understand your point. It's true that\n> EEO_FLAG_OLD_IS_NULL and EEO_FLAG_NEW_IS_NULL aren't used directly in\n> ExecProcessReturning(), but they are used in stuff called from\n> ExecProject().\n>\n> If the point was just to swap those 2 code blocks round, then OK, I\n> guess maybe it reads a little better that way round, though it doesn't\n> really make any difference either way.\n\nsorry for confusion. 
I mean \"swap those 2 code blocks round\".\nI think it will make it more readable, because you first check\nprojectReturning->pi_state.flags\nwith EEO_FLAG_HAS_NEW, EEO_FLAG_HAS_OLD\nthen change it.\n\n\n> I did notice that that comment should mention that ExecEvalSysVar()\n> also uses these flags, so I've updated it to do so.\n>\n> > @@ -2620,6 +2620,13 @@ transformWholeRowRef(ParseState *pstate,\n> > * point, there seems no harm in expanding it now rather than during\n> > * planning.\n> > *\n> > + * Note that if the nsitem is an OLD/NEW alias for the target RTE (as can\n> > + * appear in a RETURNING list), its alias won't match the target RTE's\n> > + * alias, but we still want to make a whole-row Var here rather than a\n> > + * RowExpr, for consistency with direct references to the target RTE, and\n> > + * so that any dropped columns are handled correctly. Thus we also check\n> > + * p_returning_type here.\n> > + *\n> > makeWholeRowVar and subroutines only related to pg_type, but dropped\n> > column info is in pg_attribute.\n> > I don't understand \"so that any dropped columns are handled correctly\".\n> >\n>\n> The nsitem contains references to dropped columns, so if you expanded\n> it as a RowExpr, you'd end up with mismatched columns and it would\n> fail (somewhere under ParseFuncOrColumn(), from transformColumnRef(),\n> I think). There's a regression test case in returning.sql that covers\n> that.\nplay around with it, get it.\n\nif (nsitem->p_names == nsitem->p_rte->eref ||\n nsitem->p_returning_type != VAR_RETURNING_DEFAULT)\nelse\n{\n expandRTE(nsitem->p_rte, nsitem->p_rtindex, sublevels_up,\n nsitem->p_returning_type, location, false, NULL, &fields);\n}\nThe ELSE branch expandRTE include_dropped argument is false.\nthat makes the ELSE branch unable to deal with dropped columns.\n\n\n\ntook me a while to understand the changes in rewriteHandler.c, rewriteManip.c\nrule over updateable view still works, but I didn't check closely with\nrewriteRuleAction.\ni think I understand rewriteTargetView and subroutines.\n\n * In addition, the caller must provide result_relation, the index of the\n * target relation for an INSERT/UPDATE/DELETE/MERGE. This is needed to\n * handle any OLD/NEW RETURNING list Vars referencing target_varno. When such\n * Vars are expanded, varreturningtype is copied onto any replacement Vars\n * that reference result_relation. 
In addition, if the replacement expression\n * from the targetlist is not simply a Var referencing result_relation, we\n * wrap it in a ReturningExpr node, to force it to be NULL if the OLD/NEW row\n * doesn't exist.\n\n\"the index of the target relation for an INSERT/UPDATE/DELETE/MERGE\",\nhere, \"target relation\" I think people may be confused whether it\nrefers to view relation or the base relation.\nI think here the target relation is the base relation (rtekind == RTE_RELATION)\n\n\n\" to force it to be NULL if the OLD/NEW row doesn't exist.\"\ni think this happen in execExpr.c?\nmaybe\n\" to force it to be NULL if the OLD/NEW row doesn't exist, see execExpr.c\"\n\noverall, looks good to me.\n\n\n", "msg_date": "Mon, 5 Aug 2024 19:45:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 5 Aug 2024 at 12:46, jian he <[email protected]> wrote:\n>\n> took me a while to understand the changes in rewriteHandler.c, rewriteManip.c\n> rule over updateable view still works, but I didn't check closely with\n> rewriteRuleAction.\n> i think I understand rewriteTargetView and subroutines.\n>\n> * In addition, the caller must provide result_relation, the index of the\n> * target relation for an INSERT/UPDATE/DELETE/MERGE. This is needed to\n> * handle any OLD/NEW RETURNING list Vars referencing target_varno. When such\n> * Vars are expanded, varreturningtype is copied onto any replacement Vars\n> * that reference result_relation. In addition, if the replacement expression\n> * from the targetlist is not simply a Var referencing result_relation, we\n> * wrap it in a ReturningExpr node, to force it to be NULL if the OLD/NEW row\n> * doesn't exist.\n>\n> \"the index of the target relation for an INSERT/UPDATE/DELETE/MERGE\",\n> here, \"target relation\" I think people may be confused whether it\n> refers to view relation or the base relation.\n> I think here the target relation is the base relation (rtekind == RTE_RELATION)\n\nYes, it's the result relation in the rewritten query. I've updated\nthat comment to try to make that clearer.\n\nBasically, if a replacement Var refers to the new result relation in\nthe rewritten query, then its varreturningtype needs to be set\ncorrectly. Otherwise, if it refers to some other relation, its\nvarreturningtype shouldn't be changed, but it does need to be wrapped\nin a ReturningExpr node, if the original Var had a non-default\nvarreturningtype, so that it evaluates as NULL if the old/new row\ndoesn't exist.\n\n> \" to force it to be NULL if the OLD/NEW row doesn't exist.\"\n> i think this happen in execExpr.c?\n> maybe\n> \" to force it to be NULL if the OLD/NEW row doesn't exist, see execExpr.c\"\n\nOK, I've updated it to just say that this causes the executor to\nreturn NULL if the old/new row doesn't exist. There are multiple\nplaces in the executor that actually make that happen, so it doesn't\nmake sense to just refer to one place.\n\n> overall, looks good to me.\n\nThanks for reviewing.\n\nI'm pretty happy with the patch now, but I was just thinking about the\nwholerow case a little more, and I think it's worth changing the way\nthat's handled.\n\nPreviously, if you wrote something like \"RETURNING old\", and the old\nrow didn't exist, it would return an all-NULL record (displayed as\nsomething like '(,,,,)'), but I don't think that's really right. I\nthink it should actually return NULL. 
I think that's more consistent\nwith the way \"non-existent\" is generally handled, for example in a\nquery like \"SELECT t1, t2 FROM t1 OUTER JOIN t2 ON ...\".\n\nIt's pretty trivial, but it does involve changing code in 2 places\n(the first for regular tables, and the second for updatable views):\n\n1. ExecEvalWholeRowVar() now checks EEO_FLAG_OLD_IS_NULL and\nEEO_FLAG_NEW_IS_NULL. This makes it more consistent with\nExecEvalSysVar(), so I added the same Asserts.\n\n2. ReplaceVarsFromTargetList() now wraps the RowExpr node created in\nthe wholerow case in a ReturningExpr. That's consistent with the\nfunction's comment: \"if the replacement expression from the targetlist\nis not simply a Var referencing result_relation, it is wrapped in a\nReturningExpr node\".\n\nBoth those changes seem quite natural and consistent, and I think the\nresulting test output looks much nicer.\n\nRegards,\nDean", "msg_date": "Wed, 7 Aug 2024 19:39:38 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "hi.\n\ntook me a while to understand how the returning clause Var nodes\ncorrectly reference the relations RT index.\nmainly in set_plan_references, set_plan_refs and\nset_returning_clause_references.\n\ndo you think we need do something in\nset_returning_clause_references->build_tlist_index_other_vars\nto make sure that\nif the topplan->targetlist associated Var's varreturningtype is not default,\nthen the var->varno must equal to resultRelation.\nbecause set_plan_references is almost at the end of standard_planner,\nbefore that things may change.\n\n\n\n /*\n * Tell ExecProject whether or not the OLD/NEW rows exist (needed for any\n * ReturningExpr nodes and ExecEvalSysVar).\n */\n if (oldSlot == NULL)\n projectReturning->pi_state.flags |= EEO_FLAG_OLD_IS_NULL;\n else\n projectReturning->pi_state.flags &= ~EEO_FLAG_OLD_IS_NULL;\n if (newSlot == NULL)\n projectReturning->pi_state.flags |= EEO_FLAG_NEW_IS_NULL;\n else\n projectReturning->pi_state.flags &= ~EEO_FLAG_NEW_IS_NULL;\n\nExecEvalWholeRowVar also uses this information, comment needs to be\nslightly adjusted?\n\n\n\nsimialr to\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=2bb969f3998489e5dc4fe9f2a61185b43581975d\ndo you think it's necessary to\n errmsg(\"%s cannot be specified multiple times\", \"NEW\"),\n errmsg(\"%s cannot be specified multiple times\", \"OLD\"),\n\n\n\n+ /*\n+ * Scan RETURNING WITH(...) options for OLD/NEW alias names. 
Complain if\n+ * there is any conflict with existing relations.\n+ */\n+ foreach_node(ReturningOption, option, returningClause->options)\n+ {\n+ if (refnameNamespaceItem(pstate, NULL, option->name, -1, NULL))\n+ ereport(ERROR,\n+ errcode(ERRCODE_DUPLICATE_ALIAS),\n+ errmsg(\"table name \\\"%s\\\" specified more than once\",\n+ option->name),\n+ parser_errposition(pstate, option->location));\n+\n+ if (option->isNew)\n+ {\n+ if (qry->returningNew != NULL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"NEW cannot be specified multiple times\"),\n+ parser_errposition(pstate, option->location));\n+ qry->returningNew = option->name;\n+ }\n+ else\n+ {\n+ if (qry->returningOld != NULL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"OLD cannot be specified multiple times\"),\n+ parser_errposition(pstate, option->location));\n+ qry->returningOld = option->name;\n+ }\n+ }\n+\n+ /*\n+ * If no OLD/NEW aliases specified, use \"old\"/\"new\" unless masked by\n+ * existing relations.\n+ */\n+ if (qry->returningOld == NULL &&\n+ refnameNamespaceItem(pstate, NULL, \"old\", -1, NULL) == NULL)\n+ qry->returningOld = \"old\";\n+ if (qry->returningNew == NULL &&\n+ refnameNamespaceItem(pstate, NULL, \"new\", -1, NULL) == NULL)\n+ qry->returningNew = \"new\";\n+\n+ /*\n+ * Add the OLD and NEW aliases to the query namespace, for use in\n+ * expressions in the RETURNING list.\n+ */\n+ save_nslen = list_length(pstate->p_namespace);\n+ if (qry->returningOld)\n+ addNSItemForReturning(pstate, qry->returningOld, VAR_RETURNING_OLD);\n+ if (qry->returningNew)\n+ addNSItemForReturning(pstate, qry->returningNew, VAR_RETURNING_NEW);\n\n\nthe only case we don't do addNSItemForReturning is when there is\nreally a RTE called \"new\" or \"old\".\nEven if the returning list doesn't specify \"new\" or \"old\", like\n\"returning 1\", we still do addNSItemForReturning.\nDo you think it's necessary in ReturningClause add two booleans\n\"hasold\", \"hasnew\".\nso if becomes\n+ if (qry->returningOld && hasold)\n+ addNSItemForReturning(pstate, qry->returningOld, VAR_RETURNING_OLD);\n+ if (qry->returningNew && hasnew)\n+ addNSItemForReturning(pstate, qry->returningNew, VAR_RETURNING_NEW);\n\nthat means in gram.y\nreturning_clause:\n RETURNING returning_with_clause target_list\n {\n ReturningClause *n = makeNode(ReturningClause);\n\n n->options = $2;\n n->exprs = $3;\n $$ = n;\n }\n\nn->exprs will have 3 branches: NEW.expr, OLD.expr, expr.\nI guess overall we can save some cycles?\n\n\n", "msg_date": "Mon, 12 Aug 2024 14:50:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Mon, 12 Aug 2024 at 07:51, jian he <[email protected]> wrote:\n>\n> took me a while to understand how the returning clause Var nodes\n> correctly reference the relations RT index.\n> mainly in set_plan_references, set_plan_refs and\n> set_returning_clause_references.\n>\n> do you think we need do something in\n> set_returning_clause_references->build_tlist_index_other_vars\n> to make sure that\n> if the topplan->targetlist associated Var's varreturningtype is not default,\n> then the var->varno must equal to resultRelation.\n> because set_plan_references is almost at the end of standard_planner,\n> before that things may change.\n\nHmm, well actually the check has to go in fix_join_expr_mutator().\nIt's really a \"shouldn't happen\" error, a little similar to other\nchecks in setrefs.c that elog errors, so I guess it's probably 
worth\ndouble-checking this too.\n\n> /*\n> * Tell ExecProject whether or not the OLD/NEW rows exist (needed for any\n> * ReturningExpr nodes and ExecEvalSysVar).\n> */\n> if (oldSlot == NULL)\n> projectReturning->pi_state.flags |= EEO_FLAG_OLD_IS_NULL;\n> else\n> projectReturning->pi_state.flags &= ~EEO_FLAG_OLD_IS_NULL;\n> if (newSlot == NULL)\n> projectReturning->pi_state.flags |= EEO_FLAG_NEW_IS_NULL;\n> else\n> projectReturning->pi_state.flags &= ~EEO_FLAG_NEW_IS_NULL;\n>\n> ExecEvalWholeRowVar also uses this information, comment needs to be\n> slightly adjusted?\n\nAh, yes.\n\n> simialr to\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=2bb969f3998489e5dc4fe9f2a61185b43581975d\n> do you think it's necessary to\n> errmsg(\"%s cannot be specified multiple times\", \"NEW\"),\n> errmsg(\"%s cannot be specified multiple times\", \"OLD\"),\n\nOK, I guess so.\n\n> + /*\n> + * Add the OLD and NEW aliases to the query namespace, for use in\n> + * expressions in the RETURNING list.\n> + */\n>\n> the only case we don't do addNSItemForReturning is when there is\n> really a RTE called \"new\" or \"old\".\n> Even if the returning list doesn't specify \"new\" or \"old\", like\n> \"returning 1\", we still do addNSItemForReturning.\n> Do you think it's necessary in ReturningClause add two booleans\n> \"hasold\", \"hasnew\".\n> so if becomes\n> + if (qry->returningOld && hasold)\n> + addNSItemForReturning(pstate, qry->returningOld, VAR_RETURNING_OLD);\n> + if (qry->returningNew && hasnew)\n> + addNSItemForReturning(pstate, qry->returningNew, VAR_RETURNING_NEW);\n>\n> that means in gram.y\n> returning_clause:\n> RETURNING returning_with_clause target_list\n> {\n> ReturningClause *n = makeNode(ReturningClause);\n>\n> n->options = $2;\n> n->exprs = $3;\n> $$ = n;\n> }\n>\n> n->exprs will have 3 branches: NEW.expr, OLD.expr, expr.\n> I guess overall we can save some cycles?\n\nNo, I think that would add a whole lot of unnecessary extra\ncomplication, because n->exprs can contain any arbitrary expressions,\nincluding subqueries, which would make it very hard for gram.y to tell\nif there really was a Var referencing old/new at a particular query\nlevel, and it would probably end up adding more cycles than it saved\nlater on, as well as being quite error-prone.\n\nRegards,\nDean", "msg_date": "Fri, 16 Aug 2024 11:39:46 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Fri, Aug 16, 2024 at 6:39 PM Dean Rasheed <[email protected]> wrote:\n>\n\nin Var comments:\n\n * varlevelsup is greater than zero in Vars that represent outer references.\n * Note that it affects the meaning of all of varno, varnullingrels, and\n * varnosyn, all of which refer to the range table of that query level.\n\nDoes this need to change accordingly?\n\ni found there is no privilege test in src/test/regress/sql/updatable_views.sql?\nDo we need to add some tests?\n\nOther than that, I didn't find any issue.\n\n\n", "msg_date": "Wed, 21 Aug 2024 17:06:49 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" }, { "msg_contents": "On Wed, 21 Aug 2024 at 10:07, jian he <[email protected]> wrote:\n>\n> in Var comments:\n>\n> * varlevelsup is greater than zero in Vars that represent outer references.\n> * Note that it affects the meaning of all of varno, varnullingrels, and\n> * varnosyn, all of which refer to the range table of that query 
level.\n>\n> Does this need to change accordingly?\n>\n\nNo, I don't think so. varlevelsup doesn't directly change the meaning\nof varreturningtype, any more than it changes the meaning of, say,\nvarattno. The point of that comment is that the fields varno,\nvarnullingrels, and varnosyn are (or contain) the range table indexes\nof relations, which by themselves are insufficient to identify the\nrelations -- varlevelsup must be used in combination with those fields\nto find the relations they refer to.\n\n> i found there is no privilege test in src/test/regress/sql/updatable_views.sql?\n> Do we need to add some tests?\n>\n\nI don't think so, because varreturningtype doesn't affect any\npermissions checks.\n\n> Other than that, I didn't find any issue.\n\nThanks for reviewing.\n\nIf there are no other issues, I think this is probably ready for commit.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 26 Aug 2024 12:24:03 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding OLD/NEW support to RETURNING" } ]
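A minimal sketch of the switch-style OLD/NEW existence check in ExecEvalSysVar that Dean describes earlier in this thread. It is assembled only from the fields and flags quoted in the preceding messages (op->d.var.varreturningtype, state->flags, EEO_FLAG_HAS_OLD/NEW, EEO_FLAG_OLD/NEW_IS_NULL); it is illustrative only, not the committed patch, and the committed code may differ in detail.

/*
 * Sketch only: if the Var asks for the OLD or NEW row and that row does
 * not exist for this operation, return NULL rather than evaluating the
 * system attribute.
 */
switch (op->d.var.varreturningtype)
{
	case VAR_RETURNING_OLD:
		Assert(state->flags & EEO_FLAG_HAS_OLD);
		if (state->flags & EEO_FLAG_OLD_IS_NULL)
		{
			*op->resvalue = (Datum) 0;
			*op->resnull = true;
			return;
		}
		break;

	case VAR_RETURNING_NEW:
		Assert(state->flags & EEO_FLAG_HAS_NEW);
		if (state->flags & EEO_FLAG_NEW_IS_NULL)
		{
			*op->resvalue = (Datum) 0;
			*op->resnull = true;
			return;
		}
		break;

	default:
		/* VAR_RETURNING_DEFAULT: fall through to the normal lookup */
		break;
}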
[ { "msg_contents": "This came up in the \"Refactoring backend fork+exec code\" thread recently \n[0], but is independent of that work:\n\nOn 11/07/2023 01:50, Andres Freund wrote:\n>> --- a/src/backend/storage/ipc/shmem.c\n>> +++ b/src/backend/storage/ipc/shmem.c\n>> @@ -144,6 +144,8 @@ InitShmemAllocation(void)\n>> \t/*\n>> \t * Initialize ShmemVariableCache for transaction manager. (This doesn't\n>> \t * really belong here, but not worth moving.)\n>> +\t *\n>> +\t * XXX: we really should move this\n>> \t */\n>> \tShmemVariableCache = (VariableCache)\n>> \t\tShmemAlloc(sizeof(*ShmemVariableCache));\n> \n> Heh. Indeed. And probably just rename it to something less insane.\n\nHere's a patch to allocate and initialize it with a pair of ShmemSize \nand ShmemInit functions, like all other shared memory structs.\n\n+1 on renaming it too. It's not a cache, the values tracked in \nShmemVariableCache are the authoritative source. Attached patch renames \nit to \"TransamVariables\", but I'm all ears for other suggestions.\n\n[0] \nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 4 Dec 2023 14:49:57 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Rename ShmemVariableCache and initialize it in more standard way" }, { "msg_contents": "On Mon Dec 4, 2023 at 6:49 AM CST, Heikki Linnakangas wrote:\n> This came up in the \"Refactoring backend fork+exec code\" thread recently \n> [0], but is independent of that work:\n>\n> On 11/07/2023 01:50, Andres Freund wrote:\n> >> --- a/src/backend/storage/ipc/shmem.c\n> >> +++ b/src/backend/storage/ipc/shmem.c\n> >> @@ -144,6 +144,8 @@ InitShmemAllocation(void)\n> >> \t/*\n> >> \t * Initialize ShmemVariableCache for transaction manager. (This doesn't\n> >> \t * really belong here, but not worth moving.)\n> >> +\t *\n> >> +\t * XXX: we really should move this\n> >> \t */\n> >> \tShmemVariableCache = (VariableCache)\n> >> \t\tShmemAlloc(sizeof(*ShmemVariableCache));\n> > \n> > Heh. Indeed. And probably just rename it to something less insane.\n>\n> Here's a patch to allocate and initialize it with a pair of ShmemSize \n> and ShmemInit functions, like all other shared memory structs.\n>\n\n> + if (!IsUnderPostmaster)\n> + {\n> + Assert(!found);\n> + memset(ShmemVariableCache, 0, sizeof(VariableCacheData));\n> + }\n> + else\n> + Assert(found);\n\nShould the else branch instead be a fatal log?\n\nPatches look good to me.\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 04 Dec 2023 10:31:36 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename ShmemVariableCache and initialize it in more standard\n way" }, { "msg_contents": "On Tue, Dec 5, 2023 at 12:31 AM Tristan Partin <[email protected]> wrote:\n\n> On Mon Dec 4, 2023 at 6:49 AM CST, Heikki Linnakangas wrote:\n> > This came up in the \"Refactoring backend fork+exec code\" thread recently\n> > [0], but is independent of that work:\n> >\n> > Here's a patch to allocate and initialize it with a pair of ShmemSize\n> > and ShmemInit functions, like all other shared memory structs.\n> >\n> > + if (!IsUnderPostmaster)\n> > + {\n> > + Assert(!found);\n> > + memset(ShmemVariableCache, 0,\n> sizeof(VariableCacheData));\n> > + }\n> > + else\n> > + Assert(found);\n>\n> Should the else branch instead be a fatal log?\n\n\nThe Assert here seems OK to me. We do the same when initializing\ncommitTsShared/MultiXactState. 
I think it would be preferable to adhere\nto this convention.\n\n\n> Patches look good to me.\n\n\nAlso +1 to the patches.\n\nThanks\nRichard\n\nOn Tue, Dec 5, 2023 at 12:31 AM Tristan Partin <[email protected]> wrote:On Mon Dec 4, 2023 at 6:49 AM CST, Heikki Linnakangas wrote:\n> This came up in the \"Refactoring backend fork+exec code\" thread recently \n> [0], but is independent of that work:\n>\n> Here's a patch to allocate and initialize it with a pair of ShmemSize \n> and ShmemInit functions, like all other shared memory structs.\n>\n>  +        if (!IsUnderPostmaster)\n>  +        {\n>  +                Assert(!found);\n>  +                memset(ShmemVariableCache, 0, sizeof(VariableCacheData));\n>  +        }\n>  +        else\n>  +                Assert(found);\n\nShould the else branch instead be a fatal log?The Assert here seems OK to me.  We do the same when initializingcommitTsShared/MultiXactState.  I think it would be preferable to adhereto this convention. \nPatches look good to me.Also +1 to the patches.ThanksRichard", "msg_date": "Tue, 5 Dec 2023 11:40:53 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename ShmemVariableCache and initialize it in more standard way" }, { "msg_contents": "On 05/12/2023 05:40, Richard Guo wrote:\n> On Tue, Dec 5, 2023 at 12:31 AM Tristan Partin <[email protected]> wrote:\n> On Mon Dec 4, 2023 at 6:49 AM CST, Heikki Linnakangas wrote:\n> > Here's a patch to allocate and initialize it with a pair of\n> ShmemSize\n> > and ShmemInit functions, like all other shared memory structs.\n> >\n> >  +        if (!IsUnderPostmaster)\n> >  +        {\n> >  +                Assert(!found);\n> >  +                memset(ShmemVariableCache, 0,\n> sizeof(VariableCacheData));\n> >  +        }\n> >  +        else\n> >  +                Assert(found);\n> \n>> Should the else branch instead be a fatal log?\n> \n> The Assert here seems OK to me.  We do the same when initializing\n> commitTsShared/MultiXactState.  I think it would be preferable to adhere\n> to this convention.\n\nRight. I'm not 100% happy with that pattern either, but better be \nconsistent.\n\nThere's a brief comment about this in CreateOrAttachShmemStructs():\n\n> * This is called by the postmaster or by a standalone backend.\n> * It is also called by a backend forked from the postmaster in the\n> * EXEC_BACKEND case. In the latter case, the shared memory segment\n> * already exists and has been physically attached to, but we have to\n> * initialize pointers in local memory that reference the shared structures,\n> * because we didn't inherit the correct pointer values from the postmaster\n> * as we do in the fork() scenario. The easiest way to do that is to run\n> * through the same code as before. (Note that the called routines mostly\n> * check IsUnderPostmaster, rather than EXEC_BACKEND, to detect this case.\n> * This is a bit code-wasteful and could be cleaned up.)\n\nThe last sentence refers to this pattern.\n\n>> Patches look good to me.\n> \n> Also +1 to the patches.\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 8 Dec 2023 09:51:11 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename ShmemVariableCache and initialize it in more standard way" } ]
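A minimal sketch of the ShmemSize/ShmemInit pair discussed in this thread, using the pre-rename names quoted above (ShmemVariableCache, VariableCacheData). The wrapper function names and the shmem index key string are placeholders, not necessarily what was committed; the body simply mirrors the Assert/memset snippet quoted in the thread and the convention used by other shared-memory structs such as MultiXactState.

/* Report the amount of shared memory needed (illustrative name). */
Size
VariableCacheShmemSize(void)
{
	return sizeof(VariableCacheData);
}

/* Allocate the struct via the standard shmem index, or attach to it. */
void
VariableCacheShmemInit(void)
{
	bool		found;

	ShmemVariableCache = (VariableCache)
		ShmemInitStruct("VariableCache",
						VariableCacheShmemSize(),
						&found);

	if (!IsUnderPostmaster)
	{
		/* postmaster or standalone backend: first-time initialization */
		Assert(!found);
		memset(ShmemVariableCache, 0, sizeof(VariableCacheData));
	}
	else
	{
		/* EXEC_BACKEND child: segment already initialized, just attach */
		Assert(found);
	}
}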
[ { "msg_contents": "Every once in a while, I find myself wanting to use shared memory in a\nloadable module without requiring it to be loaded at server start via\nshared_preload_libraries. The DSM API offers a nice way to create and\nmanage dynamic shared memory segments, so creating a segment after server\nstart is easy enough. However, AFAICT there's no easy way to teach other\nbackends about the segment without storing the handles in shared memory,\nwhich puts us right back at square one.\n\nThe attached 0001 introduces a \"DSM registry\" to solve this problem. The\nAPI provides an easy way to allocate/initialize a segment or to attach to\nan existing one. The registry itself is just a dshash table that stores\nthe handles keyed by a module-specified string. 0002 adds a test for the\nregistry that demonstrates basic usage.\n\nI don't presently have any concrete plans to use this for anything, but I\nthought it might be useful for extensions for caching, etc. and wanted to\nsee whether there was any interest in the feature.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 4 Dec 2023 21:46:47 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "introduce dynamic shared memory registry" }, { "msg_contents": "On 12/4/23 22:46, Nathan Bossart wrote:\n> Every once in a while, I find myself wanting to use shared memory in a\n> loadable module without requiring it to be loaded at server start via\n> shared_preload_libraries. The DSM API offers a nice way to create and\n> manage dynamic shared memory segments, so creating a segment after server\n> start is easy enough. However, AFAICT there's no easy way to teach other\n> backends about the segment without storing the handles in shared memory,\n> which puts us right back at square one.\n> \n> The attached 0001 introduces a \"DSM registry\" to solve this problem. The\n> API provides an easy way to allocate/initialize a segment or to attach to\n> an existing one. The registry itself is just a dshash table that stores\n> the handles keyed by a module-specified string. 0002 adds a test for the\n> registry that demonstrates basic usage.\n> \n> I don't presently have any concrete plans to use this for anything, but I\n> thought it might be useful for extensions for caching, etc. and wanted to\n> see whether there was any interest in the feature.\n\nNotwithstanding any dragons there may be, and not having actually looked \nat the the patches, I love the concept! +<many>\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 10:34:52 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Dec 5, 2023 at 10:35 AM Joe Conway <[email protected]> wrote:\n> Notwithstanding any dragons there may be, and not having actually looked\n> at the the patches, I love the concept! +<many>\n\nSeems fine to me too. 
I haven't looked at the patches or searched for\ndragons either, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 11:16:29 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Dec 04, 2023 at 09:46:47PM -0600, Nathan Bossart wrote:\n> The attached 0001 introduces a \"DSM registry\" to solve this problem. The\n> API provides an easy way to allocate/initialize a segment or to attach to\n> an existing one. The registry itself is just a dshash table that stores\n> the handles keyed by a module-specified string. 0002 adds a test for the\n> registry that demonstrates basic usage.\n> \n> I don't presently have any concrete plans to use this for anything, but I\n> thought it might be useful for extensions for caching, etc. and wanted to\n> see whether there was any interest in the feature.\n\nYes, tracking that in a more central way can have many usages, so your\npatch sounds like a good idea. Note that we have one case in core\nthat be improved and make use of what you have here: autoprewarm.c.\n\nThe module supports the launch of dynamic workers but the library may\nnot be loaded with shared_preload_libraries, meaning that it can\nallocate a chunk of shared memory worth a size of\nAutoPrewarmSharedState without having requested it in a\nshmem_request_hook. AutoPrewarmSharedState could be moved to a DSM\nand tracked with the shared hash table introduced by the patch instead\nof acquiring AddinShmemInitLock while eating the plate of other\nfacilities that asked for a chunk of shmem, leaving any conflict\nhandling to dsm_registry_table.\n\n+dsm_registry_init_or_attach(const char *key, void **ptr, size_t size,\n+ void (*init_callback) (void *ptr))\n\nThis is shaped around dshash_find_or_insert(), but it looks like you'd\nwant an equivalent for dshash_find(), as well.\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 16:36:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On 5/12/2023 10:46, Nathan Bossart wrote:\n> I don't presently have any concrete plans to use this for anything, but I\n> thought it might be useful for extensions for caching, etc. and wanted to\n> see whether there was any interest in the feature.\n\nI am delighted that you commenced this thread.\nDesigning extensions, every time I feel pain introducing one shared \nvalue or some global stat, the extension must be required to be loadable \non startup only. It reduces the flexibility of even very lightweight \nextensions, which look harmful to use in a cloud.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Mon, 18 Dec 2023 13:39:22 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On 18/12/2023 13:39, Andrei Lepikhov wrote:\n> On 5/12/2023 10:46, Nathan Bossart wrote:\n>> I don't presently have any concrete plans to use this for anything, but I\n>> thought it might be useful for extensions for caching, etc. and wanted to\n>> see whether there was any interest in the feature.\n> \n> I am delighted that you commenced this thread.\n> Designing extensions, every time I feel pain introducing one shared \n> value or some global stat, the extension must be required to be loadable \n> on startup only. 
It reduces the flexibility of even very lightweight \n> extensions, which look harmful to use in a cloud.\n\nAfter looking into the code, I have some comments:\n1. The code looks good; I didn't find possible mishaps. Some proposed \nchanges are in the attachment.\n2. I think a separate file for this feature looks too expensive. \nAccording to the gist of that code, it is a part of the DSA module.\n3. The dsm_registry_init_or_attach routine allocates a DSM segment. Why \nnot create dsa_area for a requestor and return it?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional", "msg_date": "Mon, 18 Dec 2023 15:32:08 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Hi!\n\nThis patch looks like a good solution for a pain in the ass, I'm too for\nthis patch to be committed.\nHave looked through the code and agree with Andrei, the code looks good.\nJust a suggestion - maybe it is worth adding a function for detaching the\nsegment,\nfor cases when we unload and/or re-load the extension?\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!This patch looks like a good solution for a pain in the ass, I'm too for this patch to be committed.Have looked through the code and agree with Andrei, the code looks good.Just a suggestion - maybe it is worth adding a function for detaching the segment,for cases when we unload and/or re-load the extension?--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Mon, 18 Dec 2023 12:05:28 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov\n<[email protected]> wrote:\n> 2. I think a separate file for this feature looks too expensive.\n> According to the gist of that code, it is a part of the DSA module.\n\n-1. I think this is a totally different thing than DSA. More files\naren't nearly as expensive as the confusion that comes from smushing\nunrelated things together.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:49:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Dec 18, 2023 at 03:32:08PM +0700, Andrei Lepikhov wrote:\n> 3. The dsm_registry_init_or_attach routine allocates a DSM segment. Why not\n> create dsa_area for a requestor and return it?\n\nMy assumption is that most modules just need a fixed-size segment, and if\nthey really needed a DSA segment, the handle, tranche ID, etc. could just\nbe stored in the DSM segment. Maybe that assumption is wrong, though...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:01:17 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Dec 18, 2023 at 12:05:28PM +0300, Nikita Malakhov wrote:\n> Just a suggestion - maybe it is worth adding a function for detaching the\n> segment,\n> for cases when we unload and/or re-load the extension?\n\nHm. 
We don't presently have a good way to unload a library, but you can\ncertainly DROP EXTENSION, in which case you might expect the segment to go\naway or at least be reset. But even today, once a preloaded library is\nloaded, it stays loaded and its shared memory remains regardless of whether\nyou CREATE/DROP extension. Can you think of problems with keeping the\nsegment attached?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:09:39 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:\n> On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov\n> <[email protected]> wrote:\n>> 2. I think a separate file for this feature looks too expensive.\n>> According to the gist of that code, it is a part of the DSA module.\n> \n> -1. I think this is a totally different thing than DSA. More files\n> aren't nearly as expensive as the confusion that comes from smushing\n> unrelated things together.\n\nAgreed. I think there's a decent chance that more functionality will be\nadded to this registry down the line, in which case it will be even more\nimportant that this stuff stays separate from the tools it is built with.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:14:44 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Fri, Dec 08, 2023 at 04:36:52PM +0900, Michael Paquier wrote:\n> Yes, tracking that in a more central way can have many usages, so your\n> patch sounds like a good idea. Note that we have one case in core\n> that be improved and make use of what you have here: autoprewarm.c.\n\nI'll add a patch for autoprewarm.c. Even if we don't proceed with that\nchange, it'll be a good demonstration.\n\n> +dsm_registry_init_or_attach(const char *key, void **ptr, size_t size,\n> + void (*init_callback) (void *ptr))\n> \n> This is shaped around dshash_find_or_insert(), but it looks like you'd\n> want an equivalent for dshash_find(), as well.\n\nWhat is the use-case for only verifying the existence of a segment?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 10:19:11 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Dec 19, 2023 at 10:19:11AM -0600, Nathan Bossart wrote:\n> On Fri, Dec 08, 2023 at 04:36:52PM +0900, Michael Paquier wrote:\n>> Yes, tracking that in a more central way can have many usages, so your\n>> patch sounds like a good idea. Note that we have one case in core\n>> that be improved and make use of what you have here: autoprewarm.c.\n> \n> I'll add a patch for autoprewarm.c. Even if we don't proceed with that\n> change, it'll be a good demonstration.\n\nCool, thanks. 
It could just be a separate change on top of the main\none.\n\n> > +dsm_registry_init_or_attach(const char *key, void **ptr, size_t size,\n> > + void (*init_callback) (void *ptr))\n> > \n> > This is shaped around dshash_find_or_insert(), but it looks like you'd\n> > want an equivalent for dshash_find(), as well.\n> \n> What is the use-case for only verifying the existence of a segment?\n\nOne case I was thinking about is parallel aggregates that can define\ncombining and serial/deserial functions, where some of the operations\ncould happen in shared memory, requiring a DSM, and where each process\ndoing some aggregate combining would expect a DSM to exist before\nmaking use of it. So a registry wrapper for dshash_find() could be\nused as a way to perform sanity checks with what's stored in the\nregistry.\n--\nMichael", "msg_date": "Wed, 20 Dec 2023 09:02:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Dec 19, 2023 at 10:14:44AM -0600, Nathan Bossart wrote:\n> On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:\n>> On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov\n>> <[email protected]> wrote:\n>>> 2. I think a separate file for this feature looks too expensive.\n>>> According to the gist of that code, it is a part of the DSA module.\n>> \n>> -1. I think this is a totally different thing than DSA. More files\n>> aren't nearly as expensive as the confusion that comes from smushing\n>> unrelated things together.\n> \n> Agreed. I think there's a decent chance that more functionality will be\n> added to this registry down the line, in which case it will be even more\n> important that this stuff stays separate from the tools it is built with.\n\n+1 for keeping a clean separation between both.\n--\nMichael", "msg_date": "Wed, 20 Dec 2023 09:04:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On 20/12/2023 07:04, Michael Paquier wrote:\n> On Tue, Dec 19, 2023 at 10:14:44AM -0600, Nathan Bossart wrote:\n>> On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:\n>>> On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov\n>>> <[email protected]> wrote:\n>>>> 2. I think a separate file for this feature looks too expensive.\n>>>> According to the gist of that code, it is a part of the DSA module.\n>>>\n>>> -1. I think this is a totally different thing than DSA. More files\n>>> aren't nearly as expensive as the confusion that comes from smushing\n>>> unrelated things together.\n>>\n>> Agreed. 
I think there's a decent chance that more functionality will be\n>> added to this registry down the line, in which case it will be even more\n>> important that this stuff stays separate from the tools it is built with.\n> \n> +1 for keeping a clean separation between both.\n\nThanks, I got the reason.\nIn that case, maybe change the test case to make it closer to real-life \nusage - with locks and concurrent access (See attachment)?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional", "msg_date": "Wed, 20 Dec 2023 11:02:58 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Dec 5, 2023 at 9:17 AM Nathan Bossart <[email protected]> wrote:\n>\n> Every once in a while, I find myself wanting to use shared memory in a\n> loadable module without requiring it to be loaded at server start via\n> shared_preload_libraries. The DSM API offers a nice way to create and\n> manage dynamic shared memory segments, so creating a segment after server\n> start is easy enough. However, AFAICT there's no easy way to teach other\n> backends about the segment without storing the handles in shared memory,\n> which puts us right back at square one.\n>\n> The attached 0001 introduces a \"DSM registry\" to solve this problem. The\n> API provides an easy way to allocate/initialize a segment or to attach to\n> an existing one. The registry itself is just a dshash table that stores\n> the handles keyed by a module-specified string. 0002 adds a test for the\n> registry that demonstrates basic usage.\n\n+1 for something like this.\n\n> I don't presently have any concrete plans to use this for anything, but I\n> thought it might be useful for extensions for caching, etc. and wanted to\n> see whether there was any interest in the feature.\n\nIsn't the worker_spi best place to show the use of the DSM registry\ninstead of a separate test extension? Note the custom wait event\nfeature that added its usage code to worker_spi. Since worker_spi\ndemonstrates typical coding patterns, having just set_val_in_shmem()\nand get_val_in_shmem() in there makes this patch simple and shaves\nsome code off.\n\nComments on the v1 patch set:\n\n1. IIUC, this feature lets external modules create as many DSM\nsegments as possible with different keys right? If yes, is capping the\nmax number of DSMs a good idea?\n\n2. Why does this feature have to deal with DSMs? Why not DSAs? With\nDSA and an API that gives the DSA handle to the external modules, the\nmodules can dsa_allocate and dsa_free right? Do you see any problem\nwith it?\n\n3.\n+typedef struct DSMRegistryEntry\n+{\n+ char key[256];\n\nKey length 256 feels too much, can it be capped at NAMEDATALEN 64\nbytes (similar to some of the key lengths for hash_create()) to start\nwith?\n\n4. Do we need on_dsm_detach for each DSM created?\ndsm_backend_shutdown\n\n5.\n+ *\n+ * *ptr should initially be set to NULL. If it is not NULL, this routine will\n+ * assume that the segment has already been attached to the current session.\n+ * Otherwise, this routine will set *ptr appropriately.\n\n+ /* Quick exit if the value is already set. 
*/\n+ if (*ptr)\n+ return;\n\nInstead of the above assumption and quick exit condition, can it be\nsomething like if (dsm_find_mapping(dsm_segment_handle(*ptr)) != NULL)\nreturn;?\n\n6.\n+static pg_atomic_uint32 *val;\n\nAny specific reason for it to be an atomic variable?\n\n7.\n+static pg_atomic_uint32 *val;\n\nInstead of a run-of-the-mill example with just an integer val that\ngets stored in shared memory, can it be something more realistic, a\nstruct with 2 or more variables or a struct to store linked list\n(slist_head or dlist_head) in shared memory or such?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 15:28:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Dec 20, 2023 at 11:02:58AM +0200, Andrei Lepikhov wrote:\n> In that case, maybe change the test case to make it closer to real-life\n> usage - with locks and concurrent access (See attachment)?\n\nI'm not following why we should make this test case more complicated. It\nis only intended to test the DSM registry machinery, and setting/retrieving\nan atomic variable seems like a realistic use-case to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 09:33:42 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Hi, all\n\nI see most xxxShmemInit functions have the logic to handle IsUnderPostmaster env.\nDo we need to consider it in DSMRegistryShmemInit() too? For example, add some assertions.\nOthers LGTM.\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Dec 5, 2023 at 11:47 +0800, Nathan Bossart <[email protected]>, wrote:\n> Every once in a while, I find myself wanting to use shared memory in a\n> loadable module without requiring it to be loaded at server start via\n> shared_preload_libraries. The DSM API offers a nice way to create and\n> manage dynamic shared memory segments, so creating a segment after server\n> start is easy enough. However, AFAICT there's no easy way to teach other\n> backends about the segment without storing the handles in shared memory,\n> which puts us right back at square one.\n>\n> The attached 0001 introduces a \"DSM registry\" to solve this problem. The\n> API provides an easy way to allocate/initialize a segment or to attach to\n> an existing one. The registry itself is just a dshash table that stores\n> the handles keyed by a module-specified string. 0002 adds a test for the\n> registry that demonstrates basic usage.\n>\n> I don't presently have any concrete plans to use this for anything, but I\n> thought it might be useful for extensions for caching, etc. and wanted to\n> see whether there was any interest in the feature.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n\n\n\n\nHi, all\n\nI see most xxxShmemInit functions have the logic to handle IsUnderPostmaster env.\nDo we need to consider it in DSMRegistryShmemInit() too? For example, add some assertions.\nOthers LGTM.\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\nOn Dec 5, 2023 at 11:47 +0800, Nathan Bossart <[email protected]>, wrote:\nEvery once in a while, I find myself wanting to use shared memory in a\nloadable module without requiring it to be loaded at server start via\nshared_preload_libraries. 
The DSM API offers a nice way to create and\nmanage dynamic shared memory segments, so creating a segment after server\nstart is easy enough. However, AFAICT there's no easy way to teach other\nbackends about the segment without storing the handles in shared memory,\nwhich puts us right back at square one.\n\nThe attached 0001 introduces a \"DSM registry\" to solve this problem. The\nAPI provides an easy way to allocate/initialize a segment or to attach to\nan existing one. The registry itself is just a dshash table that stores\nthe handles keyed by a module-specified string. 0002 adds a test for the\nregistry that demonstrates basic usage.\n\nI don't presently have any concrete plans to use this for anything, but I\nthought it might be useful for extensions for caching, etc. and wanted to\nsee whether there was any interest in the feature.\n\n--\nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Dec 2023 00:03:18 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Dec 20, 2023 at 03:28:38PM +0530, Bharath Rupireddy wrote:\n> Isn't the worker_spi best place to show the use of the DSM registry\n> instead of a separate test extension? Note the custom wait event\n> feature that added its usage code to worker_spi. Since worker_spi\n> demonstrates typical coding patterns, having just set_val_in_shmem()\n> and get_val_in_shmem() in there makes this patch simple and shaves\n> some code off.\n\nI don't agree. The test case really isn't that complicated, and I'd rather\nhave a dedicated test suite for this feature that we can build on instead\nof trying to squeeze it into something unrelated.\n\n> 1. IIUC, this feature lets external modules create as many DSM\n> segments as possible with different keys right? If yes, is capping the\n> max number of DSMs a good idea?\n\nWhy? Even if it is a good idea, what limit could we choose that wouldn't\nbe arbitrary and eventually cause problems down the road?\n\n> 2. Why does this feature have to deal with DSMs? Why not DSAs? With\n> DSA and an API that gives the DSA handle to the external modules, the\n> modules can dsa_allocate and dsa_free right? Do you see any problem\n> with it?\n\nPlease see upthread discussion [0].\n\n> +typedef struct DSMRegistryEntry\n> +{\n> + char key[256];\n> \n> Key length 256 feels too much, can it be capped at NAMEDATALEN 64\n> bytes (similar to some of the key lengths for hash_create()) to start\n> with?\n\nWhy is it too much?\n\n> 4. Do we need on_dsm_detach for each DSM created?\n\nPresently, I've designed this such that the DSM remains attached for the\nlifetime of a session (and stays present even if all attached sessions go\naway) to mimic what you get when you allocate shared memory during startup.\nPerhaps there's a use-case for having backends do some cleanup before\nexiting, in which case a detach_cb might be useful. IMHO we should wait\nfor a concrete use-case before adding too many bells and whistles, though.\n\n> + * *ptr should initially be set to NULL. If it is not NULL, this routine will\n> + * assume that the segment has already been attached to the current session.\n> + * Otherwise, this routine will set *ptr appropriately.\n> \n> + /* Quick exit if the value is already set. 
*/\n> + if (*ptr)\n> + return;\n> \n> Instead of the above assumption and quick exit condition, can it be\n> something like if (dsm_find_mapping(dsm_segment_handle(*ptr)) != NULL)\n> return;?\n\nYeah, I think something like that could be better. One of the things I\ndislike about the v1 API is that it depends a little too much on the caller\ndoing exactly the right things, and I think it's possible to make it a\nlittle more robust.\n\n> +static pg_atomic_uint32 *val;\n> \n> Any specific reason for it to be an atomic variable?\n\nA regular integer would probably be fine for testing, but I figured we\nmight as well ensure correctness for when this code is inevitably\ncopy/pasted somewhere.\n\n> +static pg_atomic_uint32 *val;\n> \n> Instead of a run-of-the-mill example with just an integer val that\n> gets stored in shared memory, can it be something more realistic, a\n> struct with 2 or more variables or a struct to store linked list\n> (slist_head or dlist_head) in shared memory or such?\n\nThis is the second time this has come up [1]. The intent of this test is\nto verify the DSM registry behavior, not how folks are going to use the\nshared memory it manages, so I'm really not inclined to make this more\ncomplicated without a good reason. I don't mind changing this if I'm\noutvoted on this one, though.\n\n[0] https://postgr.es/m/20231219160117.GB831499%40nathanxps13\n[1] https://postgr.es/m/20231220153342.GA833819%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 10:03:24 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Thu, Dec 21, 2023 at 12:03:18AM +0800, Zhang Mingli wrote:\n> I see most xxxShmemInit functions have the logic to handle IsUnderPostmaster env.\n> Do we need to consider it in DSMRegistryShmemInit() too? For example, add some assertions.\n> Others LGTM.\n\nGood point. I _think_ the registry is safe to set up and use in\nsingle-user mode but not in a regular postmaster process. It'd probably be\nwise to add some assertions along those lines, but even if we didn't, I\nthink the DSM code has existing assertions that will catch it. In any\ncase, I'd like to avoid requiring folks to add special\nsingle-user-mode-only logic if we can avoid it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 10:18:36 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On 20/12/2023 17:33, Nathan Bossart wrote:\n> On Wed, Dec 20, 2023 at 11:02:58AM +0200, Andrei Lepikhov wrote:\n>> In that case, maybe change the test case to make it closer to real-life\n>> usage - with locks and concurrent access (See attachment)?\n> \n> I'm not following why we should make this test case more complicated. It\n> is only intended to test the DSM registry machinery, and setting/retrieving\n> an atomic variable seems like a realistic use-case to me.\n\nI could provide you at least two reasons here:\n1. A More complicated example would be a tutorial on using the feature \ncorrectly. It will reduce the number of questions in mailing lists.\n2. 
Looking into existing extensions, I see that the most common case of \nusing a shared memory segment is maintaining some hash table or state \nstructure that needs at least one lock.\n\nTry to rewrite the pg_prewarm according to this new feature, and you \nwill realize how difficult it is.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 08:50:43 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Here is a new version of the patch. In addition to various small changes,\nI've rewritten the test suite to use an integer and a lock, added a\ndsm_registry_find() function, and adjusted autoprewarm to use the registry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Dec 2023 13:53:27 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Dec 27, 2023 at 01:53:27PM -0600, Nathan Bossart wrote:\n> Here is a new version of the patch. In addition to various small changes,\n> I've rewritten the test suite to use an integer and a lock, added a\n> dsm_registry_find() function, and adjusted autoprewarm to use the registry.\n\nHere's a v3 that fixes a silly mistake in the test.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 28 Dec 2023 09:34:57 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Dec 20, 2023 at 9:33 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Dec 20, 2023 at 03:28:38PM +0530, Bharath Rupireddy wrote:\n> > Isn't the worker_spi best place to show the use of the DSM registry\n> > instead of a separate test extension? Note the custom wait event\n> > feature that added its usage code to worker_spi. Since worker_spi\n> > demonstrates typical coding patterns, having just set_val_in_shmem()\n> > and get_val_in_shmem() in there makes this patch simple and shaves\n> > some code off.\n>\n> I don't agree. The test case really isn't that complicated, and I'd rather\n> have a dedicated test suite for this feature that we can build on instead\n> of trying to squeeze it into something unrelated.\n\nWith the use of dsm registry for pg_prewarm, do we need this\ntest_dsm_registry module at all? Because 0002 patch pretty much\ndemonstrates how to use the DSM registry. With this comment and my\nearlier comment on incorporating the use of dsm registry in\nworker_spi, I'm trying to make a point to reduce the code for this\nfeature. However, if others have different opinions, I'm okay with\nhaving a separate test module.\n\n> > 1. IIUC, this feature lets external modules create as many DSM\n> > segments as possible with different keys right? If yes, is capping the\n> > max number of DSMs a good idea?\n>\n> Why? Even if it is a good idea, what limit could we choose that wouldn't\n> be arbitrary and eventually cause problems down the road?\n\nI've tried with a shared memory structure size of 10GB on my\ndevelopment machine and I have seen an intermittent crash (I haven't\ngot a chance to dive deep into it). 
I think the shared memory that a\nnamed-DSM segment can get via this DSM registry feature depends on the\ntotal shared memory area that a typical DSM client can allocate today.\nI'm not sure it's worth it to limit the shared memory for this feature\ngiven that we don't limit the shared memory via startup hook.\n\n> > 2. Why does this feature have to deal with DSMs? Why not DSAs? With\n> > DSA and an API that gives the DSA handle to the external modules, the\n> > modules can dsa_allocate and dsa_free right? Do you see any problem\n> > with it?\n>\n> Please see upthread discussion [0].\n\nPer my understanding, this feature allows one to define and manage\nnamed-DSM segments.\n\n> > +typedef struct DSMRegistryEntry\n> > +{\n> > + char key[256];\n> >\n> > Key length 256 feels too much, can it be capped at NAMEDATALEN 64\n> > bytes (similar to some of the key lengths for hash_create()) to start\n> > with?\n>\n> Why is it too much?\n\nAre we expecting, for instance, a 128-bit UUID being used as a key and\nhence limiting it to a higher value 256 instead of just NAMEDATALEN?\nMy thoughts were around saving a few bytes of shared memory space that\ncan get higher when multiple modules using a DSM registry with\nmultiple DSM segments.\n\n> > 4. Do we need on_dsm_detach for each DSM created?\n>\n> Presently, I've designed this such that the DSM remains attached for the\n> lifetime of a session (and stays present even if all attached sessions go\n> away) to mimic what you get when you allocate shared memory during startup.\n> Perhaps there's a use-case for having backends do some cleanup before\n> exiting, in which case a detach_cb might be useful. IMHO we should wait\n> for a concrete use-case before adding too many bells and whistles, though.\n\nOn Thu, Dec 28, 2023 at 9:05 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Dec 27, 2023 at 01:53:27PM -0600, Nathan Bossart wrote:\n> > Here is a new version of the patch. In addition to various small changes,\n> > I've rewritten the test suite to use an integer and a lock, added a\n> > dsm_registry_find() function, and adjusted autoprewarm to use the registry.\n>\n> Here's a v3 that fixes a silly mistake in the test.\n\nSome comments on the v3 patch set:\n\n1. Typo: missing \"an\" before \"already-attached\".\n+ /* Return address of already-attached DSM registry entry. */\n\n2. Do you see any immediate uses of dsm_registry_find()? Especially\ngiven that dsm_registry_init_or_attach() does the necessary work\n(returns the DSM address if DSM already exists for a given key) for a\npostgres process. If there is no immediate use (the argument to remove\nthis function goes similar to detach_cb above), I'm sure we can\nintroduce it when there's one.\n\n3. I think we don't need miscadmin.h inclusion in autoprewarm.c, do we?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 29 Dec 2023 20:53:54 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Fri, Dec 29, 2023 at 08:53:54PM +0530, Bharath Rupireddy wrote:\n> With the use of dsm registry for pg_prewarm, do we need this\n> test_dsm_registry module at all? Because 0002 patch pretty much\n> demonstrates how to use the DSM registry. 
With this comment and my\n> earlier comment on incorporating the use of dsm registry in\n> worker_spi, I'm trying to make a point to reduce the code for this\n> feature. However, if others have different opinions, I'm okay with\n> having a separate test module.\n\nI don't have a strong opinion here, but I lean towards still having a\ndedicated test suite, if for no other reason that to guarantee some\ncoverage even if the other in-tree uses disappear.\n\n> I've tried with a shared memory structure size of 10GB on my\n> development machine and I have seen an intermittent crash (I haven't\n> got a chance to dive deep into it). I think the shared memory that a\n> named-DSM segment can get via this DSM registry feature depends on the\n> total shared memory area that a typical DSM client can allocate today.\n> I'm not sure it's worth it to limit the shared memory for this feature\n> given that we don't limit the shared memory via startup hook.\n\nI would be interested to see more details about the crashes you are seeing.\nIs this unique to the registry or an existing problem with DSM segments?\n\n> Are we expecting, for instance, a 128-bit UUID being used as a key and\n> hence limiting it to a higher value 256 instead of just NAMEDATALEN?\n> My thoughts were around saving a few bytes of shared memory space that\n> can get higher when multiple modules using a DSM registry with\n> multiple DSM segments.\n\nI'm not really expecting folks to use more than, say, 16 characters for the\nkey, but I intentionally set it much higher in case someone did have a\nreason to use longer keys. I'll lower it to 64 in the next revision unless\nanyone else objects.\n\n> 2. Do you see any immediate uses of dsm_registry_find()? Especially\n> given that dsm_registry_init_or_attach() does the necessary work\n> (returns the DSM address if DSM already exists for a given key) for a\n> postgres process. If there is no immediate use (the argument to remove\n> this function goes similar to detach_cb above), I'm sure we can\n> introduce it when there's one.\n\nSee [0]. FWIW I tend to agree that we could leave this one out for now.\n\n> 3. I think we don't need miscadmin.h inclusion in autoprewarm.c, do we?\n\nI'll take a look at this one.\n\n[0] https://postgr.es/m/ZYIu_JL-usgd3E1q%40paquier.xyz\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 10:20:54 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Jan 2, 2024 at 11:21 AM Nathan Bossart <[email protected]> wrote:\n> > Are we expecting, for instance, a 128-bit UUID being used as a key and\n> > hence limiting it to a higher value 256 instead of just NAMEDATALEN?\n> > My thoughts were around saving a few bytes of shared memory space that\n> > can get higher when multiple modules using a DSM registry with\n> > multiple DSM segments.\n>\n> I'm not really expecting folks to use more than, say, 16 characters for the\n> key, but I intentionally set it much higher in case someone did have a\n> reason to use longer keys. I'll lower it to 64 in the next revision unless\n> anyone else objects.\n\nThis surely doesn't matter either way. 
We're not expecting this hash\ntable to have more than a handful of entries; the difference between\n256, 64, and NAMEDATALEN won't even add up to kilobytes in any\nrealistic scenario, let along MB or GB.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 11:31:14 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Here's a new version of the patch set with Bharath's feedback addressed.\n\nOn Tue, Jan 02, 2024 at 11:31:14AM -0500, Robert Haas wrote:\n> On Tue, Jan 2, 2024 at 11:21 AM Nathan Bossart <[email protected]> wrote:\n>> > Are we expecting, for instance, a 128-bit UUID being used as a key and\n>> > hence limiting it to a higher value 256 instead of just NAMEDATALEN?\n>> > My thoughts were around saving a few bytes of shared memory space that\n>> > can get higher when multiple modules using a DSM registry with\n>> > multiple DSM segments.\n>>\n>> I'm not really expecting folks to use more than, say, 16 characters for the\n>> key, but I intentionally set it much higher in case someone did have a\n>> reason to use longer keys. I'll lower it to 64 in the next revision unless\n>> anyone else objects.\n> \n> This surely doesn't matter either way. We're not expecting this hash\n> table to have more than a handful of entries; the difference between\n> 256, 64, and NAMEDATALEN won't even add up to kilobytes in any\n> realistic scenario, let along MB or GB.\n\nRight.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 2 Jan 2024 16:49:07 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Jan 3, 2024 at 4:19 AM Nathan Bossart <[email protected]> wrote:\n>\n> Here's a new version of the patch set with Bharath's feedback addressed.\n\nThanks. The v4 patches look good to me except for a few minor\ncomments. I've marked it as RfC in CF.\n\n1. Update all the copyright to the new year. A run of\nsrc/tools/copyright.pl on the source tree will take care of it at some\npoint, but still it's good if we can update while we are here.\n+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group\n+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group\n+# Copyright (c) 2023, PostgreSQL Global Development Group\n+ * Copyright (c) 2023, PostgreSQL Global Development Group\n\n2. Typo: missing \"an\" before \"already-attached\".\n+ /* Return address of already-attached DSM registry entry. */\n\n3. Use NAMEDATALEN instead of 64?\n+ char key[64];\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 6 Jan 2024 19:34:15 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Sat, Jan 06, 2024 at 07:34:15PM +0530, Bharath Rupireddy wrote:\n> 1. Update all the copyright to the new year. A run of\n> src/tools/copyright.pl on the source tree will take care of it at some\n> point, but still it's good if we can update while we are here.\n\nDone.\n\n> 2. Typo: missing \"an\" before \"already-attached\".\n> + /* Return address of already-attached DSM registry entry. */\n\nDone.\n\n> 3. 
Use NAMEDATALEN instead of 64?\n> + char key[64];\n\nI kept this the same, as I didn't see any need to tie the key size to\nNAMEDATALEN.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 6 Jan 2024 10:35:16 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Sat, Jan 6, 2024 at 10:05 PM Nathan Bossart <[email protected]> wrote:\n>\n> I kept this the same, as I didn't see any need to tie the key size to\n> NAMEDATALEN.\n\nThanks. A fresh look at the v5 patches left me with the following thoughts:\n\n1. I think we need to add some notes about this new way of getting\nshared memory for external modules in the <title>Shared Memory and\nLWLocks</title> section in xfunc.sgml? This will at least tell there's\nanother way for external modules to get shared memory, not just with\nthe shmem_request_hook and shmem_startup_hook. What do you think?\n\n2. FWIW, I'd like to call this whole feature \"Support for named DSM\nsegments in Postgres\". Do you see anything wrong with this?\n\n3. IIUC, this feature eventually makes both shmem_request_hook and\nshmem_startup_hook pointless, no? Or put another way, what's the\nsignificance of shmem request and startup hooks in lieu of this new\nfeature? I think it's quite possible to get rid of the shmem request\nand startup hooks (of course, not now but at some point in future to\nnot break the external modules), because all the external modules can\nallocate and initialize the same shared memory via\ndsm_registry_init_or_attach and its init_callback. All the external\nmodules will then need to call dsm_registry_init_or_attach in their\n_PG_init callbacks and/or in their bg worker's main functions in case\nthe modules intend to start up bg workers. Am I right?\n\n4. With the understanding in comment #3, can pg_stat_statements and\ntest_slru.c get rid of its shmem_request_hook and shmem_startup_hook\nand use dsm_registry_init_or_attach? It's not that this patch need to\nremove them now, but just asking if there's something in there that\nmakes this new feature unusable.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 10:53:17 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 8, 2024 at 10:53 AM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Sat, Jan 6, 2024 at 10:05 PM Nathan Bossart <[email protected]>\n> wrote:\n> >\n> > I kept this the same, as I didn't see any need to tie the key size to\n> > NAMEDATALEN.\n>\n> Thanks. A fresh look at the v5 patches left me with the following thoughts:\n>\n> 1. I think we need to add some notes about this new way of getting\n> shared memory for external modules in the <title>Shared Memory and\n> LWLocks</title> section in xfunc.sgml? This will at least tell there's\n> another way for external modules to get shared memory, not just with\n> the shmem_request_hook and shmem_startup_hook. What do you think?\n>\n> 2. FWIW, I'd like to call this whole feature \"Support for named DSM\n> segments in Postgres\". Do you see anything wrong with this?\n>\n> 3. IIUC, this feature eventually makes both shmem_request_hook and\n> shmem_startup_hook pointless, no? 
Or put another way, what's the\n> significance of shmem request and startup hooks in lieu of this new\n> feature? I think it's quite possible to get rid of the shmem request\n> and startup hooks (of course, not now but at some point in future to\n> not break the external modules), because all the external modules can\n> allocate and initialize the same shared memory via\n> dsm_registry_init_or_attach and its init_callback. All the external\n> modules will then need to call dsm_registry_init_or_attach in their\n> _PG_init callbacks and/or in their bg worker's main functions in case\n> the modules intend to start up bg workers. Am I right?\n>\n> 4. With the understanding in comment #3, can pg_stat_statements and\n> test_slru.c get rid of its shmem_request_hook and shmem_startup_hook\n> and use dsm_registry_init_or_attach? It's not that this patch need to\n> remove them now, but just asking if there's something in there that\n> makes this new feature unusable.\n>\n\n+1, since doing for pg_prewarm, better to do for these modules as well.\n\nA minor comment for v5:\n\n+void *\n+dsm_registry_init_or_attach(const char *key, size_t size,\n\nI think the name could be simple as dsm_registry_init() like we use\nelsewhere e.g. ShmemInitHash() which doesn't say attach explicitly.\n\nSimilarly, I think dshash_find_or_insert() can be as simple as\ndshash_search() and\naccept HASHACTION like hash_search().\n\nRegards,\nAmul\n\nOn Mon, Jan 8, 2024 at 10:53 AM Bharath Rupireddy <[email protected]> wrote:On Sat, Jan 6, 2024 at 10:05 PM Nathan Bossart <[email protected]> wrote:\n>\n> I kept this the same, as I didn't see any need to tie the key size to\n> NAMEDATALEN.\n\nThanks. A fresh look at the v5 patches left me with the following thoughts:\n\n1. I think we need to add some notes about this new way of getting\nshared memory for external modules in the <title>Shared Memory and\nLWLocks</title> section in xfunc.sgml? This will at least tell there's\nanother way for external modules to get shared memory, not just with\nthe shmem_request_hook and shmem_startup_hook. What do you think?\n\n2. FWIW, I'd like to call this whole feature \"Support for named DSM\nsegments in Postgres\". Do you see anything wrong with this?\n\n3. IIUC, this feature eventually makes both shmem_request_hook and\nshmem_startup_hook pointless, no? Or put another way, what's the\nsignificance of shmem request and startup hooks in lieu of this new\nfeature? I think it's quite possible to get rid of the shmem request\nand startup hooks (of course, not now but at some point in future to\nnot break the external modules), because all the external modules can\nallocate and initialize the same shared memory via\ndsm_registry_init_or_attach and its init_callback. All the external\nmodules will then need to call dsm_registry_init_or_attach in their\n_PG_init callbacks and/or in their bg worker's main functions in case\nthe modules intend to start up bg workers. Am I right?\n\n4. With the understanding in comment #3, can pg_stat_statements and\ntest_slru.c get rid of its shmem_request_hook and shmem_startup_hook\nand use dsm_registry_init_or_attach? It's not that this patch need to\nremove them now, but just asking if there's something in there that\nmakes this new feature unusable. +1, since doing for pg_prewarm, better to do for these modules as well.A minor comment for v5:+void *+dsm_registry_init_or_attach(const char *key, size_t size,I think the name could be simple as dsm_registry_init() like we useelsewhere e.g. 
ShmemInitHash() which doesn't say attach explicitly.Similarly, I think dshash_find_or_insert() can be as simple as dshash_search() andaccept HASHACTION like hash_search().Regards,Amul", "msg_date": "Mon, 8 Jan 2024 11:13:42 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:\n> 1. I think we need to add some notes about this new way of getting\n> shared memory for external modules in the <title>Shared Memory and\n> LWLocks</title> section in xfunc.sgml? This will at least tell there's\n> another way for external modules to get shared memory, not just with\n> the shmem_request_hook and shmem_startup_hook. What do you think?\n\nGood call. I definitely think this stuff ought to be documented. After a\nquick read, I also wonder if it'd be worth spending some time refining that\nsection.\n\n> 2. FWIW, I'd like to call this whole feature \"Support for named DSM\n> segments in Postgres\". Do you see anything wrong with this?\n\nWhy do you feel it should be renamed? I don't see anything wrong with it,\nbut I also don't see any particular advantage with that name compared to\n\"dynamic shared memory registry.\"\n\n> 3. IIUC, this feature eventually makes both shmem_request_hook and\n> shmem_startup_hook pointless, no? Or put another way, what's the\n> significance of shmem request and startup hooks in lieu of this new\n> feature? I think it's quite possible to get rid of the shmem request\n> and startup hooks (of course, not now but at some point in future to\n> not break the external modules), because all the external modules can\n> allocate and initialize the same shared memory via\n> dsm_registry_init_or_attach and its init_callback. All the external\n> modules will then need to call dsm_registry_init_or_attach in their\n> _PG_init callbacks and/or in their bg worker's main functions in case\n> the modules intend to start up bg workers. Am I right?\n\nWell, modules might need to do a number of other things (e.g., adding\nhooks) that can presently only be done when preloaded, in which case I\ndoubt there's much benefit from switching to the DSM registry. I don't\nreally intend for it to replace the existing request/startup hooks, but\nyou're probably right that most, if not all, could use the registry\ninstead. IMHO this is well beyond the scope of this thread, though.\n\n> 4. With the understanding in comment #3, can pg_stat_statements and\n> test_slru.c get rid of its shmem_request_hook and shmem_startup_hook\n> and use dsm_registry_init_or_attach? It's not that this patch need to\n> remove them now, but just asking if there's something in there that\n> makes this new feature unusable.\n\nIt might be possible, but IIUC you'd still need a way to know whether the\nlibrary was preloaded, i.e., all the other necessary hooks were in place.\nIt's convenient to just be able to check whether the shared memory was set\nup for this purpose.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:16:27 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 08, 2024 at 11:13:42AM +0530, Amul Sul wrote:\n> +void *\n> +dsm_registry_init_or_attach(const char *key, size_t size,\n> \n> I think the name could be simple as dsm_registry_init() like we use\n> elsewhere e.g. 
ShmemInitHash() which doesn't say attach explicitly.\n\nThat seems reasonable to me.\n\n> Similarly, I think dshash_find_or_insert() can be as simple as\n> dshash_search() and\n> accept HASHACTION like hash_search().\n\nI'm not totally sure what you mean here. If you mean changing the dshash\nAPI, I'd argue that's a topic for another thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:18:48 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 8, 2024 at 10:48 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Mon, Jan 08, 2024 at 11:13:42AM +0530, Amul Sul wrote:\n> > +void *\n> > +dsm_registry_init_or_attach(const char *key, size_t size,\n> >\n> > I think the name could be simple as dsm_registry_init() like we use\n> > elsewhere e.g. ShmemInitHash() which doesn't say attach explicitly.\n>\n> That seems reasonable to me.\n>\n> > Similarly, I think dshash_find_or_insert() can be as simple as\n> > dshash_search() and\n> > accept HASHACTION like hash_search().\n>\n> I'm not totally sure what you mean here. If you mean changing the dshash\n> API, I'd argue that's a topic for another thread.\n>\n\nYes, you are correct. I didn't realize that existing code -- now sure, why\nwouldn't we implemented as the dynahash. Sorry for the noise.\n\nRegards,\nAmul\n\nOn Mon, Jan 8, 2024 at 10:48 PM Nathan Bossart <[email protected]> wrote:On Mon, Jan 08, 2024 at 11:13:42AM +0530, Amul Sul wrote:\n> +void *\n> +dsm_registry_init_or_attach(const char *key, size_t size,\n> \n> I think the name could be simple as dsm_registry_init() like we use\n> elsewhere e.g. ShmemInitHash() which doesn't say attach explicitly.\n\nThat seems reasonable to me.\n\n> Similarly, I think dshash_find_or_insert() can be as simple as\n> dshash_search() and\n> accept HASHACTION like hash_search().\n\nI'm not totally sure what you mean here.  If you mean changing the dshash\nAPI, I'd argue that's a topic for another thread. Yes, you are correct. I didn't realize that existing code -- now sure, whywouldn't we implemented as the dynahash. Sorry for the noise.Regards,Amul", "msg_date": "Tue, 9 Jan 2024 09:29:20 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On 9/1/2024 00:16, Nathan Bossart wrote:\n> On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:\n>> 1. I think we need to add some notes about this new way of getting\n>> shared memory for external modules in the <title>Shared Memory and\n>> LWLocks</title> section in xfunc.sgml? This will at least tell there's\n>> another way for external modules to get shared memory, not just with\n>> the shmem_request_hook and shmem_startup_hook. What do you think?\n+1. Maybe even more - in the section related to extensions, this \napproach to using shared data can be mentioned, too.\n\n>> 2. FWIW, I'd like to call this whole feature \"Support for named DSM\n>> segments in Postgres\". Do you see anything wrong with this?\n> \n> Why do you feel it should be renamed? I don't see anything wrong with it,\n> but I also don't see any particular advantage with that name compared to\n> \"dynamic shared memory registry.\"\nIt is not a big issue, I suppose. 
But for me personally (as not a native \nEnglish speaker), the label \"Named DSM segments\" seems more \nstraightforward to understand.\n> \n>> 3. IIUC, this feature eventually makes both shmem_request_hook and\n>> shmem_startup_hook pointless, no? Or put another way, what's the\n>> significance of shmem request and startup hooks in lieu of this new\n>> feature? I think it's quite possible to get rid of the shmem request\n>> and startup hooks (of course, not now but at some point in future to\n>> not break the external modules), because all the external modules can\n>> allocate and initialize the same shared memory via\n>> dsm_registry_init_or_attach and its init_callback. All the external\n>> modules will then need to call dsm_registry_init_or_attach in their\n>> _PG_init callbacks and/or in their bg worker's main functions in case\n>> the modules intend to start up bg workers. Am I right?\n> \n> Well, modules might need to do a number of other things (e.g., adding\n> hooks) that can presently only be done when preloaded, in which case I\n> doubt there's much benefit from switching to the DSM registry. I don't\n> really intend for it to replace the existing request/startup hooks, but\n> you're probably right that most, if not all, could use the registry\n> instead. IMHO this is well beyond the scope of this thread, though.\n+1, it may be a many reasons to use these hooks.\n\n >> 3. Use NAMEDATALEN instead of 64?\n >> + char key[64];\n > I kept this the same, as I didn't see any need to tie the key size to\n > NAMEDATALEN.\nIMO, we should avoid magic numbers whenever possible. Current logic \naccording to which first users of this feature will be extensions \nnaturally bonds this size to the size of the 'name' type.\n\nAnd one more point. I think the commit already deserves a more detailed \ncommit message.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 11 Jan 2024 09:50:19 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Thu, Jan 11, 2024 at 09:50:19AM +0700, Andrei Lepikhov wrote:\n> On 9/1/2024 00:16, Nathan Bossart wrote:\n>> On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:\n>> > 2. FWIW, I'd like to call this whole feature \"Support for named DSM\n>> > segments in Postgres\". Do you see anything wrong with this?\n>> \n>> Why do you feel it should be renamed? I don't see anything wrong with it,\n>> but I also don't see any particular advantage with that name compared to\n>> \"dynamic shared memory registry.\"\n> It is not a big issue, I suppose. But for me personally (as not a native\n> English speaker), the label \"Named DSM segments\" seems more straightforward\n> to understand.\n\nThat is good to know, thanks. I see that it would also align better with\nRequestNamedLWLockTranche/GetNamedLWLockTranche.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 21:22:37 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 08, 2024 at 11:16:27AM -0600, Nathan Bossart wrote:\n> On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:\n>> 1. I think we need to add some notes about this new way of getting\n>> shared memory for external modules in the <title>Shared Memory and\n>> LWLocks</title> section in xfunc.sgml? 
This will at least tell there's\n>> another way for external modules to get shared memory, not just with\n>> the shmem_request_hook and shmem_startup_hook. What do you think?\n> \n> Good call. I definitely think this stuff ought to be documented. After a\n> quick read, I also wonder if it'd be worth spending some time refining that\n> section.\n\n+1. It would be a second thing to point at autoprewarm.c in the docs\nas an extra pointer that can be fed to users reading the docs.\n\n>> 3. IIUC, this feature eventually makes both shmem_request_hook and\n>> shmem_startup_hook pointless, no? Or put another way, what's the\n>> significance of shmem request and startup hooks in lieu of this new\n>> feature? I think it's quite possible to get rid of the shmem request\n>> and startup hooks (of course, not now but at some point in future to\n>> not break the external modules), because all the external modules can\n>> allocate and initialize the same shared memory via\n>> dsm_registry_init_or_attach and its init_callback. All the external\n>> modules will then need to call dsm_registry_init_or_attach in their\n>> _PG_init callbacks and/or in their bg worker's main functions in case\n>> the modules intend to start up bg workers. Am I right?\n> \n> Well, modules might need to do a number of other things (e.g., adding\n> hooks) that can presently only be done when preloaded, in which case I\n> doubt there's much benefit from switching to the DSM registry. I don't\n> really intend for it to replace the existing request/startup hooks, but\n> you're probably right that most, if not all, could use the registry\n> instead. IMHO this is well beyond the scope of this thread, though.\n\nEven if that's not in the scope of this thread, just removing these\nhooks would break a lot of out-of-core things, and they still have a\nlot of value when extensions expect to always be loaded with shared.\nThey don't cost in maintenance at this stage.\n--\nMichael", "msg_date": "Thu, 11 Jan 2024 14:12:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:42 AM Michael Paquier <[email protected]> wrote:\n>\n> >> 3. IIUC, this feature eventually makes both shmem_request_hook and\n> >> shmem_startup_hook pointless, no? Or put another way, what's the\n> >> significance of shmem request and startup hooks in lieu of this new\n> >> feature? I think it's quite possible to get rid of the shmem request\n> >> and startup hooks (of course, not now but at some point in future to\n> >> not break the external modules), because all the external modules can\n> >> allocate and initialize the same shared memory via\n> >> dsm_registry_init_or_attach and its init_callback. All the external\n> >> modules will then need to call dsm_registry_init_or_attach in their\n> >> _PG_init callbacks and/or in their bg worker's main functions in case\n> >> the modules intend to start up bg workers. Am I right?\n> >\n> > Well, modules might need to do a number of other things (e.g., adding\n> > hooks) that can presently only be done when preloaded, in which case I\n> > doubt there's much benefit from switching to the DSM registry. I don't\n> > really intend for it to replace the existing request/startup hooks, but\n> > you're probably right that most, if not all, could use the registry\n> > instead. 
IMHO this is well beyond the scope of this thread, though.\n>\n> Even if that's not in the scope of this thread, just removing these\n> hooks would break a lot of out-of-core things, and they still have a\n> lot of value when extensions expect to always be loaded with shared.\n> They don't cost in maintenance at this stage.\n\nAdding some notes in the docs on when exactly one needs to use shmem\nhooks and the named DSM segments can help greatly.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:11:27 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Here's a new version of the patch set in which I've attempted to address\nthe feedback in this thread. Note that 0001 is being tracked in a separate\nthread [0], but it is basically a prerequisite for adding the documentation\nfor this feature, so that's why I've also added it here.\n\n[0] https://postgr.es/m/20240112041430.GA3557928%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 Jan 2024 11:21:52 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "At 2024-01-12 11:21:52 -0600, [email protected] wrote:\n>\n> From: Nathan Bossart <[email protected]>\n> Date: Thu, 11 Jan 2024 21:55:25 -0600\n> Subject: [PATCH v6 1/3] reorganize shared memory and lwlocks documentation\n> \n> ---\n> doc/src/sgml/xfunc.sgml | 182 +++++++++++++++++++++++++---------------\n> 1 file changed, 114 insertions(+), 68 deletions(-)\n> \n> diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml\n> index 89116ae74c..0ba52b41d4 100644\n> --- a/doc/src/sgml/xfunc.sgml\n> +++ b/doc/src/sgml/xfunc.sgml\n> @@ -3397,90 +3397,136 @@ CREATE FUNCTION make_array(anyelement) RETURNS anyarray\n> </sect2>\n> \n> […]\n> - from your <literal>shmem_request_hook</literal>.\n> - </para>\n> - <para>\n> - LWLocks are reserved by calling:\n> + Each backend sould obtain a pointer to the reserved shared memory by\n\nsould → should\n\n> + Add-ins can reserve LWLocks on server startup. Like with shared memory,\n\n(Would \"As with shared memory\" read better? Maybe, but then again maybe\nit should be left alone because you also write \"Unlike with\" elsewhere.)\n\n-- Abhijit\n\n\n", "msg_date": "Fri, 12 Jan 2024 23:13:46 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Fri, Jan 12, 2024 at 11:13:46PM +0530, Abhijit Menon-Sen wrote:\n> At 2024-01-12 11:21:52 -0600, [email protected] wrote:\n>> + Each backend sould obtain a pointer to the reserved shared memory by\n> \n> sould → should\n\nD'oh. Thanks.\n\n>> + Add-ins can reserve LWLocks on server startup. Like with shared memory,\n> \n> (Would \"As with shared memory\" read better? 
Maybe, but then again maybe\n> it should be left alone because you also write \"Unlike with\" elsewhere.)\n\nI think \"As with shared memory...\" sounds better here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:45:55 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Fri, Jan 12, 2024 at 01:45:55PM -0600, Nathan Bossart wrote:\n> On Fri, Jan 12, 2024 at 11:13:46PM +0530, Abhijit Menon-Sen wrote:\n>> At 2024-01-12 11:21:52 -0600, [email protected] wrote:\n>>> + Each backend sould obtain a pointer to the reserved shared memory by\n>> \n>> sould → should\n> \n> D'oh. Thanks.\n> \n>>> + Add-ins can reserve LWLocks on server startup. Like with shared memory,\n>> \n>> (Would \"As with shared memory\" read better? Maybe, but then again maybe\n>> it should be left alone because you also write \"Unlike with\" elsewhere.)\n> \n> I think \"As with shared memory...\" sounds better here.\n\nHere is a new version of the patch set with these changes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 13 Jan 2024 15:41:24 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Sun, Jan 14, 2024 at 3:11 AM Nathan Bossart <[email protected]> wrote:\n>\n> Here is a new version of the patch set with these changes.\n\nThanks. Here are some comments on v7-0002.\n\n1.\n+GetNamedDSMSegment(const char *name, size_t size,\n+ void (*init_callback) (void *ptr), bool *found)\n+{\n+\n+ Assert(name);\n+ Assert(size);\n+ Assert(found);\n\nI've done some input validation to GetNamedDSMSegment():\n\nWith an empty key name (\"\"), it works, but do we need that in\npractice? Can't we error out saying the name can't be empty?\n\nWith a size 0, an assertion is fine, but in production (without the\nassertion), I'm seeing the following errors.\n\n2024-01-16 04:49:28.961 UTC client backend[864369]\npg_regress/test_dsm_registry ERROR: could not resize shared memory\nsegment \"/PostgreSQL.3701090278\" to 0 bytes: Invalid argument\n2024-01-16 04:49:29.264 UTC postmaster[864357] LOG: server process\n(PID 864370) was terminated by signal 11: Segmentation fault\n\nI think it's better for GetNamedDSMSegment() to error out on empty\n'name' and size 0. This makes the user-facing function\nGetNamedDSMSegment more concrete.\n\n2.\n+void *\n+GetNamedDSMSegment(const char *name, size_t size,\n+ void (*init_callback) (void *ptr), bool *found)\n\n+ Assert(found);\n\nWhy is input parameter 'found' necessary to be passed by the caller?\nNeither the test module added, nor the pg_prewarm is using the found\nvariable. The function will anyway create the DSM segment if one with\nthe given name isn't found. IMO, found is an optional parameter for\nthe caller. So, the assert(found) isn't necessary.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 10:28:29 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Jan 16, 2024 at 10:28:29AM +0530, Bharath Rupireddy wrote:\n> I think it's better for GetNamedDSMSegment() to error out on empty\n> 'name' and size 0. 
This makes the user-facing function\n> GetNamedDSMSegment more concrete.\n\nAgreed, thanks for the suggestion.\n\n> +void *\n> +GetNamedDSMSegment(const char *name, size_t size,\n> + void (*init_callback) (void *ptr), bool *found)\n> \n> + Assert(found);\n> \n> Why is input parameter 'found' necessary to be passed by the caller?\n> Neither the test module added, nor the pg_prewarm is using the found\n> variable. The function will anyway create the DSM segment if one with\n> the given name isn't found. IMO, found is an optional parameter for\n> the caller. So, the assert(found) isn't necessary.\n\nThe autoprewarm change (0003) does use this variable. I considered making\nit optional (i.e., you could pass in NULL if you didn't want it), but I\ndidn't feel like the extra code in GetNamedDSMSegment() to allow this was\nworth it so that callers could avoid creating a single bool.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 16 Jan 2024 10:07:48 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Tue, Jan 16, 2024 at 9:37 PM Nathan Bossart <[email protected]> wrote:\n>\n> The autoprewarm change (0003) does use this variable. I considered making\n> it optional (i.e., you could pass in NULL if you didn't want it), but I\n> didn't feel like the extra code in GetNamedDSMSegment() to allow this was\n> worth it so that callers could avoid creating a single bool.\n\nI'm okay with it.\n\nThe v8 patches look good to me.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 08:00:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Wed, Jan 17, 2024 at 08:00:00AM +0530, Bharath Rupireddy wrote:\n> The v8 patches look good to me.\n\nCommitted. Thanks everyone for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 14:46:36 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Committed. Thanks everyone for reviewing!\n\nCoverity complained about this:\n\n*** CID 1586660: Null pointer dereferences (NULL_RETURNS)\n/srv/coverity/git/pgsql-git/postgresql/src/backend/storage/ipc/dsm_registry.c: 185 in GetNamedDSMSegment()\n179 \t}\n180 \telse if (!dsm_find_mapping(entry->handle))\n181 \t{\n182 \t\t/* Attach to existing segment. */\n183 \t\tdsm_segment *seg = dsm_attach(entry->handle);\n184 \n>>> CID 1586660: Null pointer dereferences (NULL_RETURNS)\n>>> Dereferencing a pointer that might be \"NULL\" \"seg\" when calling \"dsm_pin_mapping\".\n185 \t\tdsm_pin_mapping(seg);\n186 \t\tret = dsm_segment_address(seg);\n187 \t}\n188 \telse\n189 \t{\n190 \t\t/* Return address of an already-attached segment. */\n\nI think it's right --- the comments for dsm_attach explicitly\npoint out that a NULL return is possible. 
You need to handle\nthat scenario in some way other than SIGSEGV.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jan 2024 11:21:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Sun, Jan 21, 2024 at 11:21:46AM -0500, Tom Lane wrote:\n> Coverity complained about this:\n> \n> *** CID 1586660: Null pointer dereferences (NULL_RETURNS)\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/storage/ipc/dsm_registry.c: 185 in GetNamedDSMSegment()\n> 179 \t}\n> 180 \telse if (!dsm_find_mapping(entry->handle))\n> 181 \t{\n> 182 \t\t/* Attach to existing segment. */\n> 183 \t\tdsm_segment *seg = dsm_attach(entry->handle);\n> 184 \n>>>> CID 1586660: Null pointer dereferences (NULL_RETURNS)\n>>>> Dereferencing a pointer that might be \"NULL\" \"seg\" when calling \"dsm_pin_mapping\".\n> 185 \t\tdsm_pin_mapping(seg);\n> 186 \t\tret = dsm_segment_address(seg);\n> 187 \t}\n> 188 \telse\n> 189 \t{\n> 190 \t\t/* Return address of an already-attached segment. */\n> \n> I think it's right --- the comments for dsm_attach explicitly\n> point out that a NULL return is possible. You need to handle\n> that scenario in some way other than SIGSEGV.\n\nOops. I've attached an attempt at fixing this. I took the opportunity to\nclean up the surrounding code a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 21 Jan 2024 16:13:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Sun, Jan 21, 2024 at 04:13:20PM -0600, Nathan Bossart wrote:\n> Oops. I've attached an attempt at fixing this. I took the opportunity to\n> clean up the surrounding code a bit.\n\nThanks for the patch. Your proposed attempt looks correct to me with\nan ERROR when no segments are found..\n--\nMichael", "msg_date": "Mon, 22 Jan 2024 16:52:52 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 22, 2024 at 3:43 AM Nathan Bossart <[email protected]> wrote:\n>\n> Oops. I've attached an attempt at fixing this. I took the opportunity to\n> clean up the surrounding code a bit.\n\nThe code looks cleaner and readable with the patch. All the call sites\nare taking care of dsm_attach returning NULL value. So, the attached\npatch looks good to me.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:00:48 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: introduce dynamic shared memory registry" }, { "msg_contents": "On Mon, Jan 22, 2024 at 05:00:48PM +0530, Bharath Rupireddy wrote:\n> On Mon, Jan 22, 2024 at 3:43 AM Nathan Bossart <[email protected]> wrote:\n>> Oops. I've attached an attempt at fixing this. I took the opportunity to\n>> clean up the surrounding code a bit.\n> \n> The code looks cleaner and readable with the patch. All the call sites\n> are taking care of dsm_attach returning NULL value. So, the attached\n> patch looks good to me.\n\nCommitted. 
Thanks for the report and the reviews.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 20:46:52 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: introduce dynamic shared memory registry" } ]
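To make the registry API discussed in the thread above concrete, here is a minimal sketch of how an extension might use it. The GetNamedDSMSegment() signature is the one quoted in the thread; the struct and function names (MyExtState, myext_init_state, myext_attach), the tranche name, and the assumption that the declarations live in storage/dsm_registry.h are illustrative choices for this sketch, not code taken from the committed patch.

#include "postgres.h"

#include "storage/dsm_registry.h"
#include "storage/lwlock.h"

typedef struct MyExtState
{
    LWLock      lock;
    int         counter;
} MyExtState;

static MyExtState *myext_state = NULL;

/* Runs only in the backend that first creates the segment. */
static void
myext_init_state(void *ptr)
{
    MyExtState *state = (MyExtState *) ptr;

    LWLockInitialize(&state->lock, LWLockNewTrancheId());
    state->counter = 0;
}

/*
 * Create the named segment on first use, or attach to it if another
 * backend already created it.  No shmem_request_hook and no
 * shared_preload_libraries entry is needed, which is the point of the
 * registry: any backend can do this lazily.
 */
static void
myext_attach(void)
{
    bool        found;

    myext_state = (MyExtState *)
        GetNamedDSMSegment("myext_state", sizeof(MyExtState),
                           myext_init_state, &found);
    LWLockRegisterTranche(myext_state->lock.tranche, "myext");
}

As discussed above, the found flag reports whether the segment already existed; callers that need to distinguish first-time creation (as autoprewarm does) can test it, while others can simply ignore it as in this sketch.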
[ { "msg_contents": "We have backtrace support for server errors. You can activate that \neither by setting backtrace_functions or by explicitly attaching \nerrbacktrace() to an ereport() call.\n\nI would like an additional mode that essentially triggers a backtrace \nanytime elog() (for internal errors) is called. This is not well \ncovered by backtrace_functions, because there are many equally-worded \nlow-level errors in many functions. And if you find out where the error \nis, then you need to manually rewrite the elog() to ereport() to attach \nthe errbacktrace(), which is annoying. Having a backtrace automatically \non every elog() call would be very helpful during development for \nvarious kinds of common errors from palloc, syscache, node support, etc.\n\nI think the implementation would be very simple, something like\n\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex 6aeb855e49..45d40abe92 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -498,9 +498,11 @@ errfinish(const char *filename, int lineno, const \nchar *funcname)\n\n /* Collect backtrace, if enabled and we didn't already */\n if (!edata->backtrace &&\n- edata->funcname &&\n- backtrace_functions &&\n- matches_backtrace_functions(edata->funcname))\n+ ((edata->funcname &&\n+ backtrace_functions &&\n+ matches_backtrace_functions(edata->funcname)) ||\n+ (edata->sqlerrcode == ERRCODE_INTERNAL_ERROR &&\n+ backtrace_on_internal_error)))\n set_backtrace(edata, 2);\n\n /*\n\nwhere backtrace_on_internal_error would be a GUC variable.\n\nWould others find this useful? Any other settings or variants in this \narea that should be considered while we're here?\n\n\n", "msg_date": "Tue, 5 Dec 2023 11:55:05 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 05, 2023 at 11:55:05AM +0100, Peter Eisentraut wrote:\n> Would others find this useful?\n\nYes. I think I would use this pretty frequently.\n\n> Any other settings or variants in this area\n> that should be considered while we're here?\n\nIMO it would be nice to have a way to turn on backtraces for everything, or\nat least everything above a certain logging level. That would primarily be\nuseful for when you don't know exactly which C function is producing the\nerror.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 11:40:34 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 5, 2023 at 12:40 PM Nathan Bossart <[email protected]> wrote:\n> On Tue, Dec 05, 2023 at 11:55:05AM +0100, Peter Eisentraut wrote:\n> > Would others find this useful?\n>\n> Yes. I think I would use this pretty frequently.\n\nI think we should consider unconditionally emitting a backtrace when\nan elog() is hit, instead of requiring a GUC. Or at least any elog()\nthat's not at a DEBUGn level. 
If the extra output annoys anybody, that\nmeans they're regularly hitting an elog(), and it ought to be promoted\nto ereport().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 13:16:22 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 05, 2023 at 01:16:22PM -0500, Robert Haas wrote:\n> I think we should consider unconditionally emitting a backtrace when\n> an elog() is hit, instead of requiring a GUC. Or at least any elog()\n> that's not at a DEBUGn level. If the extra output annoys anybody, that\n> means they're regularly hitting an elog(), and it ought to be promoted\n> to ereport().\n\nPerhaps this should be a GUC that defaults to LOG or ERROR.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 12:28:45 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:28 PM Nathan Bossart <[email protected]> wrote:\n> On Tue, Dec 05, 2023 at 01:16:22PM -0500, Robert Haas wrote:\n> > I think we should consider unconditionally emitting a backtrace when\n> > an elog() is hit, instead of requiring a GUC. Or at least any elog()\n> > that's not at a DEBUGn level. If the extra output annoys anybody, that\n> > means they're regularly hitting an elog(), and it ought to be promoted\n> > to ereport().\n>\n> Perhaps this should be a GUC that defaults to LOG or ERROR.\n\nWhy?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 13:30:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, 5 Dec 2023 at 19:30, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Dec 5, 2023 at 1:28 PM Nathan Bossart <[email protected]> wrote:\n> > On Tue, Dec 05, 2023 at 01:16:22PM -0500, Robert Haas wrote:\n> > > I think we should consider unconditionally emitting a backtrace when\n> > > an elog() is hit, instead of requiring a GUC. Or at least any elog()\n> > > that's not at a DEBUGn level. If the extra output annoys anybody, that\n> > > means they're regularly hitting an elog(), and it ought to be promoted\n> > > to ereport().\n> >\n> > Perhaps this should be a GUC that defaults to LOG or ERROR.\n>\n> Why?\n\nI can't speak for Nathan, but my reason would be that I'm not in the\nhabit to attach a debugger to my program to keep track of state\nprogression, but instead use elog() during patch development. 
I'm not\nsuper stoked for getting my developmental elog(LOG)-s spammed with\nstack traces, so I'd want to set this at least to ERROR, while in\nproduction LOG could be fine.\n\nSimilarly, there are probably extensions that do not use ereport()\ndirectly, but instead use elog(), because of reasons like 'not\nplanning on doing translations' and 'elog() is the easier API'.\nForcing a change over to ereport because of stack trace spam in logs\ncaused by elog would be quite annoying.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 5 Dec 2023 19:47:25 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:47 PM Matthias van de Meent\n<[email protected]> wrote:\n> I can't speak for Nathan, but my reason would be that I'm not in the\n> habit to attach a debugger to my program to keep track of state\n> progression, but instead use elog() during patch development. I'm not\n> super stoked for getting my developmental elog(LOG)-s spammed with\n> stack traces, so I'd want to set this at least to ERROR, while in\n> production LOG could be fine.\n>\n> Similarly, there are probably extensions that do not use ereport()\n> directly, but instead use elog(), because of reasons like 'not\n> planning on doing translations' and 'elog() is the easier API'.\n> Forcing a change over to ereport because of stack trace spam in logs\n> caused by elog would be quite annoying.\n\nThat does seem like a fair complaint. But I also think it would be\nreally good if we had something that could be enabled unconditionally\ninstead of via a GUC... because if it's gated by aa GUC then it often\nwon't be there when you need it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 14:59:46 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 05, 2023 at 07:47:25PM +0100, Matthias van de Meent wrote:\n> On Tue, 5 Dec 2023 at 19:30, Robert Haas <[email protected]> wrote:\n>> On Tue, Dec 5, 2023 at 1:28 PM Nathan Bossart <[email protected]> wrote:\n>> > Perhaps this should be a GUC that defaults to LOG or ERROR.\n>>\n>> Why?\n\nSorry, I should've explained why in my message.\n\n> I can't speak for Nathan, but my reason would be that I'm not in the\n> habit to attach a debugger to my program to keep track of state\n> progression, but instead use elog() during patch development. I'm not\n> super stoked for getting my developmental elog(LOG)-s spammed with\n> stack traces, so I'd want to set this at least to ERROR, while in\n> production LOG could be fine.\n> \n> Similarly, there are probably extensions that do not use ereport()\n> directly, but instead use elog(), because of reasons like 'not\n> planning on doing translations' and 'elog() is the easier API'.\n> Forcing a change over to ereport because of stack trace spam in logs\n> caused by elog would be quite annoying.\n\nMy main concern was forcing extra logging that users won't have a great way\nto turn off (except for maybe raising log_min_messages or something).\nAlso, it'd give us a way to slowly ramp up backtraces over a few years\nwithout suddenly spamming everyones logs in v17. For example, maybe this\nGUC defaults to PANIC or FATAL in v17. Once we've had a chance to address\nany common backtraces there, we could bump it to ERROR or WARNING in v18.\nAnd so on. 
If we just flood everyone's logs immediately, I worry that\nfolks will just turn it off, and we won't get the reports we are hoping\nfor.\n\nI know we already have so many GUCs and would like to avoid adding new ones\nwhen possible. Maybe this is one that could eventually be retired as we\ngain confidence that it won't obliterate the log files.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 14:06:10 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Tue, 5 Dec 2023 at 19:30, Robert Haas <[email protected]> wrote:\n>>> I think we should consider unconditionally emitting a backtrace when\n>>> an elog() is hit, instead of requiring a GUC.\n\n>> Perhaps this should be a GUC that defaults to LOG or ERROR.\n\n> I can't speak for Nathan, but my reason would be that I'm not in the\n> habit to attach a debugger to my program to keep track of state\n> progression, but instead use elog() during patch development. I'm not\n> super stoked for getting my developmental elog(LOG)-s spammed with\n> stack traces, so I'd want to set this at least to ERROR, while in\n> production LOG could be fine.\n\nYeah, I would not be happy either with elog(LOG) suddenly getting\n10x more verbose. I think it might be okay to unconditionally do this\nwhen elevel >= ERROR, though.\n\n(At the same time, I don't have a problem with the idea of a GUC\ncontrolling the minimum elevel to cause the report. Other people\nmight have other use-cases than I do.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Dec 2023 15:08:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On 05.12.23 11:55, Peter Eisentraut wrote:\n> I think the implementation would be very simple, something like\n> \n> diff --git a/src/backend/utils/error/elog.c \n> b/src/backend/utils/error/elog.c\n> index 6aeb855e49..45d40abe92 100644\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -498,9 +498,11 @@ errfinish(const char *filename, int lineno, const \n> char *funcname)\n> \n>     /* Collect backtrace, if enabled and we didn't already */\n>     if (!edata->backtrace &&\n> -       edata->funcname &&\n> -       backtrace_functions &&\n> -       matches_backtrace_functions(edata->funcname))\n> +       ((edata->funcname &&\n> +         backtrace_functions &&\n> +         matches_backtrace_functions(edata->funcname)) ||\n> +        (edata->sqlerrcode == ERRCODE_INTERNAL_ERROR &&\n> +         backtrace_on_internal_error)))\n>         set_backtrace(edata, 2);\n> \n>     /*\n> \n> where backtrace_on_internal_error would be a GUC variable.\n\nIt looks like many people found this idea worthwhile.\n\nSeveral people asked for a way to set the minimum log level for this \ntreatment.\n\nSomething else to note: I wrote the above code to check the error code; \nit doesn't check whether the original code write elog() or ereport(). \nThere are some internal errors that are written as ereport() now. \nOthers might be changed from time to time; until now there would have \nbeen no external effect from this. I think it would be weird to \nintroduce a difference between these forms now.\n\nBut then, elog() only uses the error code ERRCODE_INTERNAL_ERROR if the \nerror level is >=ERROR. 
So this already excludes everything below.\n\nDo people want a way to distinguish ERROR/FATAL/PANIC?\n\nOr do people want a way to enable backtraces for elog(LOG)? This didn't \nlook too interesting to me. (Many current elog(LOG) calls are behind \nadditional guards like TRACE_SORT or LOCK_DEBUG.)\n\nIf neither of these two are very interesting, then the above code would \nalready appear to do what was asked.\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 11:00:11 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Something else to note: I wrote the above code to check the error code; \n> it doesn't check whether the original code write elog() or ereport(). \n> There are some internal errors that are written as ereport() now. \n> Others might be changed from time to time; until now there would have \n> been no external effect from this. I think it would be weird to \n> introduce a difference between these forms now.\n\nYeah, that was bothering me too. IIRC, elog is already documented\nas being *exactly equivalent* to ereport with a minimal set of\noptions. I don't think we should break that equivalence. So I\nagree with driving this off the stated-or-imputed errcode rather\nthan which function is called.\n\n> Do people want a way to distinguish ERROR/FATAL/PANIC?\n> Or do people want a way to enable backtraces for elog(LOG)?\n\nPersonally I don't see a need for either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 09:22:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Here is a patch to play with.\n\nI also found a related typo.\n\nOne possible question for discussion is whether the default for this \nshould be off, on, or possibly something like on-in-assert-builds. \n(Personally, I'm happy to turn it on myself at run time, but everyone \nhas different workflows.)", "msg_date": "Fri, 8 Dec 2023 14:21:32 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Here is a patch to play with.\n\nDidn't read the patch yet, but ...\n\n> One possible question for discussion is whether the default for this \n> should be off, on, or possibly something like on-in-assert-builds. \n> (Personally, I'm happy to turn it on myself at run time, but everyone \n> has different workflows.)\n\n... there was already opinion upthread that this should be on by\ndefault, which I agree with. 
You shouldn't be hitting cases like\nthis commonly (if so, they're bugs to fix or the errcode should be\nrethought), and the failure might be pretty hard to reproduce.\n\nI'm not really sold that we even need YA GUC, for that matter.\nHow about committing the behavior without a GUC, and then\nback-filling one if we get pushback?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 10:05:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 10:05:09 -0500, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > One possible question for discussion is whether the default for this\n> > should be off, on, or possibly something like on-in-assert-builds.\n> > (Personally, I'm happy to turn it on myself at run time, but everyone\n> > has different workflows.)\n>\n> ... there was already opinion upthread that this should be on by\n> default, which I agree with. You shouldn't be hitting cases like\n> this commonly (if so, they're bugs to fix or the errcode should be\n> rethought), and the failure might be pretty hard to reproduce.\n\nFWIW, I did some analysis on aggregated logs on a larger number of machines,\nand it does look like that'd be a measurable increase in log volume. There are\na few voluminous internal errors in core, but the bigger issue is\nextensions. They are typically much less disciplined about assigning error\ncodes than core PG is.\n\nI've been wondering about doing some macro hackery to inform elog.c about\nwhether a log message is from core or an extension. It might even be possible\nto identify the concrete extension, e.g. by updating the contents of\nPG_MODULE_MAGIC during module loading, and referencing that.\n\n\nBased on the aforementioned data, the most common, in-core, log messages\nwithout assigned error codes are:\n\ncould not accept SSL connection: %m - with zero errno\narchive command was terminated by signal %d: %s\ncould not send data to client: %m - with zero errno\ncache lookup failed for type %u\narchive command failed with exit code %d\ntuple concurrently updated\ncould not restore file \"%s\" from archive: %s\narchive command was terminated by signal %d: %s\n%s at file \"%s\" line %u\ninvalid memory alloc request size %zu\ncould not send data to client: %m\ncould not open directory \"%s\": %m - errno indicating ENOMEM\ncould not write init file\nout of relcache_callback_list slots\nonline backup was canceled, recovery cannot continue\nrequested timeline %u does not contain minimum recovery point %X/%X on timeline %u\n\n\nThere were a lot more in older PG versions, I tried to filter those out.\n\n\nI'm a bit confused about the huge number of \"could not accept SSL connection:\n%m\" with a zero errno. I guess we must be clearing errno somehow, but I don't\nimmediately see where. Or perhaps we need to actually look at what\nSSL_get_error() returns?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Dec 2023 10:14:51 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-08 10:05:09 -0500, Tom Lane wrote:\n>> ... there was already opinion upthread that this should be on by\n>> default, which I agree with. 
You shouldn't be hitting cases like\n>> this commonly (if so, they're bugs to fix or the errcode should be\n>> rethought), and the failure might be pretty hard to reproduce.\n\n> FWIW, I did some analysis on aggregated logs on a larger number of machines,\n> and it does look like that'd be a measurable increase in log volume. There are\n> a few voluminous internal errors in core, but the bigger issue is\n> extensions. They are typically much less disciplined about assigning error\n> codes than core PG is.\n\nWell, I don't see much wrong with making a push to assign error codes\nto more calls. We've had other discussions about doing that.\nCertainly these SSL failures are not \"internal\" errors.\n\n> could not accept SSL connection: %m - with zero errno\n> ...\n> I'm a bit confused about the huge number of \"could not accept SSL connection:\n> %m\" with a zero errno. I guess we must be clearing errno somehow, but I don't\n> immediately see where. Or perhaps we need to actually look at what\n> SSL_get_error() returns?\n\nHmm, don't suppose you have a way to reproduce that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 13:23:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 13:23:50 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-12-08 10:05:09 -0500, Tom Lane wrote:\n> >> ... there was already opinion upthread that this should be on by\n> >> default, which I agree with. You shouldn't be hitting cases like\n> >> this commonly (if so, they're bugs to fix or the errcode should be\n> >> rethought), and the failure might be pretty hard to reproduce.\n>\n> > FWIW, I did some analysis on aggregated logs on a larger number of machines,\n> > and it does look like that'd be a measurable increase in log volume. There are\n> > a few voluminous internal errors in core, but the bigger issue is\n> > extensions. They are typically much less disciplined about assigning error\n> > codes than core PG is.\n>\n> Well, I don't see much wrong with making a push to assign error codes\n> to more calls.\n\nOh, very much agreed. But I suspect we won't quickly do the same for\nout-of-core extensions...\n\n\n> Certainly these SSL failures are not \"internal\" errors.\n>\n> > could not accept SSL connection: %m - with zero errno\n> > ...\n> > I'm a bit confused about the huge number of \"could not accept SSL connection:\n> > %m\" with a zero errno. I guess we must be clearing errno somehow, but I don't\n> > immediately see where. Or perhaps we need to actually look at what\n> > SSL_get_error() returns?\n>\n> Hmm, don't suppose you have a way to reproduce that?\n\nAfter a bit of trying, yes. I put an abort() into pgtls_open_client(), after\ninitialize_SSL(). Connecting does result in:\n\nLOG: could not accept SSL connection: Success\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Dec 2023 10:34:40 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-08 13:23:50 -0500, Tom Lane wrote:\n>> Hmm, don't suppose you have a way to reproduce that?\n\n> After a bit of trying, yes. I put an abort() into pgtls_open_client(), after\n> initialize_SSL(). Connecting does result in:\n> LOG: could not accept SSL connection: Success\n\nOK. 
I can dig into that, unless you're already on it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 13:46:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 13:46:07 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-12-08 13:23:50 -0500, Tom Lane wrote:\n> >> Hmm, don't suppose you have a way to reproduce that?\n> \n> > After a bit of trying, yes. I put an abort() into pgtls_open_client(), after\n> > initialize_SSL(). Connecting does result in:\n> > LOG: could not accept SSL connection: Success\n> \n> OK. I can dig into that, unless you're already on it?\n\nI think I figured it it out. Looks like we need to translate a closed socket\n(recvfrom() returning 0) to ECONNRESET or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Dec 2023 10:51:01 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 10:51:01 -0800, Andres Freund wrote:\n> On 2023-12-08 13:46:07 -0500, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > On 2023-12-08 13:23:50 -0500, Tom Lane wrote:\n> > >> Hmm, don't suppose you have a way to reproduce that?\n> >\n> > > After a bit of trying, yes. I put an abort() into pgtls_open_client(), after\n> > > initialize_SSL(). Connecting does result in:\n> > > LOG: could not accept SSL connection: Success\n> >\n> > OK. I can dig into that, unless you're already on it?\n>\n> I think I figured it it out. Looks like we need to translate a closed socket\n> (recvfrom() returning 0) to ECONNRESET or such.\n\nI think we might just need to expand the existing branch for EOF:\n\n\t\t\t\tif (r < 0)\n\t\t\t\t\tereport(COMMERROR,\n\t\t\t\t\t\t\t(errcode_for_socket_access(),\n\t\t\t\t\t\t\t errmsg(\"could not accept SSL connection: %m\")));\n\t\t\t\telse\n\t\t\t\t\tereport(COMMERROR,\n\t\t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n\t\t\t\t\t\t\t errmsg(\"could not accept SSL connection: EOF detected\")));\n\nThe openssl docs say:\n\n The following return values can occur:\n\n0\n\n The TLS/SSL handshake was not successful but was shut down controlled and by the specifications of the TLS/SSL protocol. Call SSL_get_error() with the return value ret to find out the reason.\n1\n\n The TLS/SSL handshake was successfully completed, a TLS/SSL connection has been established.\n<0\n\n The TLS/SSL handshake was not successful because a fatal error occurred either at the protocol level or a connection failure occurred. The shutdown was not clean. It can also occur if action is needed to continue the operation for nonblocking BIOs. Call SSL_get_error() with the return value ret to find out the reason.\n\n\nWhich fits with my reproducer - due to the abort the connection was *not* shut\ndown via SSL in a controlled manner, therefore r < 0.\n\n\nHm, oddly enough, there's this tidbit in the SSL_get_error() manpage:\n\n On an unexpected EOF, versions before OpenSSL 3.0 returned SSL_ERROR_SYSCALL,\n nothing was added to the error stack, and errno was 0. 
Since OpenSSL 3.0 the\n returned error is SSL_ERROR_SSL with a meaningful error on the error stack.\n\nBut I reproduced this with 3.1.\n\n\nSeems like we should just treat errno == 0 as a reason to emit the \"EOF\ndetected\" message?\n\n\n\nI wonder if we should treat send/recv returning 0 different from an error\nmessage perspective during an established connection. Right now we produce\n could not receive data from client: Connection reset by peer\n\nbecause be_tls_read() sets errno to ECONNRESET - despite that not having been\nreturned by the OS. But I guess that's a topic for another day.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Dec 2023 11:33:16 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 11:33:16 -0800, Andres Freund wrote:\n> On 2023-12-08 10:51:01 -0800, Andres Freund wrote:\n> > On 2023-12-08 13:46:07 -0500, Tom Lane wrote:\n> > > Andres Freund <[email protected]> writes:\n> > > > On 2023-12-08 13:23:50 -0500, Tom Lane wrote:\n> > > >> Hmm, don't suppose you have a way to reproduce that?\n> > >\n> > > > After a bit of trying, yes. I put an abort() into pgtls_open_client(), after\n> > > > initialize_SSL(). Connecting does result in:\n> > > > LOG: could not accept SSL connection: Success\n> > >\n> > > OK. I can dig into that, unless you're already on it?\n>\n> [...]\n>\n> Seems like we should just treat errno == 0 as a reason to emit the \"EOF\n> detected\" message?\n\nI thought it'd be nice to have a test for this, particularly because it's not\nclear that the behaviour is consistent across openssl versions.\n\nI couldn't think of a way to do that with psql. But it's just a few lines of\nperl to gin up an \"SSL\" startup packet and then close the socket. I couldn't\nquite figure out when IO::Socket::INET was added, but I think it's likely been\nlong enough, I see references from 1999.\n\nThis worked easily on linux and freebsd, but not on windows and macos, where\nit seems to cause ECONNRESET. I thought that explicitly shutting down the\nsocket might help, but that just additionally caused freebsd to fail.\n\nWindows uses an older openssl, so it could also be caused by the behaviour\ndiffering back then.\n\nTo deal with that, I changed the test to instead check if \"not accept SSL\nconnection: Success\" is not logged. I'm not sure that actually would be\nlogged on windows, it does seem to have different strings for errors than\nother platforms.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 8 Dec 2023 14:15:54 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n>> I think I figured it it out. Looks like we need to translate a closed socket\n>> (recvfrom() returning 0) to ECONNRESET or such.\n\n> Seems like we should just treat errno == 0 as a reason to emit the \"EOF\n> detected\" message?\n\nAgreed. 
I think we want to do that after the initial handshake,\ntoo, so maybe as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 08 Dec 2023 17:29:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I thought it'd be nice to have a test for this, particularly because it's not\n> clear that the behaviour is consistent across openssl versions.\n\nPerhaps, but ...\n\n> To deal with that, I changed the test to instead check if \"not accept SSL\n> connection: Success\" is not logged.\n\n... testing only that much seems entirely not worth the cycles, given the\nshape of the patches we both just made. If we can't rely on \"errno != 0\"\nto ensure we won't get \"Success\", there is one heck of a lot of other\ncode that will be broken worse than this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 17:35:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On 2023-12-08 17:29:45 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> >> I think I figured it it out. Looks like we need to translate a closed socket\n> >> (recvfrom() returning 0) to ECONNRESET or such.\n> \n> > Seems like we should just treat errno == 0 as a reason to emit the \"EOF\n> > detected\" message?\n> \n> Agreed. I think we want to do that after the initial handshake,\n> too, so maybe as attached.\n\nI was wondering about that too. But if we do so, why not also do it for\nwrites?\n\n\n", "msg_date": "Fri, 8 Dec 2023 15:36:01 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 17:35:26 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I thought it'd be nice to have a test for this, particularly because it's not\n> > clear that the behaviour is consistent across openssl versions.\n> \n> Perhaps, but ...\n> \n> > To deal with that, I changed the test to instead check if \"not accept SSL\n> > connection: Success\" is not logged.\n> \n> ... testing only that much seems entirely not worth the cycles, given the\n> shape of the patches we both just made. If we can't rely on \"errno != 0\"\n> to ensure we won't get \"Success\", there is one heck of a lot of other\n> code that will be broken worse than this.\n\nI was certainly more optimistic about the usefullness of the test before\ndisocvering the above difficulties...\n\nI considered accepting both ECONNRESET and the errno = 0 phrasing, but after\ndiscovering that the phrasing differs between platforms that seemed less\nattractive.\n\nI guess the test might still provide some value, by ensuring those paths are\nreached.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 Dec 2023 15:40:15 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-08 17:29:45 -0500, Tom Lane wrote:\n>> Agreed. I think we want to do that after the initial handshake,\n>> too, so maybe as attached.\n\n> I was wondering about that too. But if we do so, why not also do it for\n> writes?\n\nWrites don't act that way, do they? 
EOF on a pipe gives you an error,\nnot silently reporting that zero bytes were written and leaving you\nto retry indefinitely.\n\nWhat I was wondering about was if we needed similar changes on the\nlibpq side, but it's still about reads not writes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 19:39:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-08 19:39:20 -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-12-08 17:29:45 -0500, Tom Lane wrote:\n> >> Agreed. I think we want to do that after the initial handshake,\n> >> too, so maybe as attached.\n>\n> > I was wondering about that too. But if we do so, why not also do it for\n> > writes?\n>\n> Writes don't act that way, do they? EOF on a pipe gives you an error,\n> not silently reporting that zero bytes were written and leaving you\n> to retry indefinitely.\n\nErr, yes. /me looks for a brown paper bag.\n\n\n> What I was wondering about was if we needed similar changes on the\n> libpq side, but it's still about reads not writes.\n\nPerhaps. It's probably harder to reach in practice. But there seems little\nreason to have a plausible codepath emitting \"SSL SYSCALL error: Success\", so\ninstead mapping errno == 0 to \"EOF detected\" pgtls_read() and\nopen_client_SSL() makes sense to me.\n\nI wish there were an easy userspace solution to simulating TCP connection\nfailures. I know how to do it with iptables et al, but that's not great for\nautomated testing in PG...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Dec 2023 09:10:00 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> I was wondering about that too. But if we do so, why not also do it for\n>> writes?\n\n> Writes don't act that way, do they? EOF on a pipe gives you an error,\n> not silently reporting that zero bytes were written and leaving you\n> to retry indefinitely.\n\nOn further reflection I realized that you're right so far as the SSL\ncode path goes, because SSL_write() can involve physical reads as well\nas writes, so at least in principle it's possible that we'd see EOF\nreported this way from that function.\n\nAlso, the libpq side does need work of the same sort, leading to the\nv2-0001 patch attached.\n\nI also realized that we have more or less the same problem at the\ncaller level, allowing a similar failure for non-SSL connections.\nSo I'm also proposing 0002 attached. Your results from aggregated\nlogs didn't show \"could not receive data from client: Success\" as a\ncommon case, but since we weren't bothering to zero errno beforehand,\nit's likely that such failures would show up with very random errnos.\n\nI took a quick look at the GSSAPI code path too, but it seems not to\nadd any new assumptions of this sort.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 Dec 2023 12:41:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-09 12:41:30 -0500, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <[email protected]> writes:\n> >> I was wondering about that too. But if we do so, why not also do it for\n> >> writes?\n> \n> > Writes don't act that way, do they? 
EOF on a pipe gives you an error,\n> > not silently reporting that zero bytes were written and leaving you\n> > to retry indefinitely.\n> \n> On further reflection I realized that you're right so far as the SSL\n> code path goes, because SSL_write() can involve physical reads as well\n> as writes, so at least in principle it's possible that we'd see EOF\n> reported this way from that function.\n\nHeh. I'll just claim that's what I was thinking about.\n\n\n> Also, the libpq side does need work of the same sort, leading to the\n> v2-0001 patch attached.\n\nI'd perhaps add a comment explaining why it's plausible that we'd see that\nthat in the write case.\n\n\n> I also realized that we have more or less the same problem at the\n> caller level, allowing a similar failure for non-SSL connections.\n> So I'm also proposing 0002 attached. Your results from aggregated\n> logs didn't show \"could not receive data from client: Success\" as a\n> common case, but since we weren't bothering to zero errno beforehand,\n> it's likely that such failures would show up with very random errnos.\n\nI did only look at the top ~100 internal errors, after trying to filter out\nextensions, i.e. the list wasn't exhaustive. There's also very few non-ssl\nconnections. But I just checked, and for that error message, I do see some\nXX000, but only in older versions. There's ENETUNREACH, ECONNRESET,\nETIMEDOUT, EHOSTUNREACH which these days are all handled as non XX000 by\nerrcode_for_socket_access().\n\n\n> diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c\n> index bd72a87bbb..76b647ce1c 100644\n> --- a/src/interfaces/libpq/fe-secure.c\n> +++ b/src/interfaces/libpq/fe-secure.c\n> @@ -211,6 +211,8 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)\n> \tint\t\t\tresult_errno = 0;\n> \tchar\t\tsebuf[PG_STRERROR_R_BUFLEN];\n> \n> +\tSOCK_ERRNO_SET(0);\n> +\n> \tn = recv(conn->sock, ptr, len, 0);\n> \n> \tif (n < 0)\n> @@ -232,6 +234,7 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)\n> \n> \t\t\tcase EPIPE:\n> \t\t\tcase ECONNRESET:\n> +\t\t\tcase 0:\t\t\t\t/* treat as EOF */\n> \t\t\t\tlibpq_append_conn_error(conn, \"server closed the connection unexpectedly\\n\"\n> \t\t\t\t\t\t\t\t\t\t\"\\tThis probably means the server terminated abnormally\\n\"\n> \t\t\t\t\t\t\t\t\t\t\"\\tbefore or while processing the request.\");\n\nIf we were treating it as EOF, we'd not \"queue\" an error message, no? Normally\nrecv() returns 0 in that case, so we'd just return, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 9 Dec 2023 13:06:42 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-12-09 12:41:30 -0500, Tom Lane wrote:\n>> On further reflection I realized that you're right so far as the SSL\n>> code path goes, because SSL_write() can involve physical reads as well\n>> as writes, so at least in principle it's possible that we'd see EOF\n>> reported this way from that function.\n\n> Heh. I'll just claim that's what I was thinking about.\n> I'd perhaps add a comment explaining why it's plausible that we'd see that\n> that in the write case.\n\nDone in v3 attached.\n\n>> I also realized that we have more or less the same problem at the\n>> caller level, allowing a similar failure for non-SSL connections.\n\n> If we were treating it as EOF, we'd not \"queue\" an error message, no? 
Normally\n> recv() returns 0 in that case, so we'd just return, right?\n\nDuh, right, so more like this version.\n\nI'm not actually sure that the fe-secure.c part of v3-0002 is\nnecessary, because it's guarding plain recv(2) which really shouldn't\nreturn -1 without setting errno. Still, it's a pretty harmless\naddition.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 09 Dec 2023 18:14:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On 08.12.23 19:14, Andres Freund wrote:\n> FWIW, I did some analysis on aggregated logs on a larger number of machines,\n> and it does look like that'd be a measurable increase in log volume. There are\n> a few voluminous internal errors in core, but the bigger issue is\n> extensions. They are typically much less disciplined about assigning error\n> codes than core PG is.\n\nGood point. Also, during development, I often just put elog(ERROR, \n\"real error later\").\n\n\n\n", "msg_date": "Mon, 11 Dec 2023 10:29:26 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Fri, Dec 8, 2023 at 1:34 PM Andres Freund <[email protected]> wrote:\n> Oh, very much agreed. But I suspect we won't quickly do the same for\n> out-of-core extensions...\n\nI feel like this is a problem that will sort itself out just fine. The\nrules about using ereport() and elog() could probably be better\ndocumented than they are, but doing that won't cause people to follow\nthe practices any more rigorously than they have been. However, a\nchange like this just might. If we make this policy change in core,\nthen extension authors will probably get pressure from users to clean\nup any calls that are emitting excessively verbose log output, and\nthat seems like a good thing.\n\nIt's impossible to make an omelet without breaking some eggs, but the\nthing we're talking about here is, IMHO, extremely important. Users\nare forever hitting weird errors in production that aren't easy to\nreproduce on test systems, and because most elog() calls are written\nwith the expectation that they won't be hit, they often contain\nminimal information, which IME makes it really difficult to understand\nwhat went wrong. A lot of these are things like - oh, this function\nexpected a valid value of some sort, say a relkind, and it got some\nnonsense value, say a zero byte. But where did that nonsense value\noriginate? That elog message can't tell you that, but a stack trace\nwill.\n\nThe last change we made in this area that, at least for me, massively\nimproved debuggability was the change to log the current query string\nwhen a backend crashes. That's such a huge help; I can't imagine going\nback to the old way where you had basically no idea what made things\ngo boom. I think doing something like this can have a similarly\npositive impact. 
It is going to take some work - from us and from\nextension authors - to tidy things up so that it doesn't produce a\nbunch of unwanted output, but the payoff will be the ability to\nactually find and fix the bugs instead of just saying to a customer\n\"hey, sucks that you hit a bug, let us know if you find a reproducer.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 11:11:35 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> The last change we made in this area that, at least for me, massively\n> improved debuggability was the change to log the current query string\n> when a backend crashes. That's such a huge help; I can't imagine going\n> back to the old way where you had basically no idea what made things\n> go boom. I think doing something like this can have a similarly\n> positive impact. It is going to take some work - from us and from\n> extension authors - to tidy things up so that it doesn't produce a\n> bunch of unwanted output, but the payoff will be the ability to\n> actually find and fix the bugs instead of just saying to a customer\n> \"hey, sucks that you hit a bug, let us know if you find a reproducer.\"\n\nIMO, we aren't really going to get a massive payoff from this with\nthe current backtrace output; it's just not detailed enough. It's\nbetter than nothing certainly, but to really move the goalposts\nwe'd need something approaching gdb's \"bt full\" output. I wonder\nif it'd be sane to try to auto-invoke gdb. That's just blue sky\nfor now, though. In the meantime, I agree with the proposal as it\nstands (that is, auto-backtrace on any XX000 error). We'll soon find\nout whether it's useless, or needs more detail to be really helpful,\nor is just right as it is. Once we have some practical experience\nwith it, we can course-correct as needed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Dec 2023 11:29:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, Dec 19, 2023 at 11:29 AM Tom Lane <[email protected]> wrote:\n> IMO, we aren't really going to get a massive payoff from this with\n> the current backtrace output; it's just not detailed enough. It's\n> better than nothing certainly, but to really move the goalposts\n> we'd need something approaching gdb's \"bt full\" output. I wonder\n> if it'd be sane to try to auto-invoke gdb. That's just blue sky\n> for now, though. In the meantime, I agree with the proposal as it\n> stands (that is, auto-backtrace on any XX000 error). We'll soon find\n> out whether it's useless, or needs more detail to be really helpful,\n> or is just right as it is. Once we have some practical experience\n> with it, we can course-correct as needed.\n\nThat all seems fair to me. 
I'm more optimistic than you are about\ngetting something useful out of the current backtrace output, but (1)\nI could be wrong, (2) I'd still like to have something better, and (3)\nimproving the backtrace output is a separate project from including\nbacktraces more frequently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 14:22:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Sun, 10 Dec 2023 at 00:14, Tom Lane <[email protected]> wrote:\n> I'm not actually sure that the fe-secure.c part of v3-0002 is\n> necessary, because it's guarding plain recv(2) which really shouldn't\n> return -1 without setting errno. Still, it's a pretty harmless\n> addition.\n\nv3-0002 seems have a very similar goal to v23-0002 in my non-blocking\nand encrypted cancel request patchset here:\nhttps://www.postgresql.org/message-id/flat/CAGECzQQirExbHe6uLa4C-sP%3DwTR1jazR_wgCWd4177QE-%3DVFDw%40mail.gmail.com#0b6cc1897c6d507cef49a3f3797181aa\n\nWould it be possible to merge that on instead or at least use the same\napproach as that one (i.e. return -2 on EOF). Otherwise I have to\nupdate that patchset to match the new style of communicating that\nthere is an EOF. Also I personally think a separate return value for\nEOF clearer when reading the code than checking for errno being 0.\n\n\n", "msg_date": "Wed, 20 Dec 2023 10:08:42 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Tue, 19 Dec 2023 at 17:12, Robert Haas <[email protected]> wrote:\n> On Fri, Dec 8, 2023 at 1:34 PM Andres Freund <[email protected]> wrote:\n> > Oh, very much agreed. But I suspect we won't quickly do the same for\n> > out-of-core extensions...\n>\n> I feel like this is a problem that will sort itself out just fine. The\n> rules about using ereport() and elog() could probably be better\n> documented than they are, but doing that won't cause people to follow\n> the practices any more rigorously than they have been. However, a\n> change like this just might. If we make this policy change in core,\n> then extension authors will probably get pressure from users to clean\n> up any calls that are emitting excessively verbose log output, and\n> that seems like a good thing.\n\nAs an extension author I wanted to make clear that Andres his concern\nis definitely not theoretical. Citus (as well as most other extensions\nme and our team at Microsoft maintains) use ereport without an error\ncode very often. And while we try to use elog actually only for\ninternal errors, there's definitely places where we haven't.\n\nWe've had \"adding error codes to all our errors\" on our backlog for\nyears though. I'm guessing this is mostly a combination of it being a\nboring task, it being a lot of work, and the impact not being\nparticularly huge (i.e. now users can check error codes for all our\nerrors wohoo!). If ereport without an errorcode would suddenly cause a\nlog flood in the next postgres release then suddenly the impact of\nadding error codes would increase drastically. And as Robert said we'd\nbasically be forced to adopt the pattern. Which I agree isn't\nnecessarily a bad thing.\n\nBut I'm not sure that smaller extensions that are not maintained by a\nteam that's paid to do so would be happy about this change. 
Also I\nthink we'd even change our extension to add errror codes to all\nereport calls if the stack traces are useful enough, because then the\nimpact of adding error codes suddenly increases a lot. So I think\nhaving a way for extensions to opt-in/opt-out of this change for their\nextension would be very much appreciated by those authors.\n\n\n", "msg_date": "Wed, 20 Dec 2023 10:30:33 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "Hi,\n\nOn 2023-12-20 10:08:42 +0100, Jelte Fennema-Nio wrote:\n> On Sun, 10 Dec 2023 at 00:14, Tom Lane <[email protected]> wrote:\n> > I'm not actually sure that the fe-secure.c part of v3-0002 is\n> > necessary, because it's guarding plain recv(2) which really shouldn't\n> > return -1 without setting errno. Still, it's a pretty harmless\n> > addition.\n> \n> v3-0002 seems have a very similar goal to v23-0002 in my non-blocking\n> and encrypted cancel request patchset here:\n> https://www.postgresql.org/message-id/flat/CAGECzQQirExbHe6uLa4C-sP%3DwTR1jazR_wgCWd4177QE-%3DVFDw%40mail.gmail.com#0b6cc1897c6d507cef49a3f3797181aa\n> \n> Would it be possible to merge that on instead or at least use the same\n> approach as that one (i.e. return -2 on EOF). Otherwise I have to\n> update that patchset to match the new style of communicating that\n> there is an EOF. Also I personally think a separate return value for\n> EOF clearer when reading the code than checking for errno being 0.\n\nTom's patch imo doesn't really introduce anything really new - we already deal\nwith EOF that way in other places. And it's how the standard APIs deal with\nthe issue. I'd not design it this way on a green field, but given the current\nstate Tom's approach seems more sensible...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 20 Dec 2023 02:30:01 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On Wed, 20 Dec 2023 at 11:30, Andres Freund <[email protected]> wrote:\n> Tom's patch imo doesn't really introduce anything really new - we already deal\n> with EOF that way in other places. And it's how the standard APIs deal with\n> the issue. I'd not design it this way on a green field, but given the current\n> state Tom's approach seems more sensible...\n\nOkay, while I think it's a really non-obvious way of checking for EOF,\nI agree that staying consistent with this non-obvious existing pattern\nis the best choice here. I also just noticed that the proposed patch\nis already merged.\n\nSo I just updated my patchset to use it. For my patchset this does\nintroduce a slight problem though: I'm using pqReadData, instead of\npqsecure_read directly. And pqReadData has other reasons for failing\nwithout setting an errno than just EOF. Specifically allocation\nfailures or passing an invalid socket.\n\nI see three options to handle this:\n1. Don't change pqReadData and simply consider all these EOF too from\nPQcancelPoll\n2. Set errno to something non-zero for these non EOF failures in pqReadData\n3. Return -2 from pqReadData on EOF\n\nAny preference on those? 
For now I went for option 1.\n\n\n", "msg_date": "Wed, 20 Dec 2023 14:49:53 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backtrace_on_internal_error" }, { "msg_contents": "On 19.12.23 17:29, Tom Lane wrote:\n> IMO, we aren't really going to get a massive payoff from this with\n> the current backtrace output; it's just not detailed enough. It's\n> better than nothing certainly, but to really move the goalposts\n> we'd need something approaching gdb's \"bt full\" output. I wonder\n> if it'd be sane to try to auto-invoke gdb. That's just blue sky\n> for now, though. In the meantime, I agree with the proposal as it\n> stands (that is, auto-backtrace on any XX000 error). We'll soon find\n> out whether it's useless, or needs more detail to be really helpful,\n> or is just right as it is. Once we have some practical experience\n> with it, we can course-correct as needed.\n\nBased on this, I have committed my original patch.\n\n\n\n", "msg_date": "Sat, 30 Dec 2023 12:11:59 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backtrace_on_internal_error" } ]
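The thread above turns on whether an error carries an explicit SQLSTATE: elog() and errcode-less ereport() calls default to ERRCODE_INTERNAL_ERROR (XX000), which is what the new backtrace behaviour keys on. A minimal sketch of the kind of change extension authors would make, with an invented function, message, and choice of SQLSTATE for illustration (not taken from Citus or any other real extension):

    #include "postgres.h"

    static void
    report_worker_connection_failure(const char *nodename)
    {
        /*
         * elog(ERROR, ...) or ereport() without errcode() implies
         * ERRCODE_INTERNAL_ERROR, so it would now also emit a backtrace.
         * With an explicit errcode the error is ordinary, not "internal",
         * and is left alone by the backtrace machinery.
         */
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_FAILURE),
                 errmsg("could not connect to worker node \"%s\"", nodename)));
    }
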
[ { "msg_contents": "Hi,\n\nxlogreader.c has a pointer overflow bug, as revealed by the\ncombination of -fsanitize=undefined -m32, the new 039_end_of_wal.pl\ntest and Robert's incremental backup patch[1]. The bad code tests\nwhether an object could fit using something like base + size <= end,\nwhich can be converted to something like size <= end - base to avoid\nthe overflow. See experimental fix patch, attached.\n\nI took a while to follow up because I wanted to understand exactly why\nit doesn't break in practice despite being that way since v15. I\nthink I have it now:\n\n1. In the case of a bad/garbage size at end-of-WAL, the\nfollowing-page checks will fail first before anything bad happens as a\nresult of the overflow.\n\n2. In the case of a real oversized record on current 64 bit\narchitectures including amd64, aarch64, power and riscv64, the pointer\ncan't really overflow anyway because the virtual address space is < 64\nbit, typically around 48, and record lengths are 32 bit.\n\n3. In the case of the 32 bit kernels I looked at including Linux,\nFreeBSD and cousins, Solaris and Windows the top 1GB plus a bit more\nof virtual address space is reserved for system use*, so I think a\nreal oversized record shouldn't be able to overflow the pointer there\neither.\n\nA 64 bit kernel running a 32 bit process could run into trouble,\nthough :-(. Those don't need to block out that high memory segment.\nYou'd need to have the WAL buffer in that address range and decode\nlarge enough real WAL records and then things could break badly. I\nguess 32/64 configurations must be rare these days outside developers\ntesting 32 bit code, and that is what happened here (ie CI); and with\nsome minor tweaks to the test it can be reached without Robert's patch\ntoo. There may of course be other more exotic systems that could\nbreak, but I don't know specifically what.\n\nTLDR; this is a horrible bug, but all damage seems to be averted on\n\"normal\" systems. The best thing I can say about all this is that the\nnew test found a bug, and the fix seems straightforward. I will study\nand test this some more, but wanted to share what I have so far.\n\n(*I think the old 32 bit macOS kernels might have been an exception to\nthis pattern but 32 bit kernels and even processes are history there.)\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLCuW_CE9nDAbUNV40G2FkpY_kcPZkaORyVBVive8FQHQ%40mail.gmail.com#d0d00ca5cc3f756656466adc9f2ec186", "msg_date": "Wed, 6 Dec 2023 00:03:53 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Wed, Dec 06, 2023 at 12:03:53AM +1300, Thomas Munro wrote:\n> xlogreader.c has a pointer overflow bug, as revealed by the\n> combination of -fsanitize=undefined -m32, the new 039_end_of_wal.pl\n> test and Robert's incremental backup patch[1]. The bad code tests\n> whether an object could fit using something like base + size <= end,\n> which can be converted to something like size <= end - base to avoid\n> the overflow. See experimental fix patch, attached.\n\nThe patch LGTM. I wonder if it might be worth creating some special\npointer arithmetic routines (perhaps using the stuff in common/int.h) to\nhelp prevent this sort of thing in the future. 
But that'd require you to\nrealize that your code is at risk of overflow, at which point it's probably\njust as easy to restructure the logic like you've done here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 12:04:02 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:04 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, Dec 06, 2023 at 12:03:53AM +1300, Thomas Munro wrote:\n> > xlogreader.c has a pointer overflow bug, as revealed by the\n> > combination of -fsanitize=undefined -m32, the new 039_end_of_wal.pl\n> > test and Robert's incremental backup patch[1]. The bad code tests\n> > whether an object could fit using something like base + size <= end,\n> > which can be converted to something like size <= end - base to avoid\n> > the overflow. See experimental fix patch, attached.\n>\n> The patch LGTM. I wonder if it might be worth creating some special\n> pointer arithmetic routines (perhaps using the stuff in common/int.h) to\n> help prevent this sort of thing in the future. But that'd require you to\n> realize that your code is at risk of overflow, at which point it's probably\n> just as easy to restructure the logic like you've done here.\n\nThe patch LGTM, too. Thanks for investigating and writing the code.\nThe part about how the reserved kernel memory prevents the bug from\nappearing on 32-bit systems but not 64-bit systems running in 32-bit\nmode is pretty interesting -- I don't want to think about how long it\nwould have taken me to figure that out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 15:48:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Tue, Dec 05, 2023 at 03:48:33PM -0500, Robert Haas wrote:\n> The patch LGTM, too. Thanks for investigating and writing the code.\n> The part about how the reserved kernel memory prevents the bug from\n> appearing on 32-bit systems but not 64-bit systems running in 32-bit\n> mode is pretty interesting -- I don't want to think about how long it\n> would have taken me to figure that out.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Dec 2023 15:01:41 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Tue, Dec 5, 2023 at 4:01 PM Nathan Bossart <[email protected]> wrote:\n> +1\n\nSo, Thomas ... any chance you could commit this? So that my patch\nstops making cfbot sad?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 09:57:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Fri, Dec 8, 2023 at 3:57 AM Robert Haas <[email protected]> wrote:\n> On Tue, Dec 5, 2023 at 4:01 PM Nathan Bossart <[email protected]> wrote:\n> > +1\n>\n> So, Thomas ... any chance you could commit this? So that my patch\n> stops making cfbot sad?\n\nDone. 
Thanks both for the reviews.\n\n\n", "msg_date": "Fri, 8 Dec 2023 16:17:47 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" }, { "msg_contents": "On Thu, Dec 7, 2023 at 10:18 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Dec 8, 2023 at 3:57 AM Robert Haas <[email protected]> wrote:\n> > On Tue, Dec 5, 2023 at 4:01 PM Nathan Bossart <[email protected]> wrote:\n> > > +1\n> >\n> > So, Thomas ... any chance you could commit this? So that my patch\n> > stops making cfbot sad?\n>\n> Done. Thanks both for the reviews.\n\nThank you!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Dec 2023 11:29:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UBSan pointer overflow in xlogreader.c" } ]
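The committed fix follows the pattern described in the first message: never form a pointer past the end of the buffer, compare lengths instead. An illustrative sketch of the rewrite, using invented names rather than the actual xlogreader.c code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /*
     * Overflow-prone: "ptr + len" can wrap around the address space when
     * "len" is a garbage length read from the end of WAL, and pointer
     * arithmetic past the object is undefined behavior.
     */
    static bool
    record_fits_unsafe(const char *ptr, const char *buf_end, uint32_t len)
    {
        return ptr + len <= buf_end;
    }

    /*
     * Safe: both pointers stay inside the buffer, so the subtraction is
     * well defined and no out-of-range pointer is ever computed.
     */
    static bool
    record_fits_safe(const char *ptr, const char *buf_end, uint32_t len)
    {
        return len <= (size_t) (buf_end - ptr);
    }
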
[ { "msg_contents": "In [0] it was discussed that we could make attstattarget a nullable \ncolumn, instead of always storing an explicit -1 default value for most \ncolumns. This patch implements this.\n\nThis changes the pg_attribute field attstattarget into a nullable field \nin the variable-length part of the row. If no value is set by the user \nfor attstattarget, it is now null instead of previously -1. This saves \nspace in pg_attribute and tuple descriptors for most practical \nscenarios. (ATTRIBUTE_FIXED_PART_SIZE is reduced from 108 to 104.) \nAlso, null is the semantically more correct value.\n\nThe ANALYZE code internally continues to represent the default \nstatistics target by -1, so that that code can avoid having to deal with \nnull values. But that is now contained to ANALYZE code. The DDL code \ndeals with attstattarget possibly null.\n\nFor system columns, the field is now always null but the effective value \n0 (don't analyze) is assumed.\n\nTo set a column's statistics target to the default value, the new \ncommand form ALTER TABLE ... SET STATISTICS DEFAULT can be used. (SET \nSTATISTICS -1 still works.)\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/d07ffc2b-e0e8-77f7-38fb-be921dff71af%40enterprisedb.com", "msg_date": "Tue, 5 Dec 2023 13:52:36 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Make attstattarget nullable" }, { "msg_contents": "Here is an updated patch rebased over 3e2e0d5ad7.\n\nThe 0001 patch stands on its own, but I also tacked on two additional \nWIP patches that simplify some pg_attribute handling and make these \nkinds of refactorings simpler in the future. See description in the \npatches.\n\n\nOn 05.12.23 13:52, Peter Eisentraut wrote:\n> In [0] it was discussed that we could make attstattarget a nullable \n> column, instead of always storing an explicit -1 default value for most \n> columns.  This patch implements this.\n> \n> This changes the pg_attribute field attstattarget into a nullable field \n> in the variable-length part of the row.  If no value is set by the user \n> for attstattarget, it is now null instead of previously -1.  This saves \n> space in pg_attribute and tuple descriptors for most practical \n> scenarios.  (ATTRIBUTE_FIXED_PART_SIZE is reduced from 108 to 104.) \n> Also, null is the semantically more correct value.\n> \n> The ANALYZE code internally continues to represent the default \n> statistics target by -1, so that that code can avoid having to deal with \n> null values.  But that is now contained to ANALYZE code.  The DDL code \n> deals with attstattarget possibly null.\n> \n> For system columns, the field is now always null but the effective value \n> 0 (don't analyze) is assumed.\n> \n> To set a column's statistics target to the default value, the new \n> command form ALTER TABLE ... SET STATISTICS DEFAULT can be used.  (SET \n> STATISTICS -1 still works.)\n> \n> \n> [0]: \n> https://www.postgresql.org/message-id/flat/d07ffc2b-e0e8-77f7-38fb-be921dff71af%40enterprisedb.com", "msg_date": "Sat, 23 Dec 2023 13:56:29 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 2023-Dec-23, Peter Eisentraut wrote:\n\n> Here is an updated patch rebased over 3e2e0d5ad7.\n> \n> The 0001 patch stands on its own, but I also tacked on two additional WIP\n> patches that simplify some pg_attribute handling and make these kinds of\n> refactorings simpler in the future. 
See description in the patches.\n\nI didn't look at 0002 and 0003, since they're marked as WIP. (But I did\nlike the removal that happens in 0003, so I hope these two also make it\nto 17).\n\n> On 05.12.23 13:52, Peter Eisentraut wrote:\n> > In [0] it was discussed that we could make attstattarget a nullable\n> > column, instead of always storing an explicit -1 default value for most\n> > columns.  This patch implements this.\n\nSeems reasonable. Do we really need a catversion bump for this?\n\nI like that we now have SET STATISTICS DEFAULT rather than -1 to reset\nto default. Do we want to document that setting explicitly to -1\ncontinues to have that behavior? (I would add something like \"Setting\nto a value of -1 is an obsolete spelling to get the same behavior.\"\nafter the phrase that explains DEFAULT in the ALTER TABLE manpage.)\n\nI noticed that equalTupleDescs no longer compares attstattarget, and\nthis is because the field is not in TupleDesc anymore. I looked at the\ncallers of equalTupleDescs and I think this is exactly what we want\n(precisely because attstattarget is no longer in TupleDesc.)\n\n> > This changes the pg_attribute field attstattarget into a nullable field\n> > in the variable-length part of the row.\n\nI don't think we use \"the variable-length part of the row\" as a term\nanywhere. We only have the variable-length columns, and we made a bit\nof a mistake in using CATALOG_VARLEN to differentiate the part of the\ncatalogs that are not mapped to the structs (because at the time those\nwere in effect only the variable length fields). I think this is\nlargely not a problem, but let's be careful with how we word the related\ncomments. So:\n\nI think the comment next to \"#ifdef CATALOG_VARLEN\" is now a bit\nmisleading, because the field immediately below is effectively not\nvarlena. Maybe make it\n#ifdef CATALOG_VARLEN\t\t\t/* nullable/varlena fields start here */\n\nIn RemoveAttributeById, a comment says\n\"Clear the other variable-length fields.\"\nbut this is no longer fully correct. Again maybe make it \"... the other\nnullable or variable-length fields\".\n\nIn get_attstattarget() I think we should return 0 for dropped columns\nwithout reading attstattarget, which is useless anyway, and if it did\nhappen to return non-null, it might cause us to do stuff, which would be\na waste.\n\nIt's annoying that the new code in index_concurrently_swap() is more\nverbose than the code being replaced, but it seems OK to me, since it\nallows us to distinguish a null value in attstattarget from actual 0\nwithout complicating the get_attstattarget API (which I think we would\nhave to do if we wanted to use it here.)\n\nI don't have anything else on this patch at this point.\n\nThanks\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:16:30 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 10.01.24 14:16, Alvaro Herrera wrote:\n>> Here is an updated patch rebased over 3e2e0d5ad7.\n>>\n>> The 0001 patch stands on its own, but I also tacked on two additional WIP\n>> patches that simplify some pg_attribute handling and make these kinds of\n>> refactorings simpler in the future. See description in the patches.\n> \n> I didn't look at 0002 and 0003, since they're marked as WIP. (But I did\n> like the removal that happens in 0003, so I hope these two also make it\n> to 17).\n\nHere is an updated patch set. 
I have addressed your comments on 0001. \nI looked again at 0002 and 0003 and I was quite happy with them, so I \njust removed the WIP label and also added a few more code comments, but \notherwise didn't change anything.\n\n> Seems reasonable. Do we really need a catversion bump for this?\n\nYes, this changes the order of the fields in pg_attribute.\n\n> I like that we now have SET STATISTICS DEFAULT rather than -1 to reset\n> to default. Do we want to document that setting explicitly to -1\n> continues to have that behavior? (I would add something like \"Setting\n> to a value of -1 is an obsolete spelling to get the same behavior.\"\n> after the phrase that explains DEFAULT in the ALTER TABLE manpage.)\n\ndone\n\n> I noticed that equalTupleDescs no longer compares attstattarget, and\n> this is because the field is not in TupleDesc anymore. I looked at the\n> callers of equalTupleDescs and I think this is exactly what we want\n> (precisely because attstattarget is no longer in TupleDesc.)\n\nYes, I had investigated that in some detail, and I think it's ok. I \nthink equalTupleDescs() is actually mostly useless and I plan to start a \nseparate discussion on that.\n\n>>> This changes the pg_attribute field attstattarget into a nullable field\n>>> in the variable-length part of the row.\n> \n> I don't think we use \"the variable-length part of the row\" as a term\n> anywhere. We only have the variable-length columns, and we made a bit\n> of a mistake in using CATALOG_VARLEN to differentiate the part of the\n> catalogs that are not mapped to the structs (because at the time those\n> were in effect only the variable length fields). I think this is\n> largely not a problem, but let's be careful with how we word the related\n> comments. So:\n\nYeah, there are multiple ways to interpret this. There are fields with \nvarlena headers, but there are also fields that are not-fixed-length as \nfar as struct access to catalog tuples is concerned, and the two not the \nsame.\n\n> I think the comment next to \"#ifdef CATALOG_VARLEN\" is now a bit\n> misleading, because the field immediately below is effectively not\n> varlena. Maybe make it\n> #ifdef CATALOG_VARLEN\t\t\t/* nullable/varlena fields start here */\n\ndone\n\n> In RemoveAttributeById, a comment says\n> \"Clear the other variable-length fields.\"\n> but this is no longer fully correct. Again maybe make it \"... the other\n> nullable or variable-length fields\".\n\ndone\n\n> In get_attstattarget() I think we should return 0 for dropped columns\n> without reading attstattarget, which is useless anyway, and if it did\n> happen to return non-null, it might cause us to do stuff, which would be\n> a waste.\n\nI ended up deciding to get rid of get_attstattarget() altogether and \njust do the fetching inline in examine_attribute(). Because the \nprevious API and what you are discussing here is over-designed, since \nthe only caller doesn't call it with dropped columns or system columns \nanyway. This way these issues are contained in the ANALYZE code, not in \na very general place like lsyscache.c.\n\n> It's annoying that the new code in index_concurrently_swap() is more\n> verbose than the code being replaced, but it seems OK to me, since it\n> allows us to distinguish a null value in attstattarget from actual 0\n> without complicating the get_attstattarget API (which I think we would\n> have to do if we wanted to use it here.)\n\nYeah, this was annoying. 
Originally, I had it even more complicated, \nbecause I was trying to check if the actual (non-null) values are the \nsame. But then I realized the new value is never set at this point. I \nthink what the code is actually about is clearer now. And of course the \n0003 patch gets rid of it anyway.", "msg_date": "Thu, 11 Jan 2024 11:22:37 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 2024-Jan-11, Peter Eisentraut wrote:\n\n> On 10.01.24 14:16, Alvaro Herrera wrote:\n\n> > Seems reasonable. Do we really need a catversion bump for this?\n> \n> Yes, this changes the order of the fields in pg_attribute.\n\nAh, right.\n\n> > In get_attstattarget() I think we should return 0 for dropped columns\n> > without reading attstattarget, which is useless anyway, and if it did\n> > happen to return non-null, it might cause us to do stuff, which would be\n> > a waste.\n> \n> I ended up deciding to get rid of get_attstattarget() altogether and just do\n> the fetching inline in examine_attribute(). Because the previous API and\n> what you are discussing here is over-designed, since the only caller doesn't\n> call it with dropped columns or system columns anyway. This way these\n> issues are contained in the ANALYZE code, not in a very general place like\n> lsyscache.c.\n\nSounds good.\n\nMaybe instead of having examine_attribute hand a -1 target to the\nanalyze functions, we could just put default_statistics_target there.\nAnalyze functions would never receive negative values, and we could\nremove that from the analyze functions. Maybe make\nVacAttrStats->attstattarget unsigned while at it. (This could be a\nseparate patch.)\n\n\n> > It's annoying that the new code in index_concurrently_swap() is more\n> > verbose than the code being replaced, but it seems OK to me, since it\n> > allows us to distinguish a null value in attstattarget from actual 0\n> > without complicating the get_attstattarget API (which I think we would\n> > have to do if we wanted to use it here.)\n> \n> Yeah, this was annoying. Originally, I had it even more complicated,\n> because I was trying to check if the actual (non-null) values are the same.\n> But then I realized the new value is never set at this point. 
I think what\n> the code is actually about is clearer now.\n\nYeah, it's neat and the comment is clear enough.\n\n> And of course the 0003 patch gets rid of it anyway.\n\nI again didn't look at 0002 and 0003 very closely, but from 10,000 feet\nit looks mostly reasonable -- but I think naming the struct\nFormData_pg_attribute_extra is not a great idea, as it looks like there\nwould have to be a catalog named pg_attribute_extra -- and I don't think\nI would make the \"non-Data\" pointer-to-struct typedef either.\n\n\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 12 Jan 2024 12:16:37 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "BTW I wanted to but didn't say so explicitly, so here goes: 0001 looks\nready to go in.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Fri, 12 Jan 2024 12:27:12 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 12.01.24 12:16, Alvaro Herrera wrote:\n>>> In get_attstattarget() I think we should return 0 for dropped columns\n>>> without reading attstattarget, which is useless anyway, and if it did\n>>> happen to return non-null, it might cause us to do stuff, which would be\n>>> a waste.\n>>\n>> I ended up deciding to get rid of get_attstattarget() altogether and just do\n>> the fetching inline in examine_attribute(). Because the previous API and\n>> what you are discussing here is over-designed, since the only caller doesn't\n>> call it with dropped columns or system columns anyway. This way these\n>> issues are contained in the ANALYZE code, not in a very general place like\n>> lsyscache.c.\n> \n> Sounds good.\n\nI have committed this first patch.\n\n> Maybe instead of having examine_attribute hand a -1 target to the\n> analyze functions, we could just put default_statistics_target there.\n> Analyze functions would never receive negative values, and we could\n> remove that from the analyze functions. Maybe make\n> VacAttrStats->attstattarget unsigned while at it. (This could be a\n> separate patch.)\n\nBut I now remembered why I didn't do this. The extended statistics code \nneeds to know whether the statistics target was set or left as default, \nbecause it will then apply its own sequence of logic to determine a \nfinal value. (Maybe there is a way to untangle this further, but it's \nnot as obvious as it seems.)\n\nAt which point I then realized that extended statistics have their own \nstatistics target catalog field and command, and we really should change \nthat to match the changes done to attstattarget. So here is another \npatch that does all that again for stxstattarget. It's meant to mirror \nthe attstattarget changes exactly.\n\n>> And of course the 0003 patch gets rid of it anyway.\n> \n> I again didn't look at 0002 and 0003 very closely, but from 10,000 feet\n> it looks mostly reasonable -- but I think naming the struct\n> FormData_pg_attribute_extra is not a great idea, as it looks like there\n> would have to be a catalog named pg_attribute_extra -- and I don't think\n> I would make the \"non-Data\" pointer-to-struct typedef either.\n\nI agree that this naming was problematic. 
After some introverted \nbikeshedding, I changed it to FormExtraData_pg_attribute. Obviously, \nother solutions are possible. I also removed the typedef as you suggested.", "msg_date": "Mon, 15 Jan 2024 16:54:27 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "Hi Peter,\n\nI took a look at this patch today. I think it makes sense to do this,\nand I haven't found any major issues with any of the three patches. A\ncouple minor comments:\n\n0001\n----\n\n1) I think this bit in ALTER STATISTICS docs is wrong:\n\n- <term><replaceable class=\"parameter\">new_target</replaceable></term>\n+ <term><literal>SET STATISTICS { <replaceable\nclass=\"parameter\">integer</replaceable> | DEFAULT }</literal></term>\n\nbecause it means we now have list entries for name, ..., new_name,\nnew_schema, and then suddenly \"SET STATISTICS { integer | DEFAULT }\".\nThat's a bit weird.\n\n\n2) The newtarget handling in AlterStatistics seems rather confusing. Why\ndoes it get set to -1 just to ignore the value later? For a while I was\n99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\nto -1. Maybe ditching the first if block and directly checking\nstmt->stxstattarget before setting repl_val/repl_null would be better?\n\n\n3) I find it a bit tedious that making the stxstattarget field nullable\nmeans we now have to translate NULL to -1 in so many places. I know why\nit's that way, but it's ... not very convenient :-(\n\n\n0002\n----\n\n1) I think InsertPgAttributeTuples comment probably needs to document\nwhat the new tupdesc_extra parameter does.\n\n\n0003\n----\nno comment, seems fine\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 Mar 2024 22:34:54 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 06.03.24 22:34, Tomas Vondra wrote:\n> 0001\n> ----\n> \n> 1) I think this bit in ALTER STATISTICS docs is wrong:\n> \n> - <term><replaceable class=\"parameter\">new_target</replaceable></term>\n> + <term><literal>SET STATISTICS { <replaceable\n> class=\"parameter\">integer</replaceable> | DEFAULT }</literal></term>\n> \n> because it means we now have list entries for name, ..., new_name,\n> new_schema, and then suddenly \"SET STATISTICS { integer | DEFAULT }\".\n> That's a bit weird.\n\nOk, how would you change it? List out the full clauses of the other \nvariants under Parameters as well?\n\nWe have similar inconsistencies on other ALTER reference pages, so I'm \nnot sure what the preferred approach is.\n\n> 2) The newtarget handling in AlterStatistics seems rather confusing. Why\n> does it get set to -1 just to ignore the value later? For a while I was\n> 99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\n> to -1. Maybe ditching the first if block and directly checking\n> stmt->stxstattarget before setting repl_val/repl_null would be better?\n\nBut we also need to continue accepting -1 for default on input. The \ncurrent code achieves that, the proposed variant would not.\n\nNote that this patch matches the equivalent patch for attstattarget \n(4f622503d6d), which uses the same logic. 
We could change it if we have \na better idea, but then we should change both.\n\n> 0002\n> ----\n> \n> 1) I think InsertPgAttributeTuples comment probably needs to document\n> what the new tupdesc_extra parameter does.\n\nYes, I'll update that comment.\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 13:47:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "\n\nOn 3/12/24 13:47, Peter Eisentraut wrote:\n> On 06.03.24 22:34, Tomas Vondra wrote:\n>> 0001\n>> ----\n>>\n>> 1) I think this bit in ALTER STATISTICS docs is wrong:\n>>\n>> -      <term><replaceable\n>> class=\"parameter\">new_target</replaceable></term>\n>> +      <term><literal>SET STATISTICS { <replaceable\n>> class=\"parameter\">integer</replaceable> | DEFAULT }</literal></term>\n>>\n>> because it means we now have list entries for name, ..., new_name,\n>> new_schema, and then suddenly \"SET STATISTICS { integer | DEFAULT }\".\n>> That's a bit weird.\n> \n> Ok, how would you change it?  List out the full clauses of the other\n> variants under Parameters as well?\n\nI'd go with a parameter, essentially exactly as it used to be, except\nfor adding the DEFAULT option. So the list would define new_target, and\nmention DEFAULT as a special value.\n\n> We have similar inconsistencies on other ALTER reference pages, so I'm\n> not sure what the preferred approach is.\n> \n\nYeah, the other reference pages may have some inconsistencies too, but\nlet's not add more.\n\n>> 2) The newtarget handling in AlterStatistics seems rather confusing. Why\n>> does it get set to -1 just to ignore the value later? For a while I was\n>> 99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\n>> to -1. Maybe ditching the first if block and directly checking\n>> stmt->stxstattarget before setting repl_val/repl_null would be better?\n> \n> But we also need to continue accepting -1 for default on input.  The\n> current code achieves that, the proposed variant would not.\n> \n\nOK, I did not realize that. But then maybe this should be explained in a\ncomment before the new \"if\" block, because people won't realize why it\nneeds to be this way.\n\n> Note that this patch matches the equivalent patch for attstattarget\n> (4f622503d6d), which uses the same logic.  
We could change it if we have\n> a better idea, but then we should change both.\n> \n>> 0002\n>> ----\n>>\n>> 1) I think InsertPgAttributeTuples comment probably needs to document\n>> what the new tupdesc_extra parameter does.\n> \n> Yes, I'll update that comment.\n> \n\nOK.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 Mar 2024 14:32:11 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 12.03.24 14:32, Tomas Vondra wrote:\n> On 3/12/24 13:47, Peter Eisentraut wrote:\n>> On 06.03.24 22:34, Tomas Vondra wrote:\n>>> 0001\n>>> ----\n>>>\n>>> 1) I think this bit in ALTER STATISTICS docs is wrong:\n>>>\n>>> -      <term><replaceable\n>>> class=\"parameter\">new_target</replaceable></term>\n>>> +      <term><literal>SET STATISTICS { <replaceable\n>>> class=\"parameter\">integer</replaceable> | DEFAULT }</literal></term>\n>>>\n>>> because it means we now have list entries for name, ..., new_name,\n>>> new_schema, and then suddenly \"SET STATISTICS { integer | DEFAULT }\".\n>>> That's a bit weird.\n>>\n>> Ok, how would you change it?  List out the full clauses of the other\n>> variants under Parameters as well?\n> \n> I'd go with a parameter, essentially exactly as it used to be, except\n> for adding the DEFAULT option. So the list would define new_target, and\n> mention DEFAULT as a special value.\n\nOk, done that way (I think).\n\n>>> 2) The newtarget handling in AlterStatistics seems rather confusing. Why\n>>> does it get set to -1 just to ignore the value later? For a while I was\n>>> 99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\n>>> to -1. Maybe ditching the first if block and directly checking\n>>> stmt->stxstattarget before setting repl_val/repl_null would be better?\n>>\n>> But we also need to continue accepting -1 for default on input.  The\n>> current code achieves that, the proposed variant would not.\n> \n> OK, I did not realize that. But then maybe this should be explained in a\n> comment before the new \"if\" block, because people won't realize why it\n> needs to be this way.\n\nIn the new version, I tried to write this more explicitly, and updated \ntablecmds.c to match.", "msg_date": "Thu, 14 Mar 2024 11:13:41 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "\n\nOn 3/14/24 11:13, Peter Eisentraut wrote:\n> On 12.03.24 14:32, Tomas Vondra wrote:\n>> On 3/12/24 13:47, Peter Eisentraut wrote:\n>>> On 06.03.24 22:34, Tomas Vondra wrote:\n>>>> 0001\n>>>> ----\n>>>>\n>>>> 1) I think this bit in ALTER STATISTICS docs is wrong:\n>>>>\n>>>> -      <term><replaceable\n>>>> class=\"parameter\">new_target</replaceable></term>\n>>>> +      <term><literal>SET STATISTICS { <replaceable\n>>>> class=\"parameter\">integer</replaceable> | DEFAULT }</literal></term>\n>>>>\n>>>> because it means we now have list entries for name, ..., new_name,\n>>>> new_schema, and then suddenly \"SET STATISTICS { integer | DEFAULT }\".\n>>>> That's a bit weird.\n>>>\n>>> Ok, how would you change it?  List out the full clauses of the other\n>>> variants under Parameters as well?\n>>\n>> I'd go with a parameter, essentially exactly as it used to be, except\n>> for adding the DEFAULT option. 
So the list would define new_target, and\n>> mention DEFAULT as a special value.\n> \n> Ok, done that way (I think).\n> \n\nSeems OK to me.\n\n>>>> 2) The newtarget handling in AlterStatistics seems rather confusing.\n>>>> Why\n>>>> does it get set to -1 just to ignore the value later? For a while I was\n>>>> 99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\n>>>> to -1. Maybe ditching the first if block and directly checking\n>>>> stmt->stxstattarget before setting repl_val/repl_null would be better?\n>>>\n>>> But we also need to continue accepting -1 for default on input.  The\n>>> current code achieves that, the proposed variant would not.\n>>\n>> OK, I did not realize that. But then maybe this should be explained in a\n>> comment before the new \"if\" block, because people won't realize why it\n>> needs to be this way.\n> \n> In the new version, I tried to write this more explicitly, and updated\n> tablecmds.c to match.\n\nWFM. It still seems a bit hard to read, but I don't know how to do it\nbetter. I guess it's how it has to be to deal with multiple default\nvalues in a backwards-compatible way. Good thing is it's localized in\ntwo places.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:46:00 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On 14.03.24 15:46, Tomas Vondra wrote:\n>>>>> 2) The newtarget handling in AlterStatistics seems rather confusing.\n>>>>> Why\n>>>>> does it get set to -1 just to ignore the value later? For a while I was\n>>>>> 99% sure ALTER STATISTICS ... SET STATISTICS DEFAULT will set the field\n>>>>> to -1. Maybe ditching the first if block and directly checking\n>>>>> stmt->stxstattarget before setting repl_val/repl_null would be better?\n>>>>\n>>>> But we also need to continue accepting -1 for default on input.  The\n>>>> current code achieves that, the proposed variant would not.\n>>>\n>>> OK, I did not realize that. But then maybe this should be explained in a\n>>> comment before the new \"if\" block, because people won't realize why it\n>>> needs to be this way.\n>>\n>> In the new version, I tried to write this more explicitly, and updated\n>> tablecmds.c to match.\n> \n> WFM. It still seems a bit hard to read, but I don't know how to do it\n> better. I guess it's how it has to be to deal with multiple default\n> values in a backwards-compatible way. Good thing is it's localized in\n> two places.\n\nI have committed this patch series. Thanks.\n\n\n\n", "msg_date": "Sun, 17 Mar 2024 13:51:39 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make attstattarget nullable" }, { "msg_contents": "On Sun, Mar 17, 2024 at 01:51:39PM +0100, Peter Eisentraut wrote:\n> I have committed this patch series. Thanks.\n\nMy compiler is worried that \"newtarget\" might be getting used\nuninitialized. AFAICT there's no actual risk here, so I think initializing\nit to 0 is sufficient. I'll commit the attached patch shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 17 Mar 2024 14:29:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make attstattarget nullable" } ]
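For readers following the code shape rather than the patch files: the reading side of the change boils down to fetching attstattarget as a possibly-null attribute and mapping null back to the internal -1 ("use default_statistics_target") convention inside the ANALYZE code. A simplified sketch only, not the committed code; the helper name is invented, and DatumGetInt16() assumes the smallint representation the column ended up with:

    #include "postgres.h"
    #include "access/htup.h"
    #include "catalog/pg_attribute.h"
    #include "utils/syscache.h"

    static int
    attstattarget_or_default(HeapTuple atttuple)
    {
        Datum   dat;
        bool    isnull;

        dat = SysCacheGetAttr(ATTNUM, atttuple,
                              Anum_pg_attribute_attstattarget, &isnull);

        /*
         * A null attstattarget now means "not set"; keep returning -1 so
         * callers can fall back to default_statistics_target as before.
         */
        return isnull ? -1 : DatumGetInt16(dat);
    }
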
[ { "msg_contents": "Hi,\n\nI've encountered the following segfault:\n\n#0: 0x0000000104e821a8 postgres`list_head(l=0x7f7f7f7f7f7f7f7f) at\npg_list.h:130:17\n#1: 0x0000000104e81c9c postgres`PreCommit_Notify at async.c:932:16\n#2: 0x0000000104dd02f8 postgres`CommitTransaction at xact.c:2236:2\n#3: 0x0000000104dcfc24 postgres`CommitTransactionCommand at xact.c:3061:4\n#4: 0x000000010528a880 postgres`finish_xact_command at postgres.c:2777:3\n#5: 0x00000001052883ac postgres`exec_simple_query(query_string=\"notify\ntest;\") at postgres.c:1298:4\n\nThis happens when a transaction block fails and a ProcessUtility hook\nsends a notification during the rollback command.\n\nWhen a transaction block fails, it will enter in a TBLOCK_ABORT state,\nwaiting for a rollback. Calling rollback will switch to a\nTBLOCK_ABORT_END state and will only go through CleanupTransaction.\nIf a hook sends a notification during the rollback command, a\nnotification will be queued but its content will be wiped when the\nTopTransactionContext is destroyed.\nTrying to send a notification immediately after will segfault in\nPreCommit_Notify as pendingNotifies->events will be invalid.\n\nThere's a test_notify_rollback test module attached to the patch that reproduces\nthe issue.\n\nMoving notification clean up from AbortTransaction to CleanupTransaction fixes\nthe issue as it will clear pendingActions in the same function that destroys the\nTopTransactionContext.\n\nRegards,\nAnthonin", "msg_date": "Tue, 5 Dec 2023 18:33:38 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Possible segfault when sending notification within a ProcessUtility\n hook" }, { "msg_contents": "Anthonin Bonnefoy <[email protected]> writes:\n> This happens when a transaction block fails and a ProcessUtility hook\n> sends a notification during the rollback command.\n\nWhy should we regard that as anything other than a bug in the\nProcessUtility hook? A failed transaction should not send any\nnotifies.\n\n> Moving notification clean up from AbortTransaction to CleanupTransaction fixes\n> the issue as it will clear pendingActions in the same function that destroys the\n> TopTransactionContext.\n\nMaybe that would be okay, or maybe not, but I'm disinclined to\nmess with it without a better argument for changing it. It seems\nlike there would still be room to break things with mistimed\ncalls to async.c functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Dec 2023 15:03:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible segfault when sending notification within a\n ProcessUtility hook" }, { "msg_contents": "> On Tue, Dec 5, 2023 at 9:03 PM Tom Lane <[email protected]> wrote:\n> Why should we regard that as anything other than a bug in the\n> ProcessUtility hook? A failed transaction should not send any\n> notifies.\n\nFair point. That was also my initial assumption but I thought that the\ntransaction\nstate was not available from a hook as I've missed\nIsAbortedTransactionBlockState.\n\nI will rely on IsAbortedTransactionBlockState to avoid this case,\nthanks for the input.\n\nRegards,\nAnthonin.\n\n\n", "msg_date": "Wed, 6 Dec 2023 14:20:37 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible segfault when sending notification within a\n ProcessUtility hook" } ]
[ { "msg_contents": "Hi,\n\nPlease find the attached patch for $subject and associated test. Please review.\n\n--\nThanks and Regards,\nKrishnakumar (KK).\n[Microsoft]", "msg_date": "Tue, 5 Dec 2023 09:36:52 -0800", "msg_from": "Krishnakumar R <[email protected]>", "msg_from_op": true, "msg_subject": "Add checks in pg_rewind to abort if backup_label file is present" }, { "msg_contents": "On 05/12/2023 19:36, Krishnakumar R wrote:\n> Hi,\n> \n> Please find the attached patch for $subject and associated test. Please review.\n\nThanks for picking up this long-standing TODO!\n\n> +/*\n> + * Check if a file is present using the connection to the\n> + * database.\n> + */\n> +static bool\n> +libpq_is_file_present(rewind_source *source, const char *path)\n> +{\n> +\tPGconn\t *conn = ((libpq_source *) source)->conn;\n> +\tPGresult *res;\n> +\tconst char *paramValues[1];\n> +\n> +\tparamValues[0] = path;\n> +\tres = PQexecParams(conn, \"SELECT pg_stat_file($1)\",\n> +\t\t\t\t\t 1, NULL, paramValues, NULL, NULL, 1);\n> +\tif (PQresultStatus(res) != PGRES_TUPLES_OK)\n> +\t\treturn false;\n> +\n> +\treturn true;\n> +}\n\nThe backup_label file cannot be present when the server is running. No \nneed to check for that when connected to a live server.\n\n> --- a/src/bin/pg_rewind/pg_rewind.c\n> +++ b/src/bin/pg_rewind/pg_rewind.c\n> @@ -729,7 +729,11 @@ perform_rewind(filemap_t *filemap, rewind_source *source,\n> static void\n> sanityChecks(void)\n> {\n> -\t/* TODO Check that there's no backup_label in either cluster */\n> +\tif (source->is_file_present(source, \"backup_label\"))\n> +\t\tpg_fatal(\"The backup_label file is present in source cluster\");\n> +\n> +\tif (is_file_present(datadir_target, \"backup_label\"))\n> +\t\tpg_fatal(\"The backup_label file is present in target cluster\");\n> \n> \t/* Check system_identifier match */\n> \tif (ControlFile_target.system_identifier != ControlFile_source.system_identifier)\n\nThe error message isn't very user friendly. It's pretty dangerous \nactually: I think a lot of users would just delete the backup_label file \nwhen they see that message. Because then the file is no longer present \nand problem solved, right?\n\nThe point of checking for backup_label is that if it's present, the \ncluster wasn't really shut down cleanly. The correct fix is to start it, \nlet WAL recovery finish, and shut it down again. The error message \nshould make that clear. Perhaps make it similar to the existing \"target \nserver must be shut down cleanly\" message.\n\nI think today if you try to run pg_rewind on a cluster that was restored \nfrom a backup, so that backup_label is present, you get the \"target \nserver must be shut down cleanly\" message. But we don't have any tests \nfor it. We do have a test for when the server is still running, but not \nfor the restored-from-backup case. Would be nice to add one.\n\nPerhaps move the backup_label check later in sanityChecks(), after \nchecking the state in the control file. That way, you still normally hit \nthe \"target server must be shut down cleanly\" case, and the backup_label \ncheck would be just an additional \"can't happen\" sanity check.\n\nIn createBackupLabel() we have this:\n\n> \t/* TODO: move old file out of the way, if any. */\n> \topen_target_file(\"backup_label\", true); /* BACKUP_LABEL_FILE */\n> \twrite_target_range(buf, 0, len);\n> \tclose_target_file();\n\nThat TODO comment needs to go away now. And we probably should complain \nif the file already exists. 
With your patch, we already checked earlier \nthat it doesn't exist, so if it exists when we reach that code, \nsomething's gone wrong.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 22:14:23 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add checks in pg_rewind to abort if backup_label file is present" } ]
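Condensing the review into code form: the backup_label test would sit at the end of sanityChecks(), after the control-file state checks, and the message would point at the real remedy instead of tempting the user to delete the file. A rough sketch only — it reuses the is_file_present() helper from the proposed patch, and the wording is invented:

    /* at the end of sanityChecks(), after the control file checks */
    if (is_file_present(datadir_target, "backup_label"))
        pg_fatal("a backup_label file is present in the target cluster, "
                 "meaning recovery from a backup has not finished; "
                 "start the target server, let WAL recovery complete, "
                 "and shut it down cleanly before running pg_rewind");
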
[ { "msg_contents": "The file is only referenced in Meson and MSVC scripts from what I can \ntell, and the Meson reference is protected by a Windows check.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 05 Dec 2023 12:37:44 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Remove WIN32 conditional compilation from win32common.c" }, { "msg_contents": "On 05/12/2023 20:37, Tristan Partin wrote:\n> The file is only referenced in Meson and MSVC scripts from what I can\n> tell, and the Meson reference is protected by a Windows check.\n\nThere are a bunch of files like win32common.c:\n\n$ ls src/port/win32*.c\nsrc/port/win32common.c\nsrc/port/win32dlopen.c\nsrc/port/win32env.c\nsrc/port/win32error.c\nsrc/port/win32fdatasync.c\nsrc/port/win32fseek.c\nsrc/port/win32gai_strerror.c\nsrc/port/win32getrusage.c\nsrc/port/win32gettimeofday.c\nsrc/port/win32link.c\nsrc/port/win32ntdll.c\nsrc/port/win32pread.c\nsrc/port/win32pwrite.c\nsrc/port/win32security.c\nsrc/port/win32setlocale.c\nsrc/port/win32stat.c\n\nOf these, win32stat.c and win32fseek.c also contain \"#ifdef WIN32\", but \nothers don't. So I concur that the most common pattern in these files is \nto not use #ifdef WIN32, and +1 for making them consistent.\n\nI removed those from win32stat.c and win32fseek.c, too, and committed. \nThanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 12 Feb 2024 11:58:24 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove WIN32 conditional compilation from win32common.c" } ]
[ { "msg_contents": "CTYPE, which handles character classification and upper/lowercasing\nbehavior, may be simpler than it first appears. We may be able to get\na net decrease in complexity by just building in most (or perhaps all)\nof the functionality.\n\nUnicode offers relatively simple rules for CTYPE-like functionality\nbased on data files. There are a few exceptions and a few options,\nwhich I'll address below.\n\n(In contrast, collation varies a lot from locale to locale, and has a\nlot more options and nuance than ctype.)\n\n=== Proposal ===\n\nParse some Unicode data files into static lookup tables in .h files\n(similar to what we already do for normalization) and provide\nfunctions to perform the right lookups according to Unicode\nrecommentations[1][2]. Then expose the functionality as either a\nspecially-named locale for the libc provider, or as part of the\nbuilt-in collation provider which I previously proposed[3]. (Provided\npatches don't expose the functionality yet; I'm looking for feedback\nfirst.)\n\nUsing libc or ICU for a CTYPE provider would still be supported, but\nas I explain below, there's not nearly as much reason to do so as you\nmight expect. As far as I can tell, using an external provider for\nCTYPE functionality is mostly unnecessary complexity and magic.\n\nThere's still plenty of reason to use the plain \"C\" semantics, if\ndesired, but those semantics are already built-in.\n\n=== Benefits ===\n\n * platform-independent ctype semantics based on Unicode, not tied to\n any dependency's implementation\n * ability to combine fast memcmp() collation with rich ctype\n semantics\n * user-visible semantics can be documented and tested\n * stability within a PG major version\n * transparency of changes: tables would be checked in to .h files,\n so whoever runs the \"update-unicode\" build target would see if\n there are unexpected or impactful changes that should be addressed\n in the release notes\n * the built-in tables themselves can be tested exhaustively by\n comparing with ICU so we can detect trivial parsing errors and the\n like\n\n=== Character Classification ===\n\nCharacter classification is used for regexes, e.g. whether a character\nis a member of the \"[[:digit:]]\" (\"\\d\") or \"[[:punct:]]\"\nclass. Unicode defines what character properties map into these\nclasses in TR #18 [1], specifying both a \"Standard\" variant and a\n\"POSIX Compatible\" variant. The main difference with the POSIX variant\nis that symbols count as punctuation.\n\nCharacter classification in Unicode does not vary from locale to\nlocale. The same character is considered to be a member of the same\nclasses regardless of locale (in other words, there's no\n\"tailoring\"). There is no strong compatibility guarantee around the\nclassification of characters, but it doesn't seem to change much in\npractice (I could collect more data here if it matters).\n\nIn glibc, character classification is not affected by the locale as\nfar as I can tell -- all non-\"C\" locales behave like \"C.UTF-8\"\n(perhaps other libc implementations or versions or custom locales\nbehave differently -- corrections welcome). 
There are some differences\nbetween \"C.UTF-8\" and what Unicode seems to recommend, and I'm not\nentirely sure why those differences exist or whether those differences\nare important for anything other than compatibility.\n\nNote: ICU offers character classification based on Unicode standards,\ntoo, but the fact that it's an external dependency makes it a\ndifficult-to-test black box that is not tied to a PG major\nversion. Also, we currently don't use the APIs that Unicode\nrecommends; so in Postgres today, ICU-based character classification\nis further from Unicode than glibc character classification.\n\n=== LOWER()/INITCAP()/UPPER() ===\n\nThe LOWER() and UPPER() functions are defined in the SQL spec with\nsurprising detail, relying on specific Unicode General Category\nassignments. How to map characters seems to be left (implicitly) up to\nUnicode. If the input string is normalized, the output string must be\nnormalized, too. Weirdly, there's no room in the SQL spec to localize\nLOWER()/UPPER() at all to handle issues like [1]. Also, the standard\nspecifies one example, which is that \"ß\" becomes \"SS\" when folded to\nupper case. INITCAP() is not in the SQL spec.\n\nIn Unicode, lowercasing and uppercasing behavior is a mapping[2], and\nalso backed by a strong compatibility guarantee that \"case pairs\" will\nalways remain case pairs[4]. The mapping may be \"simple\"\n(context-insensitive, locale-insensitive, not adding any code points),\nor \"full\" (may be context-sensitive, locale-sensitive, and one code\npoint may turn into 1-3 code points).\n\nTitlecasing (INITCAP() in Postgres) in Unicode is similar to\nupper/lowercasing, except that it has the additional complexity of\nfinding word boundaries, which have a non-trivial definition. To\nsimplify, we'd either use the Postgres definition (alphanumeric) or\nthe \"word\" character class specified in [1]. If someone wants more\nsophisticated word segmentation they could use ICU.\n\nWhile \"full\" case mapping sounds more complex, there are actually very\nfew cases to consider and they are covered in another (small) data\nfile. That data file covers ~100 code points that convert to multiple\ncode points when the case changes (e.g. \"ß\" -> \"SS\"), 7 code points\nthat have context-sensitive mappings, and three locales which have\nspecial conversions (\"lt\", \"tr\", and \"az\") for a few code points.\n\nICU can do the simple case mapping (u_tolower(), etc.) or full mapping\n(u_strToLower(), etc.). I see one difference in ICU that I can't yet\nexplain for the full titlecase mapping of a singular \\+000345.\n\nglibc in UTF8 (at least in my tests) just does the simple upper/lower\ncase mapping, extended with simple mappings for the locales with\nspecial conversions (which I think are exactly the same 3 locales\nmentioned above). libc doesn't do titlecase. If the resuling character\nisn't representable in the server encoding, I think libc just maps the\ncharacter to itself, though I should test this assumption.\n\n=== Encodings ===\n\nIt's easiest to implement these rules in UTF8, but possible for any\nencoding where we can decode to a Unicode code point.\n\n=== Patches ===\n\n0001 & 0002 are just cleanup. I intend to commit them unless someone\nhas a comment.\n\n0003 implements character classification (\"Standard\" and \"POSIX\nCompatible\" variants) but doesn't actually use them for anything.\n\n0004 implements \"simple\" case mapping, and a partial implementation of\n\"full\" case mapping. 
Again, does not use them yet.\n\n=== Questions ===\n\n* Is a built-in ctype provider a reasonable direction for Postgres as\n a project?\n* Does it feel like it would be simpler or more complex than what\n we're doing now?\n* Do we want to just try to improve our ICU support instead?\n* Do we want the built-in provider to be one thing, or have a few\n options (e.g. \"standard\" or \"posix\" character classification;\n \"simple\" or \"full\" case mapping)?\n\n\nRegards,\n\tJeff Davis\n\n\n[1] http://www.unicode.org/reports/tr18/#Compatibility_Properties\n[2] https://www.unicode.org/versions/Unicode15.0.0/ch03.pdf#G33992\n[3]\nhttps://www.postgresql.org/message-id/flat/[email protected]\n[4] https://www.unicode.org/policies/stability_policy.html#Case_Pair\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Tue, 05 Dec 2023 15:46:06 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Built-in CTYPE provider" }, { "msg_contents": "On 12/5/23 3:46 PM, Jeff Davis wrote:\n> === Character Classification ===\n> \n> Character classification is used for regexes, e.g. whether a character\n> is a member of the \"[[:digit:]]\" (\"\\d\") or \"[[:punct:]]\"\n> class. Unicode defines what character properties map into these\n> classes in TR #18 [1], specifying both a \"Standard\" variant and a\n> \"POSIX Compatible\" variant. The main difference with the POSIX variant\n> is that symbols count as punctuation.\n> \n> === LOWER()/INITCAP()/UPPER() ===\n> \n> The LOWER() and UPPER() functions are defined in the SQL spec with\n> surprising detail, relying on specific Unicode General Category\n> assignments. How to map characters seems to be left (implicitly) up to\n> Unicode. If the input string is normalized, the output string must be\n> normalized, too. Weirdly, there's no room in the SQL spec to localize\n> LOWER()/UPPER() at all to handle issues like [1]. Also, the standard\n> specifies one example, which is that \"ß\" becomes \"SS\" when folded to\n> upper case. INITCAP() is not in the SQL spec.\n> \n> === Questions ===\n> \n> * Is a built-in ctype provider a reasonable direction for Postgres as\n> a project?\n> * Does it feel like it would be simpler or more complex than what\n> we're doing now?\n> * Do we want to just try to improve our ICU support instead?\n> * Do we want the built-in provider to be one thing, or have a few\n> options (e.g. \"standard\" or \"posix\" character classification;\n> \"simple\" or \"full\" case mapping)?\n\n\nGenerally, I am in favor of this - I think we need to move in the\ndirection of having an in-database option around unicode for PG users,\ngiven how easy it is for administrators to mis-manage dependencies.\nEspecially when OS admins can be different from DB admins, and when\nnobody really understands risks of changing libs with in-place moves to\nnew operating systems - except for like 4 of us on the mailing lists.\n\nMy biggest concern is around maintenance. Every year Unicode is\nassigning new characters to existing code points, and those existing\ncode points can of course already be stored in old databases before libs\nare updated. When users start to notice that regex [[:digit:]] or\nupper/lower functions aren't working correctly with characters in their\nDB, they'll probably come asking for fixes. 
And we may end up with\nsomething like the timezone database where we need to periodically add a\nmore current ruleset - albeit alongside as a new version in this case.\n\nHere are direct links to charts of newly assigned characters from the\nlast few Unicode updates:\n\n2022: https://www.unicode.org/charts/PDF/Unicode-15.0/\n2021: https://www.unicode.org/charts/PDF/Unicode-14.0/\n2020: https://www.unicode.org/charts/PDF/Unicode-13.0/\n2019: https://www.unicode.org/charts/PDF/Unicode-12.0/\n\nIf I'm reading the Unicode 15 update correctly, PostgreSQL regex\nexpressions with [[:digit:]] will not correctly identify Kaktovik or Nag\nMundari or Kawi digits without that update to character type specs.\n\nIf I'm reading the Unicode 12 update correctly, then upper/lower\nfunctions aren't going to work correctly on Latin Glottal A and I and U\ncharacters without that update to character type specs.\n\nOverall I see a lot fewer Unicode updates involving upper/lower than I\ndo with digits - especially since new scripts often involve their own\nnumbering characters which makes new digits more common.\n\nBut lets remember that people like to build indexes on character\nclassification functions like upper/lower, for case insensitive\nsearching. It's another case where the index will be corrupted if\nsomeone happened to store Latin Glottal vowels in their database and\nthen we update libs to the latest character type rules.\n\nSo even with something as basic as character type, if we're going to do\nit right, we still need to either version it or definitively decide that\nwe're not going to every support newly added Unicode characters like\nLatin Glottals.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Tue, 12 Dec 2023 13:14:23 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2023-12-12 at 13:14 -0800, Jeremy Schneider wrote:\n> My biggest concern is around maintenance. Every year Unicode is\n> assigning new characters to existing code points, and those existing\n> code points can of course already be stored in old databases before\n> libs\n> are updated.\n\nIs the concern only about unassigned code points?\n\nI already committed a function \"unicode_assigned()\" to test whether a\nstring contains only assigned code points, which can be used in a\nCHECK() constraint. I also posted[5] an idea about a per-database\noption that could reject the storage of any unassigned code point,\nwhich would make it easier for users highly concerned about\ncompatibility.\n\n> And we may end up with\n> something like the timezone database where we need to periodically\n> add a\n> more current ruleset - albeit alongside as a new version in this\n> case.\n\nThere's a build target \"update-unicode\" which is run to pull in new\nUnicode data files and parse them into static C arrays (we already do\nthis for the Unicode normalization tables). So I agree that the tables\nshould be updated but I don't understand why that's a problem.\n\n> If I'm reading the Unicode 15 update correctly, PostgreSQL regex\n> expressions with [[:digit:]] will not correctly identify Kaktovik or\n> Nag\n> Mundari or Kawi digits without that update to character type specs.\n\nYeah, if we are behind in the Unicode version, then results won't be\nthe most up-to-date. 
But ICU or libc could also be behind in the\nUnicode version.\n\n> But lets remember that people like to build indexes on character\n> classification functions like upper/lower, for case insensitive\n> searching.\n\nUPPER()/LOWER() are based on case mapping, not character\nclassification.\n\nI intend to introduce a SQL-level CASEFOLD() function that would obey\nUnicode casefolding rules, which have very strong compatibility\nguarantees[6] (essentially, if you are only using assigned code points,\nyou are fine).\n\n> It's another case where the index will be corrupted if\n> someone happened to store Latin Glottal vowels in their database and\n> then we update libs to the latest character type rules.\n\nI don't agree with this characterization at all.\n\n (a) It's not \"another case\". Corruption of an index on LOWER() can\nhappen today. My proposal makes the situation better, not worse.\n (b) These aren't libraries, I am proposing built-in Unicode tables\nthat only get updated in a new major PG version.\n (c) It likely only affects a small number of indexes and it's easier\nfor an administrator to guess which ones might be affected, making it\neasier to just rebuild those indexes.\n (d) It's not a problem if you stick to assigned code points.\n\n> So even with something as basic as character type, if we're going to\n> do\n> it right, we still need to either version it or definitively decide\n> that\n> we're not going to every support newly added Unicode characters like\n> Latin Glottals.\n\nIf, by \"version it\", you mean \"update the data tables in new Postgres\nversions\", then I agree. If you mean that one PG version would need to\nsupport many versions of Unicode, I don't agree.\n\nRegards,\n\tJeff Davis\n\n[5]\nhttps://postgr.es/m/[email protected]\n\n[6] https://www.unicode.org/policies/stability_policy.html#Case_Folding\n\n\n\n", "msg_date": "Wed, 13 Dec 2023 05:28:30 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tJeff Davis wrote:\n\n> While \"full\" case mapping sounds more complex, there are actually\n> very few cases to consider and they are covered in another (small)\n> data file. That data file covers ~100 code points that convert to\n> multiple code points when the case changes (e.g. \"ß\" -> \"SS\"), 7\n> code points that have context-sensitive mappings, and three locales\n> which have special conversions (\"lt\", \"tr\", and \"az\") for a few code\n> points.\n\nBut there are CLDR mappings on top of that.\n\nAccording to the Unicode FAQ\n\n https://unicode.org/faq/casemap_charprop.html#5\n\n Q: Does the default case mapping work for every language? What\n about the default case folding?\n\n [...]\n\n To make case mapping language sensitive, the Unicode Standard\n specificially allows implementations to tailor the mappings for\n each language, but does not provide the necessary data. The file\n SpecialCasing.txt is included in the Standard as a guide to a few\n of the more important individual character mappings needed for\n specific languages, notably the Greek script and the Turkic\n languages. 
However, for most language-specific mappings and\n tailoring, users should refer to CLDR and other resources.\n\nIn particular \"el\" (modern greek) has case mapping rules that\nICU seems to implement, but \"el\" is missing from the list\n(\"lt\", \"tr\", and \"az\") you identified.\n\nThe CLDR case mappings seem to be found in\nhttps://github.com/unicode-org/cldr/tree/main/common/transforms\nin *-Lower.xml and *-Upper.xml\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 13 Dec 2023 16:34:15 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-13 at 16:34 +0100, Daniel Verite wrote:\n> But there are CLDR mappings on top of that.\n\nI see, thank you.\n\nWould it still be called \"full\" case mapping to only use the mappings\nin SpecialCasing.txt? And would that be useful?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 13 Dec 2023 09:12:25 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-13 at 16:34 +0100, Daniel Verite wrote:\n> In particular \"el\" (modern greek) has case mapping rules that\n> ICU seems to implement, but \"el\" is missing from the list\n> (\"lt\", \"tr\", and \"az\") you identified.\n\nI compared with glibc el_GR.UTF-8 and el_CY.UTF-8 locales, and the\nctype semantics match C.UTF-8 for all code points. glibc is not doing\nthis additional tailoring for \"el\".\n\nTherefore I believe the builtin CTYPE would be very useful for case\nmapping (both \"simple\" and \"full\") even without this additional\ntailoring.\n\nYou are correct that ICU will still have some features that won't be\nsupported by the builtin provider. Better word boundary semantics in\nINITCAP() are another advantage.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 14 Dec 2023 06:01:59 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/13/23 5:28 AM, Jeff Davis wrote:\n> On Tue, 2023-12-12 at 13:14 -0800, Jeremy Schneider wrote:\n>> My biggest concern is around maintenance. Every year Unicode is\n>> assigning new characters to existing code points, and those existing\n>> code points can of course already be stored in old databases before\n>> libs\n>> are updated.\n> \n> Is the concern only about unassigned code points?\n> \n> I already committed a function \"unicode_assigned()\" to test whether a\n> string contains only assigned code points, which can be used in a\n> CHECK() constraint. I also posted[5] an idea about a per-database\n> option that could reject the storage of any unassigned code point,\n> which would make it easier for users highly concerned about\n> compatibility.\n\nI didn't know about this. Did a few smoke tests against today's head on\ngit and it's nice to see the function working as expected. 
:)\n\n\ntest=# select unicode_version();\n unicode_version\n-----------------\n 15.1\n\ntest=# select chr(3212),unicode_assigned(chr(3212));\n chr | unicode_assigned\n-----+------------------\n ಌ | t\n\n-- unassigned code point inside assigned block\ntest=# select chr(3213),unicode_assigned(chr(3213));\n chr | unicode_assigned\n-----+------------------\n ಍ | f\n\ntest=# select chr(3214),unicode_assigned(chr(3214));\n chr | unicode_assigned\n-----+------------------\n ಎ | t\n\n-- unassigned block\ntest=# select chr(67024),unicode_assigned(chr(67024));\n chr | unicode_assigned\n-----+------------------\n 𐗐 | f\n\ntest=# select chr(67072),unicode_assigned(chr(67072));\n chr | unicode_assigned\n-----+------------------\n 𐘀 | t\n\nLooking closer, patches 3 and 4 look like an incremental extension of\nthis earlier idea; the perl scripts download data from unicode.org and\nwe've specifically defined Unicode version 15.1 and the scripts turn the\ndata tables inside-out into C data structures optimized for lookup. That\nC code is then checked in to the PostgreSQL source code files\nunicode_category.h and unicode_case_table.h - right?\n\nAm I reading correctly that these two patches add C functions\npg_u_prop_* and pg_u_is* (patch 3) and unicode_*case (patch 4) but we\ndon't yet reference these functions anywhere? So this is just getting\nsome plumbing in place?\n\n\n\n>> And we may end up with\n>> something like the timezone database where we need to periodically\n>> add a\n>> more current ruleset - albeit alongside as a new version in this\n>> case.\n> \n> There's a build target \"update-unicode\" which is run to pull in new\n> Unicode data files and parse them into static C arrays (we already do\n> this for the Unicode normalization tables). So I agree that the tables\n> should be updated but I don't understand why that's a problem.\n\nI don't want to get stuck on this. I agree with the general approach of\nbeginning to add a provider for locale functions inside the database. We\nhave a while before Unicode 16 comes out. Plenty of time for bikeshedding.\n\nMy prediction is that updating this built-in provider eventually won't\nbe any different from ICU or glibc. It depends a bit on how we\nspecifically build on this plumbing - but when Unicode 16 comes out,\nI'll try to come up with a simple repro on a default DB config where\nchanging the Unicode version causes corruption (it was pretty easy to\ndemonstrate for ICU collation, if you knew where to look)... but I don't\nthink that discussion should derail this commit, because for now we're\njust starting the process of getting Unicode 15.1 into the PostgreSQL\ncode base. We can cross the \"update\" bridge when we come to it.\n\nLater on down the road, from a user perspective, I think we should be\ncareful about confusion where providers are used inconsistently. It's\nnot great if one function follows built-in Unicode 15.1 rules but another\nfunction uses Unicode 13 rules because it happened to call an ICU\nfunction or a glibc function. We could easily end up with multiple\nproviders processing different parts of a single SQL statement, which\ncould lead to strange results in some cases.\n\nIdeally a user just specifies a default provider for their database, and the\nrules for that version of Unicode are used as consistently as possible -\nunless a user explicitly overrides their choice in a table/column\ndefinition, query, etc. 
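\n\nSomething like this is what I have in mind (the collation names are only\nexamples and assume an ICU build):\n\n  -- the database default provider applies unless overridden...\n  CREATE TABLE t (name text COLLATE \"en-US-x-icu\");  -- ...per column\n  SELECT upper(name COLLATE \"C\") FROM t;             -- ...or per expression\n\n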
But it might take a little time and work to get\nto this point.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Fri, 15 Dec 2023 16:30:39 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2023-12-15 at 16:30 -0800, Jeremy Schneider wrote:\n> Looking closer, patches 3 and 4 look like an incremental extension of\n> this earlier idea;\n\nYes, it's essentially the same thing extended to a few more files. I\ndon't know if \"incremental\" is the right word though; this is a\nsubstantial extension of the idea.\n\n> the perl scripts download data from unicode.org and\n> we've specifically defined Unicode version 15.1 and the scripts turn\n> the\n> data tables inside-out into C data structures optimized for lookup.\n> That\n> C code is then checked in to the PostgreSQL source code files\n> unicode_category.h and unicode_case_table.h - right?\n\nYes. The standard build process shouldn't be downloading files, so the\nstatic tables are checked in. Also, seeing the diffs of the static\ntables improves the visibility of changes in case there's some mistake\nor big surprise.\n\n> Am I reading correctly that these two patches add C functions\n> pg_u_prop_* and pg_u_is* (patch 3) and unicode_*case (patch 4) but we\n> don't yet reference these functions anywhere? So this is just getting\n> some plumbing in place?\n\nCorrect. Perhaps I should combine these into the builtin provider\nthread, but these are independently testable and reviewable.\n\n> > \n> My prediction is that updating this built-in provider eventually\n> won't\n> be any different from ICU or glibc.\n\nThe built-in provider will have several advantages because it's tied to\na PG major version:\n\n * A physical replica can't have different semantics than the primary.\n * Easier to document and test.\n * Changes are more transparent and can be documented in the release\nnotes, so that administrators can understand the risks and blast radius\nat pg_upgrade time.\n\n> Later on down the road, from a user perspective, I think we should be\n> careful about confusion where providers are used inconsistently. It's\n> not great if one function follow built-in Unicode 15.1 rules but\n> another\n> function uses Unicode 13 rules because it happened to call an ICU\n> function or a glibc function. We could easily end up with multiple\n> providers processing different parts of a single SQL statement, which\n> could lead to strange results in some cases.\n\nThe whole concept of \"providers\" is that they aren't consistent with\neach other. ICU, libc, and the builtin provider will all be based on\ndifferent versions of Unicode. That's by design.\n\nThe built-in provider will be a bit better in the sense that it's\nconsistent with the normalization functions, and the other providers\naren't.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Mon, 18 Dec 2023 11:45:46 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, Dec 18, 2023 at 2:46 PM Jeff Davis <[email protected]> wrote:\n> The whole concept of \"providers\" is that they aren't consistent with\n> each other. ICU, libc, and the builtin provider will all be based on\n> different versions of Unicode. 
That's by design.\n>\n> The built-in provider will be a bit better in the sense that it's\n> consistent with the normalization functions, and the other providers\n> aren't.\n\nFWIW, the idea that we're going to develop a built-in provider seems\nto be solid, for the reasons Jeff mentions: it can be stable, and\nunder our control. But it seems like we might need built-in providers\nfor everything rather than just CTYPE to get those advantages, and I\nfear we'll get sucked into needing a lot of tailoring rather than just\nbeing able to get by with one \"vanilla\" implementation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 15:59:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2023-12-19 at 15:59 -0500, Robert Haas wrote:\n> FWIW, the idea that we're going to develop a built-in provider seems\n> to be solid, for the reasons Jeff mentions: it can be stable, and\n> under our control. But it seems like we might need built-in providers\n> for everything rather than just CTYPE to get those advantages, and I\n> fear we'll get sucked into needing a lot of tailoring rather than\n> just\n> being able to get by with one \"vanilla\" implementation.\n\nFor the database default collation, I suspect a lot of users would jump\nat the chance to have \"vanilla\" semantics. Tailoring is more important\nfor individual collation objects than for the database-level collation.\n\nThere are reasons you might select a tailored database collation, like\nif the set of users accessing it are mostly from a single locale, or if\nthe application connected to the database is expecting it in a certain\nform.\n\nBut there are a lot of users for whom neither of those things are true,\nand it makes zero sense to order all of the text indexes in the\ndatabase according to any one particular locale. I think these users\nwould prioritize stability and performance for the database collation,\nand then use COLLATE clauses with ICU collations where necessary.\n\nThe question for me is how good the \"vanilla\" semantics need to be to\nbe useful as a database-level collation. Most of the performance and\nstability problems come from collation, so it makes sense to me to\nprovide a fast and stable memcmp collation paired with richer ctype\nsemantics (as proposed here). Users who want something more probably\nwant the Unicode \"root\" collation, which can be provided by ICU today.\n\nI am also still concerned that we have the wrong defaults. Almost\nnobody thinks libc is a great provider, but that's the default, and\nthere were problems trying to change that default to ICU in 16. If we\nhad a builtin provider, that might be a better basis for a default\n(safe, fast, always available, and documentable). Then, at least if\nsomeone picks a different locale at initdb time, they would be doing so\nintentionally, rather than implicitly accepting index corruption risks\nbased on an environment variable.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 19 Dec 2023 16:18:01 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tJeff Davis wrote:\n\n\n> But there are a lot of users for whom neither of those things are true,\n> and it makes zero sense to order all of the text indexes in the\n> database according to any one particular locale. 
I think these users\n> would prioritize stability and performance for the database collation,\n> and then use COLLATE clauses with ICU collations where necessary.\n\n+1\n\n> I am also still concerned that we have the wrong defaults. Almost\n> nobody thinks libc is a great provider, but that's the default, and\n> there were problems trying to change that default to ICU in 16. If we\n> had a builtin provider, that might be a better basis for a default\n> (safe, fast, always available, and documentable). Then, at least if\n> someone picks a different locale at initdb time, they would be doing so\n> intentionally, rather than implicitly accepting index corruption risks\n> based on an environment variable.\n\nYes. The introduction of the bytewise-sorting, locale-agnostic\nC.UTF-8 in glibc is also a step in the direction of providing better\ndefaults for apps like Postgres, that need both long-term stability\nin sorts and Unicode coverage for ctype-dependent functions.\n\nBut C.UTF-8 is not available everywhere, and there's still the\nproblem that Unicode updates through libc are not aligned\nwith Postgres releases.\n\nICU has the advantage of cross-OS compatibility,\nbut it does not provide any collation with bytewise sorting\nlike C or C.UTF-8, and we don't allow a combination like\n\"C\" for sorting and ICU for ctype operations. When opting\nfor a locale provider, it has to be for both sorting\nand ctype, so an installation that needs cross-OS\ncompatibility, good Unicode support and long-term stability\nof indexes cannot get that with ICU as we expose it\ntoday.\n\nIf the Postgres default was bytewise sorting+locale-agnostic\nctype functions directly derived from Unicode data files,\nas opposed to libc/$LANG at initdb time, the main\nannoyance would be that \"ORDER BY textcol\" would no\nlonger be the human-favored sort.\nFor the presentation layer, we would have to write for instance\n ORDER BY textcol COLLATE \"unicode\" for the root collation\nor a specific region-country if needed.\nBut all the rest seems better, especially cross-OS compatibity,\ntruly immutable and faster indexes for fields that\ndon't require linguistic ordering, alignment between Unicode\nupdates and Postgres updates.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 20 Dec 2023 13:49:20 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-20 at 13:49 +0100, Daniel Verite wrote:\n> If the Postgres default was bytewise sorting+locale-agnostic\n> ctype functions directly derived from Unicode data files,\n> as opposed to libc/$LANG at initdb time, the main\n> annoyance would be that \"ORDER BY textcol\" would no\n> longer be the human-favored sort.\n> For the presentation layer, we would have to write for instance\n>  ORDER BY textcol COLLATE \"unicode\" for the root collation\n> or a specific region-country if needed.\n> But all the rest seems better, especially cross-OS compatibity,\n> truly immutable and faster indexes for fields that\n> don't require linguistic ordering, alignment between Unicode\n> updates and Postgres updates.\n\nThank you, that summarizes exactly the compromise that I'm trying to\nreach.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 20 Dec 2023 11:13:12 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Dec 20, 
2023 at 2:13 PM Jeff Davis <[email protected]> wrote:\n> On Wed, 2023-12-20 at 13:49 +0100, Daniel Verite wrote:\n> > If the Postgres default was bytewise sorting+locale-agnostic\n> > ctype functions directly derived from Unicode data files,\n> > as opposed to libc/$LANG at initdb time, the main\n> > annoyance would be that \"ORDER BY textcol\" would no\n> > longer be the human-favored sort.\n> > For the presentation layer, we would have to write for instance\n> > ORDER BY textcol COLLATE \"unicode\" for the root collation\n> > or a specific region-country if needed.\n> > But all the rest seems better, especially cross-OS compatibity,\n> > truly immutable and faster indexes for fields that\n> > don't require linguistic ordering, alignment between Unicode\n> > updates and Postgres updates.\n>\n> Thank you, that summarizes exactly the compromise that I'm trying to\n> reach.\n\nThis makes sense to me, too, but it feels like it might work out\nbetter for speakers of English than for speakers of other languages.\nRight now, I tend to get databases that default to en_US.utf8, and if\nthe default changed to C.utf8, then the case-comparison behavior might\nbe different but the letters would still sort in the right order. For\nsomeone who is currently defaulting to es_ES.utf8 or fr_FR.utf8, a\nchange to C.utf8 would be a much bigger problem, I would think. Their\nalphabet isn't in code point order, and so things would be\nalphabetized wrongly. That might be OK if they don't care about\nordering for any purpose other than equality lookups, but otherwise\nit's going to force them to change the default, where today they don't\nhave to do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 14:24:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-20 at 14:24 -0500, Robert Haas wrote:\n> This makes sense to me, too, but it feels like it might work out\n> better for speakers of English than for speakers of other languages.\n\nThere's very little in the way of locale-specific tailoring for ctype\nbehaviors in ICU or glibc -- only for the 'az', 'el', 'lt', and 'tr'\nlocales. While English speakers like us may benefit from being aligned\nwith the default ctype behaviors, those behaviors are not at all\nspecific to 'en' locales in ICU or glibc.\n\nCollation varies a lot more between locales. I wouldn't call memcmp\nideal for English ('Zebra' comes before 'apple', which seems wrong to\nme). If memcmp sorting does favor any particular group, I would say it\nfavors programmers more than English speakers. 
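\n\nThe 'Zebra'/'apple' point, concretely (assuming the usual \"C\" and\n\"en-US-x-icu\" collations are present):\n\n  SELECT 'Zebra' < 'apple' COLLATE \"C\";            -- true: 'Z' is 0x5A, 'a' is 0x61\n  SELECT 'Zebra' < 'apple' COLLATE \"en-US-x-icu\";  -- false under linguistic rules\n\n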
But that could just be\nmy perspective and I certainly understand the point that memcmp\nordering is more tolerable for some languages than others.\n\n> Right now, I tend to get databases that default to en_US.utf8, and if\n> the default changed to C.utf8, then the case-comparison behavior\n> might\n> be different\n\nen_US.UTF-8 and C.UTF-8 have the same ctype behavior.\n\n> For\n> someone who is currently defaulting to es_ES.utf8 or fr_FR.utf8, a\n> change to C.utf8 would be a much bigger problem, I would think.\n\nThose locales all have the same ctype behavior.\n\nIt turns out that that en_US.UTF-8 and fr_FR.UTF-8 also have the same\ncollation order -- no tailoring beyond root collation according to CLDR\nfiles for 'en' and 'fr' (though note that 'fr_CA' does have tailoring).\nThat doesn't mean the experience of switching to memcmp order is\nexactly the same for a French speaker and an English speaker, but I\nthink it's interesting.\n\n> That might be OK if they don't care about\n> ordering for any purpose other than equality lookups, but otherwise\n> it's going to force them to change the default, where today they\n> don't\n> have to do that.\n\nTo be clear, I haven't proposed changing the initdb default. This\nthread is about adding a builtin provider with builtin ctype, which I\nbelieve a lot of users would like.\n\nIt also might be the best chance we have to get to a reasonable default\nbehavior at some point in the future. It would be always available,\nfast, stable, better semantics than \"C\" for many locales, and we can\ndocument it. In any case, we don't need to decide that now. If the\nbuiltin provider is useful, we should do it.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 20 Dec 2023 14:57:16 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/5/23 3:46 PM, Jeff Davis wrote:\n> CTYPE, which handles character classification and upper/lowercasing\n> behavior, may be simpler than it first appears. We may be able to get\n> a net decrease in complexity by just building in most (or perhaps all)\n> of the functionality.\n> \n> === Character Classification ===\n> \n> Character classification is used for regexes, e.g. whether a character\n> is a member of the \"[[:digit:]]\" (\"\\d\") or \"[[:punct:]]\"\n> class. Unicode defines what character properties map into these\n> classes in TR #18 [1], specifying both a \"Standard\" variant and a\n> \"POSIX Compatible\" variant. The main difference with the POSIX variant\n> is that symbols count as punctuation.\n> \n> === LOWER()/INITCAP()/UPPER() ===\n> \n> The LOWER() and UPPER() functions are defined in the SQL spec with\n> surprising detail, relying on specific Unicode General Category\n> assignments. How to map characters seems to be left (implicitly) up to\n> Unicode. If the input string is normalized, the output string must be\n> normalized, too. Weirdly, there's no room in the SQL spec to localize\n> LOWER()/UPPER() at all to handle issues like [1]. Also, the standard\n> specifies one example, which is that \"ß\" becomes \"SS\" when folded to\n> upper case. INITCAP() is not in the SQL spec.\n\nI'll be honest, even though this is primarily about CTYPE and not\ncollation, I still need to keep re-reading the initial email slowly to\nlet it sink in and better understand it... at least for me, it's complex\nto reason through. 
🙂\n\nI'm trying to make sure I understand clearly what the user impact/change\nis that we're talking about: after a little bit of brainstorming and\nlooking through the PG docs, I'm actually not seeing much more than\nthese two things you've mentioned here: the set of regexp_* functions PG\nprovides, and these three generic functions. That alone doesn't seem\nhighly concerning.\n\nI haven't checked the source code for the regexp_* functions yet, but\nare these just passing through to an external library? Are we actually\nable to easily change the CTYPE provider for them? If nobody\nknows/replies then I'll find some time to look.\n\nOne other thing that comes to mind: how does the parser do case folding\nfor relation names? Is that using OS-provided libc as of today? Or did\nwe code it to use ICU if that's the DB default? I'm guessing libc, and\nglobal catalogs probably need to be handled in a consistent manner, even\nacross different encodings.\n\n(Kindof related... did you ever see the demo where I create a user named\n'🏃' and then I try to connect to a database with non-unicode encoding?\n💥😜 ...at least it seems to be able to walk the index without decoding\nstrings to find other users - but the way these global catalogs work\nscares me a little bit)\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Wed, 20 Dec 2023 15:47:51 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/20/23 3:47 PM, Jeremy Schneider wrote:\n> On 12/5/23 3:46 PM, Jeff Davis wrote:\n>> CTYPE, which handles character classification and upper/lowercasing\n>> behavior, may be simpler than it first appears. We may be able to get\n>> a net decrease in complexity by just building in most (or perhaps all)\n>> of the functionality.\n> \n> I'll be honest, even though this is primarily about CTYPE and not\n> collation, I still need to keep re-reading the initial email slowly to\n> let it sink in and better understand it... at least for me, it's complex\n> to reason through. 🙂\n> \n> I'm trying to make sure I understand clearly what the user impact/change\n> is that we're talking about: after a little bit of brainstorming and\n> looking through the PG docs, I'm actually not seeing much more than\n> these two things you've mentioned here: the set of regexp_* functions PG\n> provides, and these three generic functions. That alone doesn't seem\n> highly concerning.\n\nI missed citext, which extends impact to replace(), split_part(),\nstrpos() and translate(). There are also the five *_REGEX() functions\nfrom the SQL standard which I assume are just calling the PG functions.\n\nI just saw the krb_caseins_users GUC, which reminds me that PLs also\nhave their own case functions. And of course extensions. I'm not saying\nany of this is in scope for the change here, but I'm just trying to wrap\nmy brain around all the places we've got CTYPE processing happening, to\nbetter understand the big picture. 
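\n\nFor example, all of these fold case through different code paths (citext\nneeds the extension, and the results depend on which ctype rules each path\nends up using):\n\n  CREATE EXTENSION IF NOT EXISTS citext;\n  SELECT 'Ångström' ILIKE 'ångström';              -- pattern-matching path\n  SELECT lower('Ångström') = 'ångström';           -- upper/lower path\n  SELECT 'Ångström'::citext = 'ångström'::citext;  -- citext path\n\n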
It might help tease out unexpected\nsmall glitches from changing one thing but not another one.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Wed, 20 Dec 2023 16:04:39 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/20/23 4:04 PM, Jeremy Schneider wrote:\n> On 12/20/23 3:47 PM, Jeremy Schneider wrote:\n>> On 12/5/23 3:46 PM, Jeff Davis wrote:\n>>> CTYPE, which handles character classification and upper/lowercasing\n>>> behavior, may be simpler than it first appears. We may be able to get\n>>> a net decrease in complexity by just building in most (or perhaps all)\n>>> of the functionality.\n>>\n>> I'll be honest, even though this is primarily about CTYPE and not\n>> collation, I still need to keep re-reading the initial email slowly to\n>> let it sink in and better understand it... at least for me, it's complex\n>> to reason through. 🙂\n>>\n>> I'm trying to make sure I understand clearly what the user impact/change\n>> is that we're talking about: after a little bit of brainstorming and\n>> looking through the PG docs, I'm actually not seeing much more than\n>> these two things you've mentioned here: the set of regexp_* functions PG\n>> provides, and these three generic functions. That alone doesn't seem\n>> highly concerning.\n> \n> I missed citext, which extends impact to replace(), split_part(),\n> strpos() and translate(). There are also the five *_REGEX() functions\n> from the SQL standard which I assume are just calling the PG functions.\n\nfound some more. here's my running list of everything user-facing I see\nin core PG code so far that might involve case:\n\n* upper/lower/initcap\n* regexp_*() and *_REGEXP()\n* ILIKE, operators ~* !~* ~~ !~~ ~~* !~~*\n* citext + replace(), split_part(), strpos() and translate()\n* full text search - everything is case folded\n* unaccent? not clear to me whether CTYPE includes accent folding\n* ltree\n* pg_trgm\n* core PG parser, case folding of relation names\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Wed, 20 Dec 2023 16:29:02 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Dec 20, 2023 at 5:57 PM Jeff Davis <[email protected]> wrote:\n> Those locales all have the same ctype behavior.\n\nSigh. I keep getting confused about how that works...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 19:38:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-20 at 16:29 -0800, Jeremy Schneider wrote:\n> found some more. here's my running list of everything user-facing I\n> see\n> in core PG code so far that might involve case:\n> \n> * upper/lower/initcap\n> * regexp_*() and *_REGEXP()\n> * ILIKE, operators ~* !~* ~~ !~~ ~~* !~~*\n> * citext + replace(), split_part(), strpos() and translate()\n> * full text search - everything is case folded\n> * unaccent? not clear to me whether CTYPE includes accent folding\n\nNo, ctype has nothing to do with accents as far as I can tell. 
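\n\nA quick way to see the difference (assuming the unaccent extension is\ninstalled and a ctype that handles non-ASCII):\n\n  SELECT lower('É');     -- 'é', case mapping, which is ctype behavior\n  SELECT unaccent('É');  -- 'E', accent removal, which comes from unaccent's rules file\n\n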
I don't\nknow if I'm using the right terminology, but I think \"case\" is a\nvariant of a character whereas \"accent\" is a modifier/mark, and the\nmark is a separate concept from the character itself.\n\n> * ltree\n> * pg_trgm\n> * core PG parser, case folding of relation names\n\nLet's separate it into groups.\n\n(1) Callers that use a collation OID or pg_locale_t:\n\n * collation & hashing\n * upper/lower/initcap\n * regex, LIKE, formatting\n * pg_trgm (which uses regexes)\n * maybe postgres_fdw, but might just be a passthrough\n * catalog cache (always uses DEFAULT_COLLATION_OID)\n * citext (always uses DEFAULT_COLLATION_OID, but probably shouldn't)\n\n(2) A long tail of callers that depend on what LC_CTYPE/LC_COLLATE are\nset to, or use ad-hoc ASCII-only semantics:\n\n * core SQL parser downcase_identifier()\n * callers of pg_strcasecmp() (DDL, etc.)\n * GUC name case folding\n * full text search (\"mylocale = 0 /* TODO */\")\n * a ton of stuff uses isspace(), isdigit(), etc.\n * various callers of tolower()/toupper()\n * some selfuncs.c stuff\n * ...\n\nMight have missed some places.\n\nThe user impact of a new builtin provider would affect (1), but only\nfor those actually using the provider. So there's no compatibility risk\nthere, but it's good to understand what it will affect.\n\nWe can, on a case-by-case basis, also consider using the new APIs I'm\nproposing for instances of (2). There would be some compatibility risk\nthere for existing callers, and we'd have to consider whether it's\nworth it or not. Ideally, new callers would either use the new APIs or\nuse the pg_ascii_* APIs.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 14:24:01 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-20 at 15:47 -0800, Jeremy Schneider wrote:\n\n> One other thing that comes to mind: how does the parser do case\n> folding\n> for relation names? Is that using OS-provided libc as of today? Or\n> did\n> we code it to use ICU if that's the DB default? I'm guessing libc,\n> and\n> global catalogs probably need to be handled in a consistent manner,\n> even\n> across different encodings.\n\nThe code is in downcase_identifier():\n\n /* \n * SQL99 specifies Unicode-aware case normalization, which we don't \n * yet have the infrastructure for...\n */\n if (ch >= 'A' && ch <= 'Z')\n ch += 'a' - 'A';\n else if (enc_is_single_byte && IS_HIGHBIT_SET(ch) && isupper(ch))\n ch = tolower(ch);\n result[i] = (char) ch;\n\nMy proposal would add the infrastructure that the comment above says is\nmissing.\n\nIt seems like we should be using the database collation at this point\nbecause you don't want inconsistency between the catalogs and the\nparser here. Then again, the SQL spec doesn't seem to support tailoring\nof case conversions, so maybe we are avoiding it for that reason? Or\nmaybe we're avoiding catalog access? Or perhaps the work for ICU just\nwasn't done here yet?\n\n> (Kindof related... 
did you ever see the demo where I create a user\n> named\n> '🏃' and then I try to connect to a database with non-unicode\n> encoding?\n> 💥😜  ...at least it seems to be able to walk the index without\n> decoding\n> strings to find other users - but the way these global catalogs work\n> scares me a little bit)\n\nI didn't see that specific demo, but in general we seem to change\nbetween pg_wchar and unicode code points too freely, so I'm not\nsurprised that something went wrong.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 15:00:26 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tRobert Haas wrote:\n\n> For someone who is currently defaulting to es_ES.utf8 or fr_FR.utf8,\n> a change to C.utf8 would be a much bigger problem, I would\n> think. Their alphabet isn't in code point order, and so things would\n> be alphabetized wrongly.\n\n> That might be OK if they don't care about ordering for any purpose\n> other than equality lookups, but otherwise it's going to force them\n> to change the default, where today they don't have to do that.\n\nSure, in whatever collation setup we expose, we need to keep\nit possible and even easy to sort properly with linguistic rules.\n\nBut some reasons to use $LANG as the default locale/collation\nare no longer as good as they used to be, I think.\n\nStarting with v10/ICU we have many pre-created ICU locales with\nfixed names, and starting with v16, we can simply write \"ORDER BY\ntextfield COLLATE unicode\" which is good enough in most cases. So\nthe configuration \"bytewise sort by default\" / \"linguistic sort on-demand\"\nhas become more realistic.\n\nBy contrast in the pre-v10 days with only libc collations, an\napplication could have no idea which collations were going to be\navailable on the server, and how they were named precisely, as this\nvaries across OSes and across installs even with the same OS.\nOn Windows, I think that before v16 initdb did not create any libc\ncollation beyond C/POSIX and the default language/region of the OS.\n\nIn that libc context, if a db wants the C locale by default for\nperformance and truly immutable indexes, but the client app needs to\noccasionally do in-db linguistic sorts, the app needs to figure out\nwhich collation name will work for that. This is hard if you don't\ntarget a specific installation that guarantees that such or such\ncollation is going to be installed.\nWhereas if the linguistic locale is the default, the app never needs\nto know its name or anything about it. So it's done that way,\nlinguistic by default. But that leaves databases with many\nindexes sorted linguistically instead of bytewise for fields\nthat semantically never need any linguistic sort.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 22 Dec 2023 12:26:43 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-20 at 13:49 +0100, Daniel Verite wrote:\n> \n> But C.UTF-8 is not available everywhere, and there's still the\n> problem that Unicode updates through libc are not aligned\n> with Postgres releases.\n\nAttached is an implementation of a built-in provider for the \"C.UTF-8\"\nlocale. That way applications (and tests!) can count on C.UTF-8 always\nbeing available on any platform; and it also aligns with the Postgres\nUnicode updates. 
Documentation is sparse and the patch is a bit rough,\nbut feedback is welcome -- it does have some basic tests which can be\nused as a guide.\n\nThe C.UTF-8 locale, briefly, is a UTF-8 locale that provides simple\ncollation semantics (code point order) but rich ctype semantics\n(lower/upper/initcap and regexes). This locale is for users who want\nproper Unicode semantics for character operations (upper/lower,\nregexes), but don't need a specific natural-language string sort order\nto apply to all queries and indexes in their system. One might use it\nas the database default collation, and use COLLATE clauses (i.e.\nCOLLATE UNICODE) where more specific behavior is needed.\n\nThe builtin C.UTF-8 locale has the following advantages over using the\nlibc C.UTF-8 locale:\n\n * Collation performance: the builtin provider uses memcmp and\nabbreviated keys. In libc, these advantages are only available for the\nC locale.\n\n * Unicode version is aligned with other parts of Postgres, like\nnormalization.\n\n * Available on all platforms with exactly the same semantics.\n\n * Testable and documentable.\n\n * Avoids index corruption risks. In theory libc C.UTF-8 should also\nhave stable collation, but that is not 100% true. In the builtin\nprovider it is 100% stable.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 27 Dec 2023 17:26:35 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2023-12-27 at 17:26 -0800, Jeff Davis wrote:\n> Attached is an implementation of a built-in provider for the \"C.UTF-\n> 8\"\n\nAttached a more complete version that fixes a few bugs, stabilizes the\ntests, and improves the documentation. I optimized the performance, too\n-- now it's beating both libc's \"C.utf8\" and ICU \"en-US-x-icu\" for both\ncollation and case mapping (numbers below).\n\nIt's really nice to finally be able to have platform-independent tests\nthat work on any UTF-8 database.\n\nSimple character classification:\n\n SELECT 'Á' ~ '[[:alpha:]]' COLLATE C_UTF8;\n\nCase mapping is more interesting (note that accented characters are\nbeing properly mapped, and it's using the titlecase variant \"Dž\"):\n\n SELECT initcap('axxE áxxÉ DŽxxDŽ Džxxx džxxx' COLLATE C_UTF8);\n initcap \n --------------------------\n Axxe Áxxé Džxxdž Džxxx Džxxx\n\nEven more interesting -- test that non-latin characters can still be a\nmember of a case-insensitive range:\n\n -- capital delta is member of lowercase range gamma to lambda\n SELECT 'Δ' ~* '[γ-λ]' COLLATE C_UTF8;\n -- small delta is member of uppercase range gamma to lambda\n SELECT 'δ' ~* '[Γ-Λ]' COLLATE C_UTF8;\n\nMoreover, a lot of this behavior is locked in by strong Unicode\nguarantees like [1] and [2]. Behavior that can change probably won't\nchange very often, and in any case will be tied to a PG major version.\n\nAll of these behaviors are very close to what glibc \"C.utf8\" does on my\nmachine. The case transformations are identical (except titlecasing\nbecause libc doesn't support it). 
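\n\nFor example (the libc side assumes a platform whose \"C.utf8\" locale was\nimported as a collation):\n\n  SELECT lower('ΑΒΓ' COLLATE C_UTF8);     -- 'αβγ'\n  SELECT lower('ΑΒΓ' COLLATE \"C.utf8\");   -- 'αβγ', the same mapping from libc\n\n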
The character classifications have\nsome differences, which might be worth discussing, but I didn't see\nanything terribly concerning (I am following the unicode\nrecommendations[3] on this topic).\n\nPerformance:\n\n Sorting 10M strings:\n libc \"C\" 14s\n builtin C_UTF8 14s\n libc \"C.utf8\" 20s\n ICU \"en-US-x-icu\" 31s\n\n Running UPPER() on 10M strings:\n libc \"C\" 03s\n builtin C_UTF8 07s\n libc \"C.utf8\" 08s\n ICU \"en-US-x-icu\" 15s\n\nI didn't investigate or optimize regexes / pattern matching yet, but I\ncan do similar optimizations if there's any gap.\n\nNote that I implemented the \"simple\" case mapping (which is what glibc\ndoes) and the \"posix compatible\"[3] flavor of character classification\n(which is closer to what glibc does than the \"standard\" flavor). I\nopted to use title case mapping for initcap(), which is a difference\nfrom libc and I may go back to just upper/lower. These seem like\nreasonable choices if we're going to name the locale after C.UTF-8.\n\nRegards,\n\tJeff Davis\n\n[1] https://www.unicode.org/policies/stability_policy.html#Case_Pair\n[2] https://www.unicode.org/policies/stability_policy.html#Identity\n[3] http://www.unicode.org/reports/tr18/#Compatibility_Properties", "msg_date": "Thu, 28 Dec 2023 18:57:16 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/28/23 6:57 PM, Jeff Davis wrote:\n> \n> Attached a more complete version that fixes a few bugs, stabilizes the\n> tests, and improves the documentation. I optimized the performance, too\n> -- now it's beating both libc's \"C.utf8\" and ICU \"en-US-x-icu\" for both\n> collation and case mapping (numbers below).\n> \n> It's really nice to finally be able to have platform-independent tests\n> that work on any UTF-8 database.\n\nThanks for all your work on this, Jeff.\n\nI didn't know about the Unicode stability policy. Since it's formal\npolicy, I agree this provides some assumptions we can safely build on.\n\nI'm working my way through these patches but it's taking a little time\nfor me. I hadn't tracked with the \"builtin\" thread last summer so I'm\ncoming up to speed on that now too. I'm hopeful that something along\nthese lines gets into pg17. The pg17 cycle is going to start heating up\npretty soon.\n\nI agree with merging the threads, even though it makes for a larger\npatch set. It would be great to get a unified \"builtin\" provider in\nplace for the next major.\n\nI also still want to parse my way through your email reply about the two\ngroups of callers, and what this means for user experience.\n\nhttps://www.postgresql.org/message-id/7774b3a64f51b3375060c29871cf2b02b3e85dab.camel%40j-davis.com\n\n> Let's separate it into groups.\n> (1) Callers that use a collation OID or pg_locale_t:\n> (2) A long tail of callers that depend on what LC_CTYPE/LC_COLLATE are\n> set to, or use ad-hoc ASCII-only semantics:\n\nIn the first list it seems that some callers might be influenced by a\nCOLLATE clause or table definition while others always take the database\ndefault? It still seems a bit odd to me if different providers can be\nused for different parts of a single SQL. But it might not be so bad - I\nhaven't fully thought through it yet and I'm still kicking the tires on\nmy test build over here.\n\nIs there any reason we couldn't commit the minor cleanup (patch 0001)\nnow? 
It's less than 200 lines and pretty straightforward.\n\nI wonder if, after a year of running the builtin provider in production,\nwhether we might consider adding to the builtin provider a few locales\nwith simple but more reasonable ordering for european and asian\nlanguages? Maybe just grabbing a current version of DUCET with no\nintention of ever updating it? I don't know how bad sorting is with\nplain DUCET for things like french or spanish or german, but surely it's\nnot as unusable as code point order? Anyone who needs truly accurate or\nupdated or customized linguistic sorting can always go to ICU, and take\nresponsibility for their ICU upgrades, but something basic and static\nmight meet the needs of 99% of postgres users indefinitely.\n\nBy the way - my coworker Josh (who I don't think posts much on the\nhackers list here, but shares an unhealthy inability to look away from\ndatabase unicode disasters) passed along this link today which I think\nis a fantastic list of surprises about programming and strings\n(generally unicode).\n\nhttps://jeremyhussell.blogspot.com/2017/11/falsehoods-programmers-believe-about.html#main\n\nMake sure to click the link to show the counterexamples and discussion,\nthat's the best part.\n\n-Jeremy\n\n\nPS. I was joking around today that the the second best part is that it's\nproof that people named Jeremy are always brilliant within their field.\n😂 Josh said its just a subset of \"always trust people whose names start\nwith J\" which seems fair. Unfortunately I can't yet think of a way to\nshoehorn the rest of the amazing PG hackers on this thread into the joke.\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Mon, 8 Jan 2024 17:17:48 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12/28/23 6:57 PM, Jeff Davis wrote:\n> On Wed, 2023-12-27 at 17:26 -0800, Jeff Davis wrote:\n> Attached a more complete version that fixes a few bugs, stabilizes the\n> tests, and improves the documentation. I optimized the performance, too\n> -- now it's beating both libc's \"C.utf8\" and ICU \"en-US-x-icu\" for both\n> collation and case mapping (numbers below).\n> \n> It's really nice to finally be able to have platform-independent tests\n> that work on any UTF-8 database.\n\nI think we missed something in psql, pretty sure I applied all the\npatches but I see this error:\n\n=# \\l\nERROR: 42703: column d.datlocale does not exist\nLINE 8: d.datlocale as \"Locale\",\n ^\nHINT: Perhaps you meant to reference the column \"d.daticulocale\".\nLOCATION: errorMissingColumn, parse_relation.c:3720\n\n=====\n\nThis is interesting. Jeff your original email didn't explicitly show any\nother initcap() results, but on Ubuntu 22.04 (glibc 2.35) I see\ndifferent results:\n\n=# SELECT initcap('axxE áxxÉ DŽxxDŽ Džxxx džxxx');\n initcap\n--------------------------\n Axxe Áxxé DŽxxdž DŽxxx DŽxxx\n\n=# SELECT initcap('axxE áxxÉ DŽxxDŽ Džxxx džxxx' COLLATE C_UTF8);\n initcap\n--------------------------\n Axxe Áxxé Džxxdž Džxxx Džxxx\n\nThe COLLATE sql syntax feels awkward to me. In this example, we're just\nusing it to attach locale info to the string, and there's not actually\nany collation involved here. 
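\n\nFor instance (assuming an ICU build where initdb created \"tr-x-icu\"):\n\n  SELECT lower('I' COLLATE \"tr-x-icu\");  -- 'ı', Turkish dotless i\n  SELECT lower('I');                     -- 'i' under a typical database default\n\nNeither query sorts or compares anything; COLLATE is only carrying the locale\nfor the case mapping.\n\n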
Not sure if COLLATE comes from the\nstandard, and even if it does I'm not sure whether the standard had\nupper/lowercase in mind.\n\nThat said, I think the thing that mainly matters will be the CREATE\nDATABASE syntax and the database default.\n\nI want to try a few things with table-level defaults that differ from\ndatabase-level defaults, especially table-level ICU defaults because I\nthink a number of PostgreSQL users set that up in the years before we\nsupported DB-level ICU. Some people will probably keep using their\nold/existing schema-creation scripts even after they begin provisioning\nnew systems with new database-level defaults.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 14:17:44 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-01-09 at 14:17 -0800, Jeremy Schneider wrote:\n> I think we missed something in psql, pretty sure I applied all the\n> patches but I see this error:\n> \n> =# \\l\n> ERROR:  42703: column d.datlocale does not exist\n> LINE 8:   d.datlocale as \"Locale\",\n> \n\nThank you. I'll fix this in the next patch set.\n\n> This is interesting. Jeff your original email didn't explicitly show\n> any\n> other initcap() results, but on Ubuntu 22.04 (glibc 2.35) I see\n> different results:\n> \n> =# SELECT initcap('axxE áxxÉ DŽxxDŽ Džxxx džxxx');\n>          initcap\n> --------------------------\n>  Axxe Áxxé DŽxxdž DŽxxx DŽxxx\n> \n> =# SELECT initcap('axxE áxxÉ DŽxxDŽ Džxxx džxxx' COLLATE C_UTF8);\n>          initcap\n> --------------------------\n>  Axxe Áxxé Džxxdž Džxxx Džxxx\n\nThe reason for this difference is because libc doesn't support\ntitlecase. In the next patch set, I'll not use titlecase (only\nuppercase/lowercase even for initcap()), and then it will match libc\n100%.\n\n> The COLLATE sql syntax feels awkward to me. In this example, we're\n> just\n> using it to attach locale info to the string, and there's not\n> actually\n> any collation involved here. Not sure if COLLATE comes from the\n> standard, and even if it does I'm not sure whether the standard had\n> upper/lowercase in mind.\n\nThe standard doesn't use the COLLATE clause for case mapping, but also\ndoesn't offer any other mechanism to, e.g., get case mapping according\nto the \"tr_TR\" locale.\n\nI think what Postgres does here, re-purposing the COLLATE clause to\nallow tailoring of case mapping, is imperfect but reasonable. My\nproposal doesn't change that.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 09 Jan 2024 14:31:59 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-01-08 at 17:17 -0800, Jeremy Schneider wrote:\n\n> I agree with merging the threads, even though it makes for a larger\n> patch set. It would be great to get a unified \"builtin\" provider in\n> place for the next major.\n\nI believe that's possible and that this proposal is quite close (hoping\nto get something in this 'fest). The tables I'm introducing have\nexhaustive test coverage, so there's not a lot of risk there. And the\nbuiltin provider itself is an optional feature, so it won't be\ndisruptive.\n\n> \n> In the first list it seems that some callers might be influenced by a\n> COLLATE clause or table definition while others always take the\n> database\n> default? 
It still seems a bit odd to me if different providers can be\n> used for different parts of a single SQL.\n\nRight, that can happen today, and my proposal doesn't change that.\nBasically those are cases where the caller was never properly onboarded\nto our collation system, like the ts_locale.c routines.\n\n> Is there any reason we couldn't commit the minor cleanup (patch 0001)\n> now? It's less than 200 lines and pretty straightforward.\n\nSure, I'll commit that fairly soon then.\n\n> I wonder if, after a year of running the builtin provider in\n> production,\n> whether we might consider adding to the builtin provider a few\n> locales\n> with simple but more reasonable ordering for european and asian\n> languages?\n\nI won't rule that out completely, but there's a lot we would need to do\nto get there. Even assuming we implement that perfectly, we'd need to\nmake sure it's a reasonable scope for Postgres as a project and that we\nhave more than one person willing to maintain it. Similar things have\nbeen rejected before for similar reasons.\n\nWhat I'm proposing for v17 is much simpler: basically some lookup\ntables, which is just an extension of what we're already doing for\nnormalization.\n\n> https://jeremyhussell.blogspot.com/2017/11/falsehoods-programmers-believe-about.html#main\n> \n> Make sure to click the link to show the counterexamples and\n> discussion,\n> that's the best part.\n\nYes, it can be hard to reason about this stuff but I believe Unicode\nprovides a lot of good data and guidance to work from. If you think my\nproposal relies on one of those assumptions let me know.\n\nTo the extent that I do rely on any of those assumptions, it's mostly\nto match libc's \"C.UTF-8\" behavior.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 09 Jan 2024 14:55:44 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 1/9/24 2:31 PM, Jeff Davis wrote:\n> On Tue, 2024-01-09 at 14:17 -0800, Jeremy Schneider wrote:\n>> I think we missed something in psql, pretty sure I applied all the\n>> patches but I see this error:\n>>\n>> =# \\l\n>> ERROR:  42703: column d.datlocale does not exist\n>> LINE 8:   d.datlocale as \"Locale\",\n>>\n> \n> Thank you. I'll fix this in the next patch set.\n\nVery minor nitpick/request. Not directly with this patch set but with\nthe category_test which is related and also recently committed.\n\nI'm testing out \"make update-unicode\" scripts, and due to my system ICU\nversion being different from the PostgreSQL unicode version I get the\nexpected warnings from category_test:\n\nPostgres Unicode Version: 15.1\nICU Unicode Version: 14.0\nSkipped 5116 codepoints unassigned in ICU due to Unicode version mismatch.\ncategory_test: All tests successful!\n\nI know it's minor, but I saw the 5116 skipped codepoints and saw \"all\ntests succeeded\" but I thought the output might be a little nicer if we\nshowed the count of tests that succeeded. For example:\n\nPostgres Unicode Version: 15.1\nICU Unicode Version: 14.0\nSkipped 5116 codepoints unassigned in ICU due to Unicode version mismatch.\ncategory_test: All 1108996 tests successful!\n\nIt's a minor tweak to a script that I don't think even runs in the\nstandard build; any objections? 
Patch attached.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider", "msg_date": "Tue, 9 Jan 2024 23:35:07 -0800", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tJeff Davis wrote:\n\n> Attached a more complete version that fixes a few bugs\n\n[v15 patch]\n\nWhen selecting the builtin provider with initdb, I'm getting the\nfollowing setup:\n\n$ bin/initdb --locale=C.UTF-8 --locale-provider=builtin -D/tmp/pgdata\n \n The database cluster will be initialized with this locale configuration:\n default collation provider: builtin\n default collation locale:\t C.UTF-8\n LC_COLLATE: C.UTF-8\n LC_CTYPE:\t C.UTF-8\n LC_MESSAGES: C.UTF-8\n LC_MONETARY: C.UTF-8\n LC_NUMERIC: C.UTF-8\n LC_TIME:\t C.UTF-8\n The default database encoding has accordingly been set to \"UTF8\".\n The default text search configuration will be set to \"english\".\n\nThis is from an environment where LANG=fr_FR.UTF-8\n\nI would expect all LC_* variables to be fr_FR.UTF-8, and the default\ntext search configuration to be \"french\". It is what happens\nwhen selecting ICU as the provider in the same environment:\n\n$ bin/initdb --icu-locale=en --locale-provider=icu -D/tmp/pgdata\n\n Using language tag \"en\" for ICU locale \"en\".\n The database cluster will be initialized with this locale configuration:\n default collation provider: icu\n default collation locale:\t en\n LC_COLLATE: fr_FR.UTF-8\n LC_CTYPE:\t fr_FR.UTF-8\n LC_MESSAGES: fr_FR.UTF-8\n LC_MONETARY: fr_FR.UTF-8\n LC_NUMERIC: fr_FR.UTF-8\n LC_TIME:\t fr_FR.UTF-8\n The default database encoding has accordingly been set to \"UTF8\".\n The default text search configuration will be set to \"french\".\n\nThe collation setup does not influence the rest of the localization.\nThe problem AFAIU is that --locale has two distinct\nmeanings in the v15 patch:\n--locale-provider=X --locale=Y means use \"X\" as the provider\nwith \"Y\" as datlocale, and it means use \"Y\" as the locale for all\nlocalized libc functionalities.\n\nI wonder what would happen if invoking\n bin/initdb --locale=C.UTF-8 --locale-provider=builtin -D/tmp/pgdata\non a system where C.UTF-8 does not exist as a libc locale.\nWould it fail? (I don't have an OS like this to test ATM, will try later).\n\nA related comment is about naming the builtin locale C.UTF-8, the same\nname as in libc. On one hand this is semantically sound, but on the\nother hand, it's likely to confuse people. 
What about using completely\ndifferent names, like \"pg_unicode\" or something else prefixed by \"pg_\"\nboth for the locale name and the collation name (currently\nC.UTF-8/c_utf8)?\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 10 Jan 2024 23:56:23 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-01-10 at 23:56 +0100, Daniel Verite wrote:\n> $ bin/initdb --locale=C.UTF-8 --locale-provider=builtin -D/tmp/pgdata\n>  \n>   The database cluster will be initialized with this locale\n> configuration:\n>     default collation provider:  builtin\n>     default collation locale:    C.UTF-8\n>     LC_COLLATE:  C.UTF-8\n>     LC_CTYPE:    C.UTF-8\n>     LC_MESSAGES: C.UTF-8\n>     LC_MONETARY: C.UTF-8\n>     LC_NUMERIC:  C.UTF-8\n>     LC_TIME:     C.UTF-8\n>   The default database encoding has accordingly been set to \"UTF8\".\n>   The default text search configuration will be set to \"english\".\n> \n> This is from an environment where LANG=fr_FR.UTF-8\n> \n> I would expect all LC_* variables to be fr_FR.UTF-8, and the default\n> text search configuration to be \"french\".\n\nYou can get the behavior you want by doing:\n\n initdb --builtin-locale=C.UTF-8 --locale-provider=builtin \\\n -D/tmp/pgdata\n\nwhere \"--builtin-locale\" is analogous to \"--icu-locale\".\n\nIt looks like I forgot to document the new initdb option, which seems\nto be the source of the confusion. Sorry, I'll fix that in the next\npatch set. (See examples in the initdb tests.)\n\nI think this answers some of your follow-up questions as well.\n\n> A related comment is about naming the builtin locale C.UTF-8, the\n> same\n> name as in libc. On one hand this is semantically sound, but on the\n> other hand, it's likely to confuse people. What about using\n> completely\n> different names, like \"pg_unicode\" or something else prefixed by\n> \"pg_\"\n> both for the locale name and the collation name (currently\n> C.UTF-8/c_utf8)?\n\nI'm flexible on naming, but here are my thoughts:\n\n* A \"pg_\" prefix makes sense.\n\n* If we named it something like \"pg_unicode\" someone might expect it to\nsort using the root collation.\n\n* The locale name \"C.UTF-8\" is nice because it implies things about\nboth the collation and the character behavior. It's also nice because\non at least some platforms, the behavior is almost identical to the\nlibc locale of the same name.\n\n* UCS_BASIC might be a good name, because it also seems to carry the\nright meanings, but that name is already taken.\n\n* We also might to support variations, such as full case mapping (which\nuppercases \"ß\" to \"SS\", as the SQL standard requires), or perhaps the\n\"standard\" flavor of regexes (which don't count all symbols as\npunctuation). 
Leaving some room to name those variations would be a\ngood idea.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:05:36 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-01-09 at 14:17 -0800, Jeremy Schneider wrote:\n> I think we missed something in psql, pretty sure I applied all the\n> patches but I see this error:\n> \n> =# \\l\n> ERROR:  42703: column d.datlocale does not exist\n> LINE 8:   d.datlocale as \"Locale\",\n>           ^\n> HINT:  Perhaps you meant to reference the column \"d.daticulocale\".\n> LOCATION:  errorMissingColumn, parse_relation.c:3720\n\nI think you're connecting to a patched server with an older version of\npsql, so it doesn't know the catalog column was renamed.\n\npg_dump and pg_upgrade don't have that problem because they throw an\nerror when connecting to a newer server.\n\nBut for psql, that's perfectly reasonable to connect to a newer server.\nHave we dropped or renamed catalog columns used by psql backslash\ncommands before, and if so, how do we handle that?\n\nI can just not rename that column, but that's a bit frustrating because\nit means I'd need to add a new column to pg_database, which seems\nredundant.\n\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 11 Jan 2024 15:36:33 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-01-10 at 23:56 +0100, Daniel Verite wrote:\n> A related comment is about naming the builtin locale C.UTF-8, the\n> same\n> name as in libc. On one hand this is semantically sound, but on the\n> other hand, it's likely to confuse people. What about using\n> completely\n> different names, like \"pg_unicode\" or something else prefixed by\n> \"pg_\"\n> both for the locale name and the collation name (currently\n> C.UTF-8/c_utf8)?\n\nNew version attached. Changes:\n\n * Named collation object PG_C_UTF8, which seems like a good idea to\nprevent name conflicts with existing collations. The locale name is\nstill C.UTF-8, which still makes sense to me because it matches the\nbehavior of the libc locale of the same name so closely.\n\n * Added missing documentation for initdb --builtin-locale\n\n * Refactored the upper/lower/initcap implementations\n\n * Improved tests for case conversions where the byte length of the\nUTF8-encoded string changes (the string length doesn't change because\nwe don't do full case mapping).\n\n * No longer uses titlecase mappings -- libc doesn't do that, so it was\nan unnecessary difference in case mapping behavior.\n\n * Improved test report per Jeremy's suggestion: now it reports the\nnumber of codepoints tested.\n\n\nJeremy also raised a problem with old versions of psql connecting to a\nnew server: the \\l and \\dO won't work. Not sure exactly what to do\nthere, but I could work around it by adding a new field rather than\nrenaming (though that's not ideal).\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 11 Jan 2024 18:02:30 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-01-11 at 18:02 -0800, Jeff Davis wrote:\n> Jeremy also raised a problem with old versions of psql connecting to\n> a\n> new server: the \\l and \\dO won't work. 
Not sure exactly what to do\n> there, but I could work around it by adding a new field rather than\n> renaming (though that's not ideal).\n\nI did a bit of research for a precedent, and the closest I could find\nis that \\dp was broken between 14 and 15 by commit 07eee5a0dc.\n\nThat is some precedent, but it's more narrow. I think that might\njustify breaking \\dO in older clients, but \\l is used frequently.\n\nIt seems safer to just introduce new columns \"datbuiltinlocale\" and\n\"collbuiltinlocale\". They'll be nullable anyway.\n\nIf we want to clean this up we can do so as a separate commit. There\nare other parts of the catalog representation related to collation that\nwe might want to clean up as well.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 08:58:51 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tJeff Davis wrote:\n\n> > Jeremy also raised a problem with old versions of psql connecting to\n> > a\n> > new server: the \\l and \\dO won't work. Not sure exactly what to do\n> > there, but I could work around it by adding a new field rather than\n> > renaming (though that's not ideal).\n> \n> I did a bit of research for a precedent, and the closest I could find\n> is that \\dp was broken between 14 and 15 by commit 07eee5a0dc.\n\nAnother one is that version 12 broke \\d in older psql by\nremoving pg_class.relhasoids.\n\nISTM that in general the behavior of old psql vs new server does\nnot weight much against choosing optimal catalog changes.\n\nThere's also that warning at start informing users about it:\nWARNING: psql major version X, server major version Y.\n\t Some psql features might not work.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 12 Jan 2024 19:00:28 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, Jan 12, 2024 at 1:00 PM Daniel Verite <[email protected]> wrote:\n> ISTM that in general the behavior of old psql vs new server does\n> not weight much against choosing optimal catalog changes.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:13:04 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2024-01-12 at 19:00 +0100, Daniel Verite wrote:\n> Another one is that version 12 broke \\d in older psql by\n> removing pg_class.relhasoids.\n> \n> ISTM that in general the behavior of old psql vs new server does\n> not weight much against choosing optimal catalog changes.\n> \n> There's also that warning at start informing users about it:\n> WARNING: psql major version X, server major version Y.\n>          Some psql features might not work.\n\nGood point, I'll leave it as-is then. 
If someone complains I can rework\nit.\n\nAlso, the output of \\l changes from version to version, so if there are\nautomated tools processing the output then they'd have to change\nanyway.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 10:16:42 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, Jan 12, 2024 at 01:13:04PM -0500, Robert Haas wrote:\n> On Fri, Jan 12, 2024 at 1:00 PM Daniel Verite <[email protected]> wrote:\n>> ISTM that in general the behavior of old psql vs new server does\n>> not weight much against choosing optimal catalog changes.\n> \n> +1.\n\n+1. There is a good amount of effort put in maintaining downward\ncompatibility in psql. Upward compatibility would require more\nmanipulations of the stable branches to make older versions of psql\ncompatible with newer server versions. Brr.\n--\nMichael", "msg_date": "Mon, 15 Jan 2024 08:23:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Jeff Davis wrote:\n\n> New version attached.\n\n[v16]\n\nConcerning the target category_test, it produces failures with\nversions of ICU with Unicode < 15. The first one I see with Ubuntu\n22.04 (ICU 70.1) is:\n\ncategory_test: Postgres Unicode version:\t15.1\ncategory_test: ICU Unicode version:\t\t14.0\ncategory_test: FAILURE for codepoint 0x000c04\ncategory_test: Postgres property \nalphabetic/lowercase/uppercase/white_space/hex_digit/join_control:\n1/0/0/0/0/0\ncategory_test: ICU\tproperty \nalphabetic/lowercase/uppercase/white_space/hex_digit/join_control:\n0/0/0/0/0/0\n\nU+0C04 is a codepoint added in Unicode 11.\nhttps://en.wikipedia.org/wiki/Telugu_(Unicode_block)\n\nIn Unicode.txt:\n0C04;TELUGU SIGN COMBINING ANUSVARA ABOVE;Mn;0;NSM;;;;;N;;;;;\n\nIn Unicode 15, it has been assigned Other_Alphabetic in PropList.txt\n$ grep 0C04 PropList.txt \n0C04\t ; Other_Alphabetic # Mn\t TELUGU SIGN COMBINING ANUSVARA\nABOVE\n\nBut in Unicode 14 it was not there.\nAs a result its binary property UCHAR_ALPHABETIC has changed from\nfalse to true in ICU 72 vs previous versions.\n\nAs I understand, the stability policy says that such things happen.\nFrom https://www.unicode.org/policies/stability_policy.html\n\n Once a character is encoded, its properties may still be changed,\n but not in such a way as to change the fundamental identity of the\n character.\n\n The Consortium will endeavor to keep the values of the other\n properties as stable as possible, but some circumstances may arise\n that require changing them. Particularly in the situation where\n the Unicode Standard first encodes less well-documented characters\n and scripts, the exact character properties and behavior initially\n may not be well known.\n\n As more experience is gathered in implementing the characters,\n adjustments in the properties may become necessary. Examples of\n such properties include, but are not limited to, the following:\n\n - General_Category\n - Case mappings\n - Bidirectional properties\n [...]\n\nI've commented the exit(1) in category_test to collect all errors, and\nbuilt it with versions of ICU from 74 down to 60 (that is Unicode 10.0).\nResults are attached. 
As expected, the older the ICU version, the more\ndifferences are found against Unicode 15.1.\n\nI find these results interesting because they tell us what contents\ncan break regexp-based check constraints on upgrades.\n\nBut about category_test as a pass-or-fail kind of test, it can only be\nused when the Unicode version in ICU is the same as in Postgres.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Mon, 15 Jan 2024 15:30:16 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-01-15 at 15:30 +0100, Daniel Verite wrote:\n> Concerning the target category_test, it produces failures with\n> versions of ICU with Unicode < 15. The first one I see with Ubuntu\n> 22.04 (ICU 70.1) is:\n\n...\n\n> I find these results interesting because they tell us what contents\n> can break regexp-based check constraints on upgrades.\n\nThank you for collecting and consolidating this information.\n\n> But about category_test as a pass-or-fail kind of test, it can only\n> be\n> used when the Unicode version in ICU is the same as in Postgres.\n\nThe test has a few potential purposes:\n\n1. To see if there is some error in parsing the Unicode files and\nbuilding the arrays in the .h file. For instance, let's say the perl\nparser I wrote works fine on the Unicode 15.1 data file, but does\nsomething wrong on the 16.0 data file: the test would fail and we'd\ninvestigate. This is the most important reason for the test.\n\n2. To notice any quirks between how we interpret Unicode vs how ICU\ndoes.\n\n3. To help see \"interesting\" differences between different Unicode\nversions.\n\nFor #1 and #2, the best way to test is by using a version of ICU that\nuses the same Unicode version as Postgres. The one running update-\nunicode can try to recompile with the right one for the purposes of the\ntest. NB: There might be no version of ICU where the Unicode version\nexactly matches what we'd like to update to. In that case, we'd need to\nuse the closest version and do some manual validation that the\ngenerated tables are sane.\n\nFor #3, that is also interesting information to know about, but it's\nnot directly actionable. As you point out, Unicode does not guarantee\nthat these properties are static forever, so regexes can change\nbehavior when we update Unicode for the next PG version. That is a much\nlower risk than a collation change, but as you point out, is a risk for\nregexes inside of a CHECK constraint. If a user needs zero risk of\nsemantic changes for regexes, the only option is \"C\". Perhaps there\nshould be a separate test target for this mode so that it doesn't exit\nearly?\n\n(Note: case mapping has much stronger guarantees than the character\nclassification.)\n\nI will update the README to document how someone running update-unicode\nshould interpret the test results.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Jan 2024 11:42:56 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 12.01.24 03:02, Jeff Davis wrote:\n> New version attached. Changes:\n> \n> * Named collation object PG_C_UTF8, which seems like a good idea to\n> prevent name conflicts with existing collations. 
The locale name is\n> still C.UTF-8, which still makes sense to me because it matches the\n> behavior of the libc locale of the same name so closely.\n\nI am catching up on this thread. The discussions have been very \ncomplicated, so maybe I didn't get it all.\n\nThe patches look pretty sound, but I'm questioning how useful this \nfeature is and where you plan to take it.\n\nEarlier in the thread, the aim was summarized as\n\n > If the Postgres default was bytewise sorting+locale-agnostic\n > ctype functions directly derived from Unicode data files,\n > as opposed to libc/$LANG at initdb time, the main\n > annoyance would be that \"ORDER BY textcol\" would no\n > longer be the human-favored sort.\n\nI think that would be a terrible direction to take, because it would \nregress the default sort order from \"correct\" to \"useless\". Aside from \nthe overall message this sends about how PostgreSQL cares about locales \nand Unicode and such.\n\nMaybe you don't intend for this to be the default provider? But then \nwho would really use it? I mean, sure, some people would, but how would \nyou even explain, in practice, the particular niche of users or use cases?\n\nMaybe if this new provider would be called \"minimal\", it might describe \nthe purpose better.\n\nI could see a use for this builtin provider if it also included the \ndefault UCA collation (what COLLATE UNICODE does now). Then it would \nprovide a \"common\" default behavior out of the box, and if you want more \nfine-tuning, you can go to ICU. There would still be some questions \nabout making sure the builtin behavior and the ICU behavior are \nconsistent (different Unicode versions, stock UCA vs CLDR, etc.). But \nfor practical purposes, it might work.\n\nThere would still be a risk with that approach, since it would \npermanently marginalize ICU functionality, in the sense that only some \nlocales would need ICU, and so we might not pay the same amount of \nattention to the ICU functionality.\n\nI would be curious what your overall vision is here? Is switching the \ndefault to ICU still your goal? Or do you want the builtin provider to \nbe the default? Or something else?\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 13:53:36 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tPeter Eisentraut wrote:\n\n> > If the Postgres default was bytewise sorting+locale-agnostic\n> > ctype functions directly derived from Unicode data files,\n> > as opposed to libc/$LANG at initdb time, the main\n> > annoyance would be that \"ORDER BY textcol\" would no\n> > longer be the human-favored sort.\n> \n> I think that would be a terrible direction to take, because it would \n> regress the default sort order from \"correct\" to \"useless\". Aside from \n> the overall message this sends about how PostgreSQL cares about\n> locales and Unicode and such.\n\nWell, offering a viable solution to avoid as much as possible\nthe dreaded:\n\n\"WARNING: collation \"xyz\" has version mismatch\n... HINT: Rebuild all objects affected by this collation...\"\n\nthat doesn't sound like a bad message to send. \n\nCurrently, to have in codepoint order the indexes that don't need a\nlinguistic order, you're supposed to use collate \"C\", which then means\nthat upper(), lower() etc.. 
don't work beyond ASCII.\nHere our Unicode support is not good enough, and the proposal\naddresses that.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 18 Jan 2024 20:42:10 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-01-18 at 13:53 +0100, Peter Eisentraut wrote:\n> I think that would be a terrible direction to take, because it would \n> regress the default sort order from \"correct\" to \"useless\".\n\nI don't agree that the current default is \"correct\". There are a lot of\nways it can be wrong:\n\n * the environment variables at initdb time don't reflect what the\nusers of the database actually want\n * there are so many different users using so many different\napplications connected to the database that no one \"correct\" sort order\nexists\n * libc has some implementation quirks\n * the version of Unicode that libc is based on is not what you expect\n * the version of libc is not what you expect\n\n>   Aside from \n> the overall message this sends about how PostgreSQL cares about\n> locales \n> and Unicode and such.\n\nUnicode is primarily about the semantics of characters and their\nrelationships. The patches I propose here do a great job of that.\n\nCollation (relationships between *strings*) is a part of Unicode, but\nnot the whole thing or even the main thing.\n\n> Maybe you don't intend for this to be the default provider?\n\nI am not proposing that this provider be the initdb-time default.\n\n>   But then\n> who would really use it? I mean, sure, some people would, but how\n> would \n> you even explain, in practice, the particular niche of users or use\n> cases?\n\nIt's for users who want to respect Unicode support text from\ninternational sources in their database; but are not experts on the\nsubject and don't know precisely what they want or understand the\nconsequences. If and when such users do notice a problem with the sort\norder, they'd handle it at that time (perhaps with a COLLATE clause, or\nsorting in the application).\n\n> Maybe if this new provider would be called \"minimal\", it might\n> describe \n> the purpose better.\n\n\"Builtin\" communicates that it's available everywhere (not a\ndependency), that specific behaviors can be documented and tested, and\nthat behavior doesn't change within a major version. I want to\ncommunicate all of those things.\n\n> I could see a use for this builtin provider if it also included the \n> default UCA collation (what COLLATE UNICODE does now).\n\nI won't rule that out, but I'm not proposing that right now and my\nproposal is already offering useful functionality.\n\n> There would still be a risk with that approach, since it would \n> permanently marginalize ICU functionality\n\nYeah, ICU already does a good job offering the root collation. I don't\nthink the builtin provider needs to do so.\n\n> I would be curious what your overall vision is here?\n\nVision:\n\n* The builtin provider will offer Unicode character semantics, basic\ncollation, platform-independence, and high performance. It can be used\non its own or in combination with ICU via the COLLATE clause.\n\n* ICU offers COLLATE UNICODE, locale tailoring, case-insensitive\nmatching, and customization with rules. 
It's the solution for\neverything from \"slightly more advanced\" to \"very advanced\".\n\n* libc would be for databases serving applications on the same machine\nwhere a matching sort order is helpful, risks to indexes are\nacceptable, and performance is not important.\n\n>   Is switching the \n> default to ICU still your goal?  Or do you want the builtin provider\n> to \n> be the default?\n\nIt's hard to answer this question while initdb chooses the database\ndefault collation based on the environment. Neither ICU nor the builtin\nprovider can reasonably handle whatever those environment variables\nmight be set to.\n\nStepping back from the focus on what initdb does, we should be\nproviding the right encouragement in documentation and packaging to\nguide users toward the right provider based their needs and the vision\noutlined above.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 14:03:30 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 18.01.24 23:03, Jeff Davis wrote:\n> On Thu, 2024-01-18 at 13:53 +0100, Peter Eisentraut wrote:\n>> I think that would be a terrible direction to take, because it would\n>> regress the default sort order from \"correct\" to \"useless\".\n> \n> I don't agree that the current default is \"correct\". There are a lot of\n> ways it can be wrong:\n> \n> * the environment variables at initdb time don't reflect what the\n> users of the database actually want\n> * there are so many different users using so many different\n> applications connected to the database that no one \"correct\" sort order\n> exists\n> * libc has some implementation quirks\n> * the version of Unicode that libc is based on is not what you expect\n> * the version of libc is not what you expect\n\nThese are arguments why the current defaults are not universally \nperfect, but I'd argue that they are still most often the right thing as \nthe default.\n\n>>   Aside from\n>> the overall message this sends about how PostgreSQL cares about\n>> locales\n>> and Unicode and such.\n> \n> Unicode is primarily about the semantics of characters and their\n> relationships. The patches I propose here do a great job of that.\n> \n> Collation (relationships between *strings*) is a part of Unicode, but\n> not the whole thing or even the main thing.\n\nI don't get this argument. Of course, people care about sorting and \nsort order. Whether you consider this part of Unicode or adjacent to \nit, people still want it.\n\n>> Maybe you don't intend for this to be the default provider?\n> \n> I am not proposing that this provider be the initdb-time default.\n\nok\n\n>>   But then\n>> who would really use it? I mean, sure, some people would, but how\n>> would\n>> you even explain, in practice, the particular niche of users or use\n>> cases?\n> \n> It's for users who want to respect Unicode support text from\n> international sources in their database; but are not experts on the\n> subject and don't know precisely what they want or understand the\n> consequences. If and when such users do notice a problem with the sort\n> order, they'd handle it at that time (perhaps with a COLLATE clause, or\n> sorting in the application).\n\n> Vision:\n\n> * ICU offers COLLATE UNICODE, locale tailoring, case-insensitive\n> matching, and customization with rules. It's the solution for\n> everything from \"slightly more advanced\" to \"very advanced\".\n\nI am astonished by this. 
In your world, do users not want their text \ndata sorted? Do they not care what the sort order is? You consider UCA \nsort order an \"advanced\" feature?\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 19:49:56 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-01-22 at 19:49 +0100, Peter Eisentraut wrote:\n\n> > \n> I don't get this argument.  Of course, people care about sorting and \n> sort order.  Whether you consider this part of Unicode or adjacent to\n> it, people still want it.\n\nYou said that my proposal sends a message that we somehow don't care\nabout Unicode, and I strongly disagree. The built-in provider I'm\nproposing does implement Unicode semantics.\n\nSurely a database that offers UCS_BASIC (a SQL spec feature) isn't\nsending a message that it doesn't care about Unicode, and neither is my\nproposal.\n\n> > \n> > * ICU offers COLLATE UNICODE, locale tailoring, case-insensitive\n> > matching, and customization with rules. It's the solution for\n> > everything from \"slightly more advanced\" to \"very advanced\".\n> \n> I am astonished by this.  In your world, do users not want their text\n> data sorted?  Do they not care what the sort order is? \n\nI obviously care about Unicode and collation. I've put a lot of effort\nrecently into contributions in this area, and I wouldn't have done that\nif I thought users didn't care. You've made much greater contributions\nand I thank you for that.\n\nThe logical conclusion of your line of argument would be that libc's\n\"C.UTF-8\" locale and UCS_BASIC simply should not exist. But they do\nexist, and for good reason.\n\nOne of those good reasons is that only *human* users care about the\nhuman-friendliness of sort order. If Postgres is just feeding the\nresults to another system -- or an application layer that re-sorts the\ndata anyway -- then stability, performance, and interoperability matter\nmore than human-friendliness. (Though Unicode character semantics are\nstill useful even when the data is not going directly to a human.)\n\n> You consider UCA \n> sort order an \"advanced\" feature?\n\nI said \"slightly more advanced\" compared with \"basic\". \"Advanced\" can\nbe taken in either a positive way (\"more useful\") or a negative way\n(\"complex\"). I'm sorry for the misunderstanding, but my point was this:\n\n* The builtin provider is for people who are fine with code point order\nand no tailoring, but want Unicode character semantics, collation\nstability, and performance.\n\n* ICU is the right solution for anyone who wants human-friendly\ncollation or tailoring, and is willing to put up with some collation\nstability risk and lower collation performance.\n\nBoth have their place and the user is free to mix and match as needed,\nthanks to the COLLATE clause for columns and queries.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:33:54 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Review of the v16 patch set:\n\n(Btw., I suppose you started this patch series with 0002 because some \n0001 was committed earlier. But I have found this rather confusing. I \nthink it's ok to renumber from 0001 for each new version.)\n\n* v16-0002-Add-Unicode-property-tables.patch\n\nVarious comments are updated to include the term \"character class\". I \ndon't recognize that as an official Unicode term. 
There are categories \nand properties. Let's check this.\n\nSome files need heavy pgindent and perltidy. You were surely going to \ndo this eventually, but maybe you want to do this sooner to check \nwhether you like the results.\n\n- src/common/unicode/Makefile\n\nThis patch series adds some new post-update-unicode tests. Should we \nhave a separate target for each or just one common \"unicode test\" \ntarget? Not sure.\n\n- .../generate-unicode_category_table.pl\n\nThe trailing commas handling ($firsttime etc.) is not necessary with \nC99. The code can be simplified.\n\nFor this kind of code:\n\n+print $OT <<\"HEADER\";\n\nlet's use a common marker like EOS instead of a different one for each \nblock. That just introduces unnecessary variations.\n\n- src/common/unicode_category.c\n\nThe mask stuff at the top could use more explanation. It's impossible\nto figure out exactly what, say, PG_U_PC_MASK does.\n\nLine breaks in the different pg_u_prop_* functions are gratuitously \ndifferent.\n\nIs it potentially confusing that only some pg_u_prop_* have a posix\nvariant? Would it be better for a consistent interface to have a\n\"posix\" argument for each and just ignore it if not used? Not sure.\n\nLet's use size_t instead of Size for new code.\n\n\n* v16-0003-Add-unicode-case-mapping-tables-and-functions.patch\n\nSeveral of the above points apply here analogously.\n\n\n* v16-0004-Catalog-changes-preparing-for-builtin-collation-.patch\n\nThis is mostly a straightforward renaming patch, but there are some \nchanges in initdb and pg_dump that pre-assume the changes in the next \npatch, like which locale columns apply for which providers. I think it \nwould be better for the historical record to make this a straight \nrenaming patch and move those semantic changes to the next patch (or a \nseparate intermediate patch, if you prefer).\n\n- src/bin/psql/describe.c\n- src/test/regress/expected/psql.out\n\nThis would be a good opportunity to improve the output columns for \ncollations. The updated view is now:\n\n+ Schema | Name | Provider | Collate | Ctype | Locale | ICU Rules | \nDeterministic?\n+--------+------+----------+---------+-------+--------+-----------+----------------\n\nThis is historically grown but suboptimal. Why is Locale after Collate \nand Ctype, and why does it show both? I think we could have just the \nLocale column, and if the libc provider is used with different \ncollate/ctype (very rare!), we somehow write that into the single locale \ncolumn.\n\n(A change like this would be a separate patch.)\n\n\n* v16-0005-Introduce-collation-provider-builtin-for-C-and-C.patch\n\nAbout this initdb --builtin-locale option and analogous options \nelsewhere: Maybe we should flip this around and provide a --libc-locale \noption, and have all the other providers just use the --locale option. \nThis would be more consistent with the fact that it's libc that is \nspecial in this context.\n\nDo we even need the \"C\" locale? We have established that \"C.UTF-8\" is \nuseful, but if that is easily available, who would need \"C\"?\n\nSome changes in this patch appear to be just straight renamings, like in\nsrc/backend/utils/init/postinit.c and \nsrc/bin/pg_upgrade/t/002_pg_upgrade.pl. Maybe those should be put into \nthe previous patch instead.\n\nOn the collation naming: My expectation would have been that the \n\"C.UTF-8\" locale would be exposed as the UCS_BASIC collation. And the \n\"C\" locale as some other name (or not at all, see above). 
You have this \nthe other way around.\n\n\n", "msg_date": "Wed, 7 Feb 2024 10:53:36 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-02-07 at 10:53 +0100, Peter Eisentraut wrote:\n> Various comments are updated to include the term \"character class\". \n> I \n> don't recognize that as an official Unicode term.  There are\n> categories \n> and properties.  Let's check this.\n\nIt's based on\nhttps://www.unicode.org/reports/tr18/#Compatibility_Properties\n\nso I suppose the right name is \"properties\".\n\n> Is it potentially confusing that only some pg_u_prop_* have a posix\n> variant?  Would it be better for a consistent interface to have a\n> \"posix\" argument for each and just ignore it if not used?  Not sure.\n\nI thought about it but didn't see a clear advantage one way or another.\n\n> About this initdb --builtin-locale option and analogous options \n> elsewhere:  Maybe we should flip this around and provide a --libc-\n> locale \n> option, and have all the other providers just use the --locale\n> option. \n> This would be more consistent with the fact that it's libc that is \n> special in this context.\n\nWould --libc-locale affect all the environment variables or just\nLC_CTYPE/LC_COLLATE? How do we avoid breakage?\n\nI like the general direction here but we might need to phase in the\noption or come up with a new name. Suggestions welcome.\n\n> Do we even need the \"C\" locale?  We have established that \"C.UTF-8\"\n> is \n> useful, but if that is easily available, who would need \"C\"?\n\nI don't think we should encourage its use generally but I also don't\nthink it will disappear any time soon. Some people will want it on\nsimplicity grounds. I hope fewer people will use \"C\" when we have a\nbetter builtin option.\n\n> Some changes in this patch appear to be just straight renamings, like\n> in\n> src/backend/utils/init/postinit.c and \n> src/bin/pg_upgrade/t/002_pg_upgrade.pl.  Maybe those should be put\n> into \n> the previous patch instead.\n> \n> On the collation naming: My expectation would have been that the \n> \"C.UTF-8\" locale would be exposed as the UCS_BASIC collation.\n\nI'd like that. We have to sort out a couple things first, though:\n\n1. The SQL spec mentions the capitalization of \"ß\" as \"SS\"\nspecifically. Should UCS_BASIC use the unconditional mappings in\nSpecialCasing.txt? I already have some code to do that (not posted\nyet).\n\n2. Should UCS_BASIC use the \"POSIX\" or \"Standard\" variant of regex\nproperties? (The main difference seems to be whether symbols get\ntreated as punctuation or not.)\n\n3. What do we do about potential breakage for existing users of\nUCS_BASIC who might be expecting C-like behavior?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 12 Feb 2024 18:01:29 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 13.02.24 03:01, Jeff Davis wrote:\n> 1. The SQL spec mentions the capitalization of \"ß\" as \"SS\"\n> specifically. Should UCS_BASIC use the unconditional mappings in\n> SpecialCasing.txt? I already have some code to do that (not posted\n> yet).\n\nIt is my understanding that \"correct\" Unicode case conversion needs to \nuse at least some parts of SpecialCasing.txt. 
The header of the file says\n\n\"For compatibility, the UnicodeData.txt file only contains simple case \nmappings for characters where they are one-to-one and independent of \ncontext and language. The data in this file, combined with the simple \ncase mappings in UnicodeData.txt, defines the full case mappings [...]\"\n\nI read this as, just using UnicodeData.txt by itself is incomplete.\n\nI think we need to use the \"Unconditional\" mappings and the \"Conditional \nLanguage-Insensitive\" mappings (which is just Greek sigma). Obviously, \nskip the \"Language-Sensitive\" mappings.\n\n\n", "msg_date": "Tue, 13 Feb 2024 07:24:32 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-02-07 at 10:53 +0100, Peter Eisentraut wrote:\n> Review of the v16 patch set:\n> \n> (Btw., I suppose you started this patch series with 0002 because some\n> 0001 was committed earlier.  But I have found this rather confusing. \n> I \n> think it's ok to renumber from 0001 for each new version.)\n\nFixed.\n\n> Various comments are updated to include the term \"character class\". \n> I \n> don't recognize that as an official Unicode term.  There are\n> categories \n> and properties.  Let's check this.\n\nChanged to \"properties\" or \"compatibility properties\", except for a\ncouple places in the test. The test compares against ICU, which does\nuse the term \"character classes\" when discussing regexes:\n\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uchar_8h.html#details\n\n> Some files need heavy pgindent and perltidy.\n\nDone.\n\n> This patch series adds some new post-update-unicode tests.  Should we\n> have a separate target for each or just one common \"unicode test\" \n> target?  Not sure.\n\nI didn't make a change here. I suspect anyone updating unicode would\nwant to run them all, but I don't have a strong opinion.\n\n> - .../generate-unicode_category_table.pl\n> \n> The trailing commas handling ($firsttime etc.) is not necessary with \n> C99.  The code can be simplified.\n\nThank you, fixed.\n\n> For this kind of code:\n> \n> +print $OT <<\"HEADER\";\n\nDone. I used the <<\"EOS\" style which is more friendly to emacs, but I'm\nnot sure if that's right for the project style.\n\n> Is it potentially confusing that only some pg_u_prop_* have a posix\n> variant?  Would it be better for a consistent interface to have a\n> \"posix\" argument for each and just ignore it if not used?  Not sure.\n\nI don't have a strong opinion here, so I didn't make a change. I can if\nyou think it's cleaner.\n\n> Let's use size_t instead of Size for new code.\n\nDone.\n\n> * v16-0003-Add-unicode-case-mapping-tables-and-functions.patch\n> \n> Several of the above points apply here analogously.\n\nFixed, I think.\n\n> * v16-0004-Catalog-changes-preparing-for-builtin-collation-.patch\n> \n> This is mostly a straightforward renaming patch, but there are some \n> changes in initdb and pg_dump that pre-assume the changes in the next\n> patch, like which locale columns apply for which providers.  I think\n> it \n> would be better for the historical record to make this a straight \n> renaming patch and move those semantic changes to the next patch (or\n> a \n> separate intermediate patch, if you prefer).\n\nAgreed, put those non-renaming changes in the next patch.\n\n> - src/bin/psql/describe.c\n> - src/test/regress/expected/psql.out\n> \n> This would be a good opportunity to improve the output columns for \n> collations. 
 The updated view is now:\n> \n> + Schema | Name | Provider | Collate | Ctype | Locale | ICU Rules | \n> Deterministic?\n> +--------+------+----------+---------+-------+--------+-----------+--\n> --------------\n> \n> This is historically grown but suboptimal.  Why is Locale after\n> Collate \n> and Ctype, and why does it show both?  I think we could have just the\n> Locale column, and if the libc provider is used with different \n> collate/ctype (very rare!), we somehow write that into the single\n> locale \n> column.\n> \n> (A change like this would be a separate patch.)\n\nI didn't do this, yet.\n\n> * v16-0005-Introduce-collation-provider-builtin-for-C-and-C.patch\n> \n> About this initdb --builtin-locale option and analogous options \n> elsewhere:  Maybe we should flip this around and provide a --libc-\n> locale \n> option, and have all the other providers just use the --locale\n> option. \n> This would be more consistent with the fact that it's libc that is \n> special in this context.\n\nI agree that libc is the odd one out. I'm not quite sure how we should\nexpress that, though, because there are also the other environment\nvariables to worry about (e.g. LC_MESSAGES). Probably best as a\nseparate patch.\n\n> Some changes in this patch appear to be just straight renamings, like\n> in\n> src/backend/utils/init/postinit.c and \n> src/bin/pg_upgrade/t/002_pg_upgrade.pl.  Maybe those should be put\n> into \n> the previous patch instead.\n\nMoved renamings to the previous patch.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 15 Feb 2024 16:13:19 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-02-13 at 07:24 +0100, Peter Eisentraut wrote:\n> It is my understanding that \"correct\" Unicode case conversion needs\n> to \n> use at least some parts of SpecialCasing.txt.\n...\n> I think we need to use the \"Unconditional\" mappings and the\n> \"Conditional \n> Language-Insensitive\" mappings (which is just Greek sigma). \n> Obviously, \n> skip the \"Language-Sensitive\" mappings.\n\nAttached a new series.\n\nOverall I'm quite happy with this feature as well as the recent\nupdates. It expands a lot on what behavior we can actually document;\nthe character semantics are nearly as good as ICU; it's fast; and it\neliminates what is arguably the last reason to use libc (\"C collation\ncombined with some other CTYPE\").\n\nChanges:\n\n * Added a doc update for the \"standard collations\" (tiny patch, mostly\nseparate) which clarifies the collations that are always available, and\ndescribes them a bit better\n\n * Added built-in locale \"UCS_BASIC\" (is that name confusing?) which\nuses full case mapping and the standard properties:\n - \"ß\" uppercases to \"SS\"\n - \"Σ\" usually lowercases to \"σ\", except when the Final_Sigma\ncondition is met, in which case it lowercases to \"ς\"\n - initcap() uses titlecase variants (\"dž\" changes to \"Dž\")\n - in patterns/regexes, symbols (like \"=\") are not treated as\npunctuation\n\n * Changed the UCS_BASIC collation to use the builtin \"UCS_BASIC\"\nlocale with Unicode semantis. At first I was skeptical because it's a\nbehavior change, and I am still not sure we want to do that. 
But doing\nso would take us closer to both the SQL spec as well as Unicode; and\nalso this kind of character behavior change is less likely to cause a\nproblem than a collation behavior change.\n\n * The built-in locale \"C.UTF-8\" still exists, which uses Unicode\nsimple case mapping and the POSIX compatible properties (no change\nhere).\n\nImplementation-wise:\n\n * I introduced the CaseKind enum, which seemed to clean up a few\nthings and reduce code duplication between upper/lower/titlecase. It\nalso leaves room for introducing case folding later.\n\n * Introduced a \"case-ignorable\" table to properly implement the\nFinal_Sigma rule.\n\nLoose ends:\n\n * Right now you can't mix all of the full case mapping behavior with\nINITCAP(), it just does simple titlecase mapping. I'm not sure we want\nto get too fancy here; after all, INITCAP() is not a SQL standard\nfunction and it's documented in a narrow fashion that doesn't seem to\nleave a lot of room to be very smart. ICU does a few extra things\nbeyond what I did:\n - it accepts a word break iterator to the case conversion function\n - it provides some built-in word break iterators\n - it also has some configurable \"break adjustment\" behavior[1][2]\nwhich re-aligns the start of the word, and I'm not entirely sure why\nthat isn't done in the word break iterator or the titlecasing rules\n\nRegards,\n\tJeff Davis\n\n\n[1]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#a4975f537b9960f0330b233061ef0608d\n[2]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#afc65fa226cac9b8eeef0e877b8a7744e", "msg_date": "Mon, 26 Feb 2024 19:01:37 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-02-26 at 19:01 -0800, Jeff Davis wrote:\n>  * Right now you can't mix all of the full case mapping behavior with\n> INITCAP(), it just does simple titlecase mapping. I'm not sure we\n> want\n> to get too fancy here; after all, INITCAP() is not a SQL standard\n> function and it's documented in a narrow fashion that doesn't seem to\n> leave a lot of room to be very smart. ICU does a few extra things\n> beyond what I did:\n>   - it accepts a word break iterator to the case conversion function\n>   - it provides some built-in word break iterators\n>   - it also has some configurable \"break adjustment\" behavior[1][2]\n> which re-aligns the start of the word, and I'm not entirely sure why\n> that isn't done in the word break iterator or the titlecasing rules\n\nAttached v19 which addresses this issue. It does proper Unicode\ntitlecasing with a word boundary iterator as an argument. For initcap,\nit just uses a simple word boundary iterator that breaks whenever\nisalnum() changes.\n\nIt came out cleaner this way, ultimately, and it feels more complete\neven though the behavior isn't much different. It's also easier to\ncomment the relationship of the functions to Unicode. I removed\nCaseKind from the public API but still use it internally to avoid code\nduplication.\n\nI made one other change, which is that (for now) I undid the UCS_BASIC\nchange until we are sure we want to change it. Instead, I have builtin\ncollations PG_C_UTF8 and PG_UNICODE_FAST. I used the name \"FAST\" to\nindicate that the collation uses fast memcmp() rather than a real\ncollation, but the Unicode character support is all there (including\nfull case mapping). 
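To make the intended difference between the two builtin collations concrete, here is a rough SQL sketch (collation names are the ones proposed here; the commented results reflect the case-mapping behavior described above and are not verified output):\n\n  -- pg_c_utf8: Unicode simple case mapping (no length changes,\n  -- no context-sensitive rules)\n  SELECT upper('straße' COLLATE pg_c_utf8);        -- 'STRAßE'\n  SELECT lower('ΟΔΟΣ' COLLATE pg_c_utf8);          -- 'οδοσ'\n\n  -- pg_unicode_fast: full case mapping per SpecialCasing.txt\n  SELECT upper('straße' COLLATE pg_unicode_fast);  -- 'STRASSE' (ß maps to SS)\n  SELECT lower('ΟΔΟΣ' COLLATE pg_unicode_fast);    -- 'οδος' (final sigma)\n\nEither way, comparison is a plain memcmp(), so ordering stays code point order.\n\n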
I'm open to suggestion here on naming.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 29 Feb 2024 21:05:34 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-02-29 at 21:05 -0800, Jeff Davis wrote:\n> Attached v19 which addresses this issue.\n\nI pushed the doc patch.\n\nAttached v20. I am going to start pushing some other patches. v20-0001\n(property tables) and v20-0003 (catalog iculocale -> locale) have been\nstable for a while so are likely to go in soon. v20-0002 (case mapping)\nalso feels close to me, but it went through significant changes to\nsupport full case mapping and titlecasing, so I'll see if there are\nmore comments.\n\nChanges in v20:\n\n * For titlecasing with the builtin \"C.UTF-8\" locale, do not perform\nword break adjustment, so it matches libc's \"C.UTF-8\" titlecasing\nbehavior more closely.\n\n * Add optimized table for ASCII code points when determining\ncategories and properties (this was already done for the case mapping\ntable).\n\n * Add a small patch to make UTF-8 functions inline, which speeds\nthings up substantially.\n\nPerformance:\n\nASCII-only data:\n\n lower initcap upper\n\n \"C\" (libc) 2426 3326 2341\n pg_c_utf8 2890 6570 2825\n pg_unicode_fast 2929 7140 2893\n \"C.utf8\" (libc) 5410 7810 5397\n \"en-US-x-icu\" 8320 65732 9367\n\nIncluding non-ASCII data:\n\n lower initcap upper\n\n \"C\" (libc) 2630 4677 2548\n pg_c_utf8 5471 10682 5431\n pg_unicode_fast 5582 12023 5587\n \"C.utf8\" (libc) 8126 11834 8106\n \"en-US-x-icu\" 14473 73655 15112\n\n\nThe new builtin collations nicely finish ahead of everything except \"C\"\n(with an exception where pg_unicode_fast is marginally slower at\ntitlecasing non-ASCII data than libc \"C.UTF-8\", which is likely due to\nthe word break adjustment semantics).\n\nI suspect the inlined UTF-8 functions also speed up a few other areas,\nbut I didn't measure.\n\nRegards,\n\tJeff Davis", "msg_date": "Sat, 02 Mar 2024 15:02:00 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Sat, 2024-03-02 at 15:02 -0800, Jeff Davis wrote:\n> Attached v20.\n\nAnd here's v22 (I didn't post v21).\n\nI committed Unicode property tables and functions, and the simple case\nmapping. I separated out the full case mapping changes (based on\nSpecialCasing.txt) into patch 0006.\n\nNot a lot of technical changes, but I cleaned up the remaining patches\nand put them into a nicer order with nicer commit messages.\n\n0001: Catalog renaming: colliculocale to colllocale and daticulocale to\ndatlocale.\n\n0002: Basic builtin collation provider that only supports \"C\".\n\n0003: C.UTF-8 locale for builtin collation provider and collation\npg_c_utf8.\n\n0004: Inline some UTF-8 functions to improve performance\n\n0005: Add a unicode_strtitle() function and move the implementation for\nthe builtin provider out of formatting.c.\n\n0006: Add full case mapping support\n\n0007: Add PG_UNICODE_FAST locale for builtin collation provider and\ncollation pg_unicode_fast. This behaves like the standard says\nUCS_BASIC should behave -- sort by code point order but use Unicode\ncharacter semantics with full case mapping.\n\n\n0004 and beyond could use some review. 0004 and 0005 are pretty simple\nand non-controversial. 
0006 and 0007 are a bit more interesting and\ncould use some discussion if we want to go ahead with full case mapping\nin 17.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 07 Mar 2024 17:00:21 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 08.03.24 02:00, Jeff Davis wrote:\n> And here's v22 (I didn't post v21).\n> \n> I committed Unicode property tables and functions, and the simple case\n> mapping. I separated out the full case mapping changes (based on\n> SpecialCasing.txt) into patch 0006.\n\n> 0002: Basic builtin collation provider that only supports \"C\".\n\nOverall, this patch looks sound.\n\nIn the documentation, let's make the list of locale providers an actual \nlist instead of a sequence of <sect3>s.\n\nWe had some discussion on initdb option --builtin-locale and whether it \nshould be something more general. I'm ok with leaving it like this for \nnow and maybe consider as an \"open item\" for PG17.\n\nIn\n\n errmsg(\"parameter \\\"locale\\\" must be specified\")\n\nmake \"locale\" a placeholder. (See commit 36a14afc076).\n\nIt seems the builtin provider accepts both \"C\" and \"POSIX\" as locale\nnames, but the documentation says it must be \"C\". Maybe we don't need\nto accept \"POSIX\"? (Seeing that there are no plans for \"POSIX.UTF-8\",\nmaybe we just ignore the \"POSIX\" spelling altogether?)\n\nSpeaking of which, the code in postinit.c is inconsistent in that \nrespect with builtin_validate_locale(). Shouldn't postinit.c use\nbuiltin_validate_locale(), to keep it consistent?\n\nOr, there could be a general function that accepts a locale provider and \na locale string and validates everything together?\n\nIn initdb.c, this message\n\nprintf(_(\"The database cluster will be initialized with no locale.\\n\"));\n\nsounds a bit confusing. I think it's ok to show \"C\" as a locale. I'm\nnot sure we need to change the logic here.\n\nAlso in initdb.c, this message\n\npg_fatal(\"locale must be specified unless provider is libc\");\n\nshould be flipped around, like\n\nlocale must be specified if provider is %s\n\nIn pg_dump.c, dumpDatabase(), there are some new warning messages that\nare not specifically about the builtin provider. Are those existing\ndeficiencies? It's not clear to me.\n\nWhat are the changes in the pg_upgrade test about? Maybe explain the\nscenario it is trying to test briefly?\n\n\n> 0004: Inline some UTF-8 functions to improve performance\n\nMakes sense that inlining can be effective here. But why aren't you \njust inlining the existing function pg_utf_mblen()? Now we have two \nfunctions that do the same thing. And the comment at pg_utf_mblen() is \nremoved completely, so it's not clear anymore why it exists.\n\n\n> 0005: Add a unicode_strtitle() function and move the implementation for\n> the builtin provider out of formatting.c.\n\nIn the recent discussion you had expression some uncertainty about the \ndetailed semantics of this. INITCAP() was copied from Oracle, so we \ncould check there for reference, too. Or we go with full Unicode \nsemantics. I'm not clear on all the differences and tradeoffs, if there \nare any. 
In any case, it would be good if there were documentation or a \ncomment that somehow wrote down the resolution of this.\n\n\n\n", "msg_date": "Tue, 12 Mar 2024 09:24:14 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-03-12 at 09:24 +0100, Peter Eisentraut wrote:\n> In the documentation, let's make the list of locale providers an\n> actual \n> list instead of a sequence of <sect3>s.\n\nDone.\n\n> We had some discussion on initdb option --builtin-locale and whether\n> it \n> should be something more general.  I'm ok with leaving it like this\n> for \n> now and maybe consider as an \"open item\" for PG17.\n\nOK.\n\n> In\n> \n>      errmsg(\"parameter \\\"locale\\\" must be specified\")\n> \n> make \"locale\" a placeholder.  (See commit 36a14afc076).\n\nDone.\n\n> It seems the builtin provider accepts both \"C\" and \"POSIX\" as locale\n> names, but the documentation says it must be \"C\".  Maybe we don't\n> need\n> to accept \"POSIX\"?  (Seeing that there are no plans for \"POSIX.UTF-\n> 8\",\n> maybe we just ignore the \"POSIX\" spelling altogether?)\n\nAgreed, removed \"POSIX\".\n\n> Speaking of which, the code in postinit.c is inconsistent in that \n> respect with builtin_validate_locale().  Shouldn't postinit.c use\n> builtin_validate_locale(), to keep it consistent?\n\nAgreed, done.\n\n> Or, there could be a general function that accepts a locale provider\n> and \n> a locale string and validates everything together?\n\nThat's a good idea -- perhaps a separate cleanup patch?\n\n> In initdb.c, this message\n> \n> printf(_(\"The database cluster will be initialized with no\n> locale.\\n\"));\n> \n> sounds a bit confusing.  I think it's ok to show \"C\" as a locale. \n> I'm\n> not sure we need to change the logic here.\n\nAgreed, removed.\n\n> Also in initdb.c, this message\n> \n> pg_fatal(\"locale must be specified unless provider is libc\");\n> \n> should be flipped around, like\n> \n> locale must be specified if provider is %s\n\nDone.\n\n> In pg_dump.c, dumpDatabase(), there are some new warning messages\n> that\n> are not specifically about the builtin provider.  Are those existing\n> deficiencies?  It's not clear to me.\n\nI wouldn't call that a deficiency, but it seemed to be a convenient\nplace to do some extra sanity checking along with the minor\nreorganization I did in that area.\n\n> What are the changes in the pg_upgrade test about?  Maybe explain the\n> scenario it is trying to test briefly?\n\nIt's trying to be a better test for commit 9637badd9f, which eliminates\nneedless locale incompatibilities when performing a pg_upgrade.\n\nAt the time of that commit, the options for testing were fairly\nlimited, so I'm just expanding on that here a bit. It might be slightly\nover-engineered? I added some comments and cleaned it up.\n\n> > 0004: Inline some UTF-8 functions to improve performance\n> \n> Makes sense that inlining can be effective here.  But why aren't you \n> just inlining the existing function pg_utf_mblen()?  Now we have two \n> functions that do the same thing.  And the comment at pg_utf_mblen()\n> is \n> removed completely, so it's not clear anymore why it exists.\n\nI was trying to figure out what to do about USE_PRIVATE_ENCODING_FUNCS.\n\nIf libpq exports pg_utf_mblen(), it needs to continue to export that,\nor else it's an ABI break, right? So that means we need at least one\nextern copy of the function. 
See b6c7cfac88.\n\nThough now that I look at it, I'm not even calling the inlined version\nfrom my code -- I must have been using it in an earlier version and now\nnot. So I just left pg_utf_mblen() alone, and inlined unicode_to_utf8()\nand utf8_to_unicode().\n\n> > 0005: Add a unicode_strtitle() function and move the implementation\n> > for\n> > the builtin provider out of formatting.c.\n> \n> In the recent discussion you had expression some uncertainty about\n> the \n> detailed semantics of this.  INITCAP() was copied from Oracle, so we \n> could check there for reference, too.  Or we go with full Unicode \n> semantics.  I'm not clear on all the differences and tradeoffs, if\n> there \n> are any.  In any case, it would be good if there were documentation\n> or a \n> comment that somehow wrote down the resolution of this.\n\nThere are a few nuances that are different between the Unicode way to\ntitlecase a string and INITCAP():\n\n 1. For the initial character in a word, Unicode uses the titlecase\nmapping, whereas INITCAP (as the name suggests) uses the uppercase\nmapping.\n 2. Unicode uses full case mapping, which can change the length of the\nstring (e.g. mapping \"ß\" to the titlecase \"Ss\" -- though I've heard\nthat titlecasing \"ß\" doesn't make a lot of sense in German because\nwords typically don't begin with it). Full case mapping can also handle\ncontext-sensitive mappings, such as the \"final sigma\".\n 3. Unicode has a lot to say about word boundaries, whereas INITCAP()\njust uses the boundary between alnum and !alnum.\n\nThe unicode_strtitle() function is just a way to unify those\ndifferences into one implementation. A \"full\" parameter controls\nbehaviors 1 & 2, and a callback handles 3. If we just want to keep it\nsimple, we can leave it as the character-by-character algorithm in\nformatting.c.\n\nMy uncertainty was whether we really want INITCAP to be doing these\nmore sophisticated titlecasing transformations, or whether that should\nbe a separate sql function (title()? titlecase()?), or whether we just\ndon't need that functionality.\n\n\nNew series attached. I plan to commit 0001 very soon.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 13 Mar 2024 00:44:37 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-03-13 at 00:44 -0700, Jeff Davis wrote:\n> New series attached. I plan to commit 0001 very soon.\n\nCommitted the basic builtin provider, supporting only the \"C\" locale.\n\nThere were a few changes since the last version I posted:\n\n * Added simplistic validation of the locale name to initdb.c (missing\nbefore).\n * Consistently passed the locale name to\nget_collation_actual_version(). In the previous patch, the caller\nsometimes just passed NULL knowing that the builtin provider is not\nversioned, but that's not the caller's responsibility.\n * pg_dump previously had some minor refactoring, which you had some\nquestions about. I eliminated that and just kept it to the changes\nnecessary for the builtin provider.\n * createdb --help was missing the --builtin-locale option\n * improved error checking order in createdb() to avoid a confusing\nerror message.\n\nI also attached a rebased series.\n\n0001 (the C.UTF-8 locale) is also close. Considering that most of the\ninfrastructure is already in place, that's not a large patch. 
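\n\nA quick sketch of how the new locale would be selected once 0001 is in (option spellings as in the patch set; not verified here):\n\n  -- per-database, cloning template0 so the locale settings can differ\n  CREATE DATABASE demo TEMPLATE template0 ENCODING 'UTF8'\n      LOCALE_PROVIDER builtin BUILTIN_LOCALE 'C.UTF-8';\n\n  -- or as an individual collation\n  CREATE COLLATION c_utf8 (provider = builtin, locale = 'C.UTF-8');\n\n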
You many\nhave some comments about the way I'm canonicalizing and validating in\ninitdb -- that could be cleaner, but it feels like I should refactor\nthe surrounding code separately first.\n\n0002 (inlining utf8 functions) is also ready.\n\nFor 0003 and beyond, I'd like some validation that it's what you had in\nmind.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 14 Mar 2024 01:08:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 14.03.24 09:08, Jeff Davis wrote:\n> On Wed, 2024-03-13 at 00:44 -0700, Jeff Davis wrote:\n>> New series attached. I plan to commit 0001 very soon.\n> \n> Committed the basic builtin provider, supporting only the \"C\" locale.\n\nAs you were committing this, I had another review of \nv23-0001-Introduce-collation-provider-builtin.patch in progress. Some \nof the things I found you have already addressed in what you committed. \nPlease check the remaining comments.\n\n\n* doc/src/sgml/charset.sgml\n\nI don't understand the purpose of this sentence:\n\n\"When using this locale, the behavior may depend on the database encoding.\"\n\n\n* doc/src/sgml/ref/create_database.sgml\n\nThe new parameter builtin_locale is not documented.\n\n\n* src/backend/commands/collationcmds.c\n\nI think DefineCollation() should set collencoding = -1 for the\nCOLLPROVIDER_BUILTIN case. -1 stands for any encoding. Or at least\nexplain why not?\n\n\n* src/backend/utils/adt/pg_locale.c\n\nThis part is a bit confusing:\n\n+ cache_entry->collate_is_c = true;\n+ cache_entry->ctype_is_c = (strcmp(colllocale, \"C\") == 0);\n\nIs collate always C but ctype only sometimes? Does this anticipate\nfuture patches in this series? Maybe in this patch it should always\nbe true?\n\n\n* src/bin/initdb/initdb.c\n\n+ printf(_(\" --builtin-locale=LOCALE set builtin locale name \nfor new databases\\n\"));\n\nPut in a line break so that the right \"column\" lines up.\n\nThis output should line up better:\n\nThe database cluster will be initialized with this locale configuration:\n default collation provider: icu\n default collation locale: en\n LC_COLLATE: C\n LC_CTYPE: C\n ...\n\nAlso, why are there two spaces after \"provider: \"?\n\nAlso we call these locale provider on input, why are they collation\nproviders on output? What is a \"collation locale\"?\n\n\n* src/bin/pg_upgrade/t/002_pg_upgrade.pl\n\n+if ($oldnode->pg_version >= '17devel')\n\nThis is weird. >= is a numeric comparison, so providing a string with\nnon-digits is misleading at best.\n\n\n* src/test/icu/t/010_database.pl\n\n-# Test that LOCALE works for ICU locales if LC_COLLATE and LC_CTYPE\n-# are specified\n\nWhy remove this test?\n\n+my ($ret, $stdout, $stderr) = $node1->psql('postgres',\n+ q{CREATE DATABASE dbicu LOCALE_PROVIDER builtin LOCALE 'C' TEMPLATE \ndbicu}\n+);\n\nChange the name of the new database to be different from the name of\nthe template database.\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 09:54:35 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 14.03.24 09:08, Jeff Davis wrote:\n> 0001 (the C.UTF-8 locale) is also close. Considering that most of the\n> infrastructure is already in place, that's not a large patch. 
You many\n> have some comments about the way I'm canonicalizing and validating in\n> initdb -- that could be cleaner, but it feels like I should refactor\n> the surrounding code separately first.\n\nIf have tested this against the libc locale C.utf8 that was available on \nthe OS, and the behavior is consistent.\n\nI wonder if we should version the builtin locales too. We might make a \nmistake and want to change something sometime?\n\nTiny comments:\n\n* src/bin/scripts/t/020_createdb.pl\n\nThe two added tests should have different names that tells them apart\n(like the new initdb tests).\n\n* src/include/catalog/pg_collation.dat\n\nMaybe use 'and' instead of '&' in the description.\n\n> 0002 (inlining utf8 functions) is also ready.\n\nSeems ok.\n\n> For 0003 and beyond, I'd like some validation that it's what you had in\n> mind.\n\nI'll look into those later.\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:38:53 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-03-14 at 09:54 +0100, Peter Eisentraut wrote:\n> * doc/src/sgml/charset.sgml\n> \n> I don't understand the purpose of this sentence:\n> \n> \"When using this locale, the behavior may depend on the database\n> encoding.\"\n\nThe \"C\" locale (in either the builtin or libc provider) can sort\ndifferently in different encodings, because it's based on memcmp. For\ninstance:\n\n select U&'\\20AC' > U&'\\201A' collate \"C\";\n\nReturns true in UTF-8 and false in WIN1252. That's why UCS_BASIC is\nonly available in UTF-8, because (at least for some encodings) we'd\nhave to decode before comparison to get the code-point-order semantics\nright.\n\nIn other words, the \"C\" collation is not a well-defined order, but\nUCS_BASIC and C.UTF-8 are well-defined.\n\nSuggestions for better wording are welcome.\n\n> * doc/src/sgml/ref/create_database.sgml\n> \n> The new parameter builtin_locale is not documented.\n\nThank you, fixed in 0001 (review fixup).\n\n> * src/backend/commands/collationcmds.c\n> \n> I think DefineCollation() should set collencoding = -1 for the\n> COLLPROVIDER_BUILTIN case.  -1 stands for any encoding.  Or at least\n> explain why not?\n\nIn the attached v25-0001 (review fixup) I have made it the\nresponsibility of a function, and then extended that for the C.UTF-8\n(0002) and PG_UNICODE_FAST locales (0007).\n\n> * src/backend/utils/adt/pg_locale.c\n> \n> This part is a bit confusing:\n> \n> +           cache_entry->collate_is_c = true;\n> +           cache_entry->ctype_is_c = (strcmp(colllocale, \"C\") == 0);\n> \n> Is collate always C but ctype only sometimes?  Does this anticipate\n> future patches in this series?  Maybe in this patch it should always\n> be true?\n\nMade it a constant in v25-0001, and changed it in 0002\n\n> \n> * src/bin/initdb/initdb.c\n> \n> +   printf(_(\"      --builtin-locale=LOCALE   set builtin locale name\n> for new databases\\n\"));\n> \n> Put in a line break so that the right \"column\" lines up.\n\nFixed in 0001\n\n> This output should line up better:\n> \n> The database cluster will be initialized with this locale\n> configuration:\n>    default collation provider:  icu\n>    default collation locale:    en\n>    LC_COLLATE:  C\n>    LC_CTYPE:    C\n>    ...\n> \n> Also, why are there two spaces after \"provider:  \"?\n> \n> Also we call these locale provider on input, why are they collation\n> providers on output?  
What is a \"collation locale\"?\n\nI tried to fix these things in 0001.\n\n> * src/bin/pg_upgrade/t/002_pg_upgrade.pl\n> \n> +if ($oldnode->pg_version >= '17devel')\n> \n> This is weird.  >= is a numeric comparison, so providing a string\n> with\n> non-digits is misleading at best.\n\nIt's actually not a numeric comparison, it's an overloaded comparison\nop for the Version class.\n\nSee 32dd2c1eff and:\nhttps://www.postgresql.org/message-id/1738174.1710274577%40sss.pgh.pa.us\n\n> * src/test/icu/t/010_database.pl\n> \n> -# Test that LOCALE works for ICU locales if LC_COLLATE and LC_CTYPE\n> -# are specified\n> \n> Why remove this test?\n\nIt must have been lost during a rebase, fixed in 0001.\n\n> Change the name of the new database to be different from the name of\n> the template database.\n\nFixed in 0001.\n\nNew series attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 14 Mar 2024 13:42:20 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-03-14 at 15:38 +0100, Peter Eisentraut wrote:\n> On 14.03.24 09:08, Jeff Davis wrote:\n> > 0001 (the C.UTF-8 locale) is also close...\n> \n> If have tested this against the libc locale C.utf8 that was available\n> on \n> the OS, and the behavior is consistent.\n\nThat was the goal, in spirit.\n\nBut to clarify: it's not guaranteed that the built-in C.UTF-8 is always\nthe same as the libc UTF-8, because different implementations do\ndifferent things. For instance, I saw significant differences on MacOS.\n\n> I wonder if we should version the builtin locales too.  We might make\n> a \n> mistake and want to change something sometime?\n\nI'm fine with that, see v25-0004 in the reply to your other mail.\n\nThe version only tracks sort order, and all of the builtin locales sort\nbased on memcmp(). But it's possible there are bugs in the\noptimizations around memcmp() (e.g. 
abbreviated keys, or some future\noptimization).\n\n> Tiny comments:\n> \n> * src/bin/scripts/t/020_createdb.pl\n> \n> The two added tests should have different names that tells them apart\n> (like the new initdb tests).\n> \n> * src/include/catalog/pg_collation.dat\n\nDone in v25-0002 (in reply to your other mail).\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 13:42:28 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> New series attached.\n\nCoverity thinks there's something wrong with builtin_validate_locale,\nand as far as I can tell it's right: the last ereport is unreachable,\nbecause required_encoding is never changed from its initial -1 value.\nIt looks like there's a chunk of logic missing there, or else that\nthe code could be simplified further.\n\n/srv/coverity/git/pgsql-git/postgresql/src/backend/utils/adt/pg_locale.c: 2519 in builtin_validate_locale()\n>>> CID 1594398: Control flow issues (DEADCODE)\n>>> Execution cannot reach the expression \"encoding != required_encoding\" inside this statement: \"if (required_encoding >= 0 ...\".\n2519 \tif (required_encoding >= 0 && encoding != required_encoding)\n2520 \t\tereport(ERROR,\n2521 \t\t\t\t(errcode(ERRCODE_WRONG_OBJECT_TYPE),\n2522 \t\t\t\t errmsg(\"encoding \\\"%s\\\" does not match locale \\\"%s\\\"\",\n2523 \t\t\t\t\t\tpg_encoding_to_char(encoding), locale)));\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Mar 2024 17:46:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Sun, 2024-03-17 at 17:46 -0400, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > New series attached.\n> \n> Coverity thinks there's something wrong with builtin_validate_locale,\n> and as far as I can tell it's right: the last ereport is unreachable,\n> because required_encoding is never changed from its initial -1 value.\n> It looks like there's a chunk of logic missing there, or else that\n> the code could be simplified further.\n\nThank you, it was a bit of over-generalization in anticipation of\nfuture patches.\n\nIt may be moot soon, but I committed a fix now.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 10:00:03 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> It may be moot soon, but I committed a fix now.\n\nThanks, but it looks like 846311051 introduced a fresh issue.\nMSVC is complaining about\n\n[21:37:15.349] c:\\cirrus\\src\\backend\\utils\\adt\\pg_locale.c(2515) : warning C4715: 'builtin_locale_encoding': not all control paths return a value\n\nThis is causing all CI jobs to fail the \"compiler warnings\" check.\n\nProbably the best fix is the traditional\n\n\treturn <something>; /* keep compiler quiet */\n\nbut I'm not sure what the best default result is in this function.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 18:04:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-03-18 at 18:04 -0400, Tom Lane wrote:\n> This is causing all CI jobs to fail the \"compiler warnings\" check.\n\nI did run CI before checkin, and it passed:\n\nhttps://cirrus-ci.com/build/5382423490330624\n\nIf I open up the 
windows build, I see the warning:\n\nhttps://cirrus-ci.com/task/5199979044667392\n\nbut I didn't happen to check this time.\n\n> Probably the best fix is the traditional\n> \n>         return <something>;    /* keep compiler quiet */\n> \n> but I'm not sure what the best default result is in this function.\n\nIn inverted the check so that I didn't have to choose a default.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 18 Mar 2024 15:48:34 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Mon, 2024-03-18 at 18:04 -0400, Tom Lane wrote:\n>> This is causing all CI jobs to fail the \"compiler warnings\" check.\n\n> I did run CI before checkin, and it passed:\n> https://cirrus-ci.com/build/5382423490330624\n\nWeird, why did it not report with the same level of urgency?\nBut anyway, thanks for fixing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 18:54:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, Mar 19, 2024 at 11:55 AM Tom Lane <[email protected]> wrote:\n> Jeff Davis <[email protected]> writes:\n> > On Mon, 2024-03-18 at 18:04 -0400, Tom Lane wrote:\n> >> This is causing all CI jobs to fail the \"compiler warnings\" check.\n>\n> > I did run CI before checkin, and it passed:\n> > https://cirrus-ci.com/build/5382423490330624\n>\n> Weird, why did it not report with the same level of urgency?\n> But anyway, thanks for fixing.\n\nMaybe I misunderstood this exchange but ...\n\nCurrently Windows warnings don't make any CI tasks fail ie turn red,\nwhich is why Jeff's run is all green in his personal github repo.\nUnlike gcc and clang, and MinGW cross-build warnings which cause the\nspecial \"CompilerWarnings\" CI task to fail (red). That task is\nrunning on a Linux system so it can't use MSVC. The idea of keeping\nit separate from the \"main\" Linux, FreeBSD, macOS tasks (which use\ngcc, clang, clang respectively) was that it's nicer to try to run the\nactual tests even if there is a pesky warning, so having it in a\nseparate task gets you that info without blocking other progress, and\nit also tries with and without assertions (a category of warning\nhazard, eg unused variables when assertions are off).\n\nBut I did teach cfbot to do some extra digging through the logs,\nlooking for various interesting patterns[1], including non-error\nwarnings, and if it finds anything interesting it shows a little\nclickable ⚠ symbol on the front page.\n\nIf there is something like -Werror on MSVC we could turn that on for\nthe main Windows test, but that might also be a bit annoying. Perhaps\nthere is another way: we could have it compile and test everything,\nallowing warnings, but also then grep the build log afterwards in a\nnew step that fails if any warnings were there? Then Jeff would have\ngot a failure in his personal CI run. 
Or something like that.\n\n[1] https://github.com/macdice/cfbot/blob/master/cfbot_work_queue.py\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:38:57 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Tue, Mar 19, 2024 at 11:55 AM Tom Lane <[email protected]> wrote:\n>>>> This is causing all CI jobs to fail the \"compiler warnings\" check.\n\n>>> I did run CI before checkin, and it passed:\n\n> Maybe I misunderstood this exchange but ...\n\n> Currently Windows warnings don't make any CI tasks fail ie turn red,\n> which is why Jeff's run is all green in his personal github repo.\n> ...\n> But I did teach cfbot to do some extra digging through the logs,\n\nAh. What I should have said was \"it's causing cfbot to complain\nabout every patch\".\n\nSeems like the divergence in the pass criterion is not such a\ngreat idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 00:03:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "* v25-0001-Address-more-review-comments-on-commit-2d819a08a.patch\n\nThis was committed.\n\n* v25-0002-Support-C.UTF-8-locale-in-the-new-builtin-collat.patch\n\nLooks ok.\n\n* v25-0003-Inline-basic-UTF-8-functions.patch\n\nok\n\n* v25-0004-Use-version-for-builtin-collations.patch\n\nNot sure about the version format \"1.0\", which implies some sort of \nmajor/minor or component-based system. I would just use \"1\".\n\n* v25-0005-Add-unicode_strtitle-for-Unicode-Default-Case-Co.patch\n* v25-0006-Support-Unicode-full-case-mapping-and-conversion.patch\n* v25-0007-Support-PG_UNICODE_FAST-locale-in-the-builtin-co.patch\n\n0005 and 0006 don't contain any test cases. So I guess they are really \nonly usable via 0007. Is that understanding correct?\n\nBtw., tested initcap() on Oracle:\n\nselect initcap('džudo') from dual;\n\n(which uses the precomposed U+01F3) and the result is\n\nDŽudo\n\n(with the precomposed uppercase character). So that matches the \nbehavior proposed in your 0002 patch.\n\nAre there any test cases that illustrate the word boundary changes in \npatch 0005? It might be useful to test those against Oracle as well.\n\n\n\n", "msg_date": "Tue, 19 Mar 2024 13:41:35 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-03-19 at 13:41 +0100, Peter Eisentraut wrote:\n> * v25-0002-Support-C.UTF-8-locale-in-the-new-builtin-collat.patch\n> \n> Looks ok.\n\nCommitted.\n\n> * v25-0003-Inline-basic-UTF-8-functions.patch\n\nCommitted.\n\n> * v25-0004-Use-version-for-builtin-collations.patch\n> \n> Not sure about the version format \"1.0\", which implies some sort of \n> major/minor or component-based system.  I would just use \"1\".\n\nThe v26 patch was not quite complete, so I didn't commit it yet.\nAttached v27-0001 and 0002.\n\n0002 is necessary because otherwise lc_collate_is_c() short-circuits\nthe version check in pg_newlocale_from_collation(). With 0002, the code\nis simpler and all paths go through pg_newlocale_from_collation(), and\nthe version check happens even when lc_collate_is_c().\n\nBut perhaps there was a reason the code was the way it was, so\nsubmitting for review in case I missed something.\n\n> 0005 and 0006 don't contain any test cases.  So I guess they are\n> really \n> only usable via 0007.  
Is that understanding correct?\n\n0005 is not a functional change, it's just a refactoring to use a\ncallback, which is preparation for 0007.\n\n> Are there any test cases that illustrate the word boundary changes in\n> patch 0005?  It might be useful to test those against Oracle as well.\n\nThe tests include initcap('123abc') which is '123abc' in the PG_C_UTF8\ncollation vs '123Abc' in PG_UNICODE_FAST.\n\nThe reason for the latter behavior is that the Unicode Default Case\nConversion algorithm for toTitlecase() advances to the next Cased\ncharacter before mapping to titlecase, and digits are not Cased. ICU\nhas a configurable adjustment, and defaults in a way that produces\n'123abc'.\n\nNew rebased series attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 20 Mar 2024 17:13:26 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 21.03.24 01:13, Jeff Davis wrote:\n>> Are there any test cases that illustrate the word boundary changes in\n>> patch 0005?  It might be useful to test those against Oracle as well.\n> The tests include initcap('123abc') which is '123abc' in the PG_C_UTF8\n> collation vs '123Abc' in PG_UNICODE_FAST.\n> \n> The reason for the latter behavior is that the Unicode Default Case\n> Conversion algorithm for toTitlecase() advances to the next Cased\n> character before mapping to titlecase, and digits are not Cased. ICU\n> has a configurable adjustment, and defaults in a way that produces\n> '123abc'.\n\nI think this might be too big of a compatibility break. So far, \ninitcap('123abc') has always returned '123abc'. If the new collation \nreturns '123Abc' now, then that's quite a change. These are not some \nobscure Unicode special case characters, after all.\n\nWhat is the ICU configuration incantation for this? Maybe we could have \nthe builtin provider understand some of that, too.\n\nOr we should create a function separate from initcap.\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:51:49 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2024-03-22 at 15:51 +0100, Peter Eisentraut wrote:\n> I think this might be too big of a compatibility break.  So far, \n> initcap('123abc') has always returned '123abc'.  If the new collation\n> returns '123Abc' now, then that's quite a change.  These are not some\n> obscure Unicode special case characters, after all.\n\nIt's a new collation, so I'm not sure it's a compatibility break. But\nyou are right that it is against documentation and expectations for\nINITCAP().\n\n> What is the ICU configuration incantation for this?  
Maybe we could\n> have \n> the builtin provider understand some of that, too.\n\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#a4975f537b9960f0330b233061ef0608d\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#afc65fa226cac9b8eeef0e877b8a7744e\n\n> Or we should create a function separate from initcap.\n\nIf we create a new function, that also gives us the opportunity to\naccept optional arguments to control the behavior rather than relying\non collation for every decision.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:26:10 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Hello Jeff,\n\n21.03.2024 03:13, Jeff Davis wrote:\n> On Tue, 2024-03-19 at 13:41 +0100, Peter Eisentraut wrote:\n>> * v25-0002-Support-C.UTF-8-locale-in-the-new-builtin-collat.patch\n>>\n>> Looks ok.\n> Committed.\n\nPlease look at a Valgrind-detected error caused by the following query\n(starting from f69319f2f):\nSELECT lower('Π' COLLATE pg_c_utf8);\n\n==00:00:00:03.487 1429669== Invalid read of size 1\n==00:00:00:03.487 1429669==    at 0x7C64A5: convert_case (unicode_case.c:107)\n==00:00:00:03.487 1429669==    by 0x7C6666: unicode_strlower (unicode_case.c:70)\n==00:00:00:03.487 1429669==    by 0x66B218: str_tolower (formatting.c:1698)\n==00:00:00:03.488 1429669==    by 0x6D6C55: lower (oracle_compat.c:55)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 24 Mar 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Sun, 2024-03-24 at 14:00 +0300, Alexander Lakhin wrote:\n> Please look at a Valgrind-detected error caused by the following\n> query\n> (starting from f69319f2f):\n> SELECT lower('Π' COLLATE pg_c_utf8);\n\nThank you for the report!\n\nFixed in 503c0ad976.\n\nValgrind did not detect the problem in my setup, so I added a unit test\nin case_test.c where it's easier to see the valgrind problem.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Sun, 24 Mar 2024 16:41:20 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "There is no technical content in this mail, but I'd like to\nshow appreciation for your work on this. I hope this will\neventually remove one of the great embarrassments when using\nPostgreSQL: the dependency on operation system collations.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 25 Mar 2024 08:07:35 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 22.03.24 18:26, Jeff Davis wrote:\n> On Fri, 2024-03-22 at 15:51 +0100, Peter Eisentraut wrote:\n>> I think this might be too big of a compatibility break.  So far,\n>> initcap('123abc') has always returned '123abc'.  If the new collation\n>> returns '123Abc' now, then that's quite a change.  These are not some\n>> obscure Unicode special case characters, after all.\n> \n> It's a new collation, so I'm not sure it's a compatibility break. But\n> you are right that it is against documentation and expectations for\n> INITCAP().\n> \n>> What is the ICU configuration incantation for this?  
Maybe we could\n>> have\n>> the builtin provider understand some of that, too.\n> \n> https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#a4975f537b9960f0330b233061ef0608d\n> https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/stringoptions_8h.html#afc65fa226cac9b8eeef0e877b8a7744e\n> \n>> Or we should create a function separate from initcap.\n> \n> If we create a new function, that also gives us the opportunity to\n> accept optional arguments to control the behavior rather than relying\n> on collation for every decision.\n\nRight. I thought when you said there is an ICU configuration for it, \nthat it might be like collation options that you specify in the locale \nstring. But it appears it is only an internal API setting. So that, in \nmy mind, reinforces the opinion that we should leave initcap() as is and \nmake a new function that exposes the new functionality. (This does not \nhave to be part of this patch set.)\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 08:29:47 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-03-25 at 08:29 +0100, Peter Eisentraut wrote:\n> Right.  I thought when you said there is an ICU configuration for it,\n> that it might be like collation options that you specify in the\n> locale \n> string.  But it appears it is only an internal API setting.  So that,\n> in \n> my mind, reinforces the opinion that we should leave initcap() as is\n> and \n> make a new function that exposes the new functionality.  (This does\n> not \n> have to be part of this patch set.)\n\nOK, I'll propose a \"title\" or \"titlecase\" function for 18, along with\n\"casefold\" (which I was already planning to propose).\n\nWhat do you think about UPPER/LOWER and full case mapping? Should there\nbe extra arguments for full vs simple case mapping, or should it come\nfrom the collation?\n\nIt makes sense that the \"dotted vs dotless i\" behavior comes from the\ncollation because that depends on locale. But full-vs-simple case\nmapping is not really a locale question. For instance:\n\n select lower('0Σ' collate \"en-US-x-icu\") AS lower_sigma,\n lower('ΑΣ' collate \"en-US-x-icu\") AS lower_final_sigma,\n upper('ß' collate \"en-US-x-icu\") AS upper_eszett;\n lower_sigma | lower_final_sigma | upper_eszett \n -------------+-------------------+--------------\n 0σ | ας | SS\n\nproduces the same results for any ICU collation.\n\nThere's also another reason to consider it an argument rather than a\ncollation property, which is that it might be dependent on some other\nfield in a row. I could imagine someone wanting to do:\n\n SELECT\n UPPER(some_field,\n full => true,\n dotless_i => CASE other_field WHEN ...)\n FROM ...\n\nThat makes sense for a function in the target list, because different\ncustomers might be from different locales and therefore want different\ntreatment of the dotted-vs-dotless-i.\n\nThoughts? Should we use the collation by default but then allow\nparameters to override? 
Or should we just consider this a new set of\nfunctions?\n\n(All of this is v18 material, of course.)\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:52:56 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 21.03.24 01:13, Jeff Davis wrote:\n> The v26 patch was not quite complete, so I didn't commit it yet.\n> Attached v27-0001 and 0002.\n> \n> 0002 is necessary because otherwise lc_collate_is_c() short-circuits\n> the version check in pg_newlocale_from_collation(). With 0002, the code\n> is simpler and all paths go through pg_newlocale_from_collation(), and\n> the version check happens even when lc_collate_is_c().\n> \n> But perhaps there was a reason the code was the way it was, so\n> submitting for review in case I missed something.\n> \n>> 0005 and 0006 don't contain any test cases.  So I guess they are\n>> really\n>> only usable via 0007.  Is that understanding correct?\n> 0005 is not a functional change, it's just a refactoring to use a\n> callback, which is preparation for 0007.\n> \n>> Are there any test cases that illustrate the word boundary changes in\n>> patch 0005?  It might be useful to test those against Oracle as well.\n> The tests include initcap('123abc') which is '123abc' in the PG_C_UTF8\n> collation vs '123Abc' in PG_UNICODE_FAST.\n> \n> The reason for the latter behavior is that the Unicode Default Case\n> Conversion algorithm for toTitlecase() advances to the next Cased\n> character before mapping to titlecase, and digits are not Cased. ICU\n> has a configurable adjustment, and defaults in a way that produces\n> '123abc'.\n> \n> New rebased series attached.\n\nThe patch set v27 is ok with me, modulo (a) discussion about initcap \nsemantics, and (b) what collation to assign to ucs_basic, which can be \nrevisited later.\n\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:04:28 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 25.03.24 18:52, Jeff Davis wrote:\n> OK, I'll propose a \"title\" or \"titlecase\" function for 18, along with\n> \"casefold\" (which I was already planning to propose).\n\n(Yay, casefold will be useful.)\n\n> What do you think about UPPER/LOWER and full case mapping? Should there\n> be extra arguments for full vs simple case mapping, or should it come\n> from the collation?\n> \n> It makes sense that the \"dotted vs dotless i\" behavior comes from the\n> collation because that depends on locale. But full-vs-simple case\n> mapping is not really a locale question. For instance:\n> \n> select lower('0Σ' collate \"en-US-x-icu\") AS lower_sigma,\n> lower('ΑΣ' collate \"en-US-x-icu\") AS lower_final_sigma,\n> upper('ß' collate \"en-US-x-icu\") AS upper_eszett;\n> lower_sigma | lower_final_sigma | upper_eszett\n> -------------+-------------------+--------------\n> 0σ | ας | SS\n> \n> produces the same results for any ICU collation.\n\nI think of a collation describing what language a text is in. So it \nmakes sense that \"dotless i\" depends on the locale/collation.\n\nFull vs. simple case mapping is more of a legacy compatibility question, \nin my mind. 
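\n\nA sketch of the kind of difference at stake, reusing the behaviors already shown upthread (the pg_c_utf8 results are what simple mapping should give; not re-verified here):\n\n  select upper('ß' collate \"en-US-x-icu\");   -- 'SS'  (full mapping can change the length)\n  select lower('ΑΣ' collate \"en-US-x-icu\");  -- 'ας'  (context-sensitive final sigma)\n  select upper('ß' collate \"pg_c_utf8\");     -- 'ß'   (simple mapping maps one code point to one)\n  select lower('ΑΣ' collate \"pg_c_utf8\");    -- 'ασ'\n\n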
There is some expectation/precedent that C.UTF-8 uses \nsimple case mapping, but beyond that, I don't see a reason why someone \nwould want to explicitly opt for simple case mapping, other than if they \nneed length preservation or something, but if they need that, then they \nare going to be in a world of pain in Unicode anyway.\n\n> There's also another reason to consider it an argument rather than a\n> collation property, which is that it might be dependent on some other\n> field in a row. I could imagine someone wanting to do:\n> \n> SELECT\n> UPPER(some_field,\n> full => true,\n> dotless_i => CASE other_field WHEN ...)\n> FROM ...\n\nCan you index this usefully? It would only work if the user query \nmatches exactly this pattern?\n\n> That makes sense for a function in the target list, because different\n> customers might be from different locales and therefore want different\n> treatment of the dotted-vs-dotless-i.\n\nThere is also the concept of a session collation, which we haven't \nimplemented, but it would address this kind of use. But there again the \nproblem is indexing. But maybe indexing isn't as important for case \nconversion as it is for sorting.\n\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:14:46 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tJeff Davis wrote:\n\n> The tests include initcap('123abc') which is '123abc' in the PG_C_UTF8\n> collation vs '123Abc' in PG_UNICODE_FAST.\n> \n> The reason for the latter behavior is that the Unicode Default Case\n> Conversion algorithm for toTitlecase() advances to the next Cased\n> character before mapping to titlecase, and digits are not Cased. ICU\n> has a configurable adjustment, and defaults in a way that produces\n> '123abc'.\n\nEven aside from ICU, there's a different behavior between glibc\nand pg_c_utf8 glibc for codepoints in the decimal digit category \noutside of the US-ASCII range '0'..'9',\n\nselect initcap(concat(chr(0xff11), 'a') collate \"C.utf8\"); -- glibc 2.35\n initcap \n---------\n 1a\n\nselect initcap(concat(chr(0xff11), 'a') collate \"pg_c_utf8\");\n initcap \n---------\n 1A\n\nBoth collations consider that chr(0xff11) is not a digit\n(isdigit()=>false) but C.utf8 says that it's alpha, whereas pg_c_utf8\nsays it's neither digit nor alpha.\n\nAFAIU this is why in the above initcap() call, pg_c_utf8 considers\nthat 'a' is the first alphanumeric, whereas C.utf8 considers that '1'\nis the first alphanumeric, leading to different capitalizations.\n\nComparing the 3 providers:\n\nWITH v(provider,type,result) AS (values\n ('ICU', 'isalpha', chr(0xff11) ~ '[[:alpha:]]' collate \"unicode\"),\n ('glibc', 'isalpha', chr(0xff11) ~ '[[:alpha:]]' collate \"C.utf8\"),\n ('builtin', 'isalpha', chr(0xff11) ~ '[[:alpha:]]' collate \"pg_c_utf8\"),\n ('ICU', 'isdigit', chr(0xff11) ~ '[[:digit:]]' collate \"unicode\"),\n ('glibc', 'isdigit', chr(0xff11) ~ '[[:digit:]]' collate \"C.utf8\"),\n ('builtin', 'isdigit', chr(0xff11) ~ '[[:digit:]]' collate \"pg_c_utf8\")\n )\nselect * from v\n\\crosstabview\n\n\n provider | isalpha | isdigit \n----------+---------+---------\n ICU\t | f\t | t\n glibc\t | t\t | f\n builtin | f\t | f\n\n\nAre we fine with pg_c_utf8 differing from both ICU's point of view\n(U+ff11 is digit and not alpha) and glibc point of view (U+ff11 is not\ndigit, but it's alpha)?\n\nAside from initcap(), this is going to be significant for regular\nexpressions.\n\n\nBest regards,\n-- \nDaniel 
Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 27 Mar 2024 16:53:33 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-03-27 at 16:53 +0100, Daniel Verite wrote:\n>  provider | isalpha | isdigit \n> ----------+---------+---------\n>  ICU      | f       | t\n>  glibc    | t       | f\n>  builtin  | f       | f\n\nThe \"ICU\" above is really the behvior of the Postgres ICU provider as\nwe implemented it, it's not something forced on us by ICU.\n\nFor the ICU provider, pg_wc_isalpha() is defined as u_isalpha()[1] and\npg_wc_isdigit() is defined as u_isdigit()[2]. Those, in turn, are\ndefined by ICU to be equivalent to java.lang.Character.isLetter() and\njava.lang.Character.isDigit().\n\nICU documents[3] how regex character classes should be implemented\nusing the ICU APIs, and cites Unicode TR#18 [4] as the source. Despite\nbeing under the heading \"...for C/POSIX character classes...\", [3] says\nit's based on the \"Standard\" variant of [4], rather than \"POSIX\nCompatible\".\n\n(Aside: the Postgres ICU provider doesn't match what [3] suggests for\nthe \"alpha\" class. For the character U+FF11 it doesn't matter, but I\nsuspect there are differences for other characters. This should be\nfixed.)\n\nThe differences between PG_C_UTF8 and what ICU suggests are just\nbecause the former uses the \"POSIX Compatible\" definitions and the\nlatter uses \"Standard\".\n\nI implemented both the \"Standard\" and \"POSIX Compatible\" compatibility\nproperties in ad49994538, so it would be easy to change what PG_C_UTF8\nuses.\n\n[1]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uchar_8h.html#aecff8611dfb1814d1770350378b3b283\n[2] \nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uchar_8h.html#a42b37828d86daa0fed18b381130ce1e6\n[3] \nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/uchar_8h.html#details\n[4] \nhttp://www.unicode.org/reports/tr18/#Compatibility_Properties\n\n> Are we fine with pg_c_utf8 differing from both ICU's point of view\n> (U+ff11 is digit and not alpha) and glibc point of view (U+ff11 is\n> not\n> digit, but it's alpha)?\n\nYes, some differences are to be expected.\n\nBut I'm fine making a change to PG_C_UTF8 if it makes sense, as long as\nwe can point to something other than \"glibc version 2.35 does it this\nway\".\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:40:19 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-03-26 at 08:04 +0100, Peter Eisentraut wrote:\n> The patch set v27 is ok with me, modulo (a) discussion about initcap \n> semantics, and (b) what collation to assign to ucs_basic, which can\n> be \n> revisited later.\n\nI held off on the refactoring patch for lc_{ctype|collate}_is_c().\nThere's an explicit \"NB: pg_newlocale_from_collation is only supposed\nto be called on non-C-equivalent locales\" comment in DefineCollation().\n\nWhat I'd like to do is make it possible to create valid pg_locale_t\nobjects out of C locales, which can be used anywhere a real locale can\nbe used. Callers can still check lc_{collate|ctype}_is_c() for various\nreasons; but if they did call pg_newlocale_from_collation on a C locale\nit would at least work for the pg_locale.h APIs. 
That would be a\nslightly simpler and safer API, and make it easier to do the collation\nversion check consistently.\n\nThat's not very complicated, but it's a bit invasive and probably out\nof scope for v17. It might be part of another change I had intended for\na while, which is to make NULL an invalid pg_locale_t, and use a\ndifferent representation to mean \"use the server environment\". That\nwould clean up a lot of checks for NULL.\n\nFor now, we'd still like to add the version number to the builtin\ncollations, so that leaves us with two options:\n\n(a) Perform the version check in lc_{collate|ctype}_is_c(), which\nduplicates some code and creates some inconsistency in how the version\nis checked for different providers.\n\n(b) Don't worry about it and just commit the version change in v27-\n0001. The version check is already performed correctly on the database\nwithout changes, even if the locale is \"C\". And there are already three\nbuilt-in \"C\" collations: \"C\", \"POSIX\", and UCS_BASIC; so it's not clear\nwhy someone would create even more of them. And even if they did, there\nwould be no reason to give them a warning because we haven't\nincremented the version, so there's no chance of a mismatch.\n\nI'm inclined toward (b). Thoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 27 Mar 2024 15:13:55 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-03-26 at 08:04 +0100, Peter Eisentraut wrote:\n> The patch set v27 is ok with me, modulo (a) discussion about initcap \n> semantics, and (b) what collation to assign to ucs_basic, which can\n> be \n> revisited later.\n\nAttached v28.\n\nThe remaining patches are for full case mapping and PG_UNICODE_FAST. \n\nI am fine waiting until July to get these remaining patches committed.\nThat would give us time to sort out details like:\n\n* Get consensus that it's OK to change UCS_BASIC.\n* Figure out if we need a pg-specific locale and whether\nPG_UNICODE_FAST is the right name.\n* Make sure that full case mapping interacts with regexes in a sane way\n(probably it needs to just fall back to simple case mapping, but\nperhaps that's worth a discussion).\n* Implement case folding.\n* Implement a more unicode-friendly TITLECASE() function, which could\noffer a number of options that don't fit well with INITCAP().\n* Figure out if UPPER()/LOWER() should also have some of those options.\n\nThoughts?\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 01 Apr 2024 12:52:31 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-03-26 at 08:14 +0100, Peter Eisentraut wrote:\n> \n> Full vs. simple case mapping is more of a legacy compatibility\n> question, \n> in my mind.  
There is some expectation/precedent that C.UTF-8 uses \n> simple case mapping, but beyond that, I don't see a reason why\n> someone \n> would want to explicitly opt for simple case mapping, other than if\n> they \n> need length preservation or something, but if they need that, then\n> they \n> are going to be in a world of pain in Unicode anyway.\n\nI mostly agree, though there are some other purposes for the simple\nmapping:\n\n* a substitute for case folding: lower() with simple case mapping will\nwork better for that purpose than lower() with full case mapping (after\nwe have casefold(), this won't be a problem)\n\n* simple case mapping is conceptually simpler, and that's a benefit by\nitself in some situations -- maybe the 1:1 assumption exists other\nplaces in their application\n\n> > There's also another reason to consider it an argument rather than\n> > a\n> > collation property, which is that it might be dependent on some\n> > other\n> > field in a row. I could imagine someone wanting to do:\n> > \n> >     SELECT\n> >       UPPER(some_field,\n> >             full => true,\n> >             dotless_i => CASE other_field WHEN ...)\n> >     FROM ...\n> \n> Can you index this usefully?  It would only work if the user query \n> matches exactly this pattern?\n\nIn that example, UPPER is used in the target list -- the WHERE clause\nmight be indexable. The UPPER is just used for display purposes, and\nmay depend on some locale settings stored in another table associated\nwith a particular user.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 03 Apr 2024 16:19:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 01.04.24 21:52, Jeff Davis wrote:\n> On Tue, 2024-03-26 at 08:04 +0100, Peter Eisentraut wrote:\n>> The patch set v27 is ok with me, modulo (a) discussion about initcap\n>> semantics, and (b) what collation to assign to ucs_basic, which can\n>> be\n>> revisited later.\n> \n> Attached v28.\n> \n> The remaining patches are for full case mapping and PG_UNICODE_FAST.\n> \n> I am fine waiting until July to get these remaining patches committed.\n> That would give us time to sort out details like:\n> \n> * Get consensus that it's OK to change UCS_BASIC.\n> * Figure out if we need a pg-specific locale and whether\n> PG_UNICODE_FAST is the right name.\n> * Make sure that full case mapping interacts with regexes in a sane way\n> (probably it needs to just fall back to simple case mapping, but\n> perhaps that's worth a discussion).\n> * Implement case folding.\n> * Implement a more unicode-friendly TITLECASE() function, which could\n> offer a number of options that don't fit well with INITCAP().\n> * Figure out if UPPER()/LOWER() should also have some of those options.\n> \n> Thoughts?\n\nYeah, I think it's good to give some more time to work out these things. 
\n The features committed for PG17 so far are solid, so it's a good point \nto pause.\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 14:05:27 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Hi,\n\n+command_ok(\n+ [\n+ 'initdb', '--no-sync',\n+ '--locale-provider=builtin', '-E UTF-8',\n+ '--builtin-locale=C.UTF-8', \"$tempdir/data8\"\n+ ],\n+ 'locale provider builtin with -E UTF-8 --builtin-locale=C.UTF-8');\n\nThis Sun animal recently turned on --enable-tap-tests, and that ↑ failed[1]:\n\n# Running: initdb --no-sync --locale-provider=builtin -E UTF-8\n--builtin-locale=C.UTF-8\n/home/marcel/build-farm-15/buildroot/HEAD/pgsql.build/src/bin/initdb/tmp_check/tmp_test_XvK1/data8\nThe files belonging to this database system will be owned by user \"marcel\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with this locale configuration:\n locale provider: builtin\n default collation: C.UTF-8\n LC_COLLATE: en_US\n LC_CTYPE: en_US\n LC_MESSAGES: C\n LC_MONETARY: en_US\n LC_NUMERIC: en_US\n LC_TIME: en_US\ninitdb: error: encoding mismatch\ninitdb: detail: The encoding you selected (UTF8) and the encoding that\nthe selected locale uses (LATIN1) do not match. This would lead to\nmisbehavior in various character string processing functions.\ninitdb: hint: Rerun initdb and either do not specify an encoding\nexplicitly, or choose a matching combination.\n[14:04:12.462](0.036s) not ok 28 - locale provider builtin with -E\nUTF-8 --builtin-locale=C.UTF-8\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=margay&dt=2024-04-04%2011%3A42%3A40\n\n\n", "msg_date": "Fri, 5 Apr 2024 11:22:12 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2024-04-05 at 11:22 +1300, Thomas Munro wrote:\n> Hi,\n> \n> +command_ok(\n> +       [\n> +               'initdb', '--no-sync',\n> +               '--locale-provider=builtin', '-E UTF-8',\n> +               '--builtin-locale=C.UTF-8', \"$tempdir/data8\"\n> +       ],\n> +       'locale provider builtin with -E UTF-8 --builtin-\n> locale=C.UTF-8');\n\n...\n\n>   LC_COLLATE:  en_US\n>   LC_CTYPE:    en_US\n>   LC_MESSAGES: C\n>   LC_MONETARY: en_US\n>   LC_NUMERIC:  en_US\n>   LC_TIME:     en_US\n> initdb: error: encoding mismatch\n> initdb: detail: The encoding you selected (UTF8) and the encoding\n> that\n> the selected locale uses (LATIN1) do not match.\n\nThank you for the report.\n\nI fixed it in e2a2357671 by forcing the environment locale to C which\nis compatible with any encoding. The test still forces the encoding to\nUTF-8 and the collation to the builtin C.UTF-8.\n\nIn passing, I noticed some unrelated regression test failures when I\nset LANG=tr_TR: tsearch, tsdict, json, and jsonb. There's an additional\nfailure in the updatable_views test when LANG=tr_TR.utf8. 
I haven't\nlooked into the details yet.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 04 Apr 2024 16:38:11 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Mar 20, 2024 at 05:13:26PM -0700, Jeff Davis wrote:\n> On Tue, 2024-03-19 at 13:41 +0100, Peter Eisentraut wrote:\n> > * v25-0002-Support-C.UTF-8-locale-in-the-new-builtin-collat.patch\n> > \n> > Looks ok.\n> \n> Committed.\n\n> <varlistentry>\n> + <term><literal>pg_c_utf8</literal></term>\n> + <listitem>\n> + <para>\n> + This collation sorts by Unicode code point values rather than natural\n> + language order. For the functions <function>lower</function>,\n> + <function>initcap</function>, and <function>upper</function>, it uses\n> + Unicode simple case mapping. For pattern matching (including regular\n> + expressions), it uses the POSIX Compatible variant of Unicode <ulink\n> + url=\"https://www.unicode.org/reports/tr18/#Compatibility_Properties\">Compatibility\n> + Properties</ulink>. Behavior is efficient and stable within a\n> + <productname>Postgres</productname> major version. This collation is\n> + only available for encoding <literal>UTF8</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nlower(), initcap(), upper(), and regexp_matches() are PROVOLATILE_IMMUTABLE.\nUntil now, we've delegated that responsibility to the user. The user is\nsupposed to somehow never update libc or ICU in a way that changes outcomes\nfrom these functions. Now that postgresql.org is taking that responsibility\nfor builtin C.UTF-8, how should we govern it? I think the above text and [1]\nconvey that we'll update the Unicode data between major versions, making\nfunctions like lower() effectively STABLE. Is that right?\n\n(This thread had some discussion[2] that datcollversion/collversion won't\nnecessarily change when a major versions changes lower() behavior.)\n\n[1] https://postgr.es/m/[email protected]\n[2] https://postgr.es/m/[email protected]\n\n\n", "msg_date": "Sat, 29 Jun 2024 15:08:57 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Sat, 2024-06-29 at 15:08 -0700, Noah Misch wrote:\n> lower(), initcap(), upper(), and regexp_matches() are\n> PROVOLATILE_IMMUTABLE.\n> Until now, we've delegated that responsibility to the user.  The user\n> is\n> supposed to somehow never update libc or ICU in a way that changes\n> outcomes\n> from these functions.\n\nTo me, \"delegated\" connotes a clear and organized transfer of\nresponsibility to the right person to solve it. In that sense, I\ndisagree that we've delegated it.\n\nWhat's happened here is evolution of various choices that seemed\nreasonable at the time. Unfortunately, the consequences that are hard\nfor us to manage and even harder for users to manage themselves.\n\n>   Now that postgresql.org is taking that responsibility\n> for builtin C.UTF-8, how should we govern it?  I think the above text\n> and [1]\n> convey that we'll update the Unicode data between major versions,\n> making\n> functions like lower() effectively STABLE.  Is that right?\n\nMarking them STABLE is not a viable option, that would break a lot of\nvalid use cases, e.g. 
an index on LOWER().\n\nUnicode already has its own governance, including a stability policy\nthat includes case mapping:\n\nhttps://www.unicode.org/policies/stability_policy.html#Case_Pair\n\nGranted, that policy does not guarantee that the results will never\nchange. In particular, the results can change if using unassinged code\npoitns that are later assigned to Cased characters.\n\nThat's not terribly common though; for instance, there are zero changes\nin uppercase/lowercase behavior between Unicode 14.0 (2021) and 15.1\n(current) -- even for code points that were unassigned in 14.0 and\nlater assigned. I checked this by modifying case_test.c to look at\nunassigned code points as well.\n\nThere's a greater chance that character properties can change (e.g.\nwhether a character is \"alphabetic\" or not) in new releases of Unicode.\nSuch properties can affect regex character classifications, and in some\ncases the results of initcap (because it uses the \"alphanumeric\"\nclassification to determine word boundaries).\n\nI don't think we need code changes for 17. Some documentation changes\nmight be helpful, though. Should we have a note around LOWER()/UPPER()\nthat users should REINDEX any dependent indexes when the provider is\nupdated?\n\n> (This thread had some discussion[2] that datcollversion/collversion\n> won't\n> necessarily change when a major versions changes lower() behavior.)\n\ndatcollversion/collversion track the vertsion of the collation\nspecifically (text ordering only), not the ctype (character semantics).\nWhen using the libc provider, get_collation_actual_version() completely\nignores the ctype.\n\nIt would be interesting to consider tracking the versions separately,\nthough.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 01 Jul 2024 12:24:15 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, Jul 01, 2024 at 12:24:15PM -0700, Jeff Davis wrote:\n> On Sat, 2024-06-29 at 15:08 -0700, Noah Misch wrote:\n> > lower(), initcap(), upper(), and regexp_matches() are\n> > PROVOLATILE_IMMUTABLE.\n> > Until now, we've delegated that responsibility to the user.� The user\n> > is\n> > supposed to somehow never update libc or ICU in a way that changes\n> > outcomes\n> > from these functions.\n> \n> To me, \"delegated\" connotes a clear and organized transfer of\n> responsibility to the right person to solve it. In that sense, I\n> disagree that we've delegated it.\n\nGood point.\n\n> > � Now that postgresql.org is taking that responsibility\n> > for builtin C.UTF-8, how should we govern it?� I think the above text\n> > and [1]\n> > convey that we'll update the Unicode data between major versions,\n> > making\n> > functions like lower() effectively STABLE.� Is that right?\n> \n> Marking them STABLE is not a viable option, that would break a lot of\n> valid use cases, e.g. an index on LOWER().\n\nI agree.\n\n> I don't think we need code changes for 17. Some documentation changes\n> might be helpful, though. Should we have a note around LOWER()/UPPER()\n> that users should REINDEX any dependent indexes when the provider is\n> updated?\n\nI agree the v17 code is fine. Today, a user can (with difficulty) choose\ndependency libraries so regexp_matches() is IMMUTABLE, as marked. I don't\nwant $SUBJECT to be the ctype that, at some post-v17 version, can't achieve\nthat with unpatched PostgreSQL. 
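\n\nTo make the dependency concrete, a sketch with hypothetical object names (the index expression is only accepted because lower() is marked IMMUTABLE):\n\n  CREATE TABLE t (s text COLLATE \"pg_c_utf8\");\n  CREATE INDEX t_lower_idx ON t (lower(s));\n  -- if a later release shipped different Unicode data, lower() could return\n  -- different results for some rows and the index would need a REINDEX\n\n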
Let's change the documentation to say this\nprovider uses a particular snapshot of Unicode data, taken around PostgreSQL\n17. We plan never to change that data, so IMMUTABLE functions can rely on the\ndata. If we provide a newer Unicode data set in the future, we'll provide it\nin such a way that DDL must elect the new data. How well would that suit your\nvision for this feature? An alternative would be to make pg_upgrade reject\noperating on a cluster that contains use of $SUBJECT.\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:03:52 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-07-01 at 16:03 -0700, Noah Misch wrote:\n> I agree the v17 code is fine.  Today, a user can (with difficulty)\n> choose\n> dependency libraries so regexp_matches() is IMMUTABLE, as marked.  I\n> don't\n> want $SUBJECT to be the ctype that, at some post-v17 version, can't\n> achieve\n> that with unpatched PostgreSQL.\n\nWe aren't forcing anyone to use the builtin \"C.UTF-8\" locale. Anyone\ncan still use the builtin \"C\" locale (which never changes), or another\nprovider if they can sort out the difficulties (and live with the\nconsequences) of pinning the dependencies to a specific version.\n\n>   Let's change the documentation to say this\n> provider uses a particular snapshot of Unicode data, taken around\n> PostgreSQL\n> 17.  We plan never to change that data, so IMMUTABLE functions can\n> rely on the\n> data.\n\nWe can discuss this in the context of version 18 or the next time we\nplan to update Unicode. I don't think we should make such a promise in\nversion 17.\n\n>   If we provide a newer Unicode data set in the future, we'll provide\n> it\n> in such a way that DDL must elect the new data.  How well would that\n> suit your\n> vision for this feature?\n\nThomas tried tracking collation versions along with individual objects,\nand it had to be reverted (ec48314708).\n\nIt fits my vision to do something like that as a way of tightening\nthings up.\n\nBut there are some open design questions we need to settle, along with\na lot of work. So I don't think we should pre-emptively block all\nUnicode updates waiting for it.\n\n>   An alternative would be to make pg_upgrade reject\n> operating on a cluster that contains use of $SUBJECT.\n\nThat wouldn't help anyone.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 01 Jul 2024 18:19:08 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, Jul 01, 2024 at 06:19:08PM -0700, Jeff Davis wrote:\n> On Mon, 2024-07-01 at 16:03 -0700, Noah Misch wrote:\n> > � An alternative would be to make pg_upgrade reject\n> > operating on a cluster that contains use of $SUBJECT.\n> \n> That wouldn't help anyone.\n\nCan you say more about that? For the last decade at least, I think our\nstandard for new features has been to error rather than allow an operation\nthat creates a known path to wrong query results. 
I think that's a helpful\nstandard that we should continue to follow.\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:51:45 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 02.07.24 18:51, Noah Misch wrote:\n> On Mon, Jul 01, 2024 at 06:19:08PM -0700, Jeff Davis wrote:\n>> On Mon, 2024-07-01 at 16:03 -0700, Noah Misch wrote:\n>>>   An alternative would be to make pg_upgrade reject\n>>> operating on a cluster that contains use of $SUBJECT.\n>>\n>> That wouldn't help anyone.\n> \n> Can you say more about that? For the last decade at least, I think our\n> standard for new features has been to error rather than allow an operation\n> that creates a known path to wrong query results. I think that's a helpful\n> standard that we should continue to follow.\n\nI don't think the builtin locale provider is any different in this \nrespect from the other providers: The locale data might change and \nthere is a version mechanism to track that. We don't prevent pg_upgrade \nin scenarios like that for other providers.\n\n\n\n\n", "msg_date": "Wed, 3 Jul 2024 00:05:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 03, 2024 at 12:05:09AM +0200, Peter Eisentraut wrote:\n> On 02.07.24 18:51, Noah Misch wrote:\n> > On Mon, Jul 01, 2024 at 06:19:08PM -0700, Jeff Davis wrote:\n> > > On Mon, 2024-07-01 at 16:03 -0700, Noah Misch wrote:\n> > > > � An alternative would be to make pg_upgrade reject\n> > > > operating on a cluster that contains use of $SUBJECT.\n> > > \n> > > That wouldn't help anyone.\n> > \n> > Can you say more about that? For the last decade at least, I think our\n> > standard for new features has been to error rather than allow an operation\n> > that creates a known path to wrong query results. I think that's a helpful\n> > standard that we should continue to follow.\n> \n> I don't think the builtin locale provider is any different in this respect\n> from the other providers: The locale data might change and there is a\n> version mechanism to track that. We don't prevent pg_upgrade in scenarios\n> like that for other providers.\n\nEach packager can choose their dependencies so the v16 providers don't have\nthe problem. With the $SUBJECT provider, a packager won't have that option.\n\n\n", "msg_date": "Tue, 2 Jul 2024 16:03:33 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, 2024-07-02 at 16:03 -0700, Noah Misch wrote:\n> Each packager can choose their dependencies so the v16 providers\n> don't have\n> the problem.  With the $SUBJECT provider, a packager won't have that\n> option.\n\nWhile nothing needs to be changed for 17, I agree that we may need to\nbe careful in future releases not to break things.\n\nBroadly speaking, you are right that we may need to freeze Unicode\nupdates or be more precise about versioning. But there's a lot of\nnuance to the problem, so I don't think we should pre-emptively promise\neither of those things right now.\n\nConsider:\n\n* Unless I made a mistake, the last three releases of Unicode (14.0,\n15.0, and 15.1) all have the exact same behavior for UPPER() and\nLOWER() -- even for unassigned code points. 
It would be silly to\npromise to stay with 15.1 and then realize that moving to 16.0 doesn't\ncreate any actual problem.\n\n* Unicode also offers \"case folding\", which has even stronger stability\nguarantees, and I plan to propose that soon. When implemented, it would\nbe preferred over LOWER()/UPPER() in index expressions for most use\ncases.\n\n* While someone can pin libc+ICU to particular versions, it's\nimpossible when using the official packages, and additionally requires\nusing something like [1], which just became available last year. I\ndon't think it's reasonable to put it forth as a matter-of-fact\nsolution.\n\n* Let's keep some perspective: we've lived for a long time with ALL\ntext indexes at serious risk of breakage. In contrast, the concerns you\nare raising now are about certain kinds of expression indexes over data\ncontaining certain unassigned code points. I am not dismissing that\nconcern, but the builtin provider moves us in the right direction and\nlet's not lose sight of that.\n\n\nGiven that no code changes for v17 are proposed, I suggest that we\nrefrain from making any declarations until the next version of Unicode\nis released. If the pattern holds, that will be around September, which\nstill leaves time to make reasonable decisions for v18.\n\nRegards,\n\tJeff Davis\n\n[1] https://github.com/awslabs/compat-collation-for-glibc\n\n\n\n", "msg_date": "Wed, 03 Jul 2024 14:19:07 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 03, 2024 at 02:19:07PM -0700, Jeff Davis wrote:\n> * Unless I made a mistake, the last three releases of Unicode (14.0,\n> 15.0, and 15.1) all have the exact same behavior for UPPER() and\n> LOWER() -- even for unassigned code points. It would be silly to\n> promise to stay with 15.1 and then realize that moving to 16.0 doesn't\n> create any actual problem.\n\nI think you're saying that if some Unicode update changes the results of a\nSTABLE function but does not change the result of any IMMUTABLE function, we\nmay as well import that update. Is that about right? If so, I agree.\n\nIn addition to the options I listed earlier (error in pg_upgrade or document\nthat IMMUTABLE stands) I would be okay with a third option. Decide here that\nwe'll not adopt a Unicode update in a way that changes a v17 IMMUTABLE\nfunction result of the new provider. We don't need to write that in the\ndocumentation, since it's implicit in IMMUTABLE. Delete the \"stable within a\n<productname>Postgres</productname> major version\" documentation text.\n\n> * While someone can pin libc+ICU to particular versions, it's\n> impossible when using the official packages, and additionally requires\n> using something like [1], which just became available last year. I\n> don't think it's reasonable to put it forth as a matter-of-fact\n> solution.\n> \n> * Let's keep some perspective: we've lived for a long time with ALL\n> text indexes at serious risk of breakage. In contrast, the concerns you\n> are raising now are about certain kinds of expression indexes over data\n> containing certain unassigned code points. I am not dismissing that\n> concern, but the builtin provider moves us in the right direction and\n> let's not lose sight of that.\n\nI see you're trying to help users get less breakage, and that's a good goal.\nI agree $SUBJECT eliminates libc+ICU breakage, and libc+ICU breakage has hurt\nplenty. 
However, you proposed to update Unicode data and give REINDEX as the\nsolution to breakage this causes. Unlike libc+ICU breakage, the packager has\nno escape from that. That's a different kind of breakage proposition, and no\nnew PostgreSQL feature should do that. It's on a different axis from helping\nusers avoid libc+ICU breakage, and a feature doesn't get to credit helping on\none axis against a regression on the other axis. What am I missing here?\n\n> Given that no code changes for v17 are proposed, I suggest that we\n> refrain from making any declarations until the next version of Unicode\n> is released. If the pattern holds, that will be around September, which\n> still leaves time to make reasonable decisions for v18.\n\nSoon enough, a Unicode release will add one character to regexp [[:alpha:]].\nPostgreSQL will then need to decide what IMMUTABLE is going to mean. How does\nthat get easier in September?\n\nThanks,\nnm\n\n> [1] https://github.com/awslabs/compat-collation-for-glibc\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:26:41 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tNoah Misch wrote:\n\n> > I don't think the builtin locale provider is any different in this respect\n> > from the other providers: The locale data might change and there is a\n> > version mechanism to track that. We don't prevent pg_upgrade in scenarios\n> > like that for other providers.\n> \n> Each packager can choose their dependencies so the v16 providers don't have\n> the problem. With the $SUBJECT provider, a packager won't have that option.\n\nThe Unicode data files downloaded into src/common/unicode/\ndepend on the versions defined in Makefile.global.in:\n\n # Unicode data information\n\n # Before each major release, update these and run make update-unicode.\n\n # Pick a release from here: <https://www.unicode.org/Public/>. Note\n # that the most recent release listed there is often a pre-release;\n # don't pick that one, except for testing.\n UNICODE_VERSION = 15.1.0\n\n # Pick a release from here: <http://cldr.unicode.org/index/downloads>\n CLDR_VERSION = 45\n\n(CLDR_VERSION is apparently not used yet).\n\nWhen these versions get bumped, it seems like packagers could stick to\nprevious versions by just overriding these.\nWhen doing that, are there any function that may have an immutability\nbreakage problem with the built-in locale provider? (I would expect none).\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 05 Jul 2024 13:55:40 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2024-07-05 at 13:55 +0200, Daniel Verite wrote:\n> When these versions get bumped, it seems like packagers could stick\n> to\n> previous versions by just overriding these.\n\nThat's an interesting point. It's actually easier for a packager to pin\nUnicode to a specific version than to pin libc to a specific version.\n\n> When doing that, are there any function that may have an immutability\n> breakage problem with the built-in locale provider? 
(I would expect\n> none).\n\nRight, there wouldn't be any breakage without new data files.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 05 Jul 2024 10:13:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-04 at 14:26 -0700, Noah Misch wrote:\n> I think you're saying that if some Unicode update changes the results\n> of a\n> STABLE function but does not change the result of any IMMUTABLE\n> function, we\n> may as well import that update.  Is that about right?  If so, I\n> agree.\n\nIf you are proposing that Unicode updates should not be performed if\nthey affect the results of any IMMUTABLE function, then that's a new\npolicy.\n\nFor instance, the results of NORMALIZE() changed from PG15 to PG16 due\nto commit 1091b48cd7:\n\n SELECT NORMALIZE(U&'\\+01E030',nfkc)::bytea;\n\n Version 15: \\xf09e80b0\n\n Version 16: \\xd0b0\n\nI am neither endorsing nor opposing the new policy you propose at this\ntime, but deep in the sub-thread of one particular feature is not the\nright place to discuss it.\n\nPlease start a new thread for the proposed PG18 policy change and CC\nme. I happen to think that around the release of the next version of\nUnicode (in a couple months) would be the most productive time to have\nthat discussion, but you can start the discussion now if you like.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 05 Jul 2024 14:38:45 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, Jul 05, 2024 at 02:38:45PM -0700, Jeff Davis wrote:\n> On Thu, 2024-07-04 at 14:26 -0700, Noah Misch wrote:\n> > I think you're saying that if some Unicode update changes the results\n> > of a\n> > STABLE function but does not change the result of any IMMUTABLE\n> > function, we\n> > may as well import that update.� Is that about right?� If so, I\n> > agree.\n> \n> If you are proposing that Unicode updates should not be performed if\n> they affect the results of any IMMUTABLE function, then that's a new\n> policy.\n> \n> For instance, the results of NORMALIZE() changed from PG15 to PG16 due\n> to commit 1091b48cd7:\n> \n> SELECT NORMALIZE(U&'\\+01E030',nfkc)::bytea;\n> \n> Version 15: \\xf09e80b0\n> \n> Version 16: \\xd0b0\n\nAs a released feature, NORMALIZE() has a different set of remedies to choose\nfrom, and I'm not proposing one. I may have sidetracked this thread by\ntalking about remedies without an agreement that pg_c_utf8 has a problem. My\nquestion for the PostgreSQL maintainers is this:\n\n textregexeq(... COLLATE pg_c_utf8, '[[:alpha:]]') and lower(), despite being\n IMMUTABLE, will change behavior in some major releases. pg_upgrade does not\n have a concept of IMMUTABLE functions changing, so index scans will return\n wrong query results after upgrade. Is it okay for v17 to release a\n pg_c_utf8 planned to behave that way when upgrading v17 to v18+?\n\nIf the answer is yes, the open item closes. 
If the answer is no, determining\nthe remedy can come next.\n\n\nLest concrete details help anyone reading, here are some affected objects:\n\n CREATE TABLE t (s text COLLATE pg_c_utf8);\n INSERT INTO t VALUES (U&'\\+00a7dc'), (U&'\\+001dd3');\n CREATE INDEX iexpr ON t ((lower(s)));\n CREATE INDEX ipred ON t (s) WHERE s ~ '[[:alpha:]]';\n\nv17 can simulate the Unicode aspect of a v18 upgrade, like this:\n\n sed -i 's/^UNICODE_VERSION.*/UNICODE_VERSION = 16.0.0/' src/Makefile.global.in\n # ignore test failures (your ICU likely doesn't have the Unicode 16.0.0 draft)\n make -C src/common/unicode update-unicode\n make\n make install\n pg_ctl restart\n\nBehavior after that:\n\n-- 2 rows w/ seq scan, 0 rows w/ index scan\nSELECT 1 FROM t WHERE s ~ '[[:alpha:]]';\nSET enable_seqscan = off;\nSELECT 1 FROM t WHERE s ~ '[[:alpha:]]';\n\n-- ERROR: heap tuple (0,1) from table \"t\" lacks matching index tuple within index \"iexpr\"\nSELECT bt_index_parent_check('iexpr', heapallindexed => true);\n-- ERROR: heap tuple (0,1) from table \"t\" lacks matching index tuple within index \"ipred\"\nSELECT bt_index_parent_check('ipred', heapallindexed => true);\n\n\n", "msg_date": "Sat, 6 Jul 2024 12:51:29 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> As a released feature, NORMALIZE() has a different set of remedies to choose\n> from, and I'm not proposing one. I may have sidetracked this thread by\n> talking about remedies without an agreement that pg_c_utf8 has a problem. My\n> question for the PostgreSQL maintainers is this:\n\n> textregexeq(... COLLATE pg_c_utf8, '[[:alpha:]]') and lower(), despite being\n> IMMUTABLE, will change behavior in some major releases. pg_upgrade does not\n> have a concept of IMMUTABLE functions changing, so index scans will return\n> wrong query results after upgrade. Is it okay for v17 to release a\n> pg_c_utf8 planned to behave that way when upgrading v17 to v18+?\n\nI do not think it is realistic to define \"IMMUTABLE\" as meaning that\nthe function will never change behavior until the heat death of the\nuniverse. As a counterexample, we've not worried about applying\nbug fixes or algorithm improvements that change the behavior of\n\"immutable\" numeric computations. It might be unwise to do that\nin a minor release, but we certainly do it in major releases.\n\nI'd say a realistic policy is \"immutable means we don't intend to\nchange it within a major release\". If we do change the behavior,\neither as a bug fix or a major-release improvement, that should\nbe release-noted so that people know they have to rebuild dependent\nindexes and matviews.\n\nIt gets stickier for behaviors that aren't fully under our control,\nwhich is the case for a lot of locale-related things. We cannot then\npromise \"no changes within major releases\". But I do not think it\nis helpful to react to that fact by refusing to label such things\nimmutable. 
Then we'd just need another mutability classification,\nand it would effectively act the same as immutable does now, because\npeople will certainly wish to use these functions in indexes etc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2024 16:19:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "> \n> On Jul 6, 2024, at 12:51 PM, Noah Misch <[email protected]> wrote:\n> Behavior after that:\n> \n> -- 2 rows w/ seq scan, 0 rows w/ index scan\n> SELECT 1 FROM t WHERE s ~ '[[:alpha:]]';\n> SET enable_seqscan = off;\n> SELECT 1 FROM t WHERE s ~ '[[:alpha:]]';\n> \n> -- ERROR: heap tuple (0,1) from table \"t\" lacks matching index tuple within index \"iexpr\"\n> SELECT bt_index_parent_check('iexpr', heapallindexed => true);\n> -- ERROR: heap tuple (0,1) from table \"t\" lacks matching index tuple within index \"ipred\"\n> SELECT bt_index_parent_check('ipred', heapallindexed => true);\n\n\nOther databases do still ship built-in ancient versions of unicode (Db2 ships 4.0+ and Oracle ships 6.1+), and they have added new Unicode versions alongside the old but not removed the old versions. They claim to have “deprecated” old versions… but it seems they haven’t been able to get rid of them yet. Maybe some customer is willing to pay to continue deferring painful rebuilds needed to get rid of the old collation versions in commercial DBs?\n\nFor reference, see the table on slide 56 at https://www.pgevents.ca/events/pgconfdev2024/schedule/session/95-collations-from-a-to-z/ and also see https://ardentperf.com/2024/05/22/default-sort-order-in-db2-sql-server-oracle-postgres-17/ \n\nThanks for the illustration with actual Unicode 16 draft data.\n\nAlso, not directly related to this email… but reiterating a point I argued for in the recorded talk at pgconf.dev in Vancouver: a very strong argument for having the DB default to a stable unchanging built-in collation is that the dependency tracking makes it easy to identify objects in the database using non-default collations, and it’s easy to know exactly what needs to be rebuilt for a user to safely change some non-default collation provider’s behavior.\n\n-Jeremy\n\n\nSent from my TI-83\n\n\n
", "msg_date": "Sat, 6 Jul 2024 13:37:06 -0700", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Sat, Jul 06, 2024 at 04:19:21PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > As a released feature, NORMALIZE() has a different set of remedies to choose\n> > from, and I'm not proposing one.  I may have sidetracked this thread by\n> > talking about remedies without an agreement that pg_c_utf8 has a problem.  My\n> > question for the PostgreSQL maintainers is this:\n> \n> > textregexeq(... COLLATE pg_c_utf8, '[[:alpha:]]') and lower(), despite being\n> > IMMUTABLE, will change behavior in some major releases.  pg_upgrade does not\n> > have a concept of IMMUTABLE functions changing, so index scans will return\n> > wrong query results after upgrade.  Is it okay for v17 to release a\n> > pg_c_utf8 planned to behave that way when upgrading v17 to v18+?\n> \n> I do not think it is realistic to define \"IMMUTABLE\" as meaning that\n> the function will never change behavior until the heat death of the\n> universe.  As a counterexample, we've not worried about applying\n> bug fixes or algorithm improvements that change the behavior of\n> \"immutable\" numeric computations.\n\nTrue.  There's a continuum from \"releases can change any IMMUTABLE function\"\nto \"index integrity always wins, even if a function is as wrong as 1+1=3\".\nI'm less concerned about the recent \"Incorrect results from numeric round\"\nthread, even though it's proposing to back-patch.  I'm thinking about these\naggravating factors for $SUBJECT:\n\n- $SUBJECT is planning an annual cadence of this kind of change.\n\n- We already have ICU providing collation support for the same functions.\n  Unlike $SUBJECT, ICU integration gives packagers control over when to accept\n  corruption at pg_upgrade time.\n\n- SQL Server, DB2 and Oracle do their Unicode updates in a non-corrupting way.\n  (See Jeremy Schneider's reply concerning DB2 and Oracle.)\n\n- lower() and regexp are more popular in index expressions than\n  high-digit-count numeric calculations.\n\n> I'd say a realistic policy is \"immutable means we don't intend to\n> change it within a major release\".  If we do change the behavior,\n> either as a bug fix or a major-release improvement, that should\n> be release-noted so that people know they have to rebuild dependent\n> indexes and matviews.\n\nIt sounds like you're very comfortable with $SUBJECT proceeding in its current\nform. 
Is that right?\n\n", "msg_date": "Mon, 8 Jul 2024 18:05:45 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> It sounds like you're very comfortable with $SUBJECT proceeding in its current\n> form.  Is that right?\n\nI don't have an opinion on whether the overall feature design\nis well-chosen.  But the mere fact that Unicode updates will\nfrom time to time change the behavior (presumably only in edge\ncases or for previously-unassigned code points) doesn't strike\nme as a big enough problem to justify saying these functions\ncan't be marked immutable anymore.  Especially since we have been\nfaced with that problem all along anyway; we just didn't have a way\nto track or quantify it before, because locale changes happened\noutside code we control.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 21:17:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-07-08 at 18:05 -0700, Noah Misch wrote:\n> > I do not think it is realistic to define \"IMMUTABLE\" as meaning that\n> > the function will never change behavior until the heat death of the\n> > universe.  As a counterexample, we've not worried about applying\n> > bug fixes or algorithm improvements that change the behavior of\n> > \"immutable\" numeric computations.\n> \n> True.  There's a continuum from \"releases can change any IMMUTABLE function\"\n> to \"index integrity always wins, even if a function is as wrong as 1+1=3\".\n> I'm less concerned about the recent \"Incorrect results from numeric round\"\n> thread, even though it's proposing to back-patch.  I'm thinking about these\n> aggravating factors for $SUBJECT:\n> \n> - $SUBJECT is planning an annual cadence of this kind of change.\n> \n> - We already have ICU providing collation support for the same functions.\n>   Unlike $SUBJECT, ICU integration gives packagers control over when to accept\n>   corruption at pg_upgrade time.\n> \n> - SQL Server, DB2 and Oracle do their Unicode updates in a non-corrupting way.\n>   (See Jeremy Schneider's reply concerning DB2 and Oracle.)\n> \n> - lower() and regexp are more popular in index expressions than\n>   high-digit-count numeric calculations.\n\nMy personal experience is that very few users are aware of or care about\nthe strict accuracy of the collation sort order and other locale aspects.\nBut they care a lot about index corruption.\n\nSo I'd argue that we should not have any breaking changes at all, even in\ncases where the provider is clearly wrong.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 09 Jul 2024 10:00:24 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, Jul 9, 2024 at 4:00 AM Laurenz Albe <[email protected]>\nwrote:\n\n>\n> My personal experience is that very few users are aware of or care about\n> the strict accuracy of the collation sort order and other locale aspects.\n> But they care a lot about index corruption.\n>\n> So I'd argue that we should not have any breaking changes at all, even in\n> cases where the provider is clearly wrong.\n\n\n\nFWIW, using external ICU libraries is a nice solution for users who need\nstrict and up-to-date Unicode support.\n\nCell phones do often get support for new code points before databases. 
So\ndatabases can end up storing characters before they are aware of the\nmeaning. (Slide 27 in the pgconf.dev talk illustrates a recent timeline of\nUnicode & phone updates.)\n\n-Jeremy\n", "msg_date": "Tue, 9 Jul 2024 10:51:39 -0400", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Mon, 2024-07-08 at 18:05 -0700, Noah Misch wrote:\n> I'm thinking about these\n> aggravating factors for $SUBJECT:\n\nThis is still marked as an open item for 17, but you've already\nacknowledged[1] that no code changes are necessary in version 17.\nUpgrades of Unicode take an active step from a committer, so it's not a\npressing problem for 18, either.\n\nThe idea that you're arguing against is \"stability within a PG major\nversion\". There's no new discovery here: it was listed under the\nheading of \"Benefits\" near the top of my initial proposal[2], and known\nto all reviewers.\n\nThis is not an Open Item for 17, and new policy discussions should not\nhappen deep in this subthread. Please resolve the Open Item, and feel\nfree to start a thread about policy changes in 18.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n\n", "msg_date": "Tue, 09 Jul 2024 16:20:12 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Tue, Jul 09, 2024 at 04:20:12PM -0700, Jeff Davis wrote:\n> On Mon, 2024-07-08 at 18:05 -0700, Noah Misch wrote:\n> > I'm thinking about these\n> > aggravating factors for $SUBJECT:\n> \n> This is still marked as an open item for 17, but you've already\n> acknowledged[1] that no code changes are necessary in version 17.\n\nLater posts on the thread made that obsolete.  The next step is to settle the\nquestion at https://postgr.es/m/[email protected].  If that\nconclusion entails a remedy, v17 code changes may be part of that remedy.\n\n> The idea that you're arguing against is \"stability within a PG major\n> version\". There's no new discovery here: it was listed under the\n> heading of \"Benefits\" near the top of my initial proposal[2]\n\nThanks for that distillation.  More specifically, I'm arguing against the lack\nof choice about instability across major versions:\n\n                                  | ICU collations    | pg_c_utf8\n----------------------------------|-------------------|----------\nCorruption within a major version | packager's choice | no\nCorruption at pg_upgrade time     | packager's choice | yes\n\nI am a packager who chooses no-corruption (chooses stability). 
As a packager,\nthe pg_c_utf8 stability within major versions is never a bad thing, but it\ndoes not compensate for instability across major versions. I don't want a\nfuture in which pg_c_utf8 is the one provider that integrity-demanding\npg_upgrade users should not use.\n\n> and known to all reviewers.\n\nIf after https://postgr.es/m/[email protected] and\nhttps://postgr.es/m/[email protected] they think $SUBJECT\nshould continue as-committed, they should vote that way. Currently, we have\nmultiple votes in one direction and multiple votes in the other direction. If\nall three reviewers were to vote in the same direction (assuming no other new\nvotes), I argue that such a count would render whichever way they vote as the\nconclusion. Does that match your count?\n\n> [1]\n> https://www.postgresql.org/message-id/[email protected]\n> [2]\n> https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Thu, 11 Jul 2024 05:50:40 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-11 at 05:50 -0700, Noah Misch wrote:\n> > This is still marked as an open item for 17, but you've already\n> > acknowledged[1] that no code changes are necessary in version 17.\n> \n> Later posts on the thread made that obsolete.  The next step is to\n> settle the\n> question at https://postgr.es/m/[email protected]. \n> If that\n> conclusion entails a remedy, v17 code changes may be part of that\n> remedy.\n\nThis is the first time you've mentioned a code change in version 17. If\nyou have something in mind, please propose it. However, this feature\nfollowed the right policies at the time of commit, so there would need\nto be a strong consensus to accept such a change.\n\nAdditionally, I started a discussion on version 18 policy that may also\nresolve your concerns:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 08:48:46 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 17, 2024 at 08:48:46AM -0700, Jeff Davis wrote:\n> On Thu, 2024-07-11 at 05:50 -0700, Noah Misch wrote:\n> > > This is still marked as an open item for 17, but you've already\n> > > acknowledged[1] that no code changes are necessary in version 17.\n> > \n> > Later posts on the thread made that obsolete.� The next step is to\n> > settle the\n> > question at https://postgr.es/m/[email protected].�\n> > If that\n> > conclusion entails a remedy, v17 code changes may be part of that\n> > remedy.\n> \n> This is the first time you've mentioned a code change in version 17. If\n\nThat's right.\n\n> you have something in mind, please propose it. However, this feature\n> followed the right policies at the time of commit, so there would need\n> to be a strong consensus to accept such a change.\n\nIf I'm counting the votes right, you and Tom have voted that the feature's\ncurrent state is okay, and I and Laurenz have voted that it's not okay. I\nstill hope more people will vote, to avoid dealing with the tie. Daniel,\nPeter, and Jeremy, you're all listed as reviewers on commit f69319f. 
Are you\nwilling to vote one way or the other on the question in\nhttps://postgr.es/m/[email protected]?\n\nA tie would become a decision against the unreleased behavior.\n\nIn the event of a decision against the unreleased behavior, reverting the\nfeature is the remedy that could proceed without further decision making.\n\n\n", "msg_date": "Wed, 17 Jul 2024 15:03:26 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-07-17 at 15:03 -0700, Noah Misch wrote:\n> If I'm counting the votes right\n\n...\n\n> , you and Tom have voted that the feature's\n> current state is okay, and I and Laurenz have voted that it's not\n> okay.\n\n...\n\n> A tie would become a decision against the unreleased behavior.\n\n...\n\n> In the event of a decision against the unreleased behavior, reverting\n> the\n> feature is the remedy that could proceed without further decision\n> making.\n\nYou haven't established that any problem actually exists in version 17,\nand your arguments have been a moving target throughout this subthread.\n\nI reject the procedural framework that you are trying to establish.\nVoting won't change the fact that the \"stability within a major\nversion\" that you are arguing against[1] was highlighted as a benefit\nin my initial proposal[2] for all reviewers to see.\n\nIf you press forward with this approach, I'll use judgement that is\nsufficiently deferential to the review process before making any hasty\ndecisions.\n\nAlternatively, I suggest that you participate in the thread that I\nstarted here:\n\nhttps://www.postgresql.org/message-id/d75d2d0d1d2bd45b2c332c47e3e0a67f0640b49c.camel%40j-davis.com\n\nwhich seems like a more direct (and more complete) path to a resolution\nof your concerns. I speak only for myself, but I assure you that I have\nan open mind in that discussion, and that I have no intention force a\nUnicode update past objections.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 23:06:43 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, 2024-07-17 at 15:03 -0700, Noah Misch wrote:\n> If I'm counting the votes right, you and Tom have voted that the feature's\n> current state is okay, and I and Laurenz have voted that it's not okay.\n\nMaybe I should expand my position.\n\nI am very much for the built-in CTYPE provider. When I said that I am against\nchanges in major versions, I mean changes that are likely to affect real-life\nusage patterns. If there are modifications affecting a code point that was\npreviously unassigned, it is *theoretically* possible, but very unlikely, that\nsomeone has stored it in a database. I would want to deliberate about any change\naffecting such a code point, and if the change seems highly desirable, we can\nconsider applying it.\n\nWhat I am against is routinely updating the built-in provider to adopt any changes\nthat Unicode makes.\n\nTo make a comparison with Tom's argument upthread: we have slightly changed how\nfloating point computations work, even though they are IMMUTABLE. 
But I'd argue\nthat very few people build indexes on the results of floating point arithmetic\n(and those who do are probably doing something wrong), so the risk is acceptable.\nBut people index strings all the time.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:05:34 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "\tNoah Misch wrote:\n\n> If I'm counting the votes right, you and Tom have voted that the feature's\n> current state is okay, and I and Laurenz have voted that it's not okay. I\n> still hope more people will vote, to avoid dealing with the tie. Daniel,\n> Peter, and Jeremy, you're all listed as reviewers on commit f69319f. Are\n> you\n> willing to vote one way or the other on the question in\n> https://postgr.es/m/[email protected]?\n\nFor me, the current state is okay.\n\nIn the mentioned question, you're doing this:\n\n v17 can simulate the Unicode aspect of a v18 upgrade, like this:\n sed -i 's/^UNICODE_VERSION.*/UNICODE_VERSION = 16.0.0/'\nsrc/Makefile.global.in\n\nto force a Unicode upgrade. But a packager could do the same\nto force a Unicode downgrade, if they wanted.\n\nTherefore I don't agree with this summary in\n<[email protected]>:\n\n> | ICU collations | pg_c_utf8\n> ----------------------------------|-------------------|----------\n> Corruption within a major version | packager's choice | no\n> Corruption at pg_upgrade time | packager's choice | yes\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 18 Jul 2024 13:29:27 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, Jul 18, 2024 at 10:05:34AM +0200, Laurenz Albe wrote:\n> On Wed, 2024-07-17 at 15:03 -0700, Noah Misch wrote:\n> > If I'm counting the votes right, you and Tom have voted that the feature's\n> > current state is okay, and I and Laurenz have voted that it's not okay.\n> \n> Maybe I should expand my position.\n> \n> I am very much for the built-in CTYPE provider. When I said that I am against\n> changes in major versions, I mean changes that are likely to affect real-life\n> usage patterns. If there are modifications affecting a code point that was\n> previously unassigned, it is *theoretically* possible, but very unlikely, that\n> someone has stored it in a database. I would want to deliberate about any change\n> affecting such a code point, and if the change seems highly desirable, we can\n> consider applying it.\n> \n> What I am against is routinely updating the built-in provider to adopt any changes\n> that Unicode makes.\n\nGiven all the messages on this thread, if the feature remains in PostgreSQL, I\nadvise you to be ready to tolerate PostgreSQL \"routinely updating the built-in\nprovider to adopt any changes that Unicode makes\". Maybe someone will change\nsomething in v18 so it's not like that, but don't count on it.\n\nWould you like to change your vote to \"okay\", keep your vote at \"not okay\", or\nchange it to an abstention?\n\n> To make a comparison with Tom's argument upthread: we have slightly changed how\n> floating point computations work, even though they are IMMUTABLE. 
But I'd argue\n> that very few people build indexes on the results of floating point arithmetic\n> (and those who do are probably doing something wrong), so the risk is acceptable.\n> But people index strings all the time.\n\nAgreed.\n\n\n", "msg_date": "Thu, 18 Jul 2024 07:00:15 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-18 at 07:00 -0700, Noah Misch wrote:\n> What I am against is routinely updating the built-in provider to adopt any changes\n> > that Unicode makes.\n> \n> Given all the messages on this thread, if the feature remains in PostgreSQL, I\n> advise you to be ready to tolerate PostgreSQL \"routinely updating the built-in\n> provider to adopt any changes that Unicode makes\".  Maybe someone will change\n> something in v18 so it's not like that, but don't count on it.\n> \n> Would you like to change your vote to \"okay\", keep your vote at \"not okay\", or\n> change it to an abstention?\n\nIn that case I am against it. Against the \"routinely\" in particular.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:45:20 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-18 at 07:00 -0700, Noah Misch wrote:\n> Given all the messages on this thread, if the feature remains in\n> PostgreSQL, I\n> advise you to be ready to tolerate PostgreSQL \"routinely updating the\n> built-in\n> provider to adopt any changes that Unicode makes\".\n\nYou mean messages from me, like:\n\n * \"I have no intention force a Unicode update\" [1]\n * \"While nothing needs to be changed for 17, I agree that we may need\nto be careful in future releases not to break things.\" [2]\n * \"...you are right that we may need to freeze Unicode updates or be\nmore precise about versioning...\" [2]\n * \"If you are proposing that Unicode updates should not be performed\nif they affect the results of any IMMUTABLE function...I am neither\nendorsing nor opposing...\" [3]\n\n?\n\nThe only source I can imagine for your concern -- please correct me if\nI'm wrong -- is that I declined to make a preemptive version 18 promise\ndeep in this version 17 Open Item subthread. But I have good reasons.\nFirst, if we promise not to update Unicode, that would also affect\nNORMALIZE(), so for the sake of transparency we need a top-level\ndiscussion. Second, an Open Item should be tightly scoped to what\nactually needs to happen in version 17 before release. And thirdly,\nsuch a promise would artificially limit the range of possible outcomes,\nwhich may include various compromises that are not 17 material.\n\nI'm perplexed as to why you don't engage in the version 18 policy\ndiscussion.\n\n>   Maybe someone will change\n> something in v18 so it's not like that, but don't count on it.\n\nThat's backwards. If nothing happens in v18, then there will be no\nbreaking Unicode change. 
It takes an active step by a committer to\nupdate Unicode.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/5edb38923b0b23eb643f61807ef772a237ab92cf.camel%40j-davis.com\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n[3]\nhttps://www.postgresql.org/message-id/1d178eb1bbd61da1bcfe4a11d6545e9cdcede1d1.camel%40j-davis.com\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:52:44 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Thu, 2024-07-18 at 07:00 -0700, Noah Misch wrote:\n>> Maybe someone will change\n>> something in v18 so it's not like that, but don't count on it.\n\n> That's backwards. If nothing happens in v18, then there will be no\n> breaking Unicode change. It takes an active step by a committer to\n> update Unicode.\n\nThis whole discussion seems quite bizarre to me. In the first\nplace, it is certain that Unicode will continue to evolve, and\nI can't believe that we'd just freeze pg_c_utf8 on the current\ndefinition forever. Whether the first change happens in v18\nor years later doesn't seem like a particularly critical point.\n\nIn the second place, I cannot understand why pg_c_utf8 is being\nheld to a mutability standard that we have never applied to any\nother locale-related functionality --- and indeed could not do\nso, since in most cases that functionality has been buried in\nlibraries we don't control. It seems to me to be already a\ngreat step forward that with pg_c_utf8, at least we can guarantee\nthat the behavior won't change without us knowing about it.\nNoah's desire to revert the feature makes the mutability situation\nstrictly worse, because people will have to continue to rely on\nOS-provided functionality that can change at any time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Jul 2024 13:03:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-18 at 16:45 +0200, Laurenz Albe wrote:\n> On Thu, 2024-07-18 at 07:00 -0700, Noah Misch wrote:\n> >  What I am against is routinely updating the built-in provider to\n> > adopt any changes\n> > > that Unicode makes.\n\nThat is a perfectly reasonable position; please add it to the version\n18 discussion[1].\n\n> > Given all the messages on this thread, if the feature remains in\n> > PostgreSQL, I\n> > advise you to be ready to tolerate PostgreSQL \"routinely updating\n> > the built-in\n> > provider to adopt any changes that Unicode makes\".  Maybe someone\n> > will change\n> > something in v18 so it's not like that, but don't count on it.\n\n...\n\n> In that case I am against it.  Against the \"routinely\" in particular.\n\nAlso, please see my response[2] to Noah. I don't believe his statement\nabove is an accurate characterization. 
There's plenty of opportunity\nfor deliberation and compromise in version 18, and my mind is still\nopen to pretty much everything, up to and including freezing Unicode\nupdates if necessary[3].\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/d75d2d0d1d2bd45b2c332c47e3e0a67f0640b49c.camel%40j-davis.com\n[2]\nhttps://www.postgresql.org/message-id/[email protected]\n[3]\nhttps://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:13:34 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, Jul 18, 2024 at 01:03:31PM -0400, Tom Lane wrote:\n> This whole discussion seems quite bizarre to me. In the first\n> place, it is certain that Unicode will continue to evolve, and\n> I can't believe that we'd just freeze pg_c_utf8 on the current\n> definition forever. Whether the first change happens in v18\n> or years later doesn't seem like a particularly critical point.\n> \n> In the second place, I cannot understand why pg_c_utf8 is being\n> held to a mutability standard that we have never applied to any\n> other locale-related functionality --- and indeed could not do\n> so, since in most cases that functionality has been buried in\n> libraries we don't control. It seems to me to be already a\n\nWith libc and ICU providers, packagers have a way to avoid locale-related\nbehavior changes. That's the \"mutability standard\" I want pg_c_utf8 to join.\npg_c_utf8 is the one provider where packagers can't opt out[1] of annual\npg_upgrade-time index scan breakage on affected expression indexes.\n\n> great step forward that with pg_c_utf8, at least we can guarantee\n> that the behavior won't change without us knowing about it.\n> Noah's desire to revert the feature makes the mutability situation\n> strictly worse, because people will have to continue to rely on\n> OS-provided functionality that can change at any time.\n\nI see:\n- one step forward:\n \"string1 < string2\" won't change, forever, regardless of packager choices\n- one step backward:\n \"string ~ '[[:alpha:]]'\" will change at pg_upgrade time, regardless of packager choices\n\nI think one's perspective on the relative importance of the step forward and\nthe step backward depends on the sort of packages one uses today. Consider a\nuser of Debian packages with locale!=C, doing Debian upgrades and pg_upgrade.\nFor that user, pg_c_utf8 gets far less index corruption than an ICU locale.\nThe step forward is a great step forward _for this user_, and the step\nbackward is in the noise next to the step forward.\n\nI'm with a different kind of packager. I don't tolerate index scans returning\nwrong answers. To achieve that, my libc and ICU aren't changing collation\nbehavior. I suspect my packages won't offer a newer ICU behavior until\nPostgreSQL gets support for multiple ICU library versions per database. (SQL\nServer, DB2 and Oracle already do. I agree we can't freeze forever. The\nmultiple-versions feature gets more valuable each year.) _For this_ kind of\npackage, the step forward is a no-op. The step backward is the sole effect on\nthis kind of package.\n\nHow much does that pair of perspectives explain the contrast between my\n\"revert\" and your \"great step forward\"? 
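\n\nAs an aside, for anyone who does accept the pg_upgrade-time change, a rough\nstarting point for locating indexes to review afterwards might be a catalog\nquery along these lines -- a first pass only, since each hit still needs a\nmanual check for whether it actually depends on ctype behavior:\n\n  SELECT c.relname AS index_name,\n         pg_get_expr(i.indexprs, i.indrelid) AS index_expressions,\n         pg_get_expr(i.indpred, i.indrelid) AS index_predicate\n  FROM pg_index i\n  JOIN pg_class c ON c.oid = i.indexrelid\n  WHERE i.indexprs IS NOT NULL OR i.indpred IS NOT NULL;\n\nNone of that changes the integrity question; it only bounds the cleanup.\n\n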
We may continue to disagree on the\nultimate decision, but I hope I can make my position cease to appear bizarre\nto you.\n\nThanks,\nnm\n\n\n[1] Daniel Verite said packagers could patch src/Makefile.global.in and run\n\"make -C src/common/unicode update-unicode\". Editing src/Makefile.global.in\nis modifying PostgreSQL, not configuring a packager-facing option.\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:39:08 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-18 at 13:03 -0400, Tom Lane wrote:\n> In the second place, I cannot understand why pg_c_utf8 is being\n> held to a mutability standard that we have never applied to any\n> other locale-related functionality --- and indeed could not do\n> so, since in most cases that functionality has been buried in\n> libraries we don't control.\n\nI believe that we should hold it to a higher standard *precisely\nbecause* the previous way that we handled mutability in locale-related\nfunctionality was a problem.\n\n> It seems to me to be already a\n> great step forward that with pg_c_utf8, at least we can guarantee\n> that the behavior won't change without us knowing about it.\n\n+1\n\nBut the greatness of the step depends on our readiness to be careful\nwith such changes.\n\n> Noah's desire to revert the feature makes the mutability situation\n> strictly worse, because people will have to continue to rely on\n> OS-provided functionality that can change at any time.\n\nI think everybody agrees that we don't want to expose users to data\ncorruption after an upgrade.\n\nIt understand Noah to take the position that anything less than\nstrict immutability would be worse than the current state, because\ncurrently a packager can choose to keep shipping the same old\nversion of libicu and avoid the problem completely.\n\nI don't buy that. First, the only binary distribution I have heard\nof that does that is EDB's Windows installer. Both the RPM and\nDebian packages don't.\n\nAnd until PostgreSQL defaults to using ICU, most people will use\nC library collations, and a packager cannot choose not to upgrade\nthe C library.\n\nI believe the built-in CTYPE provider is a good thing and a step\nforward. But to make it a big step forward, we should be extremely\ncareful with any changes in major releases that might require\nrebuilding indexes.\nThis is where I side with Noah.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 19 Jul 2024 09:44:21 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Thu, 2024-07-18 at 16:39 -0700, Noah Misch wrote:\n> I'm with a different kind of packager.  I don't tolerate index scans\n> returning\n> wrong answers.\n\nI doubt that. We've all been tolerating the risk of index scans\nreturning wrong results in some cases.\n\nConsider:\n\na. Some user creates an expression index on NORMALIZE(); vs.\nb. 
Some user chooses the builtin \"C.UTF-8\" locale and creates a partial\nindex with a predicate like \"string ~ '[[:alpha:]]'\" (or an expression\nindex on LOWER())\n\nBoth cases create a risk if we update Unicode in some future version.\nWhy are you unconcerned about case (a), but highly concerned about case\n(b)?\n\nNeither seem to be a pressing problem because updating Unicode is our\nchoice, so we have time to reach a compromise.\n\n> [1] Daniel Verite said packagers could patch src/Makefile.global.in\n> and run\n> \"make -C src/common/unicode update-unicode\".  Editing\n> src/Makefile.global.in\n> is modifying PostgreSQL, not configuring a packager-facing option.\n\nThen go to the other thread[1] and propose that it be exposed as a\npackager-facing option along with any proposed Unicode update. There\nare other potential compromises possible, so I don't think this 17\nsubthread is the right place to discuss it, but it strikes me as a\nreasonable proposal.\n\nI sincerely think you are overcomplicating matters with version 17\nprocedural motions. Let the community process play out in version 18\nlike normal, because there's no actual problem now, I see no reason\nyour objections would be taken less seriously later.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/d75d2d0d1d2bd45b2c332c47e3e0a67f0640b49c.camel%40j-davis.com\n\n\n\n\n", "msg_date": "Fri, 19 Jul 2024 08:50:41 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Fri, 2024-07-19 at 09:44 +0200, Laurenz Albe wrote:\n> But the greatness of the step depends on our readiness to be careful\n> with such changes.\n\nYou and Noah have been clear on that point, which is enough to make\n*me* careful with any Unicode updates in the future. I'll suggest once\nmore that you say so in the policy thread here:\n\nhttps://www.postgresql.org/message-id/d75d2d0d1d2bd45b2c332c47e3e0a67f0640b49c.camel%40j-davis.com\n\nwhich would get broader visibility and I believe provide you with\nstronger assurances that *everyone* will be careful with Unicode\nupdates.\n\nRegards,\n\tJeff Davis\n\n> \n\n\n", "msg_date": "Fri, 19 Jul 2024 09:26:20 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 17, 2024 at 03:03:26PM -0700, Noah Misch wrote:\n> On Wed, Jul 17, 2024 at 08:48:46AM -0700, Jeff Davis wrote:\n> > On Thu, 2024-07-11 at 05:50 -0700, Noah Misch wrote:\n> > > > This is still marked as an open item for 17, but you've already\n> > > > acknowledged[1] that no code changes are necessary in version 17.\n> > > \n> > > Later posts on the thread made that obsolete.� The next step is to\n> > > settle the\n> > > question at https://postgr.es/m/[email protected].�\n> > > If that\n> > > conclusion entails a remedy, v17 code changes may be part of that\n> > > remedy.\n> > \n> > This is the first time you've mentioned a code change in version 17. If\n> \n> That's right.\n> \n> > you have something in mind, please propose it. However, this feature\n> > followed the right policies at the time of commit, so there would need\n> > to be a strong consensus to accept such a change.\n> \n> If I'm counting the votes right, you and Tom have voted that the feature's\n> current state is okay, and I and Laurenz have voted that it's not okay. I\n> still hope more people will vote, to avoid dealing with the tie. 
Daniel,\n> Peter, and Jeremy, you're all listed as reviewers on commit f69319f. Are you\n> willing to vote one way or the other on the question in\n> https://postgr.es/m/[email protected]?\n\nThe last vote arrived 6 days ago. So far, we have votes from Jeff, Noah, Tom,\nDaniel, and Laurenz. I'll keep the voting open for another 24 hours from now\nor 36 hours after the last vote, whichever comes last. If that schedule is\ntoo compressed for anyone, do share.\n\n\n", "msg_date": "Wed, 24 Jul 2024 08:19:13 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 24.07.24 17:19, Noah Misch wrote:\n> On Wed, Jul 17, 2024 at 03:03:26PM -0700, Noah Misch wrote:\n>> On Wed, Jul 17, 2024 at 08:48:46AM -0700, Jeff Davis wrote:\n>>> On Thu, 2024-07-11 at 05:50 -0700, Noah Misch wrote:\n>>>>> This is still marked as an open item for 17, but you've already\n>>>>> acknowledged[1] that no code changes are necessary in version 17.\n>>>>\n>>>> Later posts on the thread made that obsolete.  The next step is to\n>>>> settle the\n>>>> question at https://postgr.es/m/[email protected].\n>>>> If that\n>>>> conclusion entails a remedy, v17 code changes may be part of that\n>>>> remedy.\n>>>\n>>> This is the first time you've mentioned a code change in version 17. If\n>>\n>> That's right.\n>>\n>>> you have something in mind, please propose it. However, this feature\n>>> followed the right policies at the time of commit, so there would need\n>>> to be a strong consensus to accept such a change.\n>>\n>> If I'm counting the votes right, you and Tom have voted that the feature's\n>> current state is okay, and I and Laurenz have voted that it's not okay. I\n>> still hope more people will vote, to avoid dealing with the tie. Daniel,\n>> Peter, and Jeremy, you're all listed as reviewers on commit f69319f. Are you\n>> willing to vote one way or the other on the question in\n>> https://postgr.es/m/[email protected]?\n> \n> The last vote arrived 6 days ago. So far, we have votes from Jeff, Noah, Tom,\n> Daniel, and Laurenz. I'll keep the voting open for another 24 hours from now\n> or 36 hours after the last vote, whichever comes last. If that schedule is\n> too compressed for anyone, do share.\n\nMy opinion is that it is okay to release as is.\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 17:27:20 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On 7/24/24 11:19, Noah Misch wrote:\n> On Wed, Jul 17, 2024 at 03:03:26PM -0700, Noah Misch wrote:\n>> On Wed, Jul 17, 2024 at 08:48:46AM -0700, Jeff Davis wrote:\n>> > you have something in mind, please propose it. However, this feature\n>> > followed the right policies at the time of commit, so there would need\n>> > to be a strong consensus to accept such a change.\n>> \n>> If I'm counting the votes right, you and Tom have voted that the feature's\n>> current state is okay, and I and Laurenz have voted that it's not okay. I\n>> still hope more people will vote, to avoid dealing with the tie. Daniel,\n>> Peter, and Jeremy, you're all listed as reviewers on commit f69319f. Are you\n>> willing to vote one way or the other on the question in\n>> https://postgr.es/m/[email protected]?\n> \n> The last vote arrived 6 days ago. So far, we have votes from Jeff, Noah, Tom,\n> Daniel, and Laurenz. I'll keep the voting open for another 24 hours from now\n> or 36 hours after the last vote, whichever comes last. 
If that schedule is\n> too compressed for anyone, do share.\n\n\nIt isn't entirely clear to me exactly what we are voting on.\n\n* If someone votes +1 (current state is ok) -- that is pretty clear.\n* But if someone votes -1 (current state is not ok?) what does that mean\n in practice?\n - a revert?\n - we hold shipping 17 until we get consensus (via some plan or\n mitigation or whatever)?\n - something else?\n\nIn any case, I am a hard -1 against reverting. +0.5 on \"current state is \nok\", and +1 on \"current state is ok with agreement on what to do in 18\"\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 11:36:21 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 24, 2024 at 9:27 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> > The last vote arrived 6 days ago. So far, we have votes from Jeff,\n> Noah, Tom,\n> > Daniel, and Laurenz. I'll keep the voting open for another 24 hours\n> from now\n> > or 36 hours after the last vote, whichever comes last. If that schedule\n> is\n> > too compressed for anyone, do share.\n>\n> My opinion is that it is okay to release as is.\n\n\nLike Jeff, I don’t think counting votes or putting names on one side or\nanother is the best way to decide things. Everyone has unique opinions and\nnuances, it’s not like there’s two groups that all agree together on\neverything and disagree with the other group. I don’t want my name put on a\nlist this way; there are some places where I agree and some places where I\ndisagree with most people 🙂\n\nI don’t know the code as intimately as some others on the lists, but I’m\nnot aware of any one-way doors that would create major difficulties for\nfuture v18+ ideas being discussed\n\nfwiw, I don’t want to pull this feature out of v17, I think it’s okay to\nrelease it\n\n-Jeremy\n\nOn Wed, Jul 24, 2024 at 9:27 AM Peter Eisentraut <[email protected]> wrote:\n> The last vote arrived 6 days ago.  So far, we have votes from Jeff, Noah, Tom,\n> Daniel, and Laurenz.  I'll keep the voting open for another 24 hours from now\n> or 36 hours after the last vote, whichever comes last.  If that schedule is\n> too compressed for anyone, do share.\n\nMy opinion is that it is okay to release as is.Like Jeff, I don’t think counting votes or putting names on one side or another is the best way to decide things. Everyone has unique opinions and nuances, it’s not like there’s two groups that all agree together on everything and disagree with the other group. 
I don’t want my name put on a list this way; there are some places where I agree and some places where I disagree with most people 🙂I don’t know the code as intimately as some others on the lists, but I’m not aware of any one-way doors that would create major difficulties for future v18+ ideas being discussedfwiw, I don’t want to pull this feature out of v17, I think it’s okay to release it-Jeremy", "msg_date": "Wed, 24 Jul 2024 09:44:42 -0600", "msg_from": "Jeremy Schneider <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" }, { "msg_contents": "On Wed, Jul 24, 2024 at 08:19:13AM -0700, Noah Misch wrote:\n> On Wed, Jul 17, 2024 at 03:03:26PM -0700, Noah Misch wrote:\n> > vote one way or the other on the question in\n> > https://postgr.es/m/[email protected]?\n> \n> I'll keep the voting open for another 24 hours from now\n> or 36 hours after the last vote, whichever comes last.\n\nI count 4.5 or 5 votes for \"okay\" and 2 votes for \"not okay\". I've moved the\nopen item to \"Non-bugs\".\n\nOn Wed, Jul 17, 2024 at 11:06:43PM -0700, Jeff Davis wrote:\n> You haven't established that any problem actually exists in version 17,\n> and your arguments have been a moving target throughout this subthread.\n\nI can understand that experience of yours. It wasn't my intent to make a\nmoving target. To be candid, I entered the thread with no doubts that you'd\nagree with the problem. When you and Tom instead shared a different view, I\nswitched to pursuing the votes to recognize the problem. (Voting then held\nthat pg_c_utf8 is okay as-is.)\n\nOn Thu, Jul 18, 2024 at 09:52:44AM -0700, Jeff Davis wrote:\n> On Thu, 2024-07-18 at 07:00 -0700, Noah Misch wrote:\n> > Given all the messages on this thread, if the feature remains in\n> > PostgreSQL, I\n> > advise you to be ready to tolerate PostgreSQL \"routinely updating the\n> > built-in\n> > provider to adopt any changes that Unicode makes\".\n> \n> You mean messages from me, like:\n> \n> * \"I have no intention force a Unicode update\" [1]\n> * \"While nothing needs to be changed for 17, I agree that we may need\n> to be careful in future releases not to break things.\" [2]\n> * \"...you are right that we may need to freeze Unicode updates or be\n> more precise about versioning...\" [2]\n> * \"If you are proposing that Unicode updates should not be performed\n> if they affect the results of any IMMUTABLE function...I am neither\n> endorsing nor opposing...\" [3]\n> \n> ?\n\nThose, plus all the other messages.\n\nOn Fri, Jul 19, 2024 at 08:50:41AM -0700, Jeff Davis wrote:\n> Consider:\n> \n> a. Some user creates an expression index on NORMALIZE(); vs.\n> b. Some user chooses the builtin \"C.UTF-8\" locale and creates a partial\n> index with a predicate like \"string ~ '[[:alpha:]]'\" (or an expression\n> index on LOWER())\n> \n> Both cases create a risk if we update Unicode in some future version.\n> Why are you unconcerned about case (a), but highly concerned about case\n> (b)?\n\nI am not unconcerned about (a), but the v17 beta process gave an opportunity\nto do something about (b) that it didn't give for (a). Also, I have never\nhandled a user report involving NORMALIZE(). I have handled user reports\naround regexp index inconsistency, e.g. the one at\nhttps://www.youtube.com/watch?v=kNH94tmpUus&t=1490s\n\n\n", "msg_date": "Fri, 26 Jul 2024 04:29:58 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Built-in CTYPE provider" } ]
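A short SQL sketch of the two index shapes contrasted in the messages above (illustrative only: the table, column, and index names are invented, case (b) assumes a text column whose collation uses the builtin C.UTF-8 locale, and none of this is taken from any patch in the thread):

    -- (a) an expression index over NORMALIZE(); its contents depend on the
    --     Unicode version regardless of which locale provider is in use
    CREATE TABLE docs (t text);
    CREATE INDEX docs_norm_idx ON docs ((normalize(t, NFC)));

    -- (b) indexes whose contents depend on the builtin provider's ctype data:
    --     a partial index with a character-class predicate, and an expression
    --     index over LOWER()
    CREATE INDEX docs_alpha_idx ON docs (t) WHERE t ~ '[[:alpha:]]';
    CREATE INDEX docs_lower_idx ON docs (lower(t));

Either kind of index can only return wrong answers after an upgrade if a later Unicode version changes the classification or case mapping of a code point actually stored in the table -- which is the trade-off the votes above weigh against freezing the Unicode version.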
[ { "msg_contents": "As I recently went into on the thread where we've been discussing my\nnbtree SAOP patch [1], there is good reason to suspect that one of the\noptimizations added by commit e0b1ee17 is buggy in the presence of an\nopfamily lacking the full set of cross-type comparisons. The attached\ntest case confirms that these suspicions were correct. Running the\ntese case against HEAD will lead to an assertion failure (or a wrong\nanswer when assertions are disabled).\n\nTo recap, the optimization in question (which is not to be confused\nwith the \"precheck\" optimization from the same commit) is based on the\nidea that _bt_first must always land the scan ahead of the position\nthat the scan would end on, were the scan direction to change (from\nforwards to backwards, say). It follows that inequality strategy scan\nkeys that are required in the opposite-to-scan direction *only* must\nbe redundant in the current scan direction (in the sense that\n_bt_checkkeys needn't bother comparing them at all). Unfortunately,\nthat rationale is at least slightly wrong.\n\nAlthough some version of the same assumption must really hold in the\ncase of required equality strategy scan keys (old comments in\n_bt_checkkeys and in _bt_first say as much), it isn't really\nguaranteed in the case of inequalities. In fact, the following\nsentence appears in old comments above _bt_preprocess_keys, directly\ncontradicting the theory behind the optimization in question:\n\n\"In general, when inequality keys are present, the initial-positioning\ncode only promises to position before the first possible match, not\nexactly at the first match, for a forward scan; or after the last\nmatch for a backward scan.\"\n\nMy test case mostly just demonstrates how to reproduce the scenario\ndescribed by this sentence.\n\nIt's probably possible to salvage the optimization, but that will\nrequire bookkeeping sufficient to detect these unsafe cases, so that\n_bt_checkkeys only skips the comparisons when it's truly safe. As far\nas I know, the only reason that inequalities differ from equalities is\nthis respect is the issue that the test case highlights. (There were\nalso issues with NULLs, but AFAICT Alexander dealt with that aspect of\nthe problem already.)\n\n[1] https://postgr.es/m/CAH2-Wz=BuxYEHxpNH0tPvo=+G1WtE1PamRoVU1dEVow1Vy9Y7A@mail.gmail.com\n-- \nPeter Geoghegan", "msg_date": "Tue, 5 Dec 2023 16:41:06 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> As I recently went into on the thread where we've been discussing my\n> nbtree SAOP patch [1], there is good reason to suspect that one of the\n> optimizations added by commit e0b1ee17 is buggy in the presence of an\n> opfamily lacking the full set of cross-type comparisons. The attached\n> test case confirms that these suspicions were correct. Running the\n> tese case against HEAD will lead to an assertion failure (or a wrong\n> answer when assertions are disabled).\n\nHmm ... I had not paid any attention to this commit, but the rationale\ngiven in the commit message is just flat wrong:\n\n Imagine the ordered B-tree scan for the query like this.\n \n SELECT * FROM tbl WHERE col > 'a' AND col < 'b' ORDER BY col;\n \n The (col > 'a') scan key will be always matched once we find the location to\n start the scan. 
The (col < 'b') scan key will match every item on the page\n as long as it matches the last item on the page.\n\nThat argument probably holds for the index's first column, but it is\ncompletely and obviously wrong for every following column. Nonetheless\nit looks like we're trying to apply the optimization to every scan key.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Dec 2023 19:53:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 4:53 PM Tom Lane <[email protected]> wrote:\n> Hmm ... I had not paid any attention to this commit, but the rationale\n> given in the commit message is just flat wrong:\n>\n> Imagine the ordered B-tree scan for the query like this.\n>\n> SELECT * FROM tbl WHERE col > 'a' AND col < 'b' ORDER BY col;\n>\n> The (col > 'a') scan key will be always matched once we find the location to\n> start the scan. The (col < 'b') scan key will match every item on the page\n> as long as it matches the last item on the page.\n>\n> That argument probably holds for the index's first column, but it is\n> completely and obviously wrong for every following column. Nonetheless\n> it looks like we're trying to apply the optimization to every scan key.\n\nJust to be clear, you're raising a concern that seems to me to apply\nto \"the other optimization\" from the same commit, specifically -- the\nprecheck optimization. Not the one I found a problem in. (They're\nclosely related but distinct optimizations.)\n\nI *think* that that part is handled correctly, because non-required\nscan keys are not affected (by either optimization). I have no\nspecific reason to doubt the proposition that 'b' could only be marked\nrequired in situations where it is indeed safe to assume that the col\n< 'b' condition must also apply to earlier tuples transitively (i.e.\nthis must be true because it was true for the the final tuple on the\npage during the _bt_readpage precheck).\n\nThat being said, I wouldn't rule out problems for the precheck\noptimization in the presence of opfamilies like the one from my test\ncase. I didn't get as far as exploring that side of things, at least.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Dec 2023 17:14:23 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 4:41 PM Peter Geoghegan <[email protected]> wrote:\n> \"In general, when inequality keys are present, the initial-positioning\n> code only promises to position before the first possible match, not\n> exactly at the first match, for a forward scan; or after the last\n> match for a backward scan.\"\n>\n> My test case mostly just demonstrates how to reproduce the scenario\n> described by this sentence.\n\nI just realized that my test case wasn't quite minimized correctly. 
It\ndepended on a custom function that was no longer created.\n\nAttached is a revised version that uses btint84cmp instead.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 5 Dec 2023 17:45:41 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Hi, Peter!\n\nOn Wed, Dec 6, 2023 at 3:46 AM Peter Geoghegan <[email protected]> wrote:\n> On Tue, Dec 5, 2023 at 4:41 PM Peter Geoghegan <[email protected]> wrote:\n> > \"In general, when inequality keys are present, the initial-positioning\n> > code only promises to position before the first possible match, not\n> > exactly at the first match, for a forward scan; or after the last\n> > match for a backward scan.\"\n> >\n> > My test case mostly just demonstrates how to reproduce the scenario\n> > described by this sentence.\n>\n> I just realized that my test case wasn't quite minimized correctly. It\n> depended on a custom function that was no longer created.\n>\n> Attached is a revised version that uses btint84cmp instead.\n\nThank you for raising this issue. Preprocessing of btree scan keys is\nnormally removing the redundant scan keys. However, redundant scan\nkeys aren't removed when they have arguments of different types.\nPlease give me a bit of time to figure out how to workaround this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 6 Dec 2023 06:05:52 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 8:06 PM Alexander Korotkov <[email protected]> wrote:\n> Thank you for raising this issue. Preprocessing of btree scan keys is\n> normally removing the redundant scan keys. However, redundant scan\n> keys aren't removed when they have arguments of different types.\n> Please give me a bit of time to figure out how to workaround this.\n\nCouldn't you condition the use of the optimization on\n_bt_preprocess_keys being able to use cross-type operators when it\nchecked for redundant or contradictory scan keys? The vast majority of\nindex scans will be able to do that.\n\nAs I said already, what you're doing here isn't all that different to\nthe way that we rely on required equality strategy scan keys being\nused to build our initial insertion scan key, that determines where\nthe scan is initially positioned to within _bt_first. Inequalities\naren't all that different to equalities.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Dec 2023 20:20:01 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 8:15 PM Peter Geoghegan <[email protected]> wrote:\n> Just to be clear, you're raising a concern that seems to me to apply\n> to \"the other optimization\" from the same commit, specifically -- the\n> precheck optimization. Not the one I found a problem in. 
(They're\n> closely related but distinct optimizations.)\n\nIt isn't very clear from the commit message that this commit is doing\ntwo different things, and in fact I'm still unclear on what exactly\nthe other optimization is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 08:11:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, 6 Dec 2023 at 14:11, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Dec 5, 2023 at 8:15 PM Peter Geoghegan <[email protected]> wrote:\n> > Just to be clear, you're raising a concern that seems to me to apply\n> > to \"the other optimization\" from the same commit, specifically -- the\n> > precheck optimization. Not the one I found a problem in. (They're\n> > closely related but distinct optimizations.)\n>\n> It isn't very clear from the commit message that this commit is doing\n> two different things, and in fact I'm still unclear on what exactly\n> the other optimization is.\n\nI feel that Peter refered to these two distinct optimizations:\n\n1. When scanning an index in ascending order using scankey a > 1 (so,\none that defines a start point of the scan), we don't need to check\nitems for consistency with that scankey once we've found the first\nvalue that is consistent with the scankey, as all future values will\nalso be consistent with the scankey (if we assume no concurrent page\ndeletions).\n\n2. When scanning an index in ascending order using scankey a < 10 (one\nthat defines an endpoint of the scan), we can look ahead and check if\nthe last item on the page is consistent. If so, then all other items\non the page will also be consistent with that scankey.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 6 Dec 2023 14:27:24 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 6, 2023 at 8:27 AM Matthias van de Meent\n<[email protected]> wrote:\n> I feel that Peter refered to these two distinct optimizations:\n>\n> 1. When scanning an index in ascending order using scankey a > 1 (so,\n> one that defines a start point of the scan), we don't need to check\n> items for consistency with that scankey once we've found the first\n> value that is consistent with the scankey, as all future values will\n> also be consistent with the scankey (if we assume no concurrent page\n> deletions).\n>\n> 2. When scanning an index in ascending order using scankey a < 10 (one\n> that defines an endpoint of the scan), we can look ahead and check if\n> the last item on the page is consistent. If so, then all other items\n> on the page will also be consistent with that scankey.\n\nOh, interesting. 
Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Dec 2023 08:32:13 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 6, 2023 at 5:27 AM Matthias van de Meent\n<[email protected]> wrote:\n> On Wed, 6 Dec 2023 at 14:11, Robert Haas <[email protected]> wrote:\n> > It isn't very clear from the commit message that this commit is doing\n> > two different things, and in fact I'm still unclear on what exactly\n> > the other optimization is.\n>\n> I feel that Peter refered to these two distinct optimizations:\n\nRight.\n\n> 2. When scanning an index in ascending order using scankey a < 10 (one\n> that defines an endpoint of the scan), we can look ahead and check if\n> the last item on the page is consistent. If so, then all other items\n> on the page will also be consistent with that scankey.\n\nAlso worth noting that it could be \"scankey a = 10\". That is, the\nprecheck optimization (i.e. the optimization that's not the target of\nmy test case) can deal with equalities and inequalities just as well\n(any scan key that's required in the current scan direction is\nsupported). On the other hand the required-in-opposite-direction-only\noptimization (i.e. the target of my test case) only applies to\ninequality strategy scan keys.\n\nIt kinda makes sense to explain both concepts using an example that\ninvolves both > and < strategy inequalities, since that makes the\nsymmetry between the two optimizations clearer.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Dec 2023 08:44:16 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 8:20 PM Peter Geoghegan <[email protected]> wrote:\n> On Tue, Dec 5, 2023 at 8:06 PM Alexander Korotkov <[email protected]> wrote:\n> > Thank you for raising this issue. Preprocessing of btree scan keys is\n> > normally removing the redundant scan keys. However, redundant scan\n> > keys aren't removed when they have arguments of different types.\n> > Please give me a bit of time to figure out how to workaround this.\n>\n> Couldn't you condition the use of the optimization on\n> _bt_preprocess_keys being able to use cross-type operators when it\n> checked for redundant or contradictory scan keys? The vast majority of\n> index scans will be able to do that.\n\nSome quick experimentation shows that my test case works as expected\nonce _bt_preprocess_keys is taught to remember that it has seen a\nmaybe-unsafe case, which it stashes in a special new field from the\nscan's state for later. As I said, this field can be treated as a\ncondition of applying the required-in-opposite-direction-only\noptimization in _bt_readpage().\n\nThis new field would be analogous to the existing\nrequiredMatchedByPrecheck state used by _bt_readpage() to determine\nwhether it can apply the required-in-same-direction optimization. The\nnew field works for the whole scan instead of just for one page, and\nit works based on information from \"behind the scan\" instead of\ninformation \"just ahead of the scan\". But the basic idea is the same.\n\n_bt_preprocess_keys is rather complicated. 
It is perhaps tempting to\ndo this in a targeted way, that specifically limits itself to the exact\ncases that we know to be unsafe. But it might be okay to just disable\nthe optimization in most or all cases where _bt_compare_scankey_args()\nreturns false. That's likely to be very rare in practice, anyway (who\nreally uses opfamilies like these?). Not really sure where to come\ndown on that.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Dec 2023 09:31:56 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 6, 2023 at 5:27 AM Matthias van de Meent\n<[email protected]> wrote:\n> 1. When scanning an index in ascending order using scankey a > 1 (so,\n> one that defines a start point of the scan), we don't need to check\n> items for consistency with that scankey once we've found the first\n> value that is consistent with the scankey, as all future values will\n> also be consistent with the scankey (if we assume no concurrent page\n> deletions).\n\nBTW, I don't think that page deletion is a concern for these\noptimizations in the way that it is for the similar idea of \"dynamic\nprefix compression\", which works against insertion-type scan keys\n(used to descend the tree and to do an initial binary search of a leaf\npage).\n\nWe already rely on the first call to _bt_readpage (the one that takes\nplace from within _bt_first rather than from _bt_next) passing a page\noffset number that's exactly at the start of where matches begin --\nthis is crucial in the case of scans with required equality strategy\nscan keys (most scans). If we just skipped the _bt_binsrch and passed\nP_FIRSTDATAKEY(opaque) to _bt_readpage within _bt_first instead, that\nwould break lots of queries. So the _bt_binsrch step within _bt_first\nisn't just an optimization -- it's crucial. This is nothing new.\n\nRecall that _bt_readpage only deals with search-type scan keys,\nmeaning scan keys that use a simple operator (so it uses = operators\nwith the equality strategy, as opposed to using a 3-way ORDER\nproc/support function 1 that can tell the difference between < and >).\nIn general _bt_readpage doesn't know how to tell the difference\nbetween a tuple that's before the start of = matches, and a tuple\nthat's at (or after) the end of any = matches. If it is ever allowed\nto conflate these two cases, then we'll overlook matching tuples,\nwhich is of course wrong (it'll terminate the scan before it even\nstarts). It is up to the caller (really just _bt_first) to never call\n_bt_readpage in a way that allows this confusion to take place --\nwhich is what makes the _bt_binsrch step crucial.\n\nA race condition with page deletion might allow the key space covered\nby a leaf page to \"widen\" after we've left its parent, but before we\narrive on the leaf page. But the _bt_binsrch step within _bt_first\nhappens *after* we land on and lock that leaf page, in any case. So\nthere is no risk of the scan ever doing anything with\nconcurrently-inserted index tuples. In general we only have to worry\nabout such race conditions when descending the tree -- they're not a\nproblem after the scan has reached the leaf level and established an\ninitial page offset number. (The way that _bt_readpage processes whole\npages in one atomic step helps with this sort of thing, too. 
We can\nalmost pretend that the B-Tree structure is immutable, even though\nthat's obviously not really true at all. We know that we'll always\nmake forward progress through the key space by remembering the next\npage to visit when processing each page.)\n\nMy test case broke the required-in-opposite-direction-only\noptimization by finding a way in which\nrequired-in-opposite-direction-only inequalities were not quite the\nsame as required equalities with respect to this business about the\nprecise leaf page offset number that the scan begins at. They make\n*almost* the same set of guarantees (note in particular that both will\nbe transformed into insertion scan key/3-way ORDER proc scan keys by\n_bt_first's initial positioning code), but there is at least one\nspecial case that applies only to inequalities. I had to play games\nwith weird incomplete opfamilies to actually break the optimization --\nthat was required to tickle the special case in just the right/wrong\nway.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Dec 2023 10:54:45 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, 6 Dec 2023 at 19:55, Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Dec 6, 2023 at 5:27 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > 1. When scanning an index in ascending order using scankey a > 1 (so,\n> > one that defines a start point of the scan), we don't need to check\n> > items for consistency with that scankey once we've found the first\n> > value that is consistent with the scankey, as all future values will\n> > also be consistent with the scankey (if we assume no concurrent page\n> > deletions).\n>\n> BTW, I don't think that page deletion is a concern for these\n> optimizations in the way that it is for the similar idea of \"dynamic\n> prefix compression\", which works against insertion-type scan keys\n> (used to descend the tree and to do an initial binary search of a leaf\n> page).\n>\n> We already rely on the first call to _bt_readpage (the one that takes\n> place from within _bt_first rather than from _bt_next) passing a page\n> offset number that's exactly at the start of where matches begin --\n> this is crucial in the case of scans with required equality strategy\n> scan keys (most scans). If we just skipped the _bt_binsrch and passed\n> P_FIRSTDATAKEY(opaque) to _bt_readpage within _bt_first instead, that\n> would break lots of queries. So the _bt_binsrch step within _bt_first\n> isn't just an optimization -- it's crucial. 
This is nothing new.\n\nI was thinking more along the lines of page splits+deletions while\nwe're doing _bt_stepright(), but forgot to consider that we first lock\nthe right sibling, and only then release the left sibling for splits,\nso we should be fine here.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 6 Dec 2023 20:14:24 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 6, 2023 at 11:14 AM Matthias van de Meent\n<[email protected]> wrote:\n> I was thinking more along the lines of page splits+deletions while\n> we're doing _bt_stepright(), but forgot to consider that we first lock\n> the right sibling, and only then release the left sibling for splits,\n> so we should be fine here.\n\nIn general the simplest (and possibly most convincing) arguments for\nthe correctness of optimizations like the ones that Alexander added\nrely on seeing that the only way that the optimization can be wrong is\nif some more fundamental and long established thing was also wrong. We\ncould try to prove that the new optimization is correct (or wrong),\nbut it is often more helpful to \"prove\" that some much more\nfundamental thing is correct instead, if that provides us with a\nuseful corollary about the new thing also being correct.\n\nTake the _bt_readpage precheck optimization, for example. Rather than\nthinking about the key space and transitive rules, it might be more\nhelpful to focus on what must have been true in earlier Postgres\nversions, and what we can expect to still hold now. The only way that\nthat optimization could be wrong is if the same old _bt_checkkeys\nlogic that decides when to terminate the scan (by setting\ncontinuescan=false) always had some propensity to \"change its mind\"\nabout ending the scan, at least when it somehow got the opportunity to\nsee tuples after the first tuple that it indicated should end the\nscan. That's not quite bulletproof, of course (it's not like older\nPostgres versions actually provided _bt_checkkeys with opportunities\nto \"change its mind\" in this sense), but it's a useful starting point\nIME. It helps to build intuition.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Dec 2023 11:50:16 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 6, 2023 at 6:05 AM Alexander Korotkov <[email protected]> wrote:\n> On Wed, Dec 6, 2023 at 3:46 AM Peter Geoghegan <[email protected]> wrote:\n> > On Tue, Dec 5, 2023 at 4:41 PM Peter Geoghegan <[email protected]> wrote:\n> > > \"In general, when inequality keys are present, the initial-positioning\n> > > code only promises to position before the first possible match, not\n> > > exactly at the first match, for a forward scan; or after the last\n> > > match for a backward scan.\"\n> > >\n> > > My test case mostly just demonstrates how to reproduce the scenario\n> > > described by this sentence.\n> >\n> > I just realized that my test case wasn't quite minimized correctly. It\n> > depended on a custom function that was no longer created.\n> >\n> > Attached is a revised version that uses btint84cmp instead.\n>\n> Thank you for raising this issue. Preprocessing of btree scan keys is\n> normally removing the redundant scan keys. 
However, redundant scan\n> keys aren't removed when they have arguments of different types.\n> Please give me a bit of time to figure out how to workaround this.\n\nI dig into the problem. I think this assumption is wrong in my commit.\n\n\"When the key is required for opposite direction scan, it must be\nalready satisfied by_bt_first() ...\"\n\nIn your example \"foo = 90\" is satisfied by_bt_first(), but \"foo >\n99::int8\" is not. I think this could be resolved by introducing a\nseparate flag exactly distinguishing scan keys used for _bt_first().\nI'm going to post the patch doing this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 8 Dec 2023 20:30:18 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Fri, Dec 8, 2023 at 8:30 PM Alexander Korotkov <[email protected]> wrote:\n> On Wed, Dec 6, 2023 at 6:05 AM Alexander Korotkov <[email protected]> wrote:\n> > On Wed, Dec 6, 2023 at 3:46 AM Peter Geoghegan <[email protected]> wrote:\n> > > On Tue, Dec 5, 2023 at 4:41 PM Peter Geoghegan <[email protected]> wrote:\n> > > > \"In general, when inequality keys are present, the initial-positioning\n> > > > code only promises to position before the first possible match, not\n> > > > exactly at the first match, for a forward scan; or after the last\n> > > > match for a backward scan.\"\n> > > >\n> > > > My test case mostly just demonstrates how to reproduce the scenario\n> > > > described by this sentence.\n> > >\n> > > I just realized that my test case wasn't quite minimized correctly. It\n> > > depended on a custom function that was no longer created.\n> > >\n> > > Attached is a revised version that uses btint84cmp instead.\n> >\n> > Thank you for raising this issue. Preprocessing of btree scan keys is\n> > normally removing the redundant scan keys. However, redundant scan\n> > keys aren't removed when they have arguments of different types.\n> > Please give me a bit of time to figure out how to workaround this.\n>\n> I dig into the problem. I think this assumption is wrong in my commit.\n>\n> \"When the key is required for opposite direction scan, it must be\n> already satisfied by_bt_first() ...\"\n>\n> In your example \"foo = 90\" is satisfied by_bt_first(), but \"foo >\n> 99::int8\" is not. I think this could be resolved by introducing a\n> separate flag exactly distinguishing scan keys used for _bt_first().\n> I'm going to post the patch doing this.\n\nThe draft patch is attached. It requires polishing and proper\ncommenting. But I hope the basic idea is clear. Do you think this is\nthe way forward?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Fri, 8 Dec 2023 20:46:01 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Fri, Dec 8, 2023 at 10:46 AM Alexander Korotkov <[email protected]> wrote:\n> > In your example \"foo = 90\" is satisfied by_bt_first(), but \"foo >\n> > 99::int8\" is not. I think this could be resolved by introducing a\n> > separate flag exactly distinguishing scan keys used for _bt_first().\n> > I'm going to post the patch doing this.\n>\n> The draft patch is attached. It requires polishing and proper\n> commenting. But I hope the basic idea is clear. 
Do you think this is\n> the way forward?\n\nDoes this really need to work at the scan key level, rather than at\nthe whole-scan level? Wouldn't it make more sense to just totally\ndisable it for the whole scan, since we'll barely ever need to do that\nanyway?\n\nMy ScalarArrayOpExpr patch will need to disable this optimization,\nsince with that patch in place we don't necessarily go through\n_bt_first each time the search-type scan keys must change. We might\nneed to check a few tuples from before the _bt_first-wise position of\nthe next set of array values, which is a problem with\nopposite-direction-only inequalities (it's a little bit like the\nsituation from my test case, actually). That's partly why I'd prefer\nthis to work at the whole-scan level (though I also just don't think\nthat inventing SK_BT_BT_FIRST makes much sense).\n\nI think that you should make it clearer that this whole optimization\nonly applies to required *inequalities*, which can be required in the\nopposite direction *only*. It should be more obvious from looking at\nthe code that the optimization doesn't apply to required equality\nstrategy scan keys (those are always required in *both* scan\ndirections or in neither direction, so unlike the closely related\nprefix optimization added by the same commit, they just can't use the\noptimization that we need to fix here).\n\nBTW, do we really need to keep around the BTScanOpaqueData.firstPage\nfield? Why can't the call to _bt_readpage from _bt_first (and from\n_bt_endpoint) just pass \"firstPage=true\" as a simple argument? Note\nthat the first call to _bt_readpage must take place from _bt_first (or\nfrom _bt_endpoint). The first _bt_first call is already kind of\nspecial, in a way that is directly related to this issue. I added some\ncomments about that to today's commit c9c0589fda, in fact -- I think\nit's an important issue in general.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Dec 2023 18:29:17 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Hi, Peter!\n\nOn Sat, Dec 9, 2023 at 4:29 AM Peter Geoghegan <[email protected]> wrote:\n> Does this really need to work at the scan key level, rather than at\n> the whole-scan level? Wouldn't it make more sense to just totally\n> disable it for the whole scan, since we'll barely ever need to do that\n> anyway?\n>\n> My ScalarArrayOpExpr patch will need to disable this optimization,\n> since with that patch in place we don't necessarily go through\n> _bt_first each time the search-type scan keys must change. We might\n> need to check a few tuples from before the _bt_first-wise position of\n> the next set of array values, which is a problem with\n> opposite-direction-only inequalities (it's a little bit like the\n> situation from my test case, actually). That's partly why I'd prefer\n> this to work at the whole-scan level (though I also just don't think\n> that inventing SK_BT_BT_FIRST makes much sense).\n>\n> I think that you should make it clearer that this whole optimization\n> only applies to required *inequalities*, which can be required in the\n> opposite direction *only*. 
It should be more obvious from looking at\n> the code that the optimization doesn't apply to required equality\n> strategy scan keys (those are always required in *both* scan\n> directions or in neither direction, so unlike the closely related\n> prefix optimization added by the same commit, they just can't use the\n> optimization that we need to fix here).\n>\n> BTW, do we really need to keep around the BTScanOpaqueData.firstPage\n> field? Why can't the call to _bt_readpage from _bt_first (and from\n> _bt_endpoint) just pass \"firstPage=true\" as a simple argument? Note\n> that the first call to _bt_readpage must take place from _bt_first (or\n> from _bt_endpoint). The first _bt_first call is already kind of\n> special, in a way that is directly related to this issue. I added some\n> comments about that to today's commit c9c0589fda, in fact -- I think\n> it's an important issue in general.\n\nPlease, check the attached patchset.\n\nThe first patch removes the BTScanOpaqueData.firstPage field as you\nproposed. I think this is a good idea, thank you for the proposal.\n\nRegarding the requiredOppositeDir bug. I don't want to lose the\ngenerality of the optimization. I could work for different cases, for\nexample.\nWHERE col1 > val1 AND col1 < val2\nWHERE col1 = val1 AND col2 > val2 AND col2 < val3\nWHERE col1 = val1 AND col2 = val2 AND col3 > val3 AND col3 < val4\nAnd there could be additional scan keys, which shouldn't be skipped.\nBut that shouldn't mean we shouldn't skip others.\n\nSee the second patch for my next proposal to fix the problem. Instead\nof relying on _bt_first(), let's rely on the first matched item on the\npage. Once it's found, we may skip scan keys required for the opposite\ndirection. What do you think?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 11 Dec 2023 17:56:13 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Will you be in Prague this week? If not this might have to wait.\n\nWill you be in Prague this week? If not this might have to wait.", "msg_date": "Mon, 11 Dec 2023 08:16:07 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Mon, Dec 11, 2023 at 5:56 PM Alexander Korotkov <[email protected]> wrote:\n> > BTW, do we really need to keep around the BTScanOpaqueData.firstPage\n> > field? Why can't the call to _bt_readpage from _bt_first (and from\n> > _bt_endpoint) just pass \"firstPage=true\" as a simple argument? Note\n> > that the first call to _bt_readpage must take place from _bt_first (or\n> > from _bt_endpoint). The first _bt_first call is already kind of\n> > special, in a way that is directly related to this issue. I added some\n> > comments about that to today's commit c9c0589fda, in fact -- I think\n> > it's an important issue in general.\n>\n> Please, check the attached patchset.\n\nSorry, I forgot the attachment. 
Here it is.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 11 Dec 2023 20:59:32 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Mon, Dec 11, 2023 at 6:16 PM Peter Geoghegan <[email protected]> wrote:\n> Will you be in Prague this week? If not this might have to wait.\n\nSorry, I wouldn't be in Prague this week. Due to my current\nimmigration status, I can't travel.\nI wish you to have a lovely time in Prague. I'm OK to wait, review\nonce you can. I will probably provide a more polished version\nmeanwhile.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 12 Dec 2023 15:22:16 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Tue, Dec 12, 2023 at 3:22 PM Alexander Korotkov <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 6:16 PM Peter Geoghegan <[email protected]> wrote:\n> > Will you be in Prague this week? If not this might have to wait.\n>\n> Sorry, I wouldn't be in Prague this week. Due to my current\n> immigration status, I can't travel.\n> I wish you to have a lovely time in Prague. I'm OK to wait, review\n> once you can. I will probably provide a more polished version\n> meanwhile.\n\nPlease find the revised patchset attached. It comes with revised\ncomments and commit messages. Besides bug fixing the second patch\nmakes optimization easier to understand. Now the flag used for\nskipping checks of same direction required keys is named\ncontinuescanPrechecked and means exactly that *continuescan flag is\nknown to be true for the last item on the page.\n\nAny objections to pushing these two patches?\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 25 Dec 2023 00:50:52 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Hi, Alexander!\n\nOn Mon, 25 Dec 2023 at 02:51, Alexander Korotkov <[email protected]>\nwrote:\n\n> On Tue, Dec 12, 2023 at 3:22 PM Alexander Korotkov <[email protected]>\n> wrote:\n> >\n> > On Mon, Dec 11, 2023 at 6:16 PM Peter Geoghegan <[email protected]> wrote:\n> > > Will you be in Prague this week? If not this might have to wait.\n> >\n> > Sorry, I wouldn't be in Prague this week. Due to my current\n> > immigration status, I can't travel.\n> > I wish you to have a lovely time in Prague. I'm OK to wait, review\n> > once you can. I will probably provide a more polished version\n> > meanwhile.\n>\n> Please find the revised patchset attached. It comes with revised\n> comments and commit messages. Besides bug fixing the second patch\n> makes optimization easier to understand. Now the flag used for\n> skipping checks of same direction required keys is named\n> continuescanPrechecked and means exactly that *continuescan flag is\n> known to be true for the last item on the page.\n>\n> Any objections to pushing these two patches?\n>\n\nI've reviewed both patches:\n0001 - is a pure refactoring replacing argument transfer from via struct\nmember to transfer explicitly as a function argument. It's justified by\nthe fact firstPage is localized only to several places. 
The patch looks\nsimple and good enough.\n\n0002:\ncontinuescanPrechecked is semantically much better than\nprevious requiredMatchedByPrecheck which confused me earlier. Thanks!\n\n From the new comments, it looks a little bit hard to understand who does\nwhat. Semantics \"if caller told\" in comments looks more clear to me. Could\nyou especially give attention to the comments:\n\n\"If they wouldn't be matched, then the *continuescan flag would be set for\nthe current item and the last item on the page accordingly.\"\n\"If the key is required for the opposite direction scan, we need to know\nthere was already at least one matching item on the page. For those keys.\"\n\n> Prechecking the value of the continuescan flag for the last item on the\n>+ * page (according to the scan direction).\nMaybe, in this case, it would be more clear like: \"...(for backwards scan\nit will be the first item on a page)\"\n\nOtherwise the patch 0002 looks like a good fix for the bug to be pushed.\n\nKind regards,\nPavel Borisov\n\nHi, Alexander!On Mon, 25 Dec 2023 at 02:51, Alexander Korotkov <[email protected]> wrote:On Tue, Dec 12, 2023 at 3:22 PM Alexander Korotkov <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 6:16 PM Peter Geoghegan <[email protected]> wrote:\n> > Will you be in Prague this week? If not this might have to wait.\n>\n> Sorry, I wouldn't be in Prague this week.  Due to my current\n> immigration status, I can't travel.\n> I wish you to have a lovely time in Prague.  I'm OK to wait, review\n> once you can.  I will probably provide a more polished version\n> meanwhile.\n\nPlease find the revised patchset attached.  It comes with revised\ncomments and commit messages.  Besides bug fixing the second patch\nmakes optimization easier to understand.  Now the flag used for\nskipping checks of same direction required keys is named\ncontinuescanPrechecked and means exactly that *continuescan flag is\nknown to be true for the last item on the page.\n\nAny objections to pushing these two patches?I've reviewed both patches:0001 - is a pure refactoring replacing argument transfer from via struct member to transfer explicitly as a function argument. It's justified by the fact firstPage is localized only to several places. The patch looks simple and good enough.0002:continuescanPrechecked is semantically much better than previous requiredMatchedByPrecheck which confused me earlier. Thanks!From the new comments, it looks a little bit hard to understand who does what. Semantics \"if caller told\" in comments looks more clear to me. Could you especially give attention to the comments:\"If they wouldn't be matched, then the *continuescan flag would be set for the current item and the last item on the page accordingly.\"\"If the key is required for the opposite direction scan, we need to know there was already at least one matching item on the page.  
For those keys.\"> Prechecking the value of the continuescan flag for the last item on the>+\t * page (according to the scan direction).Maybe, in this case, it would be more clear like: \"...(for backwards scan it will be the first item on a page)\"Otherwise the patch 0002 looks like a good fix for the bug to be pushed.Kind regards,Pavel Borisov", "msg_date": "Mon, 25 Dec 2023 22:32:37 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Pavel,\n\nOn Mon, Dec 25, 2023 at 8:32 PM Pavel Borisov <[email protected]> wrote:\n> I've reviewed both patches:\n> 0001 - is a pure refactoring replacing argument transfer from via struct member to transfer explicitly as a function argument. It's justified by the fact firstPage is localized only to several places. The patch looks simple and good enough.\n>\n> 0002:\n> continuescanPrechecked is semantically much better than previous requiredMatchedByPrecheck which confused me earlier. Thanks!\n>\n> From the new comments, it looks a little bit hard to understand who does what. Semantics \"if caller told\" in comments looks more clear to me. Could you especially give attention to the comments:\n>\n> \"If they wouldn't be matched, then the *continuescan flag would be set for the current item and the last item on the page accordingly.\"\n> \"If the key is required for the opposite direction scan, we need to know there was already at least one matching item on the page. For those keys.\"\n>\n> > Prechecking the value of the continuescan flag for the last item on the\n> >+ * page (according to the scan direction).\n> Maybe, in this case, it would be more clear like: \"...(for backwards scan it will be the first item on a page)\"\n>\n> Otherwise the patch 0002 looks like a good fix for the bug to be pushed.\n\nThank you for your review. I've revised comments to meet your suggestions.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 26 Dec 2023 21:35:37 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Alexander,\n\nOn Tue, 26 Dec 2023 at 23:35, Alexander Korotkov <[email protected]>\nwrote:\n\n> Pavel,\n>\n> On Mon, Dec 25, 2023 at 8:32 PM Pavel Borisov <[email protected]>\n> wrote:\n> > I've reviewed both patches:\n> > 0001 - is a pure refactoring replacing argument transfer from via struct\n> member to transfer explicitly as a function argument. It's justified by the\n> fact firstPage is localized only to several places. The patch looks simple\n> and good enough.\n> >\n> > 0002:\n> > continuescanPrechecked is semantically much better than previous\n> requiredMatchedByPrecheck which confused me earlier. Thanks!\n> >\n> > From the new comments, it looks a little bit hard to understand who does\n> what. Semantics \"if caller told\" in comments looks more clear to me. Could\n> you especially give attention to the comments:\n> >\n> > \"If they wouldn't be matched, then the *continuescan flag would be set\n> for the current item and the last item on the page accordingly.\"\n> > \"If the key is required for the opposite direction scan, we need to know\n> there was already at least one matching item on the page. 
For those keys.\"\n> >\n> > > Prechecking the value of the continuescan flag for the last item on the\n> > >+ * page (according to the scan direction).\n> > Maybe, in this case, it would be more clear like: \"...(for backwards\n> scan it will be the first item on a page)\"\n> >\n> > Otherwise the patch 0002 looks like a good fix for the bug to be pushed.\n>\n> Thank you for your review. I've revised comments to meet your suggestions.\n>\nThank you for revised comments! I think they are good enough.\n\nRegards,\nPavel\n\nAlexander,On Tue, 26 Dec 2023 at 23:35, Alexander Korotkov <[email protected]> wrote:Pavel,\n\nOn Mon, Dec 25, 2023 at 8:32 PM Pavel Borisov <[email protected]> wrote:\n> I've reviewed both patches:\n> 0001 - is a pure refactoring replacing argument transfer from via struct member to transfer explicitly as a function argument. It's justified by the fact firstPage is localized only to several places. The patch looks simple and good enough.\n>\n> 0002:\n> continuescanPrechecked is semantically much better than previous requiredMatchedByPrecheck which confused me earlier. Thanks!\n>\n> From the new comments, it looks a little bit hard to understand who does what. Semantics \"if caller told\" in comments looks more clear to me. Could you especially give attention to the comments:\n>\n> \"If they wouldn't be matched, then the *continuescan flag would be set for the current item and the last item on the page accordingly.\"\n> \"If the key is required for the opposite direction scan, we need to know there was already at least one matching item on the page.  For those keys.\"\n>\n> > Prechecking the value of the continuescan flag for the last item on the\n> >+ * page (according to the scan direction).\n> Maybe, in this case, it would be more clear like: \"...(for backwards scan it will be the first item on a page)\"\n>\n> Otherwise the patch 0002 looks like a good fix for the bug to be pushed.\n\nThank you for your review.  I've revised comments to meet your suggestions.Thank you for revised comments! I think they are good enough.Regards,Pavel", "msg_date": "Wed, 27 Dec 2023 15:18:35 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Wed, Dec 27, 2023 at 1:18 PM Pavel Borisov <[email protected]> wrote:\n> Thank you for revised comments! I think they are good enough.\n\nPushed, thank you!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 27 Dec 2023 14:36:18 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Hi!\n\nMaybe _bt_readpage(scan, dir, start, true) needed at this line:\n\nhttps://github.com/postgres/postgres/blob/b4080fa3dcf6c6359e542169e0e81a0662c53ba8/src/backend/access/nbtree/nbtsearch.c#L2501\n\n?\n\nDo we really need to try prechecking the continuescan flag here?\n\nAnd the current \"false\" in the last arg does not match the previous code before 06b10f80ba\nand the current comment above.\n\nWould be very grateful for clarification.\n\nWith the best regards!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:29:38 +0300", "msg_from": "\"Anton A. 
Melnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "Hi, Anton!\nLooks like an oversight when refactoring BTScanOpaqueData.firstPage into\nusing function argument in 06b10f80ba4.\n\n@@ -2487,14 +2486,13 @@ _bt_endpoint(IndexScanDesc scan, ScanDirection dir)\n104\n105 /* remember which buffer we have pinned */\n106 so->currPos.buf = buf;\n107 - so->firstPage = true;\n108\n109 _bt_initialize_more_data(so, dir);\n110\n111 /*\n112 * Now load data from the first page of the scan.\n113 */\n114 - if (!_bt_readpage(scan, dir, start))\n115 + if (!_bt_readpage(scan, dir, start, false))\n\nAttached is a fix.\nThank you!\n\nRegards,\nPavel\n\n\nOn Fri, 22 Mar 2024 at 11:29, Anton A. Melnikov <[email protected]>\nwrote:\n\n> Hi!\n>\n> Maybe _bt_readpage(scan, dir, start, true) needed at this line:\n>\n>\n> https://github.com/postgres/postgres/blob/b4080fa3dcf6c6359e542169e0e81a0662c53ba8/src/backend/access/nbtree/nbtsearch.c#L2501\n>\n> ?\n>\n> Do we really need to try prechecking the continuescan flag here?\n>\n> And the current \"false\" in the last arg does not match the previous code\n> before 06b10f80ba\n> and the current comment above.\n>\n> Would be very grateful for clarification.\n>\n> With the best regards!\n>\n> --\n> Anton A. Melnikov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>", "msg_date": "Fri, 22 Mar 2024 12:02:49 +0400", "msg_from": "Pavel Borisov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On 22.03.2024 11:02, Pavel Borisov wrote:\n> \n> Attached is a fix.\n\nThanks!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 22 Mar 2024 11:14:48 +0300", "msg_from": "\"Anton A. Melnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "I've noticed this patch and had a quick look at it. As far as I\nunderstand, this bug\ndoes not lead to an incorrect matching, resulting only in degradation in\nspeed.\nAnyway, consider this patch useful, hope it will be committed soon.\n\n-- \nBest regards,\nMaxim Orlov.\n\nI've noticed this patch and had a quick look at it.  As far as I understand, this bug does not lead to an incorrect matching, resulting only in degradation in speed.  Anyway, consider this patch useful, hope it will be committed soon.-- Best regards,Maxim Orlov.", "msg_date": "Fri, 22 Mar 2024 12:58:28 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" }, { "msg_contents": "On Fri, Mar 22, 2024 at 11:58 AM Maxim Orlov <[email protected]> wrote:\n> I've noticed this patch and had a quick look at it. 
As far as I understand, this bug\n> does not lead to an incorrect matching, resulting only in degradation in speed.\n> Anyway, consider this patch useful, hope it will be committed soon.\n\nPushed.\nThanks to Maxim and Pavel.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:26:47 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug in nbtree optimization to skip > operator comparisons (or <\n comparisons in backwards scans)" } ]
[ { "msg_contents": "hi.\n\nstatic void\nExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate);\n\nnot declared in src/backend/executor/nodeModifyTable.c.\ndo we need to add the declaration?\n\n\n", "msg_date": "Wed, 6 Dec 2023 09:50:02 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "ExecSetupTransitionCaptureState not declared in nodeModifyTable.c" }, { "msg_contents": "jian he <[email protected]> writes:\n> static void\n> ExecSetupTransitionCaptureState(ModifyTableState *mtstate, EState *estate);\n\n> not declared in src/backend/executor/nodeModifyTable.c.\n> do we need to add the declaration?\n\nNot if the compiler's not complaining about it. We don't have a\npolicy requiring all static functions to be forward-declared.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Dec 2023 21:02:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ExecSetupTransitionCaptureState not declared in nodeModifyTable.c" } ]
[ { "msg_contents": "Hi,\n\nWhen testing streaming replication with a physical slot. I found an unexpected\nbehavior that the walsender could use an invalidated physical slot for\nstreaming.\n\nThis occurs when the primary slot is invalidated due to reaching the\nmax_slot_wal_keep_size before initializing the streaming replication\n(e.g. before creating the standby). Attach a draft script(test_invalidated_slot.sh)\nwhich can reproduce this.\n\nOnce the slot is invalidated, it can no longer protect the WALs and\nRows, as these invalidated slots are not considered in functions like\nReplicationSlotsComputeRequiredXmin().\n\nBesides, the walsender could advance the restart_lsn of an invalidated slot,\nthen user won't be able to know that if the slot is actually validated or not,\nbecause the 'conflicting' of view pg_replication_slot could be set back to\nnull.\n\nSo, I think it's a bug and one idea to fix is to check the validity of the physical slot in\nStartReplication() after acquiring the slot like what the attachment does,\nwhat do you think ?\n\nBest Regards,\nHou Zhijie", "msg_date": "Wed, 6 Dec 2023 12:55:29 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Forbid the use of invalidated physical slots in streaming\n replication." }, { "msg_contents": "On Wed, Dec 6, 2023 at 6:25 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Hi,\n>\n> When testing streaming replication with a physical slot. I found an unexpected\n> behavior that the walsender could use an invalidated physical slot for\n> streaming.\n>\n> This occurs when the primary slot is invalidated due to reaching the\n> max_slot_wal_keep_size before initializing the streaming replication\n> (e.g. before creating the standby). Attach a draft script(test_invalidated_slot.sh)\n> which can reproduce this.\n\nInteresting. Thanks for the script. It reproduces the problem for me easily.\n\n>\n> Once the slot is invalidated, it can no longer protect the WALs and\n> Rows, as these invalidated slots are not considered in functions like\n> ReplicationSlotsComputeRequiredXmin().\n>\n> Besides, the walsender could advance the restart_lsn of an invalidated slot,\n> then user won't be able to know that if the slot is actually validated or not,\n> because the 'conflicting' of view pg_replication_slot could be set back to\n> null.\n\nIn this case, since the basebackup was taken after the slot was\ninvalidated, it does not require the WAL that was removed. But it\nseems that once the streaming starts, the slot sprints to life again\nand gets validated again. 
Here's pg_replication_slot output after the\nstandby starts\n#select * from pg_replication_slots ;\n slot_name | plugin | slot_type | datoid | database |\ntemporary | active | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase | conflic\nting\n-------------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------+--------\n-----\n logicalslot | test_decoding | logical | 5 | postgres | f\n | f | | | 739 | |\n0/1513B08 | lost | | f | t\n physical | | physical | | | f\n | t | 341925 | 752 | | 0/404CB78 |\n | unreserved | 16462984 | f |\n(2 rows)\n\nwhich looks quite similar to the output when slot was valid after creation\n slot_name | plugin | slot_type | datoid | database |\ntemporary | active | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase | conflic\nting\n-------------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------+--------\n-----\n logicalslot | test_decoding | logical | 5 | postgres | f\n | f | | | 739 | 0/1513AD0 |\n0/1513B08 | unreserved | -1591888 | f | f\n physical | | physical | | | f\n | f | | | | 0/14F0DF0 |\n | unreserved | -1591888 | f |\n(2 rows)\n\n>\n> So, I think it's a bug and one idea to fix is to check the validity of the physical slot in\n> StartReplication() after acquiring the slot like what the attachment does,\n> what do you think ?\n\nI am not sure whether that's necessarily a bug. Of course, we don't\nexpect invalid slots to be used but in this case I don't see any harm.\nThe invalid slot has been revived and has all the properties set just\nlike a valid slot. So it must be considered in\nReplicationSlotsComputeRequiredXmin() as well. I haven't verified it\nmyself though. In case the WAL is really lost and is requested by the\nstandby it will throw an error \"requested WAL segment [0-9A-F]+ has\nalready been removed\". So no harm there as well.\n\nI haven't been able to locate the code which makes the slot valid\nthough. So I can't say whether the behaviour is intended or not.\nLooking at StartReplication() comment\n/*\n* We don't need to verify the slot's restart_lsn here; instead we\n* rely on the caller requesting the starting point to use. If the\n* WAL segment doesn't exist, we'll fail later.\n*/\nit looks like the behaviour is not completely unexpected.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 7 Dec 2023 17:12:53 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forbid the use of invalidated physical slots in streaming\n replication." }, { "msg_contents": "On Thursday, December 7, 2023 7:43 PM Ashutosh Bapat <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Wed, Dec 6, 2023 at 6:25 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > When testing streaming replication with a physical slot. I found an\r\n> > unexpected behavior that the walsender could use an invalidated\r\n> > physical slot for streaming.\r\n> >\r\n> > This occurs when the primary slot is invalidated due to reaching the\r\n> > max_slot_wal_keep_size before initializing the streaming replication\r\n> > (e.g. before creating the standby). 
Attach a draft\r\n> > script(test_invalidated_slot.sh) which can reproduce this.\r\n> \r\n> Interesting. Thanks for the script. It reproduces the problem for me easily.\r\n\r\nThanks for testing and replying!\r\n\r\n> \r\n> >\r\n> > Once the slot is invalidated, it can no longer protect the WALs and\r\n> > Rows, as these invalidated slots are not considered in functions like\r\n> > ReplicationSlotsComputeRequiredXmin().\r\n> >\r\n> > Besides, the walsender could advance the restart_lsn of an invalidated\r\n> > slot, then user won't be able to know that if the slot is actually\r\n> > validated or not, because the 'conflicting' of view\r\n> > pg_replication_slot could be set back to null.\r\n> \r\n> In this case, since the basebackup was taken after the slot was invalidated, it\r\n> does not require the WAL that was removed. But it seems that once the\r\n> streaming starts, the slot sprints to life again and gets validated again. Here's\r\n> pg_replication_slot output after the standby starts.\r\n\r\nActually, It doesn't bring the invalidated slot back to life completely.\r\nThe slot's view data looks valid while the 'invalidated' flag of this slot is still\r\nRS_INVAL_WAL_REMOVED (user are not aware of it.)\r\n\r\n\r\n> \r\n> >\r\n> > So, I think it's a bug and one idea to fix is to check the validity of\r\n> > the physical slot in\r\n> > StartReplication() after acquiring the slot like what the attachment\r\n> > does, what do you think ?\r\n> \r\n> I am not sure whether that's necessarily a bug. Of course, we don't expect\r\n> invalid slots to be used but in this case I don't see any harm.\r\n> The invalid slot has been revived and has all the properties set just like a valid\r\n> slot. So it must be considered in\r\n> ReplicationSlotsComputeRequiredXmin() as well. I haven't verified it myself\r\n> though. In case the WAL is really lost and is requested by the standby it will\r\n> throw an error \"requested WAL segment [0-9A-F]+ has already been\r\n> removed\". So no harm there as well.\r\n\r\nSince the 'invalidated' field of the slot is still (RS_INVAL_WAL_REMOVED), even\r\nif the walsender advances the restart_lsn, the slot will not be considered in the\r\nReplicationSlotsComputeRequiredXmin(), so the WALs and Rows are not safe\r\nand that's why I think it's a bug.\r\n\r\nAfter looking closer, it seems this behavior started from 15f8203 which introduced the\r\nReplicationSlotInvalidationCause 'invalidated', after that we check the invalidated enum\r\nin Xmin/Lsn computation function.\r\n\r\nIf we want to go back to previous behavior, we need to revert/adjust the check\r\nfor invalidated in ReplicationSlotsComputeRequiredXmin(), but since the logical\r\ndecoding(walsender) disallow using invalidated slot, so I feel it's consistent\r\nto do similar check for physical one. Besides, pg_replication_slot_advance()\r\nalso disallow passing invalidated slot to it as well. What do you think ?\r\n\r\nBest Regards,\r\nHou zj \r\n", "msg_date": "Thu, 7 Dec 2023 13:00:35 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Forbid the use of invalidated physical slots in streaming\n replication." }, { "msg_contents": "> > > pg_replication_slot could be set back to null.\n> >\n> > In this case, since the basebackup was taken after the slot was invalidated, it\n> > does not require the WAL that was removed. But it seems that once the\n> > streaming starts, the slot sprints to life again and gets validated again. 
Here's\n> > pg_replication_slot output after the standby starts.\n>\n> Actually, It doesn't bring the invalidated slot back to life completely.\n> The slot's view data looks valid while the 'invalidated' flag of this slot is still\n> RS_INVAL_WAL_REMOVED (user are not aware of it.)\n>\n\nI was mislead by the code in pg_get_replication_slots(). I did not\nread it till the following\n\n--- code ----\ncase WALAVAIL_REMOVED:\n\n/*\n* If we read the restart_lsn long enough ago, maybe that file\n* has been removed by now. However, the walsender could have\n* moved forward enough that it jumped to another file after\n* we looked. If checkpointer signalled the process to\n* termination, then it's definitely lost; but if a process is\n* still alive, then \"unreserved\" seems more appropriate.\n*\n* If we do change it, save the state for safe_wal_size below.\n*/\n--- code ---\n\nI see now how an invalid slot's wal status can be reported as\nunreserved. So I think it's a bug.\n\n>\n> >\n> > >\n> > > So, I think it's a bug and one idea to fix is to check the validity of\n> > > the physical slot in\n> > > StartReplication() after acquiring the slot like what the attachment\n> > > does, what do you think ?\n> >\n> > I am not sure whether that's necessarily a bug. Of course, we don't expect\n> > invalid slots to be used but in this case I don't see any harm.\n> > The invalid slot has been revived and has all the properties set just like a valid\n> > slot. So it must be considered in\n> > ReplicationSlotsComputeRequiredXmin() as well. I haven't verified it myself\n> > though. In case the WAL is really lost and is requested by the standby it will\n> > throw an error \"requested WAL segment [0-9A-F]+ has already been\n> > removed\". So no harm there as well.\n>\n> Since the 'invalidated' field of the slot is still (RS_INVAL_WAL_REMOVED), even\n> if the walsender advances the restart_lsn, the slot will not be considered in the\n> ReplicationSlotsComputeRequiredXmin(), so the WALs and Rows are not safe\n> and that's why I think it's a bug.\n>\n> After looking closer, it seems this behavior started from 15f8203 which introduced the\n> ReplicationSlotInvalidationCause 'invalidated', after that we check the invalidated enum\n> in Xmin/Lsn computation function.\n>\n> If we want to go back to previous behavior, we need to revert/adjust the check\n> for invalidated in ReplicationSlotsComputeRequiredXmin(), but since the logical\n> decoding(walsender) disallow using invalidated slot, so I feel it's consistent\n> to do similar check for physical one. Besides, pg_replication_slot_advance()\n> also disallow passing invalidated slot to it as well. What do you think ?\n\nWhat happens if you run your script on a build prior to 15f8203?\nPurely from reading the code, it looks like the physical slot would\nsprint back to life since its restart_lsn would be updated. But I am\nnot able to see what happens to invalidated_at. It probably remains a\nvalid LSN and the slot would still not be considred in xmin\ncalculation. 
It will be good to be compatible to pre-15f8203\nbehaviour.\n\nI think making logical and physical slot behaviour consistent would be\nbetter but if the inconsistent behaviour is out there for some\nreleases, changing that now will break backward compatibility.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 8 Dec 2023 19:14:38 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forbid the use of invalidated physical slots in streaming\n replication." }, { "msg_contents": "On Fri, 8 Dec 2023 at 19:15, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> > > > pg_replication_slot could be set back to null.\n> > >\n> > > In this case, since the basebackup was taken after the slot was invalidated, it\n> > > does not require the WAL that was removed. But it seems that once the\n> > > streaming starts, the slot sprints to life again and gets validated again. Here's\n> > > pg_replication_slot output after the standby starts.\n> >\n> > Actually, It doesn't bring the invalidated slot back to life completely.\n> > The slot's view data looks valid while the 'invalidated' flag of this slot is still\n> > RS_INVAL_WAL_REMOVED (user are not aware of it.)\n> >\n>\n> I was mislead by the code in pg_get_replication_slots(). I did not\n> read it till the following\n>\n> --- code ----\n> case WALAVAIL_REMOVED:\n>\n> /*\n> * If we read the restart_lsn long enough ago, maybe that file\n> * has been removed by now. However, the walsender could have\n> * moved forward enough that it jumped to another file after\n> * we looked. If checkpointer signalled the process to\n> * termination, then it's definitely lost; but if a process is\n> * still alive, then \"unreserved\" seems more appropriate.\n> *\n> * If we do change it, save the state for safe_wal_size below.\n> */\n> --- code ---\n>\n> I see now how an invalid slot's wal status can be reported as\n> unreserved. So I think it's a bug.\n>\n> >\n> > >\n> > > >\n> > > > So, I think it's a bug and one idea to fix is to check the validity of\n> > > > the physical slot in\n> > > > StartReplication() after acquiring the slot like what the attachment\n> > > > does, what do you think ?\n> > >\n> > > I am not sure whether that's necessarily a bug. Of course, we don't expect\n> > > invalid slots to be used but in this case I don't see any harm.\n> > > The invalid slot has been revived and has all the properties set just like a valid\n> > > slot. So it must be considered in\n> > > ReplicationSlotsComputeRequiredXmin() as well. I haven't verified it myself\n> > > though. In case the WAL is really lost and is requested by the standby it will\n> > > throw an error \"requested WAL segment [0-9A-F]+ has already been\n> > > removed\". 
So no harm there as well.\n> >\n> > Since the 'invalidated' field of the slot is still (RS_INVAL_WAL_REMOVED), even\n> > if the walsender advances the restart_lsn, the slot will not be considered in the\n> > ReplicationSlotsComputeRequiredXmin(), so the WALs and Rows are not safe\n> > and that's why I think it's a bug.\n> >\n> > After looking closer, it seems this behavior started from 15f8203 which introduced the\n> > ReplicationSlotInvalidationCause 'invalidated', after that we check the invalidated enum\n> > in Xmin/Lsn computation function.\n> >\n> > If we want to go back to previous behavior, we need to revert/adjust the check\n> > for invalidated in ReplicationSlotsComputeRequiredXmin(), but since the logical\n> > decoding(walsender) disallow using invalidated slot, so I feel it's consistent\n> > to do similar check for physical one. Besides, pg_replication_slot_advance()\n> > also disallow passing invalidated slot to it as well. What do you think ?\n>\n> What happens if you run your script on a build prior to 15f8203?\n> Purely from reading the code, it looks like the physical slot would\n> sprint back to life since its restart_lsn would be updated. But I am\n> not able to see what happens to invalidated_at. It probably remains a\n> valid LSN and the slot would still not be considred in xmin\n> calculation. It will be good to be compatible to pre-15f8203\n> behaviour.\n\nI have changed the patch status to \"Waiting on Author\", as some of the\nqueries requested by Ashutosh are not yet addressed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 8 Jan 2024 10:25:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forbid the use of invalidated physical slots in streaming\n replication." }, { "msg_contents": "On Thu, Dec 7, 2023 at 8:00 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n> After looking closer, it seems this behavior started from 15f8203 which introduced the\n> ReplicationSlotInvalidationCause 'invalidated', after that we check the invalidated enum\n> in Xmin/Lsn computation function.\n\nAdding Andres in Cc, as that was his commit.\n\nIt's not entirely clear to me how this feature was intended to\ninteract with physical replication slots. I found seemingly-relevant\ndocumentation in two places:\n\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE\nhttps://www.postgresql.org/docs/current/view-pg-replication-slots.html\n\nIn the latter, it says \"unreserved means that the slot no longer\nretains the required WAL files and some of them are to be removed at\nthe next checkpoint. This state can return to reserved or extended.\"\nBut if a slot becomes invalid in such a way that it cannot return to a\nvalid state later, then this isn't accurate.\n\nI have a feeling that the actual behavior here has evolved and the\ndocumentation hasn't kept up. And I wonder whether we need a more\ncomprehensive explanation of the intended behavior in this section:\n\nhttps://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 15:17:43 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forbid the use of invalidated physical slots in streaming\n replication." 
}, { "msg_contents": "On Mon, 8 Jan 2024 at 10:25, vignesh C <[email protected]> wrote:\n>\n> On Fri, 8 Dec 2023 at 19:15, Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > > > > pg_replication_slot could be set back to null.\n> > > >\n> > > > In this case, since the basebackup was taken after the slot was invalidated, it\n> > > > does not require the WAL that was removed. But it seems that once the\n> > > > streaming starts, the slot sprints to life again and gets validated again. Here's\n> > > > pg_replication_slot output after the standby starts.\n> > >\n> > > Actually, It doesn't bring the invalidated slot back to life completely.\n> > > The slot's view data looks valid while the 'invalidated' flag of this slot is still\n> > > RS_INVAL_WAL_REMOVED (user are not aware of it.)\n> > >\n> >\n> > I was mislead by the code in pg_get_replication_slots(). I did not\n> > read it till the following\n> >\n> > --- code ----\n> > case WALAVAIL_REMOVED:\n> >\n> > /*\n> > * If we read the restart_lsn long enough ago, maybe that file\n> > * has been removed by now. However, the walsender could have\n> > * moved forward enough that it jumped to another file after\n> > * we looked. If checkpointer signalled the process to\n> > * termination, then it's definitely lost; but if a process is\n> > * still alive, then \"unreserved\" seems more appropriate.\n> > *\n> > * If we do change it, save the state for safe_wal_size below.\n> > */\n> > --- code ---\n> >\n> > I see now how an invalid slot's wal status can be reported as\n> > unreserved. So I think it's a bug.\n> >\n> > >\n> > > >\n> > > > >\n> > > > > So, I think it's a bug and one idea to fix is to check the validity of\n> > > > > the physical slot in\n> > > > > StartReplication() after acquiring the slot like what the attachment\n> > > > > does, what do you think ?\n> > > >\n> > > > I am not sure whether that's necessarily a bug. Of course, we don't expect\n> > > > invalid slots to be used but in this case I don't see any harm.\n> > > > The invalid slot has been revived and has all the properties set just like a valid\n> > > > slot. So it must be considered in\n> > > > ReplicationSlotsComputeRequiredXmin() as well. I haven't verified it myself\n> > > > though. In case the WAL is really lost and is requested by the standby it will\n> > > > throw an error \"requested WAL segment [0-9A-F]+ has already been\n> > > > removed\". So no harm there as well.\n> > >\n> > > Since the 'invalidated' field of the slot is still (RS_INVAL_WAL_REMOVED), even\n> > > if the walsender advances the restart_lsn, the slot will not be considered in the\n> > > ReplicationSlotsComputeRequiredXmin(), so the WALs and Rows are not safe\n> > > and that's why I think it's a bug.\n> > >\n> > > After looking closer, it seems this behavior started from 15f8203 which introduced the\n> > > ReplicationSlotInvalidationCause 'invalidated', after that we check the invalidated enum\n> > > in Xmin/Lsn computation function.\n> > >\n> > > If we want to go back to previous behavior, we need to revert/adjust the check\n> > > for invalidated in ReplicationSlotsComputeRequiredXmin(), but since the logical\n> > > decoding(walsender) disallow using invalidated slot, so I feel it's consistent\n> > > to do similar check for physical one. Besides, pg_replication_slot_advance()\n> > > also disallow passing invalidated slot to it as well. 
What do you think ?\n> >\n> > What happens if you run your script on a build prior to 15f8203?\n> > Purely from reading the code, it looks like the physical slot would\n> > sprint back to life since its restart_lsn would be updated. But I am\n> > not able to see what happens to invalidated_at. It probably remains a\n> > valid LSN and the slot would still not be considred in xmin\n> > calculation. It will be good to be compatible to pre-15f8203\n> > behaviour.\n>\n> I have changed the patch status to \"Waiting on Author\", as some of the\n> queries requested by Ashutosh are not yet addressed.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 21:48:38 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forbid the use of invalidated physical slots in streaming\n replication." } ]
[ { "msg_contents": "Hi,\n\nThere is an ongoing thread [1] for adding missing SQL error codes to\nPANIC and FATAL error reports in xlogrecovery.c file. I did the same\nbut for xlog.c and relcache.c files.\n\nI couldn't find a suitable error code for the \"cache lookup failed for\nrelation\" error in relcache.c and this error comes up in many places.\nWould it be reasonable to create a new error code specifically for\nthis?\n\nAny kind of feedback would be appreciated.\n\n[1] https://www.postgresql.org/message-id/CAPMWgZ8g17Myb5ZRE5aTNowUohafk4j48mZ_5_Zn9JnR5p2u0w%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 6 Dec 2023 16:03:57 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Add missing error codes to PANIC/FATAL error reports in xlog.c and\n relcache.c" }, { "msg_contents": "> On 6 Dec 2023, at 14:03, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> There is an ongoing thread [1] for adding missing SQL error codes to\n> PANIC and FATAL error reports in xlogrecovery.c file. I did the same\n> but for xlog.c and relcache.c files.\n\n-\telog(PANIC, \"space reserved for WAL record does not match what was written\");\n+\tereport(PANIC,\n+\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n+\t\t\t errmsg(\"space reserved for WAL record does not match what was written\")));\n\nelogs turned into ereports should use errmsg_internal() to keep the strings\nfrom being translated.\n\n-\telog(FATAL, \"could not write init file\");\n+\tereport(FATAL,\n+\t\t\t(errcode_for_file_access(),\n+\t\t\t errmsg(\"could not write init file\")));\n\nIs it worthwhile adding %m on these to get a little more help when debugging\nerrors that shouldn't happen?\n\n-\telog(FATAL, \"could not write init file\");\n+\tereport(FATAL,\n+\t\t\t(errcode_for_file_access(),\n\nThe extra parenthesis are no longer needed, I don't know if we have a policy to\nremove them when changing an ereport call but we should at least not introduce\nnew ones.\n\n-\telog(FATAL, \"cannot read pg_class without having selected a database\");\n+\tereport(FATAL,\n+\t\t\t(errcode(ERRCODE_INTERNAL_ERROR),\n\nereport (and thus elog) already defaults to ERRCODE_INTERNAL_ERROR for ERROR or\nhigher, so unless there is a better errcode an elog() call if preferrable here.\n\n> I couldn't find a suitable error code for the \"cache lookup failed for\n> relation\" error in relcache.c and this error comes up in many places.\n> Would it be reasonable to create a new error code specifically for\n> this?\n\nWe use ERRCODE_UNDEFINED_OBJECT for similar errors elsewhere, perhaps we can\nuse that for these as well?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 22 Feb 2024 14:55:48 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "Hi,\n\nThanks for the review!\n\nOn Thu, 22 Feb 2024 at 16:55, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 6 Dec 2023, at 14:03, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > There is an ongoing thread [1] for adding missing SQL error codes to\n> > PANIC and FATAL error reports in xlogrecovery.c file. 
I did the same\n> > but for xlog.c and relcache.c files.\n>\n> - elog(PANIC, \"space reserved for WAL record does not match what was written\");\n> + ereport(PANIC,\n> + (errcode(ERRCODE_DATA_CORRUPTED),\n> + errmsg(\"space reserved for WAL record does not match what was written\")));\n>\n> elogs turned into ereports should use errmsg_internal() to keep the strings\n> from being translated.\n\nDoes errmsg_internal() need to be used all the time when turning elogs\ninto ereports? errmsg_internal()'s comment says that \"This should be\nused for \"can't happen\" cases that are probably not worth spending\ntranslation effort on.\". Is it enough to check if the error message\nhas a translation, and then decide the use of errmsg_internal() or\nerrmsg()?\n\n> - elog(FATAL, \"could not write init file\");\n> + ereport(FATAL,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not write init file\")));\n>\n> Is it worthwhile adding %m on these to get a little more help when debugging\n> errors that shouldn't happen?\n\nI believe it is worthwhile, so I will add.\n\n> - elog(FATAL, \"could not write init file\");\n> + ereport(FATAL,\n> + (errcode_for_file_access(),\n>\n> The extra parenthesis are no longer needed, I don't know if we have a policy to\n> remove them when changing an ereport call but we should at least not introduce\n> new ones.\n>\n> - elog(FATAL, \"cannot read pg_class without having selected a database\");\n> + ereport(FATAL,\n> + (errcode(ERRCODE_INTERNAL_ERROR),\n>\n> ereport (and thus elog) already defaults to ERRCODE_INTERNAL_ERROR for ERROR or\n> higher, so unless there is a better errcode an elog() call if preferrable here.\n\nI did not know these, thanks.\n\n> > I couldn't find a suitable error code for the \"cache lookup failed for\n> > relation\" error in relcache.c and this error comes up in many places.\n> > Would it be reasonable to create a new error code specifically for\n> > this?\n>\n> We use ERRCODE_UNDEFINED_OBJECT for similar errors elsewhere, perhaps we can\n> use that for these as well?\n\nIt looks okay to me, ERRCODE_UNDEFINED_OBJECT is mostly used for the\n'not exist' errors and it seems the main reason for the 'cache lookup\nfailed for relation' error is because heap tuple does not exist\nanymore.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 23 Feb 2024 15:09:26 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "> On 23 Feb 2024, at 13:09, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> Does errmsg_internal() need to be used all the time when turning elogs\n> into ereports? errmsg_internal()'s comment says that \"This should be\n> used for \"can't happen\" cases that are probably not worth spending\n> translation effort on.\". Is it enough to check if the error message\n> has a translation, and then decide the use of errmsg_internal() or\n> errmsg()?\n\nIf it's an elog then it won't have a translation as none are included in the\ntranslation set. 
If the errmsg is generic enough to be translated anyways via\nanother (un)related ereport call then we of course use that, but ideally not\ncreate new ones.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 23 Feb 2024 13:33:07 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "Hi,\n\nOn Fri, 23 Feb 2024 at 15:34, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 23 Feb 2024, at 13:09, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > Does errmsg_internal() need to be used all the time when turning elogs\n> > into ereports? errmsg_internal()'s comment says that \"This should be\n> > used for \"can't happen\" cases that are probably not worth spending\n> > translation effort on.\". Is it enough to check if the error message\n> > has a translation, and then decide the use of errmsg_internal() or\n> > errmsg()?\n>\n> If it's an elog then it won't have a translation as none are included in the\n> translation set. If the errmsg is generic enough to be translated anyways via\n> another (un)related ereport call then we of course use that, but ideally not\n> create new ones.\n\nThanks for the explanation.\n\nAll of your feedback is addressed in v2.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 26 Feb 2024 15:42:33 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "> On 26 Feb 2024, at 13:42, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> All of your feedback is addressed in v2.\n\nNothing sticks out from reading through these patches, they seem quite ready to\nme. Being able to filter and analyze on errorcodes is likely to be more\nimportant going forward as more are running fleets of instances. I'm marking\nthese Ready for Committer, unless there are objections I think we should go\nahead with these. There are probably more errors in the system which could\nbenefit from the same treatment, but we need to start somewhere.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 6 Mar 2024 09:59:24 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "> On 6 Mar 2024, at 09:59, Daniel Gustafsson <[email protected]> wrote:\n\n> Nothing sticks out from reading through these patches, they seem quite ready to\n> me.\n\nTook another look at this today and committed it. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 11:11:31 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" }, { "msg_contents": "Hi,\n\nOn Wed, 3 Apr 2024 at 12:11, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 6 Mar 2024, at 09:59, Daniel Gustafsson <[email protected]> wrote:\n>\n> > Nothing sticks out from reading through these patches, they seem quite ready to\n> > me.\n>\n> Took another look at this today and committed it. 
Thanks!\n\nThanks for the commit!\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 3 Apr 2024 15:12:00 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add missing error codes to PANIC/FATAL error reports in xlog.c\n and relcache.c" } ]
[ { "msg_contents": "I have been looking into what it would take to get rid of the \ncustom_read_write and custom_query_jumble for the RangeTblEntry node \ntype. This is one of the larger and more complex exceptions left.\n\n(Similar considerations would also apply to the Constraint node type.)\n\nAllegedly, only certain fields of RangeTblEntry are valid based on \nrtekind. But exactly which ones seems to be documented and handled \ninconsistently. It seems that over time, new RTE kinds have \"borrowed\" \nfields that notionally belong to other RTE kinds, which is technically \nnot a problem but creates a bit of a mess when trying to understand all \nthis.\n\nI have some WIP patches to accompany this discussion.\n\nLet's start with the jumble function. I suppose that this was just \ncarried over from the pg_stat_statements-specific code without any \ndetailed review. For example, the \"inh\" field is documented to be valid \nin all RTEs, but it's only jumbled for RTE_RELATION. The \"lateral\" \nfield isn't looked at at all. I wouldn't be surprised if there are more \ncases like this.\n\nIn the first attached patch, I remove _jumbleRangeTblEntry() and instead \nadd per-field query_jumble_ignore annotations to approximately match the \nbehavior of the previous custom code. The pg_stat_statements test suite \nhas some coverage of this. I get rid of switch on rtekind; this should \nbe technically correct, since we do the equal and copy functions like \nthis also. So for example the \"inh\" field is now considered in each \ncase. But I left \"lateral\" alone. I suspect several of these new \nquery_jumble_ignore should actually be dropped because the code was \nwrong before.\n\nIn the second patch, I'm removing the switch on rtekind from \nrange_table_mutator_impl(). This should be fine because all the \nsubroutines can handle unset/NULL fields. And it removes one more place \nthat needs to track knowledge about which fields are valid when.\n\nIn the third patch, I'm removing the custom read/write functions for \nRangeTblEntry. Those functions wanted to have a few fields at the front \nto make the dump more legible; I'm doing that now by moving the fields \nup in the actual struct.\n\nNot done here, but something we should do is restructure the \ndocumentation of RangeTblEntry itself. I'm still thinking about the \nbest way to structure this, but I'm thinking more like noting for each \nfield when it's used, instead by block like it is now, which makes it \nawkward if a new RTE wants to borrow some fields.\n\nNow one could probably rightfully complain that having all these unused \nfields dumped would make the RangeTblEntry serialization bigger. I'm \nnot sure who big of a problem that actually is, considering how many \noften-unset fields other node types have. But it deserves some \nconsideration. I think the best way to work around that would be to \nhave a mode that omits fields that have their default value (zero). \nThis would be more generally useful; for example Query also has a bunch \nof fields that are not often set. I think this would be pretty easy to \nimplement, for example like\n\n#define WRITE_INT_FIELD(fldname) \\\n if (full_mode || node->fldname) \\\n appendStringInfo(str, \" :\" CppAsString(fldname) \" %d\", \nnode->fldname)\n\nThere is also the discussion over at [0] about larger redesigns of the \nnode serialization format. 
I'm also interested in that, but here I'm \nmainly trying to remove more special cases to make that kind of work \neasier in the future.\n\nAny thoughts about the direction?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CACxu%3DvL_SD%3DWJiFSJyyBuZAp_2v_XBqb1x9JBiqz52a_g9z3jA%40mail.gmail.com", "msg_date": "Wed, 6 Dec 2023 21:02:20 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "automating RangeTblEntry node support" }, { "msg_contents": "On Wed, 6 Dec 2023 at 21:02, Peter Eisentraut <[email protected]> wrote:\n>\n> I have been looking into what it would take to get rid of the\n> custom_read_write and custom_query_jumble for the RangeTblEntry node\n> type. This is one of the larger and more complex exceptions left.\n> [...]\n> Now one could probably rightfully complain that having all these unused\n> fields dumped would make the RangeTblEntry serialization bigger. I'm\n> not sure who big of a problem that actually is, considering how many\n> often-unset fields other node types have. But it deserves some\n> consideration. I think the best way to work around that would be to\n> have a mode that omits fields that have their default value (zero).\n> This would be more generally useful; for example Query also has a bunch\n> of fields that are not often set. I think this would be pretty easy to\n> implement, for example like\n\nActually, I've worked on this last weekend, and got some good results.\nIt did need some fine-tuning and field annotations, but got raw\nnodeToString sizes down 50%+ for the pg_rewrite table's ev_action\ncolumn, and compressed-with-pglz size of pg_rewrite total down 30%+.\n\n> #define WRITE_INT_FIELD(fldname) \\\n> if (full_mode || node->fldname) \\\n> appendStringInfo(str, \" :\" CppAsString(fldname) \" %d\",\n> node->fldname)\n>\n> There is also the discussion over at [0] about larger redesigns of the\n> node serialization format. I'm also interested in that, but here I'm\n> mainly trying to remove more special cases to make that kind of work\n> easier in the future.\n>\n> Any thoughts about the direction?\n\nI've created a new thread [0] with my patch. It actually didn't need\n_that_ many manual changes - most of it was just updating the\ngen_node_support.pl code generation, and making the macros do a good\njob.\n\nIn general I'm all for reducing special cases, so +1 on the idea. I'll\nhave to check the specifics of the patches at a later point in time.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 6 Dec 2023 22:20:03 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On 06.12.23 21:02, Peter Eisentraut wrote:\n> I have been looking into what it would take to get rid of the \n> custom_read_write and custom_query_jumble for the RangeTblEntry node \n> type.  This is one of the larger and more complex exceptions left.\n> \n> (Similar considerations would also apply to the Constraint node type.)\n\nIn this updated patch set, I have also added the treatment of the \nConstraint type. (I also noted that the manual read/write functions for \nthe Constraint type are out-of-sync again, so simplifying this would be \nreally helpful.) 
I have also added commit messages to each patch.\n\nThe way I have re-ordered the patch series now, I think patches 0001 \nthrough 0003 are candidates for inclusion after review, patch 0004 still \nneeds a bit more analysis and testing, as described therein.", "msg_date": "Mon, 15 Jan 2024 11:37:15 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On 1/15/24 02:37, Peter Eisentraut wrote:\n> In this updated patch set, I have also added the treatment of the Constraint type.  (I also noted \n> that the manual read/write functions for the Constraint type are out-of-sync again, so simplifying \n> this would be really helpful.)  I have also added commit messages to each patch.\n> \n> The way I have re-ordered the patch series now, I think patches 0001 through 0003 are candidates for \n> inclusion after review, patch 0004 still needs a bit more analysis and testing, as described therein.\n\nI had to apply the first patch by hand (on 9f13376396), so this looks due for a rebase. Patches 2-4 \napplied fine.\n\nCompiles & passes tests after each patch.\n\nThe overall idea seems like a good improvement to me.\n\nA few remarks about cleaning up the RangeTblEntry comments:\n\nAfter the fourth patch we have a \"Fields valid in all RTEs\" comment twice in the struct, once at the \ntop and once at the bottom. It's fine IMO but maybe the second could be \"More fields valid in all RTEs\"?\n\nThe new order of fields in RangleTblEntry matches the intro comment, which seems like another small \nbenefit.\n\nIt seems like we are moving away from ever putting RTEKind-specific fields into a union as suggested \nby the FIXME comment here. It was written in 2002. Is it time to remove it?\n\nThis now needs to say \"above\" not \"below\":\n\n /*\n * join_using_alias is an alias clause attached directly to JOIN/USING. It\n * is different from the alias field (below) in that it does not hide the\n * range variables of the tables being joined.\n */\n Alias *join_using_alias pg_node_attr(query_jumble_ignore);\n\nRe bloating the serialization output, we could leave this last patch until after the work on that \nother thread is done to skip default-valued items.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Fri, 16 Feb 2024 12:36:24 -0800", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On Fri, 16 Feb 2024 at 21:36, Paul Jungwirth\n<[email protected]> wrote:\n>\n> On 1/15/24 02:37, Peter Eisentraut wrote:\n> > In this updated patch set, I have also added the treatment of the Constraint type. (I also noted\n> > that the manual read/write functions for the Constraint type are out-of-sync again, so simplifying\n> > this would be really helpful.) 
I have also added commit messages to each patch.\n> >\n> > The way I have re-ordered the patch series now, I think patches 0001 through 0003 are candidates for\n> > inclusion after review, patch 0004 still needs a bit more analysis and testing, as described therein.\n>\n> Re bloating the serialization output, we could leave this last patch until after the work on that\n> other thread is done to skip default-valued items.\n\nI'm not sure that the cleanup which is done when changing a RTE's\nrtekind is also complete enough for this purpose.\nThings like inline_cte_walker change the node->rtekind, which could\nleave residual junk data in fields that are currently dropped during\nserialization (as the rtekind specifically ignores those fields), but\nwhich would add overhead when the default omission is expected to\nhandle these fields; as they could then contain junk. It looks like\nthere is some care about zeroing now unused fields, but I haven't\nchecked that it covers all cases and fields to the extent required so\nthat removing this specialized serializer would have zero impact on\nsize once the default omission patch is committed.\n\nAn additional patch with a single function that for this purpose\nclears junk fields from RTEs that changed kind would be appreciated:\nit is often hand-coded at those locations the kind changes, but that's\nmore sensitive to programmer error.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Sun, 18 Feb 2024 00:06:19 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On 18.02.24 00:06, Matthias van de Meent wrote:\n> I'm not sure that the cleanup which is done when changing a RTE's\n> rtekind is also complete enough for this purpose.\n> Things like inline_cte_walker change the node->rtekind, which could\n> leave residual junk data in fields that are currently dropped during\n> serialization (as the rtekind specifically ignores those fields), but\n> which would add overhead when the default omission is expected to\n> handle these fields; as they could then contain junk. It looks like\n> there is some care about zeroing now unused fields, but I haven't\n> checked that it covers all cases and fields to the extent required so\n> that removing this specialized serializer would have zero impact on\n> size once the default omission patch is committed.\n> \n> An additional patch with a single function that for this purpose\n> clears junk fields from RTEs that changed kind would be appreciated:\n> it is often hand-coded at those locations the kind changes, but that's\n> more sensitive to programmer error.\n\nYes, interesting idea. Or maybe an assert-like function that checks an \nexisting structure for consistency. Or maybe both. 
I'll try this out.\n\nIn the meantime, if there are no remaining concerns, I propose to commit \nthe first two patches\n\nRemove custom Constraint node read/write implementations\nRemove custom _jumbleRangeTblEntry()\n\n\n\n", "msg_date": "Tue, 20 Feb 2024 08:57:25 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On 20.02.24 08:57, Peter Eisentraut wrote:\n> On 18.02.24 00:06, Matthias van de Meent wrote:\n>> I'm not sure that the cleanup which is done when changing a RTE's\n>> rtekind is also complete enough for this purpose.\n>> Things like inline_cte_walker change the node->rtekind, which could\n>> leave residual junk data in fields that are currently dropped during\n>> serialization (as the rtekind specifically ignores those fields), but\n>> which would add overhead when the default omission is expected to\n>> handle these fields; as they could then contain junk. It looks like\n>> there is some care about zeroing now unused fields, but I haven't\n>> checked that it covers all cases and fields to the extent required so\n>> that removing this specialized serializer would have zero impact on\n>> size once the default omission patch is committed.\n>>\n>> An additional patch with a single function that for this purpose\n>> clears junk fields from RTEs that changed kind would be appreciated:\n>> it is often hand-coded at those locations the kind changes, but that's\n>> more sensitive to programmer error.\n> \n> Yes, interesting idea.  Or maybe an assert-like function that checks an \n> existing structure for consistency.  Or maybe both.  I'll try this out.\n> \n> In the meantime, if there are no remaining concerns, I propose to commit \n> the first two patches\n> \n> Remove custom Constraint node read/write implementations\n> Remove custom _jumbleRangeTblEntry()\n\nAfter a few side quests, here is an updated patch set. (I had committed \nthe first of the two patches mentioned above, but not yet the second one.)\n\nv3-0001-Remove-obsolete-comment.patch\nv3-0002-Improve-comment.patch\n\nThese just update a few comments around the RangeTblEntry definition.\n\nv3-0003-Reformat-some-node-comments.patch\nv3-0004-Remove-custom-_jumbleRangeTblEntry.patch\n\nThis is pretty much the same patch as before. I have now split it up to \nfirst reformat the comments to make room for the node annotations. This \npatch is now also pgindent-proof. After some side quest discussions, \nthe set of fields to jumble seems correct now, so commit message \ncomments to the contrary have been dropped.\n\nv3-0005-Make-RangeTblEntry-dump-order-consistent.patch\n\nI separated that from the 0008 patch below. I think it useful even if \nwe don't go ahead with 0008 now, for example in dumps from the debugger, \nand just in general to keep everything more consistent.\n\nv3-0006-WIP-AssertRangeTblEntryIsValid.patch\n\nThis is in response to some of the discussions where there was some \ndoubt whether all fields are always filled and cleared correctly when \nthe RTE kind is changed. Seems correct as far as this goes. I didn't \nknow of a good way to hook this in, so I put it into the write/read \nfunctions, which is obviously a bit weird if I'm proposing to remove \nthem later. 
Consider it a proof of concept.\n\nv3-0007-Simplify-range_table_mutator_impl-and-range_table.patch\nv3-0008-WIP-Remove-custom-RangeTblEntry-node-read-write-i.patch\n\nAt this point, I'm not too stressed about pressing forward with these \nlast two patches. We can look at them again perhaps if we make progress \non a more compact node output format. When I started this thread, I had \na lot of questions about various details about the RangeTblEntry struct, \nand we have achieved many answers during the discussions, so I'm happy \nwith the progress. So for PG17, I'd like to just do patches 0001..0005.", "msg_date": "Mon, 11 Mar 2024 10:29:38 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On Mon, Mar 11, 2024 at 5:29 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 20.02.24 08:57, Peter Eisentraut wrote:\n> > On 18.02.24 00:06, Matthias van de Meent wrote:\n> >> I'm not sure that the cleanup which is done when changing a RTE's\n> >> rtekind is also complete enough for this purpose.\n> >> Things like inline_cte_walker change the node->rtekind, which could\n> >> leave residual junk data in fields that are currently dropped during\n> >> serialization (as the rtekind specifically ignores those fields), but\n> >> which would add overhead when the default omission is expected to\n> >> handle these fields; as they could then contain junk. It looks like\n> >> there is some care about zeroing now unused fields, but I haven't\n> >> checked that it covers all cases and fields to the extent required so\n> >> that removing this specialized serializer would have zero impact on\n> >> size once the default omission patch is committed.\n> >>\n> >> An additional patch with a single function that for this purpose\n> >> clears junk fields from RTEs that changed kind would be appreciated:\n> >> it is often hand-coded at those locations the kind changes, but that's\n> >> more sensitive to programmer error.\n> >\n> > Yes, interesting idea. Or maybe an assert-like function that checks an\n> > existing structure for consistency. Or maybe both. I'll try this out.\n> >\n> > In the meantime, if there are no remaining concerns, I propose to commit\n> > the first two patches\n> >\n> > Remove custom Constraint node read/write implementations\n> > Remove custom _jumbleRangeTblEntry()\n>\n> After a few side quests, here is an updated patch set. (I had committed\n> the first of the two patches mentioned above, but not yet the second one.)\n>\n> v3-0001-Remove-obsolete-comment.patch\n> v3-0002-Improve-comment.patch\n>\n> These just update a few comments around the RangeTblEntry definition.\n>\n> v3-0003-Reformat-some-node-comments.patch\n> v3-0004-Remove-custom-_jumbleRangeTblEntry.patch\n>\n> This is pretty much the same patch as before. I have now split it up to\n> first reformat the comments to make room for the node annotations. This\n> patch is now also pgindent-proof. After some side quest discussions,\n> the set of fields to jumble seems correct now, so commit message\n> comments to the contrary have been dropped.\n>\n> v3-0005-Make-RangeTblEntry-dump-order-consistent.patch\n>\n> I separated that from the 0008 patch below. 
I think it useful even if\n> we don't go ahead with 0008 now, for example in dumps from the debugger,\n> and just in general to keep everything more consistent.\n>\n> v3-0006-WIP-AssertRangeTblEntryIsValid.patch\n>\n> This is in response to some of the discussions where there was some\n> doubt whether all fields are always filled and cleared correctly when\n> the RTE kind is changed. Seems correct as far as this goes. I didn't\n> know of a good way to hook this in, so I put it into the write/read\n> functions, which is obviously a bit weird if I'm proposing to remove\n> them later. Consider it a proof of concept.\n>\n> v3-0007-Simplify-range_table_mutator_impl-and-range_table.patch\n> v3-0008-WIP-Remove-custom-RangeTblEntry-node-read-write-i.patch\n>\n> At this point, I'm not too stressed about pressing forward with these\n> last two patches. We can look at them again perhaps if we make progress\n> on a more compact node output format. When I started this thread, I had\n> a lot of questions about various details about the RangeTblEntry struct,\n> and we have achieved many answers during the discussions, so I'm happy\n> with the progress. So for PG17, I'd like to just do patches 0001..0005.\n>\n\n\nPatches 1 thru 5 look good to me\n\ncheers\n\nandrew\n\n
", "msg_date": "Thu, 21 Mar 2024 05:51:18 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automating RangeTblEntry node support" }, { "msg_contents": "On 21.03.24 10:51, Andrew Dunstan wrote:\n> At this point, I'm not too stressed about pressing forward with these\n> last two patches.  We can look at them again perhaps if we make\n> progress\n> on a more compact node output format.  When I started this thread, I\n> had\n> a lot of questions about various details about the RangeTblEntry\n> struct,\n> and we have achieved many answers during the discussions, so I'm happy\n> with the progress.  So for PG17, I'd like to just do patches 0001..0005.\n> \n> Patches 1 thru 5 look good to me\n\nThanks for checking.  I have committed these (1 through 5) and will \nclose the commit fest entry.\n\n\n", "msg_date": "Fri, 22 Mar 2024 07:49:51 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automating RangeTblEntry node support" } ]
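The thread above repeatedly refers to an assert-like consistency check for RangeTblEntry (the v3-0006-WIP-AssertRangeTblEntryIsValid.patch attachment), but the attachment itself is not reproduced in this archive. Purely as an illustrative sketch, and not the actual contents of that patch, such a check could look roughly like the following, assuming the RangeTblEntry field names from PostgreSQL's parsenodes.h; the idea is that fields owned by other rtekinds must still be at their zero/NIL defaults, which is also what default-omitting serialization would depend on.

/*
 * Illustrative sketch only: a consistency check along the lines of the
 * AssertRangeTblEntryIsValid idea discussed above.  Field names are
 * assumed from PostgreSQL's parsenodes.h; the real WIP patch may differ.
 */
static void
AssertRangeTblEntryIsValid(RangeTblEntry *rte)
{
#ifdef USE_ASSERT_CHECKING
	/* Fields owned by other rtekinds should still be at their defaults. */
	if (rte->rtekind != RTE_RELATION)
		Assert(rte->tablesample == NULL);
	if (rte->rtekind != RTE_SUBQUERY)
		Assert(rte->subquery == NULL);
	if (rte->rtekind != RTE_JOIN)
	{
		Assert(rte->joinaliasvars == NIL);
		Assert(rte->joinleftcols == NIL);
		Assert(rte->joinrightcols == NIL);
	}
	if (rte->rtekind != RTE_FUNCTION)
		Assert(rte->functions == NIL);
	if (rte->rtekind != RTE_TABLEFUNC)
		Assert(rte->tablefunc == NULL);
	if (rte->rtekind != RTE_VALUES)
		Assert(rte->values_lists == NIL);
	if (rte->rtekind != RTE_CTE)
		Assert(rte->ctename == NULL);
#endif
}

A check along these lines could be called wherever code changes an existing RTE's rtekind (for example after inlining a CTE), rather than only from the node read/write functions.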
[ { "msg_contents": "Hi,\n\nPFA a patch that reduces the output size of nodeToString by 50%+ in\nmost cases (measured on pg_rewrite), which on my system reduces the\ntotal size of pg_rewrite by 33% to 472KiB. This does keep the textual\npg_node_tree format alive, but reduces its size signficantly.\n\nThe basic techniques used are\n - Don't emit scalar fields when they contain a default value, and\nmake the reading code aware of this.\n - Reasonable defaults are set for most datatypes, and overrides can\nbe added with new pg_node_attr() attributes. No introspection into\nnon-null Node/Array/etc. is being done though.\n - Reset more fields to their default values before storing the values.\n - Don't write trailing 0s in outDatum calls for by-ref types. This\nsaves many bytes for Name fields, but also some other pre-existing\nentry points.\n\nFuture work will probably have to be on a significantly different\nstorage format, as the textual format is about to hit its entropy\nlimits.\n\nSee also [0], [1] and [2], where complaints about the verbosity of\nnodeToString were vocalized.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2WgGexDM63dOvndLdAWwA6uSmSsc97jmrCuNmrF1JEDK7w%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CACxu%3DvL_SD%3DWJiFSJyyBuZAp_2v_XBqb1x9JBiqz52a_g9z3jA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/4b27fc50-8cd6-46f5-ab20-88dbaadca645%40eisentraut.org", "msg_date": "Wed, 6 Dec 2023 22:08:38 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Reducing output size of nodeToString" }, { "msg_contents": "On 06.12.23 22:08, Matthias van de Meent wrote:\n> PFA a patch that reduces the output size of nodeToString by 50%+ in\n> most cases (measured on pg_rewrite), which on my system reduces the\n> total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> pg_node_tree format alive, but reduces its size signficantly.\n> \n> The basic techniques used are\n> - Don't emit scalar fields when they contain a default value, and\n> make the reading code aware of this.\n> - Reasonable defaults are set for most datatypes, and overrides can\n> be added with new pg_node_attr() attributes. No introspection into\n> non-null Node/Array/etc. is being done though.\n> - Reset more fields to their default values before storing the values.\n> - Don't write trailing 0s in outDatum calls for by-ref types. This\n> saves many bytes for Name fields, but also some other pre-existing\n> entry points.\n> \n> Future work will probably have to be on a significantly different\n> storage format, as the textual format is about to hit its entropy\n> limits.\n\nOne thing that was mentioned repeatedly is that we might want different \nformats for human consumption and for machine storage.\n\nFor human consumption, I would like some format like what you propose, \nbecause it generally omits the \"unset\" or \"uninteresting\" fields.\n\nBut since you also talk about the size of pg_rewrite, I wonder whether \nit would be smaller if we just didn't write the field names at all but \ninstead all the field values. (This should be pretty easy to test, \nsince the read functions currently ignore the field names anyway; you \ncould just write out all field names as \"x\" and see what happens.)\n\nI don't much like the way your patch uses the term \"default\". Most of \nthese default values are not defaults at all, but perhaps \"most common \nvalues\". 
In theory, I would expect a default value to be initialized by \nmakeNode(). (That could be an interesting feature, but let's stay \nfocused here.) But even then most of these \"defaults\" wouldn't be \nappropriate for a real default value. This part seems quite \ncontroversial to me, and I would like to see some more details about how \nmuch this specifically really saves.\n\nI don't quite understand why in your patch you have some fields as \noptional and some not. Or is that what WRITE_NODE_FIELD() vs. \nWRITE_NODE_FIELD_OPT() means? How is it decided which one to use?\n\nThe part that clears out the location fields in pg_rewrite entries might \nbe worth considering as a separate patch. Could you explain it more? \nDoes it affect location pointers when using views at all?\n\n\n\n", "msg_date": "Thu, 7 Dec 2023 11:26:10 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 7 Dec 2023 at 11:26, Peter Eisentraut <[email protected]> wrote:\n>\n> On 06.12.23 22:08, Matthias van de Meent wrote:\n> > PFA a patch that reduces the output size of nodeToString by 50%+ in\n> > most cases (measured on pg_rewrite), which on my system reduces the\n> > total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> > pg_node_tree format alive, but reduces its size signficantly.\n> >\n> > The basic techniques used are\n> > - Don't emit scalar fields when they contain a default value, and\n> > make the reading code aware of this.\n> > - Reasonable defaults are set for most datatypes, and overrides can\n> > be added with new pg_node_attr() attributes. No introspection into\n> > non-null Node/Array/etc. is being done though.\n> > - Reset more fields to their default values before storing the values.\n> > - Don't write trailing 0s in outDatum calls for by-ref types. This\n> > saves many bytes for Name fields, but also some other pre-existing\n> > entry points.\n> >\n> > Future work will probably have to be on a significantly different\n> > storage format, as the textual format is about to hit its entropy\n> > limits.\n>\n> One thing that was mentioned repeatedly is that we might want different\n> formats for human consumption and for machine storage.\n> For human consumption, I would like some format like what you propose,\n> because it generally omits the \"unset\" or \"uninteresting\" fields.\n>\n> But since you also talk about the size of pg_rewrite, I wonder whether\n> it would be smaller if we just didn't write the field names at all but\n> instead all the field values. (This should be pretty easy to test,\n> since the read functions currently ignore the field names anyway; you\n> could just write out all field names as \"x\" and see what happens.)\n\nI've been thinking about using a more binary storage format similar to\nprotobuf (but with system knowledge baked in, instead of PB's\ndefaults), but that would be dependent on functions that change the\noutput functions of pg_node_tree too, which Michel mentioned he would\nwork on a year ago (iiuc).\n\nI think it would be a logical next step after this, but this patch is\njust on building infrastructure that reduces the stored size without\ngetting in the way of Michel's work, if there was any result.\n\n> I don't much like the way your patch uses the term \"default\". 
Most of\n> these default values are not defaults at all, but perhaps \"most common\n> values\".\n\nYes, some 'defaults' are curated, but they have sound logic behind\nthem: *typmod is essentially always copied from an attypmod, which\ndefaults to -1. *isnull for any constant is generally unset. Many of\nthose other fields (once initialized by the relevant code) default to\nthose values I used.\n\n> In theory, I would expect a default value to be initialized by\n> makeNode(). (That could be an interesting feature, but let's stay\n> focused here.) But even then most of these \"defaults\" wouldn't be\n> appropriate for a real default value. This part seems quite\n> controversial to me, and I would like to see some more details about how\n> much this specifically really saves.\n\nThe tuning of these \"defaults\" got the savings from 20-30% to this\n50%+ reduction in raw size.\n\n> I don't quite understand why in your patch you have some fields as\n> optional and some not. Or is that what WRITE_NODE_FIELD() vs.\n> WRITE_NODE_FIELD_OPT() means? How is it decided which one to use?\n\nI use _OPT when I know the value is likely to be its defualt value,\nand don't change over to _OPT when I know with great certainty the\nvalue is going to be dynamic, such as relation ID in RTEs, but this is\nonly relevant for manual code as generated code essentially always\nuses the _OPT paths.\n\n> The part that clears out the location fields in pg_rewrite entries might\n> be worth considering as a separate patch. Could you explain it more?\n> Does it affect location pointers when using views at all?\n\nViews don't store the original query string, so the location pointers\nin views point to locations in a now non-existent query string.\nAdditionally, unless WRITE_READ_PARSE_PLAN_TREES is defined,\nREAD_LOCATION_FIELD does not actually read the stored value but\ninstead stores -1 in the indicated field, so in most cases there won't\nbe any difference between the deserialized data before and after this\npart of the patch; the only difference is the amount of debugable\ninformation stored in the view's internal data.\nNote that resetting them to 'invalid' value thus makes sense, and\nimproves compressibility and allows removal from the serialized format\nwhen serialization omits fields with default values.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 7 Dec 2023 12:18:28 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 7 Dec 2023 at 10:09, Matthias van de Meent\n<[email protected]> wrote:\n> PFA a patch that reduces the output size of nodeToString by 50%+ in\n> most cases (measured on pg_rewrite), which on my system reduces the\n> total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> pg_node_tree format alive, but reduces its size significantly.\n\nIt would be very cool to have the technology proposed by Andres back\nin 2019 [1]. With that, we could easily write various output\nfunctions. One could be compact and easily machine-readable and\nanother designed to be better for humans for debugging purposes.\n\nWe could also easily serialize plans to binary format for copying to\nparallel workers rather than converting them to a text-based\nserialized format. 
It would also allow us to do things like serialize\nPREPAREd plans into a nicely compact single allocation that we could\njust pfree in a single pfree call on DEALLOCATE.\n\nLikely we could just use the existing Perl scripts to form the\nmetadata arrays rather than the clang parsing stuff Andres used in his\npatch.\n\nAnyway, just wanted to ensure you knew about this idea.\n\nDavid\n\n[1] https://postgr.es/m/flat/20190828234136.fk2ndqtld3onfrrp%40alap3.anarazel.de\n\n\n", "msg_date": "Fri, 8 Dec 2023 01:09:07 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 7 Dec 2023 at 13:09, David Rowley <[email protected]> wrote:\n>\n> On Thu, 7 Dec 2023 at 10:09, Matthias van de Meent\n> <[email protected]> wrote:\n> > PFA a patch that reduces the output size of nodeToString by 50%+ in\n> > most cases (measured on pg_rewrite), which on my system reduces the\n> > total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> > pg_node_tree format alive, but reduces its size significantly.\n>\n> It would be very cool to have the technology proposed by Andres back\n> in 2019 [1]. With that, we could easily write various output\n> functions. One could be compact and easily machine-readable and\n> another designed to be better for humans for debugging purposes.\n>\n> We could also easily serialize plans to binary format for copying to\n> parallel workers rather than converting them to a text-based\n> serialized format. It would also allow us to do things like serialize\n> PREPAREd plans into a nicely compact single allocation that we could\n> just pfree in a single pfree call on DEALLOCATE.\n\nI'm not sure what benefit you're refering to. If you mean \"it's more\ncompact than the current format\" then sure; but the other points can\nalready be covered by either the current nodeToString format, or by\nnodeCopy-ing the prepared plan into its own MemoryContext, which would\nallow us to do essentially the same thing.\n\n> Likely we could just use the existing Perl scripts to form the\n> metadata arrays rather than the clang parsing stuff Andres used in his\n> patch.\n>\n> Anyway, just wanted to ensure you knew about this idea.\n\nI knew about that thread thread, but didn't notice the metadata arrays\npart of it, which indeed looks interesting for this patch. Thanks for\npointing it out. I'll see if I can incorporate parts of that into this\npatchset.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 14 Dec 2023 07:21:30 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 06.12.23 22:08, Matthias van de Meent wrote:\n> PFA a patch that reduces the output size of nodeToString by 50%+ in\n> most cases (measured on pg_rewrite), which on my system reduces the\n> total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> pg_node_tree format alive, but reduces its size signficantly.\n> \n> The basic techniques used are\n> - Don't emit scalar fields when they contain a default value, and\n> make the reading code aware of this.\n> - Reasonable defaults are set for most datatypes, and overrides can\n> be added with new pg_node_attr() attributes. No introspection into\n> non-null Node/Array/etc. 
is being done though.\n> - Reset more fields to their default values before storing the values.\n> - Don't write trailing 0s in outDatum calls for by-ref types. This\n> saves many bytes for Name fields, but also some other pre-existing\n> entry points.\n\nBased on our discussions, my understanding is that you wanted to produce \nan updated patch set that is split up a bit.\n\nMy suggestion is to make incremental patches along these lines:\n\n- Omit from output all fields that have value zero.\n\n- Omit location fields that have value -1.\n\n- Omit trailing zeroes for scalar values.\n\n- Recent location fields before storing in pg_rewrite (or possibly \ncatalogs in general?)\n\n- And then whatever is left, including the \"default\" value system that \nyou have proposed.\n\nThe last one I have some doubts about, as previously expressed, but the \nfirst few seem sensible to me. By splitting it up we can consider these \nincrementally.\n\n\n\n", "msg_date": "Tue, 2 Jan 2024 11:30:13 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 14 Dec 2023 at 19:21, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Thu, 7 Dec 2023 at 13:09, David Rowley <[email protected]> wrote:\n> > We could also easily serialize plans to binary format for copying to\n> > parallel workers rather than converting them to a text-based\n> > serialized format. It would also allow us to do things like serialize\n> > PREPAREd plans into a nicely compact single allocation that we could\n> > just pfree in a single pfree call on DEALLOCATE.\n>\n> I'm not sure what benefit you're refering to. If you mean \"it's more\n> compact than the current format\" then sure; but the other points can\n> already be covered by either the current nodeToString format, or by\n> nodeCopy-ing the prepared plan into its own MemoryContext, which would\n> allow us to do essentially the same thing.\n\nThere's significantly less memory involved in just having a plan\nserialised into a single chunk of memory vs a plan stored in its own\nMemoryContext. With the serialised plan, you don't have any power of\n2 rounding up wastage that aset.c does and don't need extra space for\nall the MemoryChunks that would exist for every single palloc'd chunk\nin the MemoryContext version.\n\nI think it would be nice if one day in the future if a PREPAREd plan\ncould have multiple different plans cached. We could then select which\none to use by looking at statistics for the given parameters and\nchoose the plan that's most suitable for the given parameters. Of\ncourse, this is a whole entirely different project. 
I mention it just\nbecause being able to serialise a plan would make the memory\nmanagement and overhead for such a feature much more manageable.\nThere'd likely need to be some eviction logic in such a feature as the\nnumber of possible plans for some complex query is quite likely to be\nmuch more than we'd care to cache.\n\nDavid\n\n\n", "msg_date": "Wed, 3 Jan 2024 15:02:02 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Wed, 3 Jan 2024 at 03:02, David Rowley <[email protected]> wrote:\n>\n> On Thu, 14 Dec 2023 at 19:21, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Thu, 7 Dec 2023 at 13:09, David Rowley <[email protected]> wrote:\n> > > We could also easily serialize plans to binary format for copying to\n> > > parallel workers rather than converting them to a text-based\n> > > serialized format. It would also allow us to do things like serialize\n> > > PREPAREd plans into a nicely compact single allocation that we could\n> > > just pfree in a single pfree call on DEALLOCATE.\n> >\n> > I'm not sure what benefit you're refering to. If you mean \"it's more\n> > compact than the current format\" then sure; but the other points can\n> > already be covered by either the current nodeToString format, or by\n> > nodeCopy-ing the prepared plan into its own MemoryContext, which would\n> > allow us to do essentially the same thing.\n>\n> There's significantly less memory involved in just having a plan\n> serialised into a single chunk of memory vs a plan stored in its own\n> MemoryContext. With the serialised plan, you don't have any power of\n> 2 rounding up wastage that aset.c does and don't need extra space for\n> all the MemoryChunks that would exist for every single palloc'd chunk\n> in the MemoryContext version.\n\nI was envisioning this to use the Bump memory context you proposed\nover in [0], as to the best of my knowledge prepared plans are not\nmodified, so nodeCopy-ing a prepared plan into bump context could be a\ngood use case for those contexts. This should remove the issue of\nrounding and memorychunk wastage in aset.\n\n> I think it would be nice if one day in the future if a PREPAREd plan\n> could have multiple different plans cached. We could then select which\n> one to use by looking at statistics for the given parameters and\n> choose the plan that's most suitable for the given parameters. Of\n> course, this is a whole entirely different project. 
I mention it just\n> because being able to serialise a plan would make the memory\n> management and overhead for such a feature much more manageable.\n> There'd likely need to be some eviction logic in such a feature as the\n> number of possible plans for some complex query is quite likely to be\n> much more than we'd care to cache.\n\nYeah, that'd be nice, but is also definitely future work.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0]: https://www.postgresql.org/message-id/flat/CAApHDvqGSpCU95TmM%3DBp%3D6xjL_nLys4zdZOpfNyWBk97Xrdj2w%40mail.gmail.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 00:02:13 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Tue, 2 Jan 2024 at 11:30, Peter Eisentraut <[email protected]> wrote:\n>\n> On 06.12.23 22:08, Matthias van de Meent wrote:\n> > PFA a patch that reduces the output size of nodeToString by 50%+ in\n> > most cases (measured on pg_rewrite), which on my system reduces the\n> > total size of pg_rewrite by 33% to 472KiB. This does keep the textual\n> > pg_node_tree format alive, but reduces its size signficantly.\n> >\n> > The basic techniques used are\n> > - Don't emit scalar fields when they contain a default value, and\n> > make the reading code aware of this.\n> > - Reasonable defaults are set for most datatypes, and overrides can\n> > be added with new pg_node_attr() attributes. No introspection into\n> > non-null Node/Array/etc. is being done though.\n> > - Reset more fields to their default values before storing the values.\n> > - Don't write trailing 0s in outDatum calls for by-ref types. This\n> > saves many bytes for Name fields, but also some other pre-existing\n> > entry points.\n>\n> Based on our discussions, my understanding is that you wanted to produce\n> an updated patch set that is split up a bit.\n\nI mentioned that I've been working on implementing (but have not yet\ncompleted) a binary serialization format, with an implementation based\non Andres' generated metadata idea. However, that requires more\nelaborate infrastructure than is currently available, so while I said\nI'd expected it to be complete before the Christmas weekend, it'll\ntake some more time - I'm not sure it'll be ready for PG17.\n\nIn the meantime here's an updated version of the v0 patch, formally\nkeeping the textual format alive, while reducing the size\nsignificantly (nearing 2/3 reduction), taking your comments into\naccount. I think the gains are worth the consideration without taking\ninto account the as-of-yet unimplemented binary format.\n\n> My suggestion is to make incremental patches along these lines:\n> [...]\n\nSomething like the attached? It splits out into the following\n0001: basic 'omit default values'\n0002: reset location and other querystring-related node fields for all\ncatalogs of type pg_node_tree.\n0003: add default marking on typmod fields.\n0004 & 0006: various node fields marked with default() based on\nobserved common or initial values of those fields\n0005: truncate trailing 0s from outDatum\n0007 (new): do run-length + gap coding for bitmapset and the various\ninteger list types. This saves a surprising amount of bytes.\n\n> The last one I have some doubts about, as previously expressed, but the\n> first few seem sensible to me. By splitting it up we can consider these\n> incrementally.\n\nThat makes a lot of sense. 
The numbers for the full patchset do seem\nquite positive though: The metrics of the query below show a 40%\ndecrease in size of a fresh pg_rewrite (standard toast compression)\nand a 5% decrease in size of the template0 database. The uncompressed\ndata of pg_rewrite.ev_action is also 60% smaller.\n\nselect pg_database_size('template0') as \"template0\"\n , pg_total_relation_size('pg_rewrite') as \"pg_rewrite\"\n , sum(pg_column_size(ev_action)) as \"compressed\"\n , sum(octet_length(ev_action)) as \"raw\"\nfrom pg_rewrite;\n\n version | template0 | pg_rewrite | compressed | raw\n---------|-----------+------------+------------+---------\n master | 7545359 | 761856 | 573307 | 2998712\n 0001 | 7365135 | 622592 | 438224 | 1943772\n 0002 | 7258639 | 573440 | 401660 | 1835803\n 0003 | 7258639 | 565248 | 386211 | 1672539\n 0004 | 7176719 | 483328 | 317099 | 1316552\n 0005 | 7176719 | 483328 | 315556 | 1300420\n 0006 | 7160335 | 466944 | 302806 | 1208621\n 0007 | 7143951 | 450560 | 287659 | 1187237\n\nWhile looking through the data, I noticed the larger views now consist\nfor a significant portion out of range table entries, specifically the\nAlias and Var nodes (which are mostly repeated and/or repetative\nvalues, but split across Nodes). I think column-major storage would be\nmore efficient to write, but I'm not sure it's worth the effort in\nplanner code.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 4 Jan 2024 00:23:50 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 04.01.24 00:23, Matthias van de Meent wrote:\n> Something like the attached? It splits out into the following\n> 0001: basic 'omit default values'\n\n /* Write an integer field (anything written as \":fldname %d\") */\n-#define WRITE_INT_FIELD(fldname) \\\n+#define WRITE_INT_FIELD_DIRECT(fldname) \\\n appendStringInfo(str, \" :\" CppAsString(fldname) \" %d\", \nnode->fldname)\n+#define WRITE_INT_FIELD_DEFAULT(fldname, default) \\\n+ ((node->fldname == default) ? (0) : WRITE_INT_FIELD_DIRECT(fldname))\n+#define WRITE_INT_FIELD(fldname) \\\n+ WRITE_INT_FIELD_DEFAULT(fldname, 0)\n\nDo we need the _DIRECT macros at all? Could we not combine that into \nthe _DEFAULT ones?\n\nI think the way the conditional operator (?:) is written is not \ntechnically correct C, because one side has an integer result (0) and \nthe other a void result (from appendStringInfo()). Also, this could \nbreak accidentally even more if the result type of appendStringInfo() \nwas changed for some reason. I think it would be better to write this \nin a more straightforward way like\n\n#define WRITE_INT_FIELD_DEFAULT(fldname, default) \\\ndo { \\\n if (node->fldname == default) \\\n appendStringInfo(str, \" :\" CppAsString(fldname) \" %d\", \nnode->fldname); \\\nwhile (0)\n\nRelatedly, this\n\n+/* a scaffold function to read an optionally-omitted field */\n+#define READ_OPT_SCAFFOLD(fldname, read_field_code, default_value) \\\n+ if (pg_strtoken_next(\":\" CppAsString(fldname))) \\\n+ { \\\n+ read_field_code; \\\n+ } \\\n+ else \\\n+ local_node->fldname = default_value\n\nwould need to be written with a do { } while (0) wrapper around it.\n\n\n> 0002: reset location and other querystring-related node fields for all\n> catalogs of type pg_node_tree.\n\nThis goal makes sense, but I think it can be done in a better way. 
If \nyou look into the area of stringToNode(), stringToNodeWithLocations(), \nand stringToNodeInternal(), there already is support for selectively \nresetting or omitting location fields. Notably, this works with the \nexisting automated knowledge of where the location fields are and \ndoesn't require a new hand-maintained table. I think the way forward \nhere would be to apply a similar toggle to nodeToString() (the reverse).\n\n\n> 0003: add default marking on typmod fields.\n> 0004 & 0006: various node fields marked with default() based on\n> observed common or initial values of those fields\n\nI think we could get about half the benefit here more automatically, by \ncreating a separate type for typmods, like\n\ntypedef int32 TypMod;\n\nand then having the node support automatically generate the \nserialization support with a -1 default.\n\n(A similar thing could be applied to the location fields, which would \nallow us to get rid of the current hack of parsing out the name.)\n\nMost of the other defaults I'm doubtful about. First, we are colliding \nhere between the goals of minimizing the storage size and making the \ndebug output more readable. If a Query dump would now omit the \ncommandType field if it is CMD_SELECT, I think that would be widely \nconfusing, and one would need to check the source code to identify the \nreason. Also, what if we later decide to change a \"default\" for a \nfield. Then output between version would differ. Of course, node \noutput does change between versions in general, but these kinds of \ndifferences would be confusing. Second, this relies on hand-maintained \nannotations that were created by you presumably through a combination of \nintuition and testing, based on what is in the template database. Do we \nknow whether this matches real-world queries created by users later? \nAlso, my experience dealing with the node support over the last little \nwhile is that these manually maintained exceptions get ossified and \noutdated and create a maintenance headache for the future.\n\n\n> 0005: truncate trailing 0s from outDatum\n\nDoes this significantly affect anything other than the \"name\" type? \nUser views don't usually use the \"name\" type, so this would have limited \nimpact outside of system views.\n\n\n> 0007 (new): do run-length + gap coding for bitmapset and the various\n> integer list types. This saves a surprising amount of bytes.\n\nCan you show examples of this? How would this affects the ability to \nmanually interpret the output?\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 09:23:20 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Tue, 9 Jan 2024, 09:23 Peter Eisentraut, <[email protected]> wrote:\n>\n> On 04.01.24 00:23, Matthias van de Meent wrote:\n> > Something like the attached? It splits out into the following\n> > 0001: basic 'omit default values'\n>\n> /* Write an integer field (anything written as \":fldname %d\") */\n> -#define WRITE_INT_FIELD(fldname) \\\n> +#define WRITE_INT_FIELD_DIRECT(fldname) \\\n> appendStringInfo(str, \" :\" CppAsString(fldname) \" %d\",\n> node->fldname)\n> +#define WRITE_INT_FIELD_DEFAULT(fldname, default) \\\n> + ((node->fldname == default) ? (0) : WRITE_INT_FIELD_DIRECT(fldname))\n> +#define WRITE_INT_FIELD(fldname) \\\n> + WRITE_INT_FIELD_DEFAULT(fldname, 0)\n>\n> Do we need the _DIRECT macros at all? 
Could we not combine that into\n> the _DEFAULT ones?\n\nI was planning on using them to reduce the size of generated code for\nselect fields that we know we will always serialize, but then later\ndecided against doing that in this patch as it'd add even more\narbitrary annotations to nodes. This is a leftover from that.\n\n> I think the way the conditional operator (?:) is written is not\n> technically correct C,\n> [...]\n> I think it would be better to write this\n> in a more straightforward way like\n>\n> #define WRITE_INT_FIELD_DEFAULT(fldname, default) \\\n> do { \\\n> [...]\n> while (0)\n>\n> Relatedly, this\n>\n> +/* a scaffold function to read an optionally-omitted field */\n> [...]\n> would need to be written with a do { } while (0) wrapper around it.\n\nI'll fix that.\n\n> > 0002: reset location and other querystring-related node fields for all\n> > catalogs of type pg_node_tree.\n>\n> This goal makes sense, but I think it can be done in a better way. If\n> you look into the area of stringToNode(), stringToNodeWithLocations(),\n> and stringToNodeInternal(), there already is support for selectively\n> resetting or omitting location fields. Notably, this works with the\n> existing automated knowledge of where the location fields are and\n> doesn't require a new hand-maintained table. I think the way forward\n> here would be to apply a similar toggle to nodeToString() (the reverse).\n\nI'll try to work something out for that.\n\n> > 0003: add default marking on typmod fields.\n> > 0004 & 0006: various node fields marked with default() based on\n> > observed common or initial values of those fields\n>\n> I think we could get about half the benefit here more automatically, by\n> creating a separate type for typmods, like\n>\n> typedef int32 TypMod;\n>\n> and then having the node support automatically generate the\n> serialization support with a -1 default.\n\nHm, I suspect that the code churn for that would be significant. I'd\nalso be confused when the type in storage (pg_attribute, pg_type's\ntyptypmod) is still int32 when it would be TypMod only in nodes.\n\n> (A similar thing could be applied to the location fields, which would\n> allow us to get rid of the current hack of parsing out the name.)\n\nI suppose so.\n\n> Most of the other defaults I'm doubtful about. First, we are colliding\n> here between the goals of minimizing the storage size and making the\n> debug output more readable.\n\nI've never really wanted to make the output \"more readable\". The\ncurrent one is too verbose, yes.\n\n> If a Query dump would now omit the\n> commandType field if it is CMD_SELECT, I think that would be widely\n> confusing, and one would need to check the source code to identify the\n> reason.\n\nAFAIK, SELECT is the only command type you can possibly store in a\nview (insert/delete/update/utility are all invalid there, and while\nI'm not fully certain about MERGE, I'd say it's certainly a niche).\n\n> Also, what if we later decide to change a \"default\" for a\n> field. Then output between version would differ. Of course, node\n> output does change between versions in general, but these kinds of\n> differences would be confusing.\n\nI've not heard of anyone trying to read and compare the contents of\npg_node_tree manually where they're not trying to debug some\ndeep-nested issue. Note\n\n> Second, this relies on hand-maintained\n> annotations that were created by you presumably through a combination of\n> intuition and testing, based on what is in the template database. 
Do we\n> know whether this matches real-world queries created by users later?\n\nNo, or at least I don't know this for certain. But I think it's a good start.\n\n> Also, my experience dealing with the node support over the last little\n> while is that these manually maintained exceptions get ossified and\n> outdated and create a maintenance headache for the future.\n\nI'm not sure what headache this would become. nodeToString is a fairly\nstraightforward API with (AFAIK) no external dependencies, where only\nnodes go in and out. The metadata on top of that will indeed require\nsome maintenance, but AFAIK only in the areas that read and utilize\nsaid metadata. While it certainly wouldn't be great if we didn't have\nthis metadata, it'd be no worse than not having compression.\n\n> > 0005: truncate trailing 0s from outDatum\n>\n> Does this significantly affect anything other than the \"name\" type?\n> User views don't usually use the \"name\" type, so this would have limited\n> impact outside of system views.\n\nIt saves a few bytes each on byval types like bool, oid, and int on\nlittle-endian systems, as they don't utilize the latter bytes of the\n4- or 8-byte Datum. At least in the default catalog this shaves some\nbytes off.\n\n> > 0007 (new): do run-length + gap coding for bitmapset and the various\n> > integer list types. This saves a surprising amount of bytes.\n>\n> Can you show examples of this? How would this affects the ability to\n> manually interpret the output?\n\nThe ability to interpret the results manually is somewhat reduced for\ncomplex cases (bitmaps), but things like RangeTableEntries are\nsignificantly reduced in size because of this. A good amount of\nIntegerLists is reduced to (i 1 +10) instead of (i 1 2 3 4 5 ... 11).\nSpecifically notable are the joinleftcols/joinrightcols fields, as\nthey will often contain large lists of joined columns when many tables\nare joined together. While bitmaps are less prevalent/large, they also\nbenefit from this optimization.\nAs for bitmapsets, the use of differential coding saves bytes when the\nset is large or otherwise has structure: the bitmapset of uneven\nnumbers (b 1 3 5 7 ... 23 25 27 ... 101 103 ...) takes up more space\n(and is less compressible than) the equivalent differential coded (b 1\n2 2 2 2 ...). This is at the cost of direct readability, but I think\nthat's worth it.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 30 Jan 2024 12:26:05 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 30.01.24 12:26, Matthias van de Meent wrote:\n>> Most of the other defaults I'm doubtful about. First, we are colliding\n>> here between the goals of minimizing the storage size and making the\n>> debug output more readable.\n> I've never really wanted to make the output \"more readable\". The\n> current one is too verbose, yes.\n\nMy motivations at the moment to work in this area are (1) to make the \noutput more readable, and (2) to reduce maintenance burden of node \nsupport functions.\n\nThere can clearly be some overlap with your goals. For example, a less \nverbose and less redundant output can ease readability. But it can also \ngo the opposite direction; a very minimalized output can be less readable.\n\nI would like to understand your target more. You have shown some \nfigures how these various changes reduce storage size in pg_rewrite. 
\nBut it's a few hundred kilobytes, if I read this correctly, maybe some \nmegabytes if you add a lot of user views.  Does this translate into any \nother tangible benefits, like you can store more views, or processing \nviews is faster, or something like that?\n\n\n\n", "msg_date": "Wed, 31 Jan 2024 09:16:27 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Wed, 31 Jan 2024, 09:16 Peter Eisentraut, <[email protected]> wrote:\n\n> On 30.01.24 12:26, Matthias van de Meent wrote:\n> >> Most of the other defaults I'm doubtful about.  First, we are colliding\n> >> here between the goals of minimizing the storage size and making the\n> >> debug output more readable.\n> > I've never really wanted to make the output \"more readable\". The\n> > current one is too verbose, yes.\n>\n> My motivations at the moment to work in this area are (1) to make the\n> output more readable, and (2) to reduce maintenance burden of node\n> support functions.\n>\n> There can clearly be some overlap with your goals.  For example, a less\n> verbose and less redundant output can ease readability.  But it can also\n> go the opposite direction; a very minimalized output can be less readable.\n>\n> I would like to understand your target more.  You have shown some\n> figures how these various changes reduce storage size in pg_rewrite.\n> But it's a few hundred kilobytes, if I read this correctly, maybe some\n> megabytes if you add a lot of user views.  Does this translate into any\n> other tangible benefits, like you can store more views, or processing\n> views is faster, or something like that?\n\n\nI was also thinking about smaller per-attribute expression storage, for\nindex attribute expressions, table default expressions, and functions.\nOther than that, less memory overhead for the serialized form of these\nconstructs also helps for catalog cache sizes, etc.\nPeople complained about the size of a fresh initdb, and I agreed with them,\nso I started looking at low-hanging fruits, and this is one.\n\nI've not done any tests yet on whether it's more performant in general. I'd\nexpect the new code to do a bit better given the extremely verbose nature\nof the data and the rather complex byte-at-a-time token read method used,\nbut this is currently hypothesis.\nI do think that serialization itself may be slightly slower, but given that\nthis generally happens only in DDL, and that we have to grow the output\nbuffer less often, this too may still be a net win (but, again, this is an\nuntested hypothesis).\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n
", "msg_date": "Wed, 31 Jan 2024 17:17:03 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Wed, Jan 31, 2024 at 11:17 AM Matthias van de Meent\n<[email protected]> wrote:\n> I was also thinking about smaller per-attribute expression storage, for index attribute expressions, table default expressions, and functions. Other than that, less memory overhead for the serialized form of these constructs also helps for catalog cache sizes, etc.\n> People complained about the size of a fresh initdb, and I agreed with them, so I started looking at low-hanging fruits, and this is one.\n>\n> I've not done any tests yet on whether it's more performant in general. I'd expect the new code to do a bit better given the extremely verbose nature of the data and the rather complex byte-at-a-time token read method used, but this is currently hypothesis.\n> I do think that serialization itself may be slightly slower, but given that this generally happens only in DDL, and that we have to grow the output buffer less often, this too may still be a net win (but, again, this is an untested hypothesis).\n\nI think we're going to have to have separate formats for debugging and\nstorage if we want to get very far here. The current format sucks for\nreadability because it's so verbose, and tightening that up where we\ncan makes sense to me. For me, that can include things like emitting\nunset location fields for sure, but delta-encoding of bitmap sets is\nmore questionable. 
Turning 1 2 3 4 5 6 7 8 9 10 into 1-10 would be\nfine with me because that is both shorter and more readable, but\nturning 2 4 6 8 10 into 2 2 2 2 2 is way worse for a human reader.\nSuch optimizations might make sense in a format that is designed for\ncomputer processing only but not one that has to serve multiple\npurposes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jan 2024 12:47:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Wed, 31 Jan 2024 at 18:47, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 31, 2024 at 11:17 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > I was also thinking about smaller per-attribute expression storage, for index attribute expressions, table default expressions, and functions. Other than that, less memory overhead for the serialized form of these constructs also helps for catalog cache sizes, etc.\n> > People complained about the size of a fresh initdb, and I agreed with them, so I started looking at low-hanging fruits, and this is one.\n> >\n> > I've not done any tests yet on whether it's more performant in general. I'd expect the new code to do a bit better given the extremely verbose nature of the data and the rather complex byte-at-a-time token read method used, but this is currently hypothesis.\n> > I do think that serialization itself may be slightly slower, but given that this generally happens only in DDL, and that we have to grow the output buffer less often, this too may still be a net win (but, again, this is an untested hypothesis).\n>\n> I think we're going to have to have separate formats for debugging and\n> storage if we want to get very far here. The current format sucks for\n> readability because it's so verbose, and tightening that up where we\n> can makes sense to me. For me, that can include things like emitting\n> unset location fields for sure, but delta-encoding of bitmap sets is\n> more questionable. Turning 1 2 3 4 5 6 7 8 9 10 into 1-10 would be\n> fine with me because that is both shorter and more readable, but\n> turning 2 4 6 8 10 into 2 2 2 2 2 is way worse for a human reader.\n> Such optimizations might make sense in a format that is designed for\n> computer processing only but not one that has to serve multiple\n> purposes.\n\nI suppose so, yes. I've removed the delta-encoding from the\nserialization of bitmapsets in the attached patchset.\n\nPeter E. and I spoke about this patchset at FOSDEM PGDay, too. 
I said\nto him that I wouldn't mind if this patchset was only partly applied:\nThe gains for most of the changes are definitely worth it even if some\nothers don't get in.\n\nI think it'd be a nice QoL and storage improvement if even only (say)\nthe first two patches were committed, though the typmod default\nmarkings (or alternatively, using a typedef-ed TypMod and one more\ntype-specific serialization handler) would also be a good improvement\nwithout introducing too many \"common value = default = omitted\"\nconsiderations that would reduce debugability.\n\nAttached is patchset v2, which contains the improvements from these patches:\n\n0001 has the \"omit defaults\" for the current types.\n -23.5%pt / -35.1%pt (toasted / raw)\n0002+0003 has new #defined type \"Location\" for those fields in Nodes\nthat point into (or have sizes of) query texts, and adds\ninfrastructure to conditionally omit them at all (see previous\ndiscussions)\n -3.5%pt / -6.3%pt\n0004 has new #defined type TypeMod as alias for int32, that uses a\ndefault value of -1 for (de)serialization purposes.\n -3.0%pt / -6.1%pt\n0005 updates Const node serialization to omit `:constvalue` if the\nvalue is null.\n +0.1%pt / -0.1%pt [^0]\n0006 does run-length encoding for bitmaps and the various typed\ninteger lists, using \"+int\" as indicators of a run of a certain\nlength, excluding the start value.\n Bitmaps, IntLists and XidLists are based on runs with increments\nof 1 (so, a notation (i 1 +3) means (i 1 2 3 4), while OidLists are\nbased on runs with no increments (so, (o 1 +3) means (o 1 1 1 1).\n -2.5%pt / -0.6%pt\n0007 does add some select custom 'default' values, in that the\nvarnosyn and varattnosyn fields now treat the value of varno and\nvarattno as their default values.\n This reduces the size of lists of Vars significantly and has a\nvery meaningful impact on the size of the compressed data (the default\npg_rewrite dataset contains some 10.8k Var nodes).\n -10.4%pt / 9.7%pt\n\nTotal for the full applied patchset:\n 55.5% smaller data in pg_rewrite.ev_action before TOAST\n 45.7% smaller data in pg_rewrite.ev_action after applying TOAST\n\nToast relation size, as fraction of the main pg_rewrite table:\nselect pg_relation_size(2838) *1.0 / pg_relation_size('pg_rewrite');\n master: 4.7\n 0007: 1.3\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[^0]: The small difference in size for patch 0005 is presumably due to\nlow occurrance of NULL-valued Const nodes. Additionally, the inline vs\nout-of-line TOASTed data and data (not) fitting on the last blocks of\neach relation are likely to cause the change in total compression\nratio. 
If we had more null-valued Const nodes in pg_rewrite, the ratio\nwould presumably have been better than this.\n\nPS: query I used for my data collection, + combined data:\n\nselect 'master' as \"version\"\n , pg_database_size('template0') as \"template0\"\n , pg_total_relation_size('pg_rewrite') as \"pg_rewrite\"\n , sum(pg_column_size(ev_action)) as \"toasted\"\n , sum(octet_length(ev_action)) as \"raw\";\n\n version | template0 | pg_rewrite | toasted | raw\n---------+-----------+------------+---------+---------\n master | 7537167 | 770048 | 574003 | 3002556\n 0001 | 7348751 | 630784 | 438852 | 1946364\n 0002 | 7242255 | 573440 | 403160 | 1840404\n 0003 | 7242255 | 573440 | 402325 | 1838367\n 0004 | 7225871 | 557056 | 384888 | 1652287\n 0005 | 7234063 | 565248 | 385678 | 1648717\n 0006 | 7217679 | 548864 | 371256 | 1627733\n 0007 | 7143951 | 475136 | 311255 | 1337496", "msg_date": "Mon, 12 Feb 2024 19:03:30 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Mon, 12 Feb 2024 at 19:03, Matthias van de Meent\n<[email protected]> wrote:\n> Attached is patchset v2, which contains the improvements from these patches:\n\nAttached v3, which fixes an out-of-bounds read in pg_strtoken_next,\ndetected by asan, that was a likely cause of the problems in CFBot's\nFreeBSD regression tests.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Mon, 12 Feb 2024 20:32:56 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Mon, 12 Feb 2024 at 20:32, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 12 Feb 2024 at 19:03, Matthias van de Meent\n> <[email protected]> wrote:\n> > Attached is patchset v2, which contains the improvements from these patches:\n>\n> Attached v3, which fixes an out-of-bounds read in pg_strtoken_next,\n> detected by asan, that was a likely cause of the problems in CFBot's\n> FreeBSD regression tests.\n\nApparently that was caused by issues in my updated bitmapset\nserializer; where I used bms_next_member(..., x=0) as first iteration\nthus skipping the first bit. This didn't show up earlier because that\nbit is not exercised in PG's builtin views, but is exercised when\nWRITE_READ_PARSE_PLAN_TREES is defined (as on the FreeBSD CI job).\n\nTrivial fix in the attached v4 of the patchset, with some fixes for\nother assertions that'd get some exercise in non-pg_node_tree paths in\nthe WRITE_READ configuration.", "msg_date": "Tue, 13 Feb 2024 00:10:56 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "Thanks, this patch set is a good way to incrementally work through these \nchanges.\n\nI have looked at \nv4-0001-pg_node_tree-Omit-serialization-of-fields-with-de.patch today. \nHere are my thoughts:\n\nI believe we had discussed offline to not omit enum fields with value 0 \n(WRITE_ENUM_FIELD). This is because the values of enum fields are \nimplementation artifacts, and this could be confusing for readers. \n(This could be added as a squeeze-out-every-byte change later, but if \nwe're going to keep the format fit for human reading, I think we should \nskip this.)\n\nI have some concerns about the round-trippability of float values. 
If \nwe do, effectively, if (node->fldname != 0.0), then I think this would \nalso match negative zero, but when we read it back, it would get \nassigned positive zero. Maybe there are other edge cases like this. \nMight be safer to not mess with this.\n\nOn the reading side, the macro nesting has gotten a bit out of hand. :) \nWe had talked earlier in the thread about the _DIRECT macros and you \nsaid they were left over from something else you want to try, but I see \nnothing else in this patch set uses this. I think this could all be \nmuch simpler, like (omitting required punctuation)\n\n#define READ_INT_FIELD(fldname, default)\n if ((token = next_field(fldname, &length)))\n local_node->fldname = atoi(token);\n else\n local_node->fldname = default;\n\nwhere next_field() would\n\n1. read the next token\n2. if it is \":fldname\", continue;\n else rewind the read pointer and return NULL\n3. read the next token and return that\n\nNot only is this simpler, but it might also have better performance, \nbecause we don't have separate pg_strtok_next() and pg_strtok() calls in \nsequence.\n\n\n\n", "msg_date": "Thu, 15 Feb 2024 13:59:04 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 15 Feb 2024 at 13:59, Peter Eisentraut <[email protected]> wrote:\n>\n> Thanks, this patch set is a good way to incrementally work through these\n> changes.\n>\n> I have looked at\n> v4-0001-pg_node_tree-Omit-serialization-of-fields-with-de.patch today.\n> Here are my thoughts:\n>\n> I believe we had discussed offline to not omit enum fields with value 0\n> (WRITE_ENUM_FIELD). This is because the values of enum fields are\n> implementation artifacts, and this could be confusing for readers.\n\nThanks for reminding me, I didn't remember this when I worked on\nupdating the patchset. I'll update this soon.\n\n> I have some concerns about the round-trippability of float values. If\n> we do, effectively, if (node->fldname != 0.0), then I think this would\n> also match negative zero, but when we read it back, it would get\n> assigned positive zero. Maybe there are other edge cases like this.\n> Might be safer to not mess with this.\n\nThat's a good point. Would an additional check that the sign of the\nfield equals the default's sign be enough for this? As for other\ncases, I'm not sure we currently want to support non-normal floats,\neven if it is technically possible to do the round-trip in the current\nformat.\n\n> On the reading side, the macro nesting has gotten a bit out of hand. :)\n> We had talked earlier in the thread about the _DIRECT macros and you\n> said they were left over from something else you want to try, but I see\n> nothing else in this patch set uses this. I think this could all be\n> much simpler, like (omitting required punctuation)\n[...]\n> Not only is this simpler, but it might also have better performance,\n> because we don't have separate pg_strtok_next() and pg_strtok() calls in\n> sequence.\n\nGood points. 
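Just to check that I'm reading the next_field() idea correctly, here is a
rough, untested sketch of that shape (hypothetical code, not part of the
current patchset; it assumes the helper sits next to pg_strtok() in read.c
so it can save and restore the read pointer, and it reuses the token/length
locals that the READ_ macros already declare):

static const char *
next_field(const char *fldname, int *length)
{
	const char *save = pg_strtok_ptr;	/* remember the read position */
	const char *token = pg_strtok(length);

	/* is the next token the expected :fldname label? */
	if (token == NULL || token[0] != ':' ||
		*length != (int) strlen(fldname) + 1 ||
		strncmp(token + 1, fldname, *length - 1) != 0)
	{
		/* not our field: rewind and report it as absent */
		pg_strtok_ptr = save;
		return NULL;
	}

	/* it is: read and return the value token that follows */
	return pg_strtok(length);
}

#define READ_INT_FIELD(fldname, defval) \
	if ((token = next_field(CppAsString(fldname), &length)) != NULL) \
		local_node->fldname = atoi(token); \
	else \
		local_node->fldname = (defval)

That would indeed fold the separate pg_strtok_next() and pg_strtok() calls
into a single helper call per field.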
I'll see what I can do here.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 15 Feb 2024 15:37:53 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 15 Feb 2024 at 15:37, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Thu, 15 Feb 2024 at 13:59, Peter Eisentraut <[email protected]> wrote:\n> >\n> > Thanks, this patch set is a good way to incrementally work through these\n> > changes.\n> >\n> > I have looked at\n> > v4-0001-pg_node_tree-Omit-serialization-of-fields-with-de.patch today.\n> > Here are my thoughts:\n> >\n> > I believe we had discussed offline to not omit enum fields with value 0\n> > (WRITE_ENUM_FIELD). This is because the values of enum fields are\n> > implementation artifacts, and this could be confusing for readers.\n>\n> Thanks for reminding me, I didn't remember this when I worked on\n> updating the patchset. I'll update this soon.\n\nThis has been split into patch 0008 in the set. A query on ev_action\nshows that enum default-0-omission is effective on 1994 fields:\n\nselect match, count(*)\nfrom pg_rewrite,\n lateral (\n select unnest(regexp_matches(ev_action, '(:\\w+ 0)[^0-9]', 'g')) match\n )\ngroup by 1 order by 2 desc;\n match | count\n-----------------+-------\n :funcformat 0 | 587\n :rtekind 0 | 449\n :limitOption 0 | 260\n :querySource 0 | 260\n :override 0 | 260\n :jointype 0 | 156\n :aggsplit 0 | 15\n :subLinkType 0 | 5\n :nulltesttype 0 | 2\n\n> > On the reading side, the macro nesting has gotten a bit out of hand. :)\n> > We had talked earlier in the thread about the _DIRECT macros and you\n> > said there were left over from something else you want to try, but I see\n> > nothing else in this patch set uses this. I think this could all be\n> > much simpler, like (omitting required punctuation)\n> [...]\n> > Not only is this simpler, but it might also have better performance,\n> > because we don't have separate pg_strtok_next() and pg_strtok() calls in\n> > sequence.\n>\n> Good points. I'll see what I can do here.\n\nAttached the updated version of the patch on top of 5497daf3, which\nincorporates this last round of feedback. 
It moves the\ndefault-0-omission for Enums to newly added 0008, and checks the sign\nto deal with +0/-0 issues in float default checks.\nSee below for updated numbers.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nNew numbers:\n\nselect 'master' as \"version\"\n , pg_database_size('template0') as \"template0\"\n , pg_total_relation_size('pg_rewrite') as \"rel_total\"\n , pg_relation_size('pg_rewrite', 'main') as \"rel_main\"\n , sum(pg_column_size(ev_action)) as \"toasted\"\n , sum(octet_length(ev_action)) as \"raw\"\nfrom pg_rewrite;\n\n version | template0 | rel_total | rel_main | toasted | raw\n---------+-----------+-----------+----------+---------+---------\n master | 7528975 | 770048 | 114688 | 574051 | 3002981\n 0001 | 7348751 | 630784 | 131072 | 448495 | 1972854\n 0002 | 7250447 | 589824 | 131072 | 412261 | 1866880\n 0003 | 7242255 | 581632 | 131072 | 410476 | 1864843\n 0004 | 7225871 | 565248 | 139264 | 393801 | 1678735\n 0005 | 7225871 | 565248 | 139264 | 393556 | 1675165\n 0006 | 7217679 | 557056 | 139264 | 379062 | 1654178\n 0007 | 7160335 | 491520 | 155648 | 322145 | 1363885\n 0008 | 7135759 | 475136 | 155648 | 311294 | 1337649", "msg_date": "Mon, 19 Feb 2024 14:19:58 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Mon, 19 Feb 2024 at 14:19, Matthias van de Meent\n<[email protected]> wrote:\n> Attached the updated version of the patch on top of 5497daf3, which\n> incorporates this last round of feedback.\n\nNow attached rebased on top of 93db6cbd to fix conflicts with fbc93b8b\nand an issue in the previous patchset: I attached one too many v3-0001\nfrom a previous patch I worked on.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 22 Feb 2024 13:37:00 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Thu, 22 Feb 2024 at 13:37, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 19 Feb 2024 at 14:19, Matthias van de Meent\n> <[email protected]> wrote:\n> > Attached the updated version of the patch on top of 5497daf3, which\n> > incorporates this last round of feedback.\n>\n> Now attached rebased on top of 93db6cbd to fix conflicts with fbc93b8b\n> and an issue in the previous patchset: I attached one too many v3-0001\n> from a previous patch I worked on.\n\n... and now with a fix for not overwriting newly deserialized location\nattributes with -1, which breaks test output for\nREAD_WRITE_PARSE_PLAN_TREES installations. Again, no other significant\nchanges since the patch of last Monday.\n\nSorry for the noise,\n\n-Matthias", "msg_date": "Thu, 22 Feb 2024 16:07:55 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 22.02.24 16:07, Matthias van de Meent wrote:\n> On Thu, 22 Feb 2024 at 13:37, Matthias van de Meent\n> <[email protected]> wrote:\n>>\n>> On Mon, 19 Feb 2024 at 14:19, Matthias van de Meent\n>> <[email protected]> wrote:\n>>> Attached the updated version of the patch on top of 5497daf3, which\n>>> incorporates this last round of feedback.\n>>\n>> Now attached rebased on top of 93db6cbd to fix conflicts with fbc93b8b\n>> and an issue in the previous patchset: I attached one too many v3-0001\n>> from a previous patch I worked on.\n> \n> ... 
and now with a fix for not overwriting newly deserialized location\n> attributes with -1, which breaks test output for\n> READ_WRITE_PARSE_PLAN_TREES installations. Again, no other significant\n> changes since the patch of last Monday.\n\n* v7-0002-pg_node_tree-Don-t-store-query-text-locations-in-.patch\n\nThis patch looks much more complicated than I was expecting. I had \nsuggested to model this after stringToNodeWithLocations(). This uses a \nglobal variable to toggle the mode. Your patch creates a function \nnodeToStringNoQLocs() -- why the different naming scheme? -- and passes \nthe flag down as an argument to all the output functions. I mean, in a \ngreen field, avoiding global variables can be sensible, of course, but I \nthink in this limited scope here it would really be better to keep the \ncode for the two directions read and write the same.\n\nAttached is a small patch that shows what I had in mind. (It doesn't \ncontain any callers, but your patch shows where all those would go.)\n\n\n* v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n\nThis looks sensible, but maybe making Location a global type is a bit \nmuch? Maybe something more specific like ParseLocation, or ParseLoc, to \nkeep it under 12 characters.", "msg_date": "Mon, 11 Mar 2024 14:19:44 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Mon, 11 Mar 2024 at 14:19, Peter Eisentraut <[email protected]> wrote:\n>\n> On 22.02.24 16:07, Matthias van de Meent wrote:\n> > On Thu, 22 Feb 2024 at 13:37, Matthias van de Meent\n> > <[email protected]> wrote:\n> >>\n> >> On Mon, 19 Feb 2024 at 14:19, Matthias van de Meent\n> >> <[email protected]> wrote:\n> >>> Attached the updated version of the patch on top of 5497daf3, which\n> >>> incorporates this last round of feedback.\n> >>\n> >> Now attached rebased on top of 93db6cbd to fix conflicts with fbc93b8b\n> >> and an issue in the previous patchset: I attached one too many v3-0001\n> >> from a previous patch I worked on.\n> >\n> > ... and now with a fix for not overwriting newly deserialized location\n> > attributes with -1, which breaks test output for\n> > READ_WRITE_PARSE_PLAN_TREES installations. Again, no other significant\n> > changes since the patch of last Monday.\n>\n> * v7-0002-pg_node_tree-Don-t-store-query-text-locations-in-.patch\n>\n> This patch looks much more complicated than I was expecting. I had\n> suggested to model this after stringToNodeWithLocations(). This uses a\n> global variable to toggle the mode. Your patch creates a function\n> nodeToStringNoQLocs() -- why the different naming scheme?\n\nIt doesn't just exclude .location fields, but also Query.len, a\nsimilar field which contains the length of the query's string. The\nname has been further refined to nodeToStringNoParseLocs() in the\nattached version, but feel free to replace the names in the patch to\nanything else you might want.\n\n> -- and passes\n> the flag down as an argument to all the output functions. 
I mean, in a\n> green field, avoiding global variables can be sensible, of course, but I\n> think in this limited scope here it would really be better to keep the\n> code for the two directions read and write the same.\n\nI'm a big fan of _not_ using magic global variables as passed context\nwithout resetting on subnormal exits...\nFor GUCs their usage is understandable (and there is infrastructure to\nreset them, and you're not supposed to manually update them), but IMO\nhere its usage should be a function-scoped variable or in a\npassed-by-reference context struct, not a file-local static.\nRegardless, attached is an adapted version with the file-local\nvariable implementation.\n\n> Attached is a small patch that shows what I had in mind. (It doesn't\n> contain any callers, but your patch shows where all those would go.)\n\nAttached a revised version that does it like stringToNodeInternal's\nhandling of restore_location_fields.\n\n> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n>\n> This looks sensible, but maybe making Location a global type is a bit\n> much? Maybe something more specific like ParseLocation, or ParseLoc, to\n> keep it under 12 characters.\n\nI've gone with ParseLoc in the attached v8 patchset.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Mon, 11 Mar 2024 21:52:26 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 11.03.24 21:52, Matthias van de Meent wrote:\n>> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n>>\n>> This looks sensible, but maybe making Location a global type is a bit\n>> much? Maybe something more specific like ParseLocation, or ParseLoc, to\n>> keep it under 12 characters.\n> I've gone with ParseLoc in the attached v8 patchset.\n\nI have committed this one.\n\nI moved the typedef to nodes/nodes.h, where we already had similar \ntypdefs (Cardinality, etc.). The fields stmt_location and stmt_len in \nPlannedStmt were not converted, so I fixed that. Also, between you \nwriting your patch and now, at least one new node type was added, so I \nfixed that one up, too. (I diffed the generated node support functions \nto check.) Hopefully, future hackers will apply the new type when \nappropriate.\n\n\n\n", "msg_date": "Tue, 19 Mar 2024 17:13:47 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Tue, 19 Mar 2024 at 17:13, Peter Eisentraut <[email protected]> wrote:\n>\n> On 11.03.24 21:52, Matthias van de Meent wrote:\n> >> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n> >>\n> >> This looks sensible, but maybe making Location a global type is a bit\n> >> much? Maybe something more specific like ParseLocation, or ParseLoc, to\n> >> keep it under 12 characters.\n> > I've gone with ParseLoc in the attached v8 patchset.\n>\n> I have committed this one.\n\nThanks!\n\n> I moved the typedef to nodes/nodes.h, where we already had similar\n> typdefs (Cardinality, etc.). The fields stmt_location and stmt_len in\n> PlannedStmt were not converted, so I fixed that. Also, between you\n> writing your patch and now, at least one new node type was added, so I\n> fixed that one up, too.\n\nGood points, thank you for fixing that.\n\n> (I diffed the generated node support functions\n> to check.) 
Hopefully, future hackers will apply the new type when\n> appropriate.\n\nAre you also planning on committing some of the other patches later,\nor should I rebase the set to keep CFBot happy?\n\n-Matthias\n\n\n", "msg_date": "Tue, 19 Mar 2024 17:46:26 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 19.03.24 17:13, Peter Eisentraut wrote:\n> On 11.03.24 21:52, Matthias van de Meent wrote:\n>>> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n>>>\n>>> This looks sensible, but maybe making Location a global type is a bit\n>>> much?  Maybe something more specific like ParseLocation, or ParseLoc, to\n>>> keep it under 12 characters.\n>> I've gone with ParseLoc in the attached v8 patchset.\n> \n> I have committed this one.\n\nNext, I was looking at \nv8-0003-pg_node_tree-Don-t-store-query-text-locations-in-.patch. After \napplying that, I was looking how many uses of nodeToString() (with \nlocations) were left. I think your patch forgot to convert a number of \nthem, and there also might have been a few new ones that came in with \nother recent patches. Might be hard to make sure all new developments \ndo this right. Plus, there are various mentions in the documentation \nthat should be updated. After considering all that, there weren't \nreally many callers of nodeToString() left. It's really only debugging \nsupport in postgres.c and print.c, and a few places were it doesn't \nmatter, like the few places where it initializes \"cooked expressions\", \nwhich were in turn already stripped of location fields at some earlier time.\n\nSo anyway, my idea was that we should turn this around and make \nnodeToString() always drop location information, and instead add \nnodeToStringWithLocations() for the few debugging uses. And this would \nalso be nice because then it matches exactly with the existing \nstringToNodeWithLocations().\n\nAttached patch shows this.", "msg_date": "Wed, 20 Mar 2024 12:49:52 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On Wed, 20 Mar 2024 at 12:49, Peter Eisentraut <[email protected]> wrote:\n>\n> On 19.03.24 17:13, Peter Eisentraut wrote:\n> > On 11.03.24 21:52, Matthias van de Meent wrote:\n> >>> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n> >>>\n> >>> This looks sensible, but maybe making Location a global type is a bit\n> >>> much? Maybe something more specific like ParseLocation, or ParseLoc, to\n> >>> keep it under 12 characters.\n> >> I've gone with ParseLoc in the attached v8 patchset.\n> >\n> > I have committed this one.\n>\n> Next, I was looking at\n> v8-0003-pg_node_tree-Don-t-store-query-text-locations-in-.patch.\n\n[...]\n\n> So anyway, my idea was that we should turn this around and make\n> nodeToString() always drop location information, and instead add\n> nodeToStringWithLocations() for the few debugging uses. 
And this would\n> also be nice because then it matches exactly with the existing\n> stringToNodeWithLocations().\n\nThat seems reasonable, yes.\n\n-Matthias\n\n\n", "msg_date": "Wed, 20 Mar 2024 13:03:39 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reducing output size of nodeToString" }, { "msg_contents": "On 20.03.24 13:03, Matthias van de Meent wrote:\n> On Wed, 20 Mar 2024 at 12:49, Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 19.03.24 17:13, Peter Eisentraut wrote:\n>>> On 11.03.24 21:52, Matthias van de Meent wrote:\n>>>>> * v7-0003-gen_node_support.pl-Mark-location-fields-as-type-.patch\n>>>>>\n>>>>> This looks sensible, but maybe making Location a global type is a bit\n>>>>> much? Maybe something more specific like ParseLocation, or ParseLoc, to\n>>>>> keep it under 12 characters.\n>>>> I've gone with ParseLoc in the attached v8 patchset.\n>>>\n>>> I have committed this one.\n>>\n>> Next, I was looking at\n>> v8-0003-pg_node_tree-Don-t-store-query-text-locations-in-.patch.\n> \n> [...]\n> \n>> So anyway, my idea was that we should turn this around and make\n>> nodeToString() always drop location information, and instead add\n>> nodeToStringWithLocations() for the few debugging uses. And this would\n>> also be nice because then it matches exactly with the existing\n>> stringToNodeWithLocations().\n> \n> That seems reasonable, yes.\n\nI have committed that one.\n\nThis takes care of your patches v8-0002 and v8-0003.\n\nAbout the rest of your patch set:\n\nAs long as we have only one output format, we need to balance several \nuses, including debugging, storage size, (de)serialization performance.\n\nYour patches v8-0005 and up are clearly positive for storage size but \nnegative for debugging. So I think we can't consider them now.\n\nYour patches v8-0001 (\"pg_node_tree: Omit serialization of fields with \ndefault values.\") and v8-0004 (\"gen_node_support.pl: Add a TypMod type \nfor signalling TypMod behaviour\") are also good for storage size. I \ndon't know how they affect serialization performance. I also don't know \nhow good they are for debugging. I have argued here and there that \nomitting unset fields can make node dumps more readable. But that's \njust me. I have looked at a lot of Query and RangeTblEntry nodes \nlately, which contain many rarely used fields. But other people might \nhave completely different experiences, with other node and tree types. \nWe didn't get much feedback from anyone else in this thread, so I'm very \nhesitant to impose this on everyone without any consensus.\n\nI could see \"Omit serialization of fields with default values\" as a \nseparate toggle for debug node dumps.\n\nAlso, there is clearly some lingering interesting in a separate \nbinary-ish serialization format for internal use. This should probably \nalso take a look at (de)serialization performance, which we haven't \nreally done in this thread. In a way, with the omit default values \npatch, the serialization and deserialization does more work, so it could \nhave an adverse impact. But we don't know.\n\nI think to proceed we need more buy-in on the \"what do I want from my \nnode dumps\" side, and more performance numbers on the other side. 
\nSaving a few hundred kilobytes on view storage is fine but isn't by \nitself that useful, I think, if it potentially negatively affects other \nuses.\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:18:33 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reducing output size of nodeToString" } ]
[ { "msg_contents": "Earlier this year, there was a thread about GSSAPI for delegated\ncredentials and various operating systems ultimately that Heimdal had\natrophied enough that you were comfortable not supporting it anymore as\na GSSAPI library.\n\nThread:\nhttps://www.postgresql.org/message-id/flat/ZDFTailRZzyGdbXl%40tamriel.snowman.net#7b4b7354bc3ea060fb26d51565f0ad67\n\nIn https://www.postgresql.org/message-id/3598083.1680976022%40sss.pgh.pa.us,\nTom Lane said:\n\n > I share your feeling that we could probably blow off Apple's built-in\n > GSSAPI. MacPorts offers both Heimdal and kerberos5, and I imagine\n > Homebrew has at least one of them, so Mac people could easily get\n > hold of newer implementations.\n\nI wanted to follow up on the decision to blow off Apple's built-in\nGSSAPI. Years back, for reasons I never found, Apple switched from MIT\nto Heimdal and have been maintaining their own version of it. I'm not\nclear how well they maintain it but they have enhanced it.\n\nOne of the things that Apple put it in was a different centralized\ncredentials cache system. (named of the form \"API:uuid\"). This isn't\nin Heimdal nor is it in MIT, so typical kerberos tickets issued by the\nApple provide Kerberos libraries are not accessible via other kerberos\nversions provided by homebrew/macports/etc. (netbsd pkgsrc on macos can\nbe told to use the system libraries, which is what I do). Installing a\nparallel version makes the client experience awful since it means having\nto manage two sets of tickets and ticket caches, and which one gets used\nvaries depending on what libraries they were linked against.\n\nAs you may have surmised, I use a mac as a client and use gssapi pretty\nheavily to interact with numerous postgresql databases. This has stopped\nme from upgrading my client side to 16. I'm wondering if there's be any\nwillingness to reconsider heimdal support under some circumstances?\n\nthanks,\n-Todd\n\n\n", "msg_date": "Wed, 06 Dec 2023 18:54:22 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pg16 && GSSAPI && Heimdal/Macos" }, { "msg_contents": "[email protected] writes:\n> Earlier this year, there was a thread about GSSAPI for delegated\n> credentials and various operating systems ultimately that Heimdal had\n> atrophied enough that you were comfortable not supporting it anymore as\n> a GSSAPI library.\n\nYup.\n\n> As you may have surmised, I use a mac as a client and use gssapi pretty\n> heavily to interact with numerous postgresql databases. This has stopped\n> me from upgrading my client side to 16. I'm wondering if there's be any\n> willingness to reconsider heimdal support under some circumstances?\n\nThe immediate reason for dropping that support is that Heimdal doesn't\nhave gss_store_cred_into(), without which we can't support delegated\ncredentials. AFAICT, Apple's version doesn't have that either.\nWe could argue about how important that feature is and whether it'd be\nokay to have an Apple-only build option to not have it. However...\n\n... there's another good reason to shy away from relying on Apple's\nlibrary, which is that they've conspicuously marked all the standard\nKerberos functions as deprecated. It's not clear if that means\nthey're planning to remove them outright, but surely it's an indicator\nthat Apple doesn't want outside code calling them.\n\nThe deprecation notices that you get if you try to build anyway say\n\"use GSS.framework\". 
So if somebody wanted to try to support this in\na somewhat future-proof way, the thing to do would be to look into how\ninvasive it'd be to do it like that. That's not something I plan to\nput any effort into, but if you're desperate enough for this, maybe\nyou could push that forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Dec 2023 22:57:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg16 && GSSAPI && Heimdal/Macos" } ]
[ { "msg_contents": "Hi hackers,\n\n\nFor local invalidation messages, there is no need to call\n`InvalidateCatalogSnapshot` to set the CatalogSnapshot to NULL and\n rebuild it later. Instead, just update the CatalogSnapshot's `curcid`\n in `SnapshotSetCommandId`, this way can make the CatalogSnapshot work\nwell too.\n\n This optimization can reduce the overhead of rebuilding CatalogSnapshot\n after each command.\n\n\nBest regards, xiaoran", "msg_date": "Thu, 7 Dec 2023 10:13:52 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH]: Not to invaldiate CatalogSnapshot for local invalidation\n messages" }, { "msg_contents": "Hi hackers,\n\nI would like to give more details of my patch.\n\n\nIn postgres, it uses a global snapshot “CatalogSnapshot” to check catalog\ndata visibility.\n\n“CatalogSnapshot” is always updated to the latest version to make the\nlatest catalog table\n\ncontent visible.\n\n\nIf there is any updating on catalog tables, to make the changes to be\nvisible for the following\n\ncommands in the current transaction,\n\n “CommandCounterIncrement”-\n\n>”AtCCI_LocalCache”\n\n->”CommandEndInvalidationMessages\n\n”->”LocalExecuteInvalidationMessage”\n\n->”InvalidateCatalogSnapshot”\n\n it will invalidate the “CatalogSnapshot” by setting it to NULL. And next\ntime, when it needs the\n\n“CatalogSnapsthot” and finds it is NULL, it will regenerate one.\n\n\nIn a query, “CommandCounterIncrement” may be called many times, and\n“CatalogSnapsthot” may be\n\ndestroyed and recreated many times. To reduce such overhead, instead of\ninvalidating “CatalogSnapshot”\n\n, we can keep it and just increase the “curcid” of it.\n\n\nWhen the transaction is committed or aborted, or there are catalog\ninvalidation messages from other\n\nbackends, the “CatalogSnapshot” will be invalidated and regenerated.\nSometimes, the “CatalogSnapshot” is\n\nnot the same as the transaction “CurrentSnapshot”, but we can still update\nthe CatalogSnapshot’s\n\n“curcid”, as the “curcid” only be checked when the tuple is inserted or\ndeleted by the current transaction.\n\n\n\n\n\nXiaoran Wang <[email protected]> 于2023年12月7日周四 10:13写道:\n\n> Hi hackers,\n>\n>\n> For local invalidation messages, there is no need to call\n> `InvalidateCatalogSnapshot` to set the CatalogSnapshot to NULL and\n> rebuild it later. Instead, just update the CatalogSnapshot's `curcid`\n> in `SnapshotSetCommandId`, this way can make the CatalogSnapshot work\n> well too.\n>\n> This optimization can reduce the overhead of rebuilding CatalogSnapshot\n> after each command.\n>\n>\n> Best regards, xiaoran\n>\n\nHi hackers,\nI would like to give more details of my patch.\n\nIn postgres, it uses a global snapshot “CatalogSnapshot” to check catalog data visibility.\n“CatalogSnapshot” is always updated to the latest version to make the latest catalog table\ncontent visible.\n\nIf there is any updating on catalog tables, to make the changes to be visible for the following \ncommands in the current transaction,  \n “CommandCounterIncrement”-\n >”AtCCI_LocalCache”\n ->”CommandEndInvalidationMessages\n ”->”LocalExecuteInvalidationMessage”\n ->”InvalidateCatalogSnapshot”\n it will invalidate the “CatalogSnapshot” by setting it to NULL.  And next time, when it needs the\n“CatalogSnapsthot” and finds it is NULL, it will regenerate one.\n\nIn a query, “CommandCounterIncrement” may be called many times, and “CatalogSnapsthot” may be\ndestroyed and recreated many times.  
To reduce such overhead, instead of invalidating “CatalogSnapshot”\n, we can keep it and just increase the “curcid” of it.\n\nWhen the transaction is committed or aborted, or there are catalog invalidation messages from other \nbackends, the “CatalogSnapshot” will be invalidated and regenerated. Sometimes, the “CatalogSnapshot” is\nnot the same as the transaction “CurrentSnapshot”, but we can still update the CatalogSnapshot’s\n“curcid”, as the “curcid” only be checked when the tuple is inserted or deleted by the current transaction.\n \nXiaoran Wang <[email protected]> 于2023年12月7日周四 10:13写道:Hi hackers,For local invalidation messages, there is no need to call`InvalidateCatalogSnapshot` to set the CatalogSnapshot to NULL and rebuild it later. Instead, just update the CatalogSnapshot's `curcid` in `SnapshotSetCommandId`, this way can make the CatalogSnapshot workwell too. This optimization can reduce the overhead of rebuilding CatalogSnapshot after each command. Best regards, xiaoran", "msg_date": "Tue, 12 Dec 2023 22:37:54 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH]: Not to invaldiate CatalogSnapshot for local invalidation\n messages" }, { "msg_contents": "Hi\n---setup.\ndrop table s2;\ncreate table s2(a int);\n\nAfter apply the patch\nalter table s2 add primary key (a);\n\nwatch CatalogSnapshot\n----\n#0 GetNonHistoricCatalogSnapshot (relid=1259)\n at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:412\n#1 0x000055ba78f0d6ba in GetCatalogSnapshot (relid=1259)\n at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:371\n#2 0x000055ba785ffbe1 in systable_beginscan\n(heapRelation=0x7f256f30b5a8, indexId=2662, indexOK=false,\n snapshot=0x0, nkeys=1, key=0x7ffe230f0180)\n at ../../Desktop/pg_src/src7/postgresql/src/backend/access/index/genam.c:413\n(More stack frames follow...)\n\n-------------------------\nHardware watchpoint 13: CatalogSnapshot\n\nOld value = (Snapshot) 0x55ba7980b6a0 <CatalogSnapshotData>\nNew value = (Snapshot) 0x0\nInvalidateCatalogSnapshot () at\n../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n435 SnapshotResetXmin();\n(gdb) bt 4\n#0 InvalidateCatalogSnapshot ()\n at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n#1 0x000055ba78f0ee85 in AtEOXact_Snapshot (isCommit=true, resetXmin=false)\n at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:1057\n#2 0x000055ba7868201b in CommitTransaction ()\n at ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:2373\n#3 0x000055ba78683495 in CommitTransactionCommand ()\n at ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:3061\n(More stack frames follow...)\n\n--\nbut the whole process changes pg_class, pg_index,\npg_attribute,pg_constraint etc.\nonly one GetCatalogSnapshot and InvalidateCatalogSnapshot seems not correct?\nwhat if there are concurrency changes in the related pg_catalog table.\n\nyour patch did pass the isolation test!\n\nI think you patch doing is against following code comments in\nsrc/backend/utils/time/snapmgr.c\n\n/*\n * CurrentSnapshot points to the only snapshot taken in transaction-snapshot\n * mode, and to the latest one taken in a read-committed transaction.\n * SecondarySnapshot is a snapshot that's always up-to-date as of the current\n * instant, even in transaction-snapshot mode. It should only be used for\n * special-purpose code (say, RI checking.) 
CatalogSnapshot points to an\n * MVCC snapshot intended to be used for catalog scans; we must invalidate it\n * whenever a system catalog change occurs.\n *\n * These SnapshotData structs are static to simplify memory allocation\n * (see the hack in GetSnapshotData to avoid repeated malloc/free).\n */\nstatic SnapshotData CurrentSnapshotData = {SNAPSHOT_MVCC};\nstatic SnapshotData SecondarySnapshotData = {SNAPSHOT_MVCC};\nSnapshotData CatalogSnapshotData = {SNAPSHOT_MVCC};\nSnapshotData SnapshotSelfData = {SNAPSHOT_SELF};\nSnapshotData SnapshotAnyData = {SNAPSHOT_ANY};\n\n\n", "msg_date": "Mon, 18 Dec 2023 08:19:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH]: Not to invaldiate CatalogSnapshot for local invalidation\n messages" }, { "msg_contents": "Hi,\nThanks for your reply.\n\njian he <[email protected]> 于2023年12月18日周一 08:20写道:\n\n> Hi\n> ---setup.\n> drop table s2;\n> create table s2(a int);\n>\n> After apply the patch\n> alter table s2 add primary key (a);\n>\n> watch CatalogSnapshot\n> ----\n> #0 GetNonHistoricCatalogSnapshot (relid=1259)\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:412\n> #1 0x000055ba78f0d6ba in GetCatalogSnapshot (relid=1259)\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:371\n> #2 0x000055ba785ffbe1 in systable_beginscan\n> (heapRelation=0x7f256f30b5a8, indexId=2662, indexOK=false,\n> snapshot=0x0, nkeys=1, key=0x7ffe230f0180)\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/access/index/genam.c:413\n> (More stack frames follow...)\n>\n> -------------------------\n> Hardware watchpoint 13: CatalogSnapshot\n>\n> Old value = (Snapshot) 0x55ba7980b6a0 <CatalogSnapshotData>\n> New value = (Snapshot) 0x0\n> InvalidateCatalogSnapshot () at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n> 435 SnapshotResetXmin();\n> (gdb) bt 4\n> #0 InvalidateCatalogSnapshot ()\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n> #1 0x000055ba78f0ee85 in AtEOXact_Snapshot (isCommit=true,\n> resetXmin=false)\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:1057\n> #2 0x000055ba7868201b in CommitTransaction ()\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:2373\n> #3 0x000055ba78683495 in CommitTransactionCommand ()\n> at\n> ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:3061\n> (More stack frames follow...)\n>\n> --\n> but the whole process changes pg_class, pg_index,\n> pg_attribute,pg_constraint etc.\n> only one GetCatalogSnapshot and InvalidateCatalogSnapshot seems not\n> correct?\n> what if there are concurrency changes in the related pg_catalog table.\n>\n> your patch did pass the isolation test!\n>\n\nYes, I have run the installcheck-world locally, and all the tests passed.\nThere are two kinds of Invalidation Messages.\nOne kind is from the local backend, such as what you did in the example\n\"alter table s2 add primary key (a);\", it modifies the pg_class,\npg_attribute ect ,\nso it generates some Invalidation Messages to invalidate the \"s2\" related\ntuples in pg_class , pg_attribute ect, and Invalidate Message to invalidate\ns2\nrelation cache. 
When the command is finished, in the\nCommandCounterIncrement,\nthose Invalidation Messages will be processed to make the system cache work\nwell for the following commands.\n\nThe other kind of Invalidation Messages are from other backends.\nSuppose there are two sessions:\nsession1\n---\n1: create table foo(a int);\n---\nsession 2\n---\n1: create table test(a int); (before session1:1)\n2: insert into foo values(1); (execute after session1:1)\n---\nSession1 will generate Invalidation Messages and send them when the\ntransaction is committed,\nand session 2 will accept those Invalidation Messages from session 1 and\nthen execute\nthe second command.\n\nBefore the patch, Postgres will invalidate the CatalogSnapshot for those\ntwo kinds of Invalidation\nMessages. So I did a small optimization in this patch, for local\nInvalidation Messages, we don't\ncall InvalidateCatalogSnapshot, we can use one CatalogSnapshot in a\ntransaction even if we modify\nthe catalog and generate Invalidation Messages, as the visibility of the\ntuple is identified by the curcid,\nas long as we update the curcid of the CatalogSnapshot in\nSnapshotSetCommandId,\nit can work\ncorrectly.\n\n\n\n> I think you patch doing is against following code comments in\n> src/backend/utils/time/snapmgr.c\n>\n> /*\n> * CurrentSnapshot points to the only snapshot taken in\n> transaction-snapshot\n> * mode, and to the latest one taken in a read-committed transaction.\n> * SecondarySnapshot is a snapshot that's always up-to-date as of the\n> current\n> * instant, even in transaction-snapshot mode. It should only be used for\n> * special-purpose code (say, RI checking.) CatalogSnapshot points to an\n> * MVCC snapshot intended to be used for catalog scans; we must invalidate\n> it\n> * whenever a system catalog change occurs.\n> *\n> * These SnapshotData structs are static to simplify memory allocation\n> * (see the hack in GetSnapshotData to avoid repeated malloc/free).\n> */\n> static SnapshotData CurrentSnapshotData = {SNAPSHOT_MVCC};\n> static SnapshotData SecondarySnapshotData = {SNAPSHOT_MVCC};\n> SnapshotData CatalogSnapshotData = {SNAPSHOT_MVCC};\n> SnapshotData SnapshotSelfData = {SNAPSHOT_SELF};\n> SnapshotData SnapshotAnyData = {SNAPSHOT_ANY};\n>\n\nThank you for pointing it out, I think I need to update the comment in the\npatch too.\n\nHi,Thanks for your reply.jian he <[email protected]> 于2023年12月18日周一 08:20写道:Hi\n---setup.\ndrop table s2;\ncreate table s2(a int);\n\nAfter apply the patch\nalter table s2 add primary key (a);\n\nwatch CatalogSnapshot\n----\n#0  GetNonHistoricCatalogSnapshot (relid=1259)\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:412\n#1  0x000055ba78f0d6ba in GetCatalogSnapshot (relid=1259)\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:371\n#2  0x000055ba785ffbe1 in systable_beginscan\n(heapRelation=0x7f256f30b5a8, indexId=2662, indexOK=false,\n    snapshot=0x0, nkeys=1, key=0x7ffe230f0180)\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/access/index/genam.c:413\n(More stack frames follow...)\n\n-------------------------\nHardware watchpoint 13: CatalogSnapshot\n\nOld value = (Snapshot) 0x55ba7980b6a0 <CatalogSnapshotData>\nNew value = (Snapshot) 0x0\nInvalidateCatalogSnapshot () at\n../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n435                     SnapshotResetXmin();\n(gdb) bt 4\n#0  InvalidateCatalogSnapshot ()\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n#1  
0x000055ba78f0ee85 in AtEOXact_Snapshot (isCommit=true, resetXmin=false)\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:1057\n#2  0x000055ba7868201b in CommitTransaction ()\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:2373\n#3  0x000055ba78683495 in CommitTransactionCommand ()\n    at ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:3061\n(More stack frames follow...)\n\n--\nbut the whole process changes pg_class, pg_index,\npg_attribute,pg_constraint etc.\nonly one GetCatalogSnapshot and  InvalidateCatalogSnapshot seems not correct?\nwhat if there are concurrency changes in the related pg_catalog table.\n\nyour patch did pass the isolation test! Yes, I have run the installcheck-world locally, and all the tests passed.There are two kinds of Invalidation Messages. One kind is from the local backend, such as what you did in the example \"alter table s2 add primary key (a);\",  it modifies the pg_class, pg_attribute ect , so it generates some Invalidation Messages to invalidate the \"s2\" relatedtuples in pg_class , pg_attribute ect, and Invalidate Message to invalidate s2 relation cache. When the command is finished, in the CommandCounterIncrement,those Invalidation Messages will be processed to make the system cache workwell for the following commands. The other kind of Invalidation Messages are from other backends. Suppose there are two sessions:session1---1: create table foo(a int);---session 2---1: create table test(a int); (before session1:1)2: insert into foo values(1); (execute after session1:1)---Session1 will generate Invalidation Messages and send them when the transaction is committed,and session 2 will accept those Invalidation Messages from session 1 and then executethe second command.Before the patch, Postgres will invalidate the CatalogSnapshot for those two kinds of InvalidationMessages. So I did a small optimization in this patch, for local Invalidation Messages, we don'tcall InvalidateCatalogSnapshot, we can use one CatalogSnapshot in a transaction even if we modifythe catalog and generate Invalidation Messages, as the visibility of the tuple is identified by the curcid,as long as we update the curcid of the CatalogSnapshot in  SnapshotSetCommandId, it can workcorrectly.\n\nI think you patch doing is against following code comments in\nsrc/backend/utils/time/snapmgr.c\n\n/*\n * CurrentSnapshot points to the only snapshot taken in transaction-snapshot\n * mode, and to the latest one taken in a read-committed transaction.\n * SecondarySnapshot is a snapshot that's always up-to-date as of the current\n * instant, even in transaction-snapshot mode.  It should only be used for\n * special-purpose code (say, RI checking.)  
CatalogSnapshot points to an\n * MVCC snapshot intended to be used for catalog scans; we must invalidate it\n * whenever a system catalog change occurs.\n *\n * These SnapshotData structs are static to simplify memory allocation\n * (see the hack in GetSnapshotData to avoid repeated malloc/free).\n */\nstatic SnapshotData CurrentSnapshotData = {SNAPSHOT_MVCC};\nstatic SnapshotData SecondarySnapshotData = {SNAPSHOT_MVCC};\nSnapshotData CatalogSnapshotData = {SNAPSHOT_MVCC};\nSnapshotData SnapshotSelfData = {SNAPSHOT_SELF};\nSnapshotData SnapshotAnyData = {SNAPSHOT_ANY};Thank you for pointing it out, I think I need to update the comment in the patch too.", "msg_date": "Mon, 18 Dec 2023 15:02:23 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH]: Not to invaldiate CatalogSnapshot for local invalidation\n messages" }, { "msg_contents": "Hi,\nI updated the comment about the CatalogSnapshot `src/backend/utils/time/\nsnapmgr.c`\n\nXiaoran Wang <[email protected]> 于2023年12月18日周一 15:02写道:\n\n> Hi,\n> Thanks for your reply.\n>\n> jian he <[email protected]> 于2023年12月18日周一 08:20写道:\n>\n>> Hi\n>> ---setup.\n>> drop table s2;\n>> create table s2(a int);\n>>\n>> After apply the patch\n>> alter table s2 add primary key (a);\n>>\n>> watch CatalogSnapshot\n>> ----\n>> #0 GetNonHistoricCatalogSnapshot (relid=1259)\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:412\n>> #1 0x000055ba78f0d6ba in GetCatalogSnapshot (relid=1259)\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:371\n>> #2 0x000055ba785ffbe1 in systable_beginscan\n>> (heapRelation=0x7f256f30b5a8, indexId=2662, indexOK=false,\n>> snapshot=0x0, nkeys=1, key=0x7ffe230f0180)\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/access/index/genam.c:413\n>> (More stack frames follow...)\n>>\n>> -------------------------\n>> Hardware watchpoint 13: CatalogSnapshot\n>>\n>> Old value = (Snapshot) 0x55ba7980b6a0 <CatalogSnapshotData>\n>> New value = (Snapshot) 0x0\n>> InvalidateCatalogSnapshot () at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n>> 435 SnapshotResetXmin();\n>> (gdb) bt 4\n>> #0 InvalidateCatalogSnapshot ()\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:435\n>> #1 0x000055ba78f0ee85 in AtEOXact_Snapshot (isCommit=true,\n>> resetXmin=false)\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/utils/time/snapmgr.c:1057\n>> #2 0x000055ba7868201b in CommitTransaction ()\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:2373\n>> #3 0x000055ba78683495 in CommitTransactionCommand ()\n>> at\n>> ../../Desktop/pg_src/src7/postgresql/src/backend/access/transam/xact.c:3061\n>> (More stack frames follow...)\n>>\n>> --\n>> but the whole process changes pg_class, pg_index,\n>> pg_attribute,pg_constraint etc.\n>> only one GetCatalogSnapshot and InvalidateCatalogSnapshot seems not\n>> correct?\n>> what if there are concurrency changes in the related pg_catalog table.\n>>\n>> your patch did pass the isolation test!\n>>\n>\n> Yes, I have run the installcheck-world locally, and all the tests passed.\n> There are two kinds of Invalidation Messages.\n> One kind is from the local backend, such as what you did in the example\n> \"alter table s2 add primary key (a);\", it modifies the pg_class,\n> pg_attribute ect ,\n> so it generates some Invalidation Messages to invalidate the \"s2\" related\n> tuples in pg_class , pg_attribute ect, and 
Invalidate Message to\n> invalidate s2\n> relation cache. When the command is finished, in the\n> CommandCounterIncrement,\n> those Invalidation Messages will be processed to make the system cache work\n> well for the following commands.\n>\n> The other kind of Invalidation Messages are from other backends.\n> Suppose there are two sessions:\n> session1\n> ---\n> 1: create table foo(a int);\n> ---\n> session 2\n> ---\n> 1: create table test(a int); (before session1:1)\n> 2: insert into foo values(1); (execute after session1:1)\n> ---\n> Session1 will generate Invalidation Messages and send them when the\n> transaction is committed,\n> and session 2 will accept those Invalidation Messages from session 1 and\n> then execute\n> the second command.\n>\n> Before the patch, Postgres will invalidate the CatalogSnapshot for those\n> two kinds of Invalidation\n> Messages. So I did a small optimization in this patch, for local\n> Invalidation Messages, we don't\n> call InvalidateCatalogSnapshot, we can use one CatalogSnapshot in a\n> transaction even if we modify\n> the catalog and generate Invalidation Messages, as the visibility of the\n> tuple is identified by the curcid,\n> as long as we update the curcid of the CatalogSnapshot in SnapshotSetCommandId,\n> it can work\n> correctly.\n>\n>\n>\n>> I think you patch doing is against following code comments in\n>> src/backend/utils/time/snapmgr.c\n>>\n>> /*\n>> * CurrentSnapshot points to the only snapshot taken in\n>> transaction-snapshot\n>> * mode, and to the latest one taken in a read-committed transaction.\n>> * SecondarySnapshot is a snapshot that's always up-to-date as of the\n>> current\n>> * instant, even in transaction-snapshot mode. It should only be used for\n>> * special-purpose code (say, RI checking.) CatalogSnapshot points to an\n>> * MVCC snapshot intended to be used for catalog scans; we must\n>> invalidate it\n>> * whenever a system catalog change occurs.\n>> *\n>> * These SnapshotData structs are static to simplify memory allocation\n>> * (see the hack in GetSnapshotData to avoid repeated malloc/free).\n>> */\n>> static SnapshotData CurrentSnapshotData = {SNAPSHOT_MVCC};\n>> static SnapshotData SecondarySnapshotData = {SNAPSHOT_MVCC};\n>> SnapshotData CatalogSnapshotData = {SNAPSHOT_MVCC};\n>> SnapshotData SnapshotSelfData = {SNAPSHOT_SELF};\n>> SnapshotData SnapshotAnyData = {SNAPSHOT_ANY};\n>>\n>\n> Thank you for pointing it out, I think I need to update the comment in the\n> patch too.\n>", "msg_date": "Fri, 22 Dec 2023 15:34:00 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH]: Not to invaldiate CatalogSnapshot for local invalidation\n messages" } ]
[ { "msg_contents": "Hi all,\n\nPostgreSQL currently maintains several data structures in the SLRU cache. The current SLRU pages do not have any header, so it is impossible to checksum a page and verify its integrity. It is very difficult to debug issues caused by corrupted SLRU pages. Also, without a page header, page LSN is tracked in an ad-hoc fashion using LSN groups, which requires additional data structure in the shared memory. At eBay, we are building on the patch shared by Rishu Bagga in [1], which adds the standard PageHeaderData to each SLRU page. We believe that adding the standard page header to each SLRU page is the correct approach for the long run. It adds a checksum to each SLRU page, tracks page LSN as if it is a standard page and eases future page enhancements.\n\nThe enclosed patch changes the address calculation logic for all 7 SLRUs in the following 6 files:\nsrc/backend/access/transam/clog.c\nsrc/backend/access/transam/commit_ts.c\nsrc/backend/access/transam/multixact.c\nsrc/backend/access/transam/subtrans.c\nsrc/backend/commands/async.c\nsrc/backend/storage/lmgr/predicate.c\n\nThe patch enables page checksum with changes to the following 2 files:\nsrc/backend/access/transam/slru.c\nsrc/bin/pg_checksums/pg_checksums.c\n\nThe patch removes the group LSNs defined for each SLRU cache. See changes to:\nsrc/include/access/slru.h\n\nThe patch adds a few helper macros in the following files:\nsrc/backend/storage/page/bufpage.c\nsrc/include/storage/bufpage.h\n\nThe patch updates some test cases:\nsrc/bin/pg_resetwal/t/001_basic.pl\nsrc/test/modules/test_slru/test_slru.c\n\nI am still working on patching the pg_upgrade. Just love to hear your thoughts on the idea and the current patch.\n\n\nDiscussed with: Anton Shyrabokau and Shawn Debnath\n\n[1] https://www.postgresql.org/message-id/flat/EFAAC0BE-27E9-4186-B925-79B7C696D5AC%40amazon.com\n\n\nRegards,\nYong", "msg_date": "Thu, 7 Dec 2023 07:06:50 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal to add page headers to SLRU pages" }, { "msg_contents": "Hi Yong!\n\n+1 to the idea to protect SLRUs from corruption. I'm slightly leaning towards the idea of separating checksums from data pages, but anyway this checksums are better than no checksums.\n\n> On 7 Dec 2023, at 10:06, Li, Yong <[email protected]> wrote:\n> \n> I am still working on patching the pg_upgrade. Just love to hear your thoughts on the idea and the current patch.\n\nFWIW you can take upgrade code from this patch [0] doing all the same stuff :)\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n\nHi Yong!+1 to the idea to protect SLRUs from corruption. I'm slightly leaning towards the idea of separating checksums from data pages, but anyway this checksums are better than no checksums.On 7 Dec 2023, at 10:06, Li, Yong <[email protected]> wrote:I am still working on patching the pg_upgrade.  Just love to hear your thoughts on the idea and the current patch.FWIW you can take upgrade code from this patch [0] doing all the same stuff :)Best regards, Andrey Borodin.[0] https://www.postgresql.org/message-id/[email protected]", "msg_date": "Thu, 7 Dec 2023 13:19:12 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Hi,\n\n> +1 to the idea to protect SLRUs from corruption. 
I'm slightly leaning towards the idea of separating checksums from data pages, but anyway this checksums are better than no checksums.\n>\n> On 7 Dec 2023, at 10:06, Li, Yong <[email protected]> wrote:\n>\n> I am still working on patching the pg_upgrade. Just love to hear your thoughts on the idea and the current patch.\n>\n> FWIW you can take upgrade code from this patch [0] doing all the same stuff :)\n\nSounds like a half-measure to me. If we really want to go down this\nrabbit hole IMO SLRU should be moved to shared buffers as proposed\nelsewhere [1].\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 7 Dec 2023 17:16:59 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "07.12.2023, 19:17, \"Aleksander Alekseev\" <[email protected]>:Hi, +1 to the idea to protect SLRUs from corruption. I'm slightly leaning towards the idea of separating checksums from data pages, but anyway this checksums are better than no checksums. On 7 Dec 2023, at 10:06, Li, Yong <[email protected]> wrote: I am still working on patching the pg_upgrade. Just love to hear your thoughts on the idea and the current patch. FWIW you can take upgrade code from this patch [0] doing all the same stuff :)Sounds like a half-measure to me. If we really want to go down thisrabbit hole IMO SLRU should be moved to shared buffers as proposedelsewhere [1].Thread that I cited stopped in 2018 for this exact reason. 5 years ago. Is this argument still valid?Meanwhile checksums of buffer pages also reside on a page :)Best regards, Andrey Borodin.", "msg_date": "Thu, 07 Dec 2023 22:32:20 +0500", "msg_from": "Andrey Borodin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": ">> Sounds like a half-measure to me. If we really want to go down this\r\n>> rabbit hole IMO SLRU should be moved to shared buffers as proposed\r\n>> elsewhere [1].\r\n\r\n> Thread that I cited stopped in 2018 for this exact reason. 5 years ago. Is this argument still valid? \r\nMeanwhile checksums of buffer pages also reside on a page :)\r\n\r\nI would love to have seen more progress on the set of threads that proposed\r\nthe page header and integration of SLRU into buffer cache. The changes were\r\nlarge, and unfortunately as a result, it didn't get the detailed review\r\nthat it needed. The complex nature of the feature allowed for more branches\r\nto be split from the main thread with alternative approaches. Athough this is\r\ngreat to see, it did result in the set of core requirements around LSN and\r\nchecksum tracking via page headers to not get into PG 16.\r\n\r\nWhat is being proposed now is the simple and core functionality of introducing\r\npage headers to SLRU pages while continuing to be in the SLRU cache. 
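To give a feel for how contained that change is: the per-SLRU address
calculation moves from indexing the raw block to indexing past a standard
page header. Using clog as an example (an illustrative sketch only, not
lifted from the actual patch, and CLOG_USABLE_BYTES_PER_PAGE is a made-up
name here):

/* today, the whole BLCKSZ block is payload */
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)
byteptr = XactCtl->shared->page_buffer[slotno] + byteno;

/* with a standard page header, the payload starts after the header */
#define CLOG_USABLE_BYTES_PER_PAGE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))
#define CLOG_XACTS_PER_PAGE (CLOG_USABLE_BYTES_PER_PAGE * CLOG_XACTS_PER_BYTE)
byteptr = PageGetContents(XactCtl->shared->page_buffer[slotno]) + byteno;

with the checksum and page LSN in that header then being maintained in
slru.c's read/write paths, as the proposal describes.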
This\r\nallows the whole project to be iterative and reviewers to better reason about\r\nthe smaller set of changes being introduced into the codebase.\r\n\r\nOnce the set of on-disk changes are in, we can follow up on optimizations.\r\nIt may be moving to buffer cache or reviewing Dilip's approach in [1], we\r\nwill have the option to be flexible in our approach.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com\r\n\r\nShawn\r\n\r\n", "msg_date": "Thu, 7 Dec 2023 18:27:44 +0000", "msg_from": "\"Debnath, Shawn\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Thu, Dec 7, 2023 at 1:28 PM Debnath, Shawn <[email protected]> wrote:\n> What is being proposed now is the simple and core functionality of introducing\n> page headers to SLRU pages while continuing to be in the SLRU cache. This\n> allows the whole project to be iterative and reviewers to better reason about\n> the smaller set of changes being introduced into the codebase.\n>\n> Once the set of on-disk changes are in, we can follow up on optimizations.\n> It may be moving to buffer cache or reviewing Dilip's approach in [1], we\n> will have the option to be flexible in our approach.\n\nI basically agree with this. I don't think we should let the perfect\nbe the enemy of the good. Shooting down this patch because it doesn't\ndo everything that we want is a recipe for getting nothing done at\nall.\n\nThat said, I don't think that the original post on this thread\nprovides a sufficiently clear and detailed motivation for making this\nchange. For this to eventually be committed, it's going to need (among\nother things) a commit message that articulates a convincing rationale\nfor whatever changes it makes. Here's what the original email said:\n\n> It adds a checksum to each SLRU page, tracks page LSN as if it is a standard page and eases future page enhancements.\n\nOf those three things, in my opinion, the first is good and the other\ntwo are too vague. I assume that most people who would be likely to\nread a commit message would understand the value of pages having\nchecksums. But I can't immediately think of what the value of tracking\nthe page LSN as if it were a standard page might be, so that probably\nneeds more explanation. Similarly, at least one or two of the future\npage enhancements that might be eased should be spelled out, and/or\nthe ways in which they would be made easier should be articulated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 14:51:06 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Given so many different approaches were discussed, I have started a wiki to record and collaborate all efforts towards SLRU improvements. The wiki provides a concise overview of all the ideas discussed and can serve as a portal for all historical discussions. Currently, the wiki summarizes four recent threads ranging from identifier format change to page header change, to moving SLRU into the main buffer pool, to reduce lock contention on SLRU latches. 
We can keep the patch related discussions in this thread and use the wiki as a live document for larger scale collaborations.\r\n\r\nThe wiki page is here: https://wiki.postgresql.org/wiki/SLRU_improvements\r\n\r\nRegarding the benefits of this patch, here is a detailed explanation:\r\n\r\n 1. Checksum is added to each page, allowing us to verify if a page has been corrupted when read from the disk.\r\n 2. The ad-hoc LSN group structure is removed from the SLRU cache control data and is replaced by the page LSN in the page header. This allows us to use the same WAL protocol as used by pages in the main buffer pool: flush all redo logs up to the page LSN before flushing the page itself. If we move SLRU caches into the main buffer pool, this change fits naturally.\r\n 3. It leaves further optimizations open. We can continue to pursue the goal of moving SLRU into the main buffer pool, or we can follow the lock partition idea. This change by itself does not conflict with either proposal.\r\n\r\nAlso, the patch is now complete and is ready for review. All check-world tests including tap tests passed with this patch.\r\n\r\n\r\nRegards,\r\nYong\r\n\r\nFrom: Robert Haas <[email protected]>\r\nDate: Friday, December 8, 2023 at 03:51\r\nTo: Debnath, Shawn <[email protected]>\r\nCc: Andrey Borodin <[email protected]>, PostgreSQL Hackers <[email protected]>, Aleksander Alekseev <[email protected]>, Li, Yong <[email protected]>, Shyrabokau, Anton <[email protected]>, Bagga, Rishu <[email protected]>\r\nSubject: Re: Proposal to add page headers to SLRU pages\r\nExternal Email\r\n\r\nOn Thu, Dec 7, 2023 at 1:28 PM Debnath, Shawn <[email protected]> wrote:\r\n> What is being proposed now is the simple and core functionality of introducing\r\n> page headers to SLRU pages while continuing to be in the SLRU cache. This\r\n> allows the whole project to be iterative and reviewers to better reason about\r\n> the smaller set of changes being introduced into the codebase.\r\n>\r\n> Once the set of on-disk changes are in, we can follow up on optimizations.\r\n> It may be moving to buffer cache or reviewing Dilip's approach in [1], we\r\n> will have the option to be flexible in our approach.\r\n\r\nI basically agree with this. I don't think we should let the perfect\r\nbe the enemy of the good. Shooting down this patch because it doesn't\r\ndo everything that we want is a recipe for getting nothing done at\r\nall.\r\n\r\nThat said, I don't think that the original post on this thread\r\nprovides a sufficiently clear and detailed motivation for making this\r\nchange. For this to eventually be committed, it's going to need (among\r\nother things) a commit message that articulates a convincing rationale\r\nfor whatever changes it makes. Here's what the original email said:\r\n\r\n> It adds a checksum to each SLRU page, tracks page LSN as if it is a standard page and eases future page enhancements.\r\n\r\nOf those three things, in my opinion, the first is good and the other\r\ntwo are too vague. I assume that most people who would be likely to\r\nread a commit message would understand the value of pages having\r\nchecksums. But I can't immediately think of what the value of tracking\r\nthe page LSN as if it were a standard page might be, so that probably\r\nneeds more explanation. 
Similarly, at least one or two of the future\r\npage enhancements that might be eased should be spelled out, and/or\r\nthe ways in which they would be made easier should be articulated.\r\n\r\n--\r\nRobert Haas\r\nEDB: https://nam10.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.enterprisedb.com%2F&data=05%7C01%7Cyoli%40ebay.com%7C2cad2fe1de8a40f3167608dbf75de73c%7C46326bff992841a0baca17c16c94ea99%7C0%7C0%7C638375754901646398%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=hkccuGfpt1%2BKxuhk%2BJt%2F3HyYuJqQHYfizib76%2F9HtUU%3D&reserved=0<http://www.enterprisedb.com/>", "msg_date": "Fri, 8 Dec 2023 09:35:17 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Thu, Dec 8, 2023 at 1:36 AM Li, Yong <[email protected]> wrote:\r\n\r\n>Given so many different approaches were discussed, I have started a \r\n>wiki to record and collaborate all efforts towards SLRU \r\n>improvements. The wiki provides a concise overview of all the ideas \r\n>discussed and can serve as a portal for all historical \r\n>discussions. Currently, the wiki summarizes four recent threads \r\n>ranging from identifier format change to page header change, to moving \r\n>SLRU into the main buffer pool, to reduce lock contention on SLRU \r\n>latches. We can keep the patch related discussions in this thread and \r\n>use the wiki as a live document for larger scale collaborations.\r\n\r\n>The wiki page is \r\n>here: https://wiki.postgresql.org/wiki/SLRU_improvements\r\n\r\n>Regarding the benefits of this patch, here is a detailed explanation:\r\n\r\n1.\tChecksum is added to each page, allowing us to verify if a page has\r\n\tbeen corrupted when read from the disk.\r\n2.\tThe ad-hoc LSN group structure is removed from the SLRU cache \r\n\tcontrol data and is replaced by the page LSN in the page header. \r\n\tThis allows us to use the same WAL protocol as used by pages in the \r\n\tmain buffer pool: flush all redo logs up to the page LSN before \r\n\tflushing the page itself. If we move SLRU caches into the main \r\n\tbuffer pool, this change fits naturally.\r\n3.\tIt leaves further optimizations open. We can continue to pursue the \r\n\tgoal of moving SLRU into the main buffer pool, or we can follow the \r\n\tlock partition idea. This change by itself does not conflict with \r\n\teither proposal.\r\n\r\n>Also, the patch is now complete and is ready for review. All check-\r\n>world tests including tap tests passed with this patch. 
\r\n\r\n\r\n\r\n\r\nHi Yong, \r\n\r\nI agree we should break the effort for the SLRU optimization into \r\nsmaller chunks after having worked on some of the bigger patches and \r\nfacing difficulty in making progress that way.\r\n\r\nThe patch looks mostly good to me; though one thing that I thought about \r\ndifferently with the upgrade portion is where we should keep the logic \r\nof re-writing the CLOG files.\r\n\r\nThere is a precedent introduced back in Postgres v9.6 in making on disk \r\npage format changes across different in visibility map: [1]\r\n\r\ncode comment: \r\n * In versions of PostgreSQL prior to catversion 201603011, PostgreSQL's\r\n * visibility map included one bit per heap page; it now includes two.\r\n * When upgrading a cluster from before that time to a current PostgreSQL\r\n * version, we could refuse to copy visibility maps from the old cluster\r\n * to the new cluster; the next VACUUM would recreate them, but at the\r\n * price of scanning the entire table. So, instead, we rewrite the old\r\n * visibility maps in the new format. \r\n\r\n\r\n\r\nThis work is being done in file.c – it seems to me the proper way to \r\nproceed would be to continue writing on-disk upgrade logic here.\r\n\r\n\r\nBesides that this looks good to me, would like to hear what others have to say.\r\n\r\n\r\nThanks, \r\n\r\nRishu Bagga \r\n\r\nAmazon Web Services (AWS)\r\n\r\n[1] https://github.com/postgres/postgres/commit/7087166a88fe0c04fc6636d0d6d6bea1737fc1fb\r\n\r\n", "msg_date": "Tue, 19 Dec 2023 02:23:24 +0000", "msg_from": "\"Bagga, Rishu\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "> This work is being done in file.c – it seems to me the proper way to\r\n> proceed would be to continue writing on-disk upgrade logic here.\r\n\r\n> Besides that this looks good to me, would like to hear what others have to say.\r\n\r\nThank you, Rishu for taking time to review the code. I've updated the patch\r\nand moved the on-disk upgrade logic to pg_upgrade/file.c.\r\n\r\nI have also added this thread to the current Commitfest and hope this patch\r\nwill be part of the 17 release.\r\n\r\nThe commitfest link:\r\nhttps://commitfest.postgresql.org/46/4709/\r\n\r\n\r\nRegards,\r\nYong,", "msg_date": "Tue, 19 Dec 2023 07:28:19 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Hi,\n\n> I have also added this thread to the current Commitfest and hope this patch\n> will be part of the 17 release.\n>\n> The commitfest link:\n> https://commitfest.postgresql.org/46/4709/\n\nThanks for the updated patch.\n\ncfbot seems to have some complaints regarding compiler warnings and\nalso building the patch on Windows:\n\nhttp://cfbot.cputube.org/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 2 Jan 2024 14:35:31 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "> On Jan 2, 2024, at 19:35, Aleksander Alekseev <[email protected]> wrote:\n>\n> Thanks for the updated patch.\n>\n> cfbot seems to have some complaints regarding compiler warnings and\n> also building the patch on Windows:\n>\n> http://cfbot.cputube.org/\n\nThanks for the information. 
Here is the updated patch.\n\nRegards,\nYong", "msg_date": "Thu, 4 Jan 2024 15:57:33 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Rebase the patch against the latest HEAD.\n\nRegards,\nYong", "msg_date": "Tue, 16 Jan 2024 09:12:16 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Rebase the patch against the latest HEAD.\n\nRegards,\nYong", "msg_date": "Wed, 6 Mar 2024 12:01:46 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Hello,\n\nI suppose this is important to do if we ever want to move SLRUs into\nshared buffers. However, I wonder about the extra time this adds to\npg_upgrade. Is this something we should be concerned about? Is there\nany measurement/estimates to tell us how long this would be? Right now,\nif you use a cloning strategy for the data files, the upgrade should be\npretty quick ... but the amount of data in pg_xact and pg_multixact\ncould be massive, and the rewrite is likely to take considerable time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n", "msg_date": "Wed, 6 Mar 2024 14:11:44 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera ([email protected]) wrote:\n> I suppose this is important to do if we ever want to move SLRUs into\n> shared buffers. However, I wonder about the extra time this adds to\n> pg_upgrade. Is this something we should be concerned about? Is there\n> any measurement/estimates to tell us how long this would be? Right now,\n> if you use a cloning strategy for the data files, the upgrade should be\n> pretty quick ... but the amount of data in pg_xact and pg_multixact\n> could be massive, and the rewrite is likely to take considerable time.\n\nWhile I definitely agree that there should be some consideration of\nthis concern, it feels on-par with the visibility-map rewrite which was\ndone previously. Larger systems will likely have more to deal with than\nsmaller systems, but it's still a relatively small portion of the data\noverall.\n\nThe benefit of this change, beyond just the possibility of moving them\ninto shared buffers some day in the future, is that this would mean that\nSLRUs will have checksums (if the cluster has them enabled). That\nbenefit strikes me as well worth the cost of the rewrite taking some\ntime and the minor loss of space due to the page header.\n\nWould it be useful to consider parallelizing this work? There's already\nparts of pg_upgrade which can be parallelized and so this isn't,\nhopefully, a big lift to add, but I'm not sure if there's enough work\nbeing done here CPU-wise, compared to the amount of IO being done, to\nhave it make sense to run it in parallel. 
Might be worth looking into\nthough, at least, as disks have gotten to be quite fast.\n\nThanks!\n\nStephen", "msg_date": "Wed, 6 Mar 2024 14:09:59 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "\r\n> On Mar 7, 2024, at 03:09, Stephen Frost <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> From: Stephen Frost <[email protected]>\r\n> Subject: Re: Proposal to add page headers to SLRU pages\r\n> Date: March 7, 2024 at 03:09:59 GMT+8\r\n> To: Alvaro Herrera <[email protected]>\r\n> Cc: \"Li, Yong\" <[email protected]>, Aleksander Alekseev <[email protected]>, PostgreSQL Hackers <[email protected]>, \"Bagga, Rishu\" <[email protected]>, Robert Haas <[email protected]>, \"Debnath, Shawn\" <[email protected]>, Andrey Borodin <[email protected]>, \"Shyrabokau, Anton\" <[email protected]>\r\n> \r\n> \r\n> Greetings,\r\n> \r\n> * Alvaro Herrera ([email protected]) wrote:\r\n>> I suppose this is important to do if we ever want to move SLRUs into\r\n>> shared buffers. However, I wonder about the extra time this adds to\r\n>> pg_upgrade. Is this something we should be concerned about? Is there\r\n>> any measurement/estimates to tell us how long this would be? Right now,\r\n>> if you use a cloning strategy for the data files, the upgrade should be\r\n>> pretty quick ... but the amount of data in pg_xact and pg_multixact\r\n>> could be massive, and the rewrite is likely to take considerable time.\r\n> \r\n> While I definitely agree that there should be some consideration of\r\n> this concern, it feels on-par with the visibility-map rewrite which was\r\n> done previously. Larger systems will likely have more to deal with than\r\n> smaller systems, but it's still a relatively small portion of the data\r\n> overall.\r\n> \r\n> The benefit of this change, beyond just the possibility of moving them\r\n> into shared buffers some day in the future, is that this would mean that\r\n> SLRUs will have checksums (if the cluster has them enabled). That\r\n> benefit strikes me as well worth the cost of the rewrite taking some\r\n> time and the minor loss of space due to the page header.\r\n> \r\n> Would it be useful to consider parallelizing this work? There's already\r\n> parts of pg_upgrade which can be parallelized and so this isn't,\r\n> hopefully, a big lift to add, but I'm not sure if there's enough work\r\n> being done here CPU-wise, compared to the amount of IO being done, to\r\n> have it make sense to run it in parallel. Might be worth looking into\r\n> though, at least, as disks have gotten to be quite fast.\r\n> \r\n> Thanks!\r\n> \r\n> Stephen\r\n> \r\n\r\n\r\nThank Alvaro and Stephen for your thoughtful comments.\r\n\r\nI did a quick benchmark regarding pg_upgrade time, and here are the results.\r\n\r\nHardware spec:\r\nMacBook Pro M1 Max - 10 cores, 64GB memory, 1TB Apple SSD\r\n\r\nOperating system:\r\nmacOS 14.3.1\r\n\r\nComplier:\r\nApple clang 15.0.0\r\n\r\nCompiler optimization level: -O2\r\n\r\n====\r\nPG setups:\r\nOld cluster: PG 16.2 release (source build)\r\nNew cluster: PG Git HEAD plus the patch (source build)\r\n\r\n====\r\nBenchmark steps:\r\n\r\n1. Initdb for PG 16.2.\r\n2. Initdb for PG HEAD.\r\n3. Run pg_upgrade on the above empty database, and time the overall wall clock time.\r\n4. In the old cluster, write 512MB all-zero dummy segment files (2048 segments) under pg_xact.\r\n5. 
In the old cluster, write 512MB all-zero dummy segment files under pg_multixact/members.\r\n6. In the old cluster, write 512MB all-zero dummy segment files under pg_multixact/offsets.\r\n7. Purge the OS page cache.\r\n7. Run pg_upgrade again, and time the overall wall clock time.\r\n\r\n====\r\nTest result:\r\n\r\nOn the empty database, pg_upgrade took 4.8 seconds to complete.\r\n\r\nWith 1.5GB combined SLRU data to convert, pg_upgrade took 11.5 seconds to complete.\r\n\r\nIt took 6.7 seconds to convert 1.5GB SLRU files for pg_upgrade.\r\n\r\n====\r\n\r\nFor clog, 2048 segments can host about 2 billion transactions, right at the limit for wraparound.\r\nThat’s the maximum we can have. 2048 segments are also big for pg_multixact SLRUs.\r\n\r\nTherefore, on a modern hardware, in the worst case, pg_upgrade will run for 7 seconds longer.\r\n\r\n\r\nRegards,\r\n\r\nYong", "msg_date": "Fri, 8 Mar 2024 07:58:09 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Greetings,\n\n* Li, Yong ([email protected]) wrote:\n> > On Mar 7, 2024, at 03:09, Stephen Frost <[email protected]> wrote:\n> > * Alvaro Herrera ([email protected]) wrote:\n> >> I suppose this is important to do if we ever want to move SLRUs into\n> >> shared buffers. However, I wonder about the extra time this adds to\n> >> pg_upgrade. Is this something we should be concerned about? Is there\n> >> any measurement/estimates to tell us how long this would be? Right now,\n> >> if you use a cloning strategy for the data files, the upgrade should be\n> >> pretty quick ... but the amount of data in pg_xact and pg_multixact\n> >> could be massive, and the rewrite is likely to take considerable time.\n> > \n> > While I definitely agree that there should be some consideration of\n> > this concern, it feels on-par with the visibility-map rewrite which was\n> > done previously. Larger systems will likely have more to deal with than\n> > smaller systems, but it's still a relatively small portion of the data\n> > overall.\n> > \n> > The benefit of this change, beyond just the possibility of moving them\n> > into shared buffers some day in the future, is that this would mean that\n> > SLRUs will have checksums (if the cluster has them enabled). That\n> > benefit strikes me as well worth the cost of the rewrite taking some\n> > time and the minor loss of space due to the page header.\n> > \n> > Would it be useful to consider parallelizing this work? There's already\n> > parts of pg_upgrade which can be parallelized and so this isn't,\n> > hopefully, a big lift to add, but I'm not sure if there's enough work\n> > being done here CPU-wise, compared to the amount of IO being done, to\n> > have it make sense to run it in parallel. Might be worth looking into\n> > though, at least, as disks have gotten to be quite fast.\n> \n> Thank Alvaro and Stephen for your thoughtful comments.\n> \n> I did a quick benchmark regarding pg_upgrade time, and here are the results.\n\n> For clog, 2048 segments can host about 2 billion transactions, right at the limit for wraparound.\n> That’s the maximum we can have. 2048 segments are also big for pg_multixact SLRUs.\n> \n> Therefore, on a modern hardware, in the worst case, pg_upgrade will run for 7 seconds longer.\n\nThanks for testing! 
That strikes me as perfectly reasonable and seems\nunlikely that we'd get much benefit from parallelizing it, so I'd say it\nmakes sense to keep this code simple.\n\nThanks again!\n\nStephen", "msg_date": "Fri, 8 Mar 2024 05:17:56 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On 2024-Mar-08, Stephen Frost wrote:\n\n> Thanks for testing! That strikes me as perfectly reasonable and seems\n> unlikely that we'd get much benefit from parallelizing it, so I'd say it\n> makes sense to keep this code simple.\n\nOkay, agreed, that amount of time sounds reasonable to me too; but I\ndon't want to be responsible for this at least for pg17. If some other\ncommitter wants to take it, be my guest. However, I think this is\nmostly a foundation for building other things on top, so committing\nduring the last commitfest is perhaps not very useful.\n\n\nAnother aspect of this patch is the removal of the LSN groups. There's\nan explanation of the LSN groups in src/backend/access/transam/README,\nand while this patch removes the LSN group feature, it doesn't update\nthat text. That's already a problem which needs fixed, but the text\nsays\n\n: In fact, we store more than one LSN for each clog page. This relates to\n: the way we set transaction status hint bits during visibility tests.\n: We must not set a transaction-committed hint bit on a relation page and\n: have that record make it to disk prior to the WAL record of the commit.\n: Since visibility tests are normally made while holding buffer share locks,\n: we do not have the option of changing the page's LSN to guarantee WAL\n: synchronization. Instead, we defer the setting of the hint bit if we have\n: not yet flushed WAL as far as the LSN associated with the transaction.\n: This requires tracking the LSN of each unflushed async commit.\n: It is convenient to associate this data with clog buffers: because we\n: will flush WAL before writing a clog page, we know that we do not need\n: to remember a transaction's LSN longer than the clog page holding its\n: commit status remains in memory. However, the naive approach of storing\n: an LSN for each clog position is unattractive: the LSNs are 32x bigger\n: than the two-bit commit status fields, and so we'd need 256K of\n: additional shared memory for each 8K clog buffer page. We choose\n: instead to store a smaller number of LSNs per page, where each LSN is\n: the highest LSN associated with any transaction commit in a contiguous\n: range of transaction IDs on that page. This saves storage at the price\n: of some possibly-unnecessary delay in setting transaction hint bits.\n\nIn the new code we effectively store only one LSN per page, which I\nunderstand is strictly worse. Maybe the idea of doing away with these\nLSN groups should be reconsidered ... 
unless I completely misunderstand\nthe whole thing.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 8 Mar 2024 13:58:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Fri, 2024-03-08 at 13:58 +0100, Alvaro Herrera wrote:\n> In the new code we effectively store only one LSN per page, which I\n> understand is strictly worse.\n\nTo quote from the README:\n\n\"Instead, we defer the setting of the hint bit if we have not yet\nflushed WAL as far as the LSN associated with the transaction. This\nrequires tracking the LSN of each unflushed async commit.\"\n\nIn other words, the problem the group_lsns are solving is that we can't\nset the hint bit on a tuple until the commit record for that\ntransaction has actually been flushed. For ordinary sync commit, that's\nfine, because the CLOG bit isn't set until after the commit record is\nflushed. But for async commit, the CLOG may be updated before the WAL\nis flushed, and group_lsns are one way to track the right information\nto hold off updating the hint bits.\n\n\"It is convenient to associate this data with clog buffers: because we\nwill flush WAL before writing a clog page, we know that we do not need\nto remember a transaction's LSN longer than the clog page holding its\ncommit status remains in memory.\"\n\nIt's not clear to me that it is so convenient, if it's preventing the\nSLRU from fitting in with the rest of the system architecture.\n\n\"The worst case is where a sync-commit xact shares a cached LSN with an\nasync-commit xact that commits a bit later; even though we paid to sync\nthe first xact to disk, we won't be able to hint its outputs until the\nsecond xact is sync'd, up to three walwriter cycles later.\"\n\nPerhaps we can revisit alternatives to the group_lsn? If we accept\nYong's proposal, and the SLRU has a normal LSN and was used in the\nnormal way, we would just need some kind of secondary structure to hold\na mapping from XID->LSN only for async transactions.\n\nThe characteristics we are looking for in this secondary structure are:\n\n 1. cheap to test if it's empty, so it doesn't interfere with a purely\nsync workload at all\n 2. expire old entries (where the LSN has already been flushed)\ncheaply enough so the data structure doesn't bloat\n 3. look up an LSN given an XID cheaply enough that it doesn't\ninterfere with setting hint bits\n\nMaking a better secondary structure seems doable to me. Just to\nbrainstorm:\n\n * Have an open-addressed hash table mapping async XIDs to their\ncommit LSN. If you have a hash collision, opportunistically see if the\nentry is old and can be removed. Try K probes, and if they are all\nrecent, then you need to XLogFlush. The table could get pretty big,\nbecause it needs to hold enough async transactions for a wal writer\ncycle or two, but it seems reasonable to make async workloads pay that\nmemory cost.\n\n * Another idea, if the size of the structure is a problem, is to\ngroup K async xids into a bloom filter that points at a single LSN.\nWhen more transactions come along, create another bloom filter for the\nnext K async xids. 
This could interfere with setting hint bits for sync\nxids if the bloom filters are undersized, but that doesn't seem like a\nbig problem.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 08 Mar 2024 12:39:11 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Fri, 2024-03-08 at 12:39 -0800, Jeff Davis wrote:\n> Making a better secondary structure seems doable to me. Just to\n> brainstorm:\n\nWe can also keep the group_lsns, and then just update both the CLOG\npage LSN and the group_lsn when setting the transaction status. The\nformer would be used for all of the normal WAL-related stuff, and the\nlatter would be used by TransactionIdGetStatus() to return the more\nprecise LSN for that group.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 08 Mar 2024 13:02:53 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Wed, 2024-03-06 at 12:01 +0000, Li, Yong wrote:\n> Rebase the patch against the latest HEAD.\n\nThe upgrade logic could use more comments explaining what's going on\nand why. As I understand it, it's a one-time conversion that needs to\nhappen between 16 and 17. Is that right?\n\nWas the way CLOG is upgraded already decided in some earlier\ndiscussion?\n\nGiven that the CLOG is append-only and gets truncated occasionally, I\nwonder whether we can just have some marker that xids before some\nnumber are the old CLOG, and xids beyond that number are in the new\nCLOG. I'm not necessarily suggesting that; just an idea.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 08 Mar 2024 13:22:30 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "> On Mar 9, 2024, at 05:22, Jeff Davis <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> On Wed, 2024-03-06 at 12:01 +0000, Li, Yong wrote:\r\n>> Rebase the patch against the latest HEAD.\r\n> \r\n> The upgrade logic could use more comments explaining what's going on\r\n> and why. As I understand it, it's a one-time conversion that needs to\r\n> happen between 16 and 17. Is that right?\r\n> \r\n> Regards,\r\n> Jeff Davis\r\n> \r\n\r\n> In the new code we effectively store only one LSN per page, which I\r\n> understand is strictly worse. Maybe the idea of doing away with these\r\n> LSN groups should be reconsidered ... 
unless I completely misunderstand\r\n> the whole thing.\r\n> \r\n> --\r\n> Álvaro Herrera PostgreSQL Developer —\r\n\r\n\r\nThanks for the comments on LSN groups and pg_upgrade.\r\n\r\nI have updated the patch to address both comments:\r\n- The clog LSN group has been brought back.\r\n Now the page LSN on each clog page is used for honoring the write-ahead rule\r\n and it is always the highest LSN of all the LSN groups on the page.\r\n The LSN groups are used by TransactionIdGetStatus() as before.\r\n- New comments have been added to pg_upgrade to mention the SLRU\r\n page header change as the reason for upgrading clog files.\r\n\r\nRegards,\r\nYong", "msg_date": "Mon, 11 Mar 2024 10:01:42 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Mon, 2024-03-11 at 10:01 +0000, Li, Yong wrote:\n> - The clog LSN group has been brought back.\n>   Now the page LSN on each clog page is used for honoring the write-\n> ahead rule\n>   and it is always the highest LSN of all the LSN groups on the page.\n>   The LSN groups are used by TransactionIdGetStatus() as before.\n\nI like where this is going.\n\nÁlvaro, do you still see a problem with this approach?\n\n> - New comments have been added to pg_upgrade to mention the SLRU\n>   page header change as the reason for upgrading clog files.\n\nThat seems reasonable, but were any alternatives discussed? Do we have\nconsensus that this is the right thing to do?\n\nAnd if we use this approach, is there extra validation or testing that\ncan be done?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:27:24 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "\r\n>> - New comments have been added to pg_upgrade to mention the SLRU\r\n>> page header change as the reason for upgrading clog files.\r\n> \r\n> That seems reasonable, but were any alternatives discussed? Do we have\r\n> consensus that this is the right thing to do?\r\n\r\nIn general, there are two approaches. Either we convert the existing clog files,\r\nor we don’t. The patch chooses to convert.\r\n\r\nIf we don’t, then the clog file code must be able to handle both formats. For,\r\nXIDs in the range where the clog is written in the old format, segment and offset\r\ncomputation must be done in one way, and for XIDs in a different range, it must\r\nbe computed in a different way. To avoid changing the format in the middle of a\r\npage, which must not happen, the new format must start from a clean page, \r\npossibly in a clean new segment. If the database is extremely small and has only\r\na few transactions on the first page of clog, then we must either convert the whole\r\npage (effectively the whole clog file), or we must skip the rest of the XIDs on the\r\npage and ask the database to start from XIDs on the second page on restart.\r\nAlso, we need to consider where to store the cut-off XID and when to remove it.\r\nAll these details feel very complex and error prone to me. Performing a one-time\r\nconversion is the most efficient and straightforward approach to me. \r\n\r\n> \r\n> And if we use this approach, is there extra validation or testing that\r\n> can be done?\r\n> \r\n> Regards,\r\n> Jeff Davis\r\n\r\nUnfortunately, the test requires a setup of two different versions of PG. 
I am not\r\naware of an existing test infrastructure which can run automated tests using two\r\nPGs. I did manually verify the output of pg_upgrade. \r\n\r\n\r\nRegards,\r\nYong\r\n\r\n", "msg_date": "Tue, 19 Mar 2024 06:48:33 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 19, 2024 at 06:48:33AM +0000, Li, Yong wrote:\n> \n> Unfortunately, the test requires a setup of two different versions of PG. I am not\n> aware of an existing test infrastructure which can run automated tests using two\n> PGs. I did manually verify the output of pg_upgrade. \n\nI think there is something in t/002_pg_upgrade.pl (see src/bin/pg_upgrade/TESTING),\nthat could be used to run automated tests using an old and a current version.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 07:19:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "On Mon, Jun 10, 2024 at 07:19:56AM +0000, Bertrand Drouvot wrote:\n> On Tue, Mar 19, 2024 at 06:48:33AM +0000, Li, Yong wrote:\n>> Unfortunately, the test requires a setup of two different versions of PG. I am not\n>> aware of an existing test infrastructure which can run automated tests using two\n>> PGs. I did manually verify the output of pg_upgrade. \n> \n> I think there is something in t/002_pg_upgrade.pl (see src/bin/pg_upgrade/TESTING),\n> that could be used to run automated tests using an old and a current version.\n\nCluster.pm relies on install_path for stuff, where it is possible to\ncreate tests with multiple nodes pointing to different installation\npaths. This allows mixing nodes with different build options, or just\ndifferent major versions like pg_upgrade's perl tests.\n--\nMichael", "msg_date": "Mon, 10 Jun 2024 17:01:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal to add page headers to SLRU pages" }, { "msg_contents": "\r\n\r\n> On Jun 10, 2024, at 16:01, Michael Paquier <[email protected]> wrote:\r\n> \r\n> External Email\r\n> \r\n> From: Michael Paquier <[email protected]>\r\n> Subject: Re: Proposal to add page headers to SLRU pages\r\n> Date: June 10, 2024 at 16:01:50 GMT+8\r\n> To: Bertrand Drouvot <[email protected]>\r\n> Cc: \"Li, Yong\" <[email protected]>, Jeff Davis <[email protected]>, Aleksander Alekseev <[email protected]>, PostgreSQL Hackers <[email protected]>, \"Bagga, Rishu\" <[email protected]>, Robert Haas <[email protected]>, Andrey Borodin <[email protected]>, \"Shyrabokau, Anton\" <[email protected]>\r\n> \r\n> \r\n> On Mon, Jun 10, 2024 at 07:19:56AM +0000, Bertrand Drouvot wrote:\r\n>> On Tue, Mar 19, 2024 at 06:48:33AM +0000, Li, Yong wrote:\r\n>>> Unfortunately, the test requires a setup of two different versions of PG. I am not\r\n>>> aware of an existing test infrastructure which can run automated tests using two\r\n>>> PGs. 
I did manually verify the output of pg_upgrade.\r\n>> \r\n>> I think there is something in t/002_pg_upgrade.pl (see src/bin/pg_upgrade/TESTING),\r\n>> that could be used to run automated tests using an old and a current version.\r\n> \r\n> Cluster.pm relies on install_path for stuff, where it is possible to\r\n> create tests with multiple nodes pointing to different installation\r\n> paths. This allows mixing nodes with different build options, or just\r\n> different major versions like pg_upgrade's perl tests.\r\n> —\r\n> Michael\r\n> \r\n> \r\n\r\nThanks for pointing\tthis out. Here is what I have tried:\r\n1. Manually build and install PostgreSQL from the latest source code.\r\n2. Following the instructions from src/bin/pg_upgrade to manually dump the regression database.\r\n3. Apply the patch to the latest code, and build from the source.\r\n4. Run “make check” by following the instructions from src/bin/pg_upgrade and setting up the olddump and oldinstall to point to the “old” installation used in step 2.\r\n\r\nAll tests pass.\r\n\r\n\r\nYong\r\n\r\n", "msg_date": "Thu, 13 Jun 2024 08:41:23 +0000", "msg_from": "\"Li, Yong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal to add page headers to SLRU pages" } ]
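A short aside to make the write-ahead rule discussed in this thread concrete: once an SLRU page carries a standard page header, flushing it can follow the same protocol as an ordinary data page, namely flush WAL up to the page LSN, recompute the checksum, and only then write the block. The C sketch below is just an illustration of that ordering and is not taken from the patch; the function name, the file-descriptor handling and the error message are invented for the example.

```c
#include "postgres.h"

#include "access/xlog.h"
#include "storage/bufpage.h"

/*
 * Hypothetical sketch: write one SLRU block that has a standard page header.
 * The ordering is the point: WAL first, then checksum, then the data page.
 */
static void
slru_write_page_sketch(Page page, BlockNumber blkno, int fd, off_t offset)
{
    /* Honor the WAL-before-data rule using the page LSN from the header. */
    XLogFlush(PageGetLSN(page));

    /* Recompute the checksum just before the write, as for normal pages. */
    PageSetChecksumInplace(page, blkno);

    if (pg_pwrite(fd, page, BLCKSZ, offset) != BLCKSZ)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not write SLRU page %u", blkno)));
}
```

Only the ordering is meant to be taken from the discussion above; the actual patch keeps the flush inside slru.c's physical-write path and, per the later messages in the thread, retains the LSN-group machinery for hint-bit decisions.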
[ { "msg_contents": "Hi all,\n\nI was trying to cross-compile PostgreSQL 16.1 using Buildroot which worked fine\nuntil I executed \"initdb\" on my target device, where I faced the following error:\n\n2023-12-06 07:10:18.568 UTC [31] FATAL: could not load library \n\"/usr/lib/postgresql/dict_snowball.so\": /usr/lib/postgresql/dict_snowball.so: \nundefined symbol: CurrentMemoryContext\n\nIt turned out that the \"postgres\" binary was missing the symbol\n\"CurrentMemoryContext\" because the configure script assumed that my toolchain's\n(GCC 9.4) linker does not support \"--export-dynamic\", although it supports it.\n\nI had a quick look at the configure script where the following lines peeked my\ninterest:\n\nif test \"$cross_compiling\" = yes; then :\n pgac_cv_prog_cc_LDFLAGS_EX_BE__Wl___export_dynamic=\"assuming no\"\n\nApparently when cross-compiling the linker is automatically assumed to not\nunderstand \"--export-dynamic\", leading to aforementioned problem on my end.\n\nA workaround of mine is to override\npgac_cv_prog_cc_LDFLAGS_EX_BE__Wl___export_dynamic with \"yes\", which makes\neverything work as expected.\n\nThere is also at least one additional linker flag \"--as-needed\" that is not\nbeing used when cross-compiling. Is this a bug or am I misunderstanding the\nimplications that PostgreSQL has when \"$cross_compiling=yes\"?\n\nBest regards,\nDominik\n\n\n", "msg_date": "Thu, 7 Dec 2023 09:33:11 +0000", "msg_from": "Dominik Michael Rauh <[email protected]>", "msg_from_op": true, "msg_subject": "Configure problem when cross-compiling PostgreSQL 16.1" }, { "msg_contents": "Dominik Michael Rauh <[email protected]> writes:\n> Apparently when cross-compiling the linker is automatically assumed to not\n> understand \"--export-dynamic\", leading to aforementioned problem on my end.\n> ...\n> There is also at least one additional linker flag \"--as-needed\" that is not\n> being used when cross-compiling. Is this a bug or am I misunderstanding the\n> implications that PostgreSQL has when \"$cross_compiling=yes\"?\n\nCross-compiling isn't really a supported thing, because there's too\nmuch stuff we can't find out about the target system in that case.\nIf it works for you, great, but if it doesn't we're unlikely to put\na lot of effort into fixing it. You might be able to manually\ncorrect whatever mistaken assumptions configure made (by editing\nits output files). It's hard to see how that could be automated\nthough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 09:18:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configure problem when cross-compiling PostgreSQL 16.1" } ]
[ { "msg_contents": "Hi hackers,\n\nDuring logical replication, if there is a large write transaction, some\nspill files will be written to disk, depending on the setting of\nlogical_decoding_work_mem.\n\nThis behavior can effectively avoid OOM, but if the transaction\ngenerates a lot of change before commit, a large number of files may\nfill the disk. For example, you can update a TB-level table.\nOf course, this is also inevitable.\n\nBut I found an inelegant phenomenon. If the updated large table is not\npublished, its changes will also be written with a large number of spill files.\nLook at an example below:\n\npublisher:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE PUBLICATION mypub FOR TABLE public.tbl_pub;\n```\n\nsubscriber:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE SUBSCRIPTION mysub CONNECTION 'host=127.0.0.1 port=5432\nuser=postgres dbname=postgres' PUBLICATION mypub;\n```\n\npublisher:\n```\nbegin;\ninsert into tbl_t1 select i,repeat('xyzzy', i),repeat('abcba',\ni),repeat('dfds', i) from generate_series(0,999999) i;\n```\n\nLater you will see a large number of spill files in the\n\"/$PGDATA/pg_replslot/mysub/\" directory.\n```\n$ll -sh\ntotal 4.5G\n4.0K -rw------- 1 postgres postgres 200 Nov 30 09:24 state\n17M -rw------- 1 postgres postgres 17M Nov 30 08:22 xid-750-lsn-0-10000000.spill\n12M -rw------- 1 postgres postgres 12M Nov 30 08:20 xid-750-lsn-0-1000000.spill\n17M -rw------- 1 postgres postgres 17M Nov 30 08:23 xid-750-lsn-0-11000000.spill\n......\n```\n\nWe can see that table tbl_t1 is not published in mypub. It also won't be sent\ndownstream because it's not subscribed.\nAfter the transaction is reorganized, the pgoutput decoding plugin filters out\nchanges to these unpublished relationships when sending logical changes.\nSee function pgoutput_change.\n\nMost importantly, if we filter out unpublished relationship-related\nchanges after\nconstructing the changes but before queuing the changes into a transaction,\nwill it reduce the workload of logical decoding and avoid disk or memory growth\nas much as possible?\n\nAttached is the patch I used to implement this optimization.\n\nDesign:\n\n1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n\n2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\nIts main implementation is based on the table filter in the\npgoutput_change function.\nIts main function is to determine whether the change needs to be published based\non the parameters of the publication, and if not, filter it.\n\n3. After constructing a change and before Queue a change into a transaction,\nuse RelidByRelfilenumber to obtain the relation associated with the change,\njust like obtaining the relation in the ReorderBufferProcessTXN function.\n\n4. Relation may be a toast, and there is no good way to get its real\ntable relation based on toast relation. Here, I get the real table oid\nthrough toast relname, and then get the real table relation.\n\n5. This filtering takes into account INSERT/UPDATE/INSERT. Other\nchanges have not been considered yet and can be expanded in the future.\n\nTest:\n1. Added a test case 034_table_filter.pl\n2. Like the case above, create two tables, the published table tbl_pub and\n the non-published table tbl_t1\n3. 
Insert 10,000 rows of toast data into tbl_t1 on the publisher, and use\n pg_ls_replslotdir to record the total size of the slot directory\nevery second.\n4. Compare the size of the slot directory at the beginning of the\ntransaction(size1),\n the size at the end of the transaction (size2), and the average\nsize of the entire process(size3).\n5. Assert(size1==size2==size3)\n\nSincerely look forward to your feedback.\nRegards, lijie", "msg_date": "Thu, 7 Dec 2023 19:09:12 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Reduce useless changes before reassembly during logical replication" }, { "msg_contents": "Hi Jie,\n\n> Most importantly, if we filter out unpublished relationship-related\n> changes after\n> constructing the changes but before queuing the changes into a transaction,\n> will it reduce the workload of logical decoding and avoid disk or memory growth\n> as much as possible?\n\nThanks for the report!\n\nDiscarding the unused changes as soon as possible looks like a valid\noptimization for me, but I pretty like more experienced people have a\ndouble check. \n\n> Attached is the patch I used to implement this optimization.\n\nAfter a quick look at the patch, I found FilterByTable is too expensive\nbecause of the StartTransaction and AbortTransaction. With your above\nsetup and run the below test:\n\ninsert into tbl_t1 select i,repeat('xyzzy', i),repeat('abcba',\ni),repeat('dfds', i) from generate_series(0,999100) i;\n\nperf the wal sender of mypub for 30 seconds, then I get:\n\n- 22.04% 1.53% postgres postgres [.] FilterByTable - 20.51% FilterByTable \n AbortTransaction ResourceOwnerReleaseInternal LockReleaseAll hash_seq_search \n\nThe main part comes from AbortTransaction, and the 20% is not trivial.\n\n From your patch:\n+\n+\t/*\n+\t * Decoding needs access to syscaches et al., which in turn use\n+\t * heavyweight locks and such. Thus we need to have enough state around to\n+\t * keep track of those. The easiest way is to simply use a transaction\n+\t * internally.\n+ ....\n+\tusing_subtxn = IsTransactionOrTransactionBlock();\n+\n+\tif (using_subtxn)\n+\t\tBeginInternalSubTransaction(\"filter change by table\");\n+\telse\n+\t\tStartTransactionCommand();\n\nAcutally FilterByTable here is simpler than \"decoding\", we access\nsyscache only when we find an entry in get_rel_sync_entry and the\nreplicate_valid is false, and the invalid case should rare. \n\nWhat I'm thinking now is we allow the get_rel_sync_sync_entry build its\nown transaction state *only when it find a invalid entry*. if the caller\nhas built it already, like the existing cases in master, nothing will\nhappen except a simple transaction state check. Then in the\nFilterByTable case we just leave it for get_rel_sync_sync_entry. See the\nattachemnt for the idea.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 21 Feb 2024 18:47:40 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" } ]
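To make the cost concern in the previous message a bit more concrete, here is a rough C sketch of the idea of building transaction state only when the relation-sync cache entry is invalid. It is not the actual patch: get_rel_sync_entry_lazy(), lookup_cached_entry() and rebuild_entry_from_catalogs() are invented placeholder names, and RelSyncEntrySketch only mimics the replicate_valid flag of pgoutput's RelationSyncEntry; the transaction-control calls themselves are real PostgreSQL functions.

```c
#include "postgres.h"

#include "access/xact.h"

/* Hypothetical stand-ins for pgoutput's relation-sync cache machinery. */
typedef struct RelSyncEntrySketch
{
    bool        replicate_valid;    /* is the cached publication info usable? */
    bool        pubinsert;
    bool        pubupdate;
    bool        pubdelete;
} RelSyncEntrySketch;

extern RelSyncEntrySketch *lookup_cached_entry(Oid relid);                      /* invented */
extern void rebuild_entry_from_catalogs(RelSyncEntrySketch *entry, Oid relid);  /* invented */

/*
 * Sketch: pay for transaction state only when the cache entry actually
 * needs catalog access; cache hits never touch xact machinery.
 */
static RelSyncEntrySketch *
get_rel_sync_entry_lazy(Oid relid)
{
    RelSyncEntrySketch *entry = lookup_cached_entry(relid);

    if (entry->replicate_valid)
        return entry;           /* fast path: no syscache access, no xact */

    if (!IsTransactionOrTransactionBlock())
    {
        StartTransactionCommand();
        rebuild_entry_from_catalogs(entry, relid);
        AbortCurrentTransaction();
    }
    else
        rebuild_entry_from_catalogs(entry, relid);

    return entry;
}
```

The point of this shape is that the AbortTransaction / LockReleaseAll cost visible in the profile above is paid only on cache misses, while the common hit path stays free of any transaction bookkeeping.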
[ { "msg_contents": "In the documentation, Under CREATE PUBLICATION under parameters \n\n publish (string)\n This parameter determines which DML operations will be published by the new publication to the subscribers. The value is comma-separated list of operations. The default is to publish all actions, and so the default value for this option is ‘insert, update, delete, truncate’.\n\nFrom what I’ve seen, truncate is not set to published by default. I’m looking at a server now with 4 publications on it, and none has truncate set to true. One of these I created, and I know I didn’t set any values. All the other values are set, but not truncate.\n\nI don’t know if this was intentional in the code or an oversight, but documentation is incorrect currently. Also, the line before, beginning with “The value is comma-separated…”, could use a little work as well. Maybe just an “a” between “is” and “comma-separated”.\n—\nJay\n\n\nSent from my iPad\n\n", "msg_date": "Thu, 7 Dec 2023 09:39:42 -0500", "msg_from": "John Scalia <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?I=E2=80=99ve_come_across_what_I_think_is_a_bug?=" }, { "msg_contents": "On Thu, Dec 7, 2023, at 11:39 AM, John Scalia wrote:\n> In the documentation, Under CREATE PUBLICATION under parameters \n> \n> publish (string)\n> This parameter determines which DML operations will be published by the new publication to the subscribers. The value is comma-separated list of operations. The default is to publish all actions, and so the default value for this option is ‘insert, update, delete, truncate’.\n> \n> From what I’ve seen, truncate is not set to published by default. I’m looking at a server now with 4 publications on it, and none has truncate set to true. One of these I created, and I know I didn’t set any values. All the other values are set, but not truncate.\n\nWhat's your Postgres version? The truncate option was introduced in v11. You\ndidn't provide an evidence that's a bug. Since v11 we have the same behavior:\n\npostgres=# create publication pub1;\nCREATE PUBLICATION\npostgres=# \\x\nExpanded display is on.\npostgres=# select * from pg_publication;\n-[ RECORD 1 ]+-----\npubname | pub1\npubowner | 10\npuballtables | f\npubinsert | t\npubupdate | t\npubdelete | t\npubtruncate | t\n\npostgres=# select version();\n-[ RECORD 1 ]-----------------------------------------------------------------------------------------------\nversion | PostgreSQL 11.21 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n\nMaybe you are using a client that is *not* providing truncate as an operation.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Dec 7, 2023, at 11:39 AM, John Scalia wrote:In the documentation, Under CREATE PUBLICATION under parameters          publish (string)                 This parameter determines which DML operations will be published by the new publication to the subscribers. The value is comma-separated list of operations. The default is to publish all actions, and so the default value for this option is ‘insert, update, delete, truncate’.From what I’ve seen, truncate is not set to published by default. I’m looking at a server now with 4 publications on it, and none has truncate set to true. One of these I created, and I know I didn’t set any values. All the other values are set, but not truncate.What's your Postgres version? The truncate option was introduced in v11. Youdidn't provide an evidence that's a bug. 
Since v11 we have the same behavior:postgres=# create publication pub1;CREATE PUBLICATIONpostgres=# \\xExpanded display is on.postgres=# select * from pg_publication;-[ RECORD 1 ]+-----pubname      | pub1pubowner     | 10puballtables | fpubinsert    | tpubupdate    | tpubdelete    | tpubtruncate  | tpostgres=# select version();-[ RECORD 1 ]-----------------------------------------------------------------------------------------------version | PostgreSQL 11.21 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bitMaybe you are using a client that is *not* providing truncate as an operation.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Thu, 07 Dec 2023 12:09:23 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re:_I=E2=80=99ve_come_across_what_I_think_is_a_bug?=" } ]
[ { "msg_contents": "Greetings,\n\nGetting the following error:\n\nstate-exec: run failed: cannot create new executor meta: cannot get\nmatching bin by path: no matching binary by path\n\"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\nstate-exec: Not user serviceable; Please contact support for assistance.\n\nanyone seen this or have a fix ?\n\nDave Cramer\n\nGreetings,Getting the following error:state-exec: run failed: cannot create new executor meta: cannot get matching bin by path: no matching binary by path \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"state-exec: Not user serviceable; Please contact support for assistance.anyone seen this or have a fix ?Dave Cramer", "msg_date": "Thu, 7 Dec 2023 12:54:27 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "errors building on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> state-exec: run failed: cannot create new executor meta: cannot get\n> matching bin by path: no matching binary by path\n> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> state-exec: Not user serviceable; Please contact support for assistance.\n> \n> anyone seen this or have a fix ?\n\nI've not seen that before. Please provide a bit more detail. Compiler,\nbuilding with ninja or msbuild/visual studio, when exactly you're encountering\nthe issue, ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Dec 2023 10:52:59 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: errors building on windows using meson" }, { "msg_contents": "On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > state-exec: run failed: cannot create new executor meta: cannot get\n> > matching bin by path: no matching binary by path\n> >\n> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > state-exec: Not user serviceable; Please contact support for assistance.\n> >\n> > anyone seen this or have a fix ?\n>\n> I've not seen that before. Please provide a bit more detail. Compiler,\n> building with ninja or msbuild/visual studio, when exactly you're\n> encountering\n> the issue, ...\n>\n> Windows Server 2019\nVS 2019\nbuilding with ninja\n\nDave\n\n\n> Greetings,\n>\n> Andres Freund\n>\n\nOn Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> state-exec: run failed: cannot create new executor meta: cannot get\n> matching bin by path: no matching binary by path\n> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> state-exec: Not user serviceable; Please contact support for assistance.\n> \n> anyone seen this or have a fix ?\n\nI've not seen that before. Please provide a bit more detail. 
Compiler,\nbuilding with ninja or msbuild/visual studio, when exactly you're encountering\nthe issue, ...\nWindows Server 2019VS 2019building with ninjaDave \nGreetings,\n\nAndres Freund", "msg_date": "Thu, 7 Dec 2023 14:16:52 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: errors building on windows using meson" }, { "msg_contents": "Hi,\n\nOn 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n> On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > > state-exec: run failed: cannot create new executor meta: cannot get\n> > > matching bin by path: no matching binary by path\n> > >\n> > \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > > state-exec: Not user serviceable; Please contact support for assistance.\n> > >\n> > > anyone seen this or have a fix ?\n> >\n> > I've not seen that before. Please provide a bit more detail. Compiler,\n> > building with ninja or msbuild/visual studio, when exactly you're\n> > encountering\n> > the issue, ...\n> >\n> > Windows Server 2019\n> VS 2019\n> building with ninja\n\nI don't think this is sufficient detail to provide you with advice / fix\nproblems / whatnot. Please provide complete logs of configuring and building.\n\n- Andres\n\n\n", "msg_date": "Thu, 7 Dec 2023 11:34:48 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: errors building on windows using meson" }, { "msg_contents": "On Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n> > On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n> >\n> > > Hi,\n> > >\n> > > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > > > state-exec: run failed: cannot create new executor meta: cannot get\n> > > > matching bin by path: no matching binary by path\n> > > >\n> > >\n> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > > > state-exec: Not user serviceable; Please contact support for\n> assistance.\n> > > >\n> > > > anyone seen this or have a fix ?\n> > >\n> > > I've not seen that before. Please provide a bit more detail. Compiler,\n> > > building with ninja or msbuild/visual studio, when exactly you're\n> > > encountering\n> > > the issue, ...\n> > >\n> > > Windows Server 2019\n> > VS 2019\n> > building with ninja\n>\n> I don't think this is sufficient detail to provide you with advice / fix\n> problems / whatnot. Please provide complete logs of configuring and\n> building.\n>\n\nI built perl from source and it worked.\n\nDave\n\n\n\n>\n> - Andres\n>\n\nOn Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n> On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > > state-exec: run failed: cannot create new executor meta: cannot get\n> > > matching bin by path: no matching binary by path\n> > >\n> > \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > > state-exec: Not user serviceable; Please contact support for assistance.\n> > >\n> > > anyone seen this or have a fix ?\n> >\n> > I've not seen that before. Please provide a bit more detail. 
Compiler,\n> > building with ninja or msbuild/visual studio, when exactly you're\n> > encountering\n> > the issue, ...\n> >\n> > Windows Server 2019\n> VS 2019\n> building with ninja\n\nI don't think this is sufficient detail to provide you with advice / fix\nproblems / whatnot. Please provide complete logs of configuring and building.I built perl from source and it worked.Dave \n\n- Andres", "msg_date": "Thu, 7 Dec 2023 15:46:37 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: errors building on windows using meson" }, { "msg_contents": "Hi\n\nI was facing the same error when using active state perl for compiling\npostgres with meson on windows. I used active state perl because it was\nworking fine for pg compilations until pg-16.\n\nUsing choco strawberry perl solved my problem.\n\nThanks\n\nImran Zaheer\n\nOn Wed, 29 May 2024 at 00:22, Dave Cramer <[email protected]> wrote:\n\n>\n> On Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> On 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n>> > On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n>> >\n>> > > Hi,\n>> > >\n>> > > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n>> > > > state-exec: run failed: cannot create new executor meta: cannot get\n>> > > > matching bin by path: no matching binary by path\n>> > > >\n>> > >\n>> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n>> > > > state-exec: Not user serviceable; Please contact support for\n>> assistance.\n>> > > >\n>> > > > anyone seen this or have a fix ?\n>> > >\n>> > > I've not seen that before. Please provide a bit more detail. Compiler,\n>> > > building with ninja or msbuild/visual studio, when exactly you're\n>> > > encountering\n>> > > the issue, ...\n>> > >\n>> > > Windows Server 2019\n>> > VS 2019\n>> > building with ninja\n>>\n>> I don't think this is sufficient detail to provide you with advice / fix\n>> problems / whatnot. Please provide complete logs of configuring and\n>> building.\n>>\n>\n> I built perl from source and it worked.\n>\n> Dave\n>\n>\n>\n>>\n>> - Andres\n>>\n>\n\n\nHiI was facing the same error when using active state perl for compiling \npostgres with meson on windows. I used active state perl because it was \nworking fine for pg compilations until pg-16. Using choco strawberry perl \nsolved my problem.\n\nThanksImran Zaheer\nOn Wed, 29 May 2024 at 00:22, Dave Cramer <[email protected]> wrote:On Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n> On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > > state-exec: run failed: cannot create new executor meta: cannot get\n> > > matching bin by path: no matching binary by path\n> > >\n> > \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > > state-exec: Not user serviceable; Please contact support for assistance.\n> > >\n> > > anyone seen this or have a fix ?\n> >\n> > I've not seen that before. Please provide a bit more detail. Compiler,\n> > building with ninja or msbuild/visual studio, when exactly you're\n> > encountering\n> > the issue, ...\n> >\n> > Windows Server 2019\n> VS 2019\n> building with ninja\n\nI don't think this is sufficient detail to provide you with advice / fix\nproblems / whatnot. 
Please provide complete logs of configuring and building.I built perl from source and it worked.Dave \n\n- Andres", "msg_date": "Wed, 29 May 2024 00:50:21 +0900", "msg_from": "Imran Zaheer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: errors building on windows using meson" }, { "msg_contents": "On Tue, May 28, 2024 at 8:50 PM Imran Zaheer <[email protected]> wrote:\n\n> Hi\n>\n> I was facing the same error when using active state perl for compiling\n> postgres with meson on windows. I used active state perl because it was\n> working fine for pg compilations until pg-16.\n>\n> Using choco strawberry perl solved my problem.\n>\n> Thanks\n>\n> Imran Zaheer\n>\n> On Wed, 29 May 2024 at 00:22, Dave Cramer <[email protected]> wrote:\n>\n>>\n>> On Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> On 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n>>> > On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n>>> >\n>>> > > Hi,\n>>> > >\n>>> > > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n>>> > > > state-exec: run failed: cannot create new executor meta: cannot get\n>>> > > > matching bin by path: no matching binary by path\n>>> > > >\n>>> > >\n>>> \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n>>> > > > state-exec: Not user serviceable; Please contact support for\n>>> assistance.\n>>> > > >\n>>> > > > anyone seen this or have a fix ?\n>>> > >\n>>>\n>>\nI am facing the same issue while compiling the pg17. I have resolved this\nissue by installing perl from \"https://strawberryperl.com/\".\nIn addition to installing the strawberry's perl, I had to adjust the\nenvironment variable PATH values. That is, I had to give higher priority to\nstrawberry's perl bin paths.\n\n(Please note that I haven't removed activestate's perl, or mingw's perl. So\nwhenever, I will need to build pg16 or lower, I have to\nreadjust activestate's perl bin path in the environment variable.)\n\n> > I've not seen that before. Please provide a bit more detail. Compiler,\n>>> > > building with ninja or msbuild/visual studio, when exactly you're\n>>> > > encountering\n>>> > > the issue, ...\n>>> > >\n>>> > > Windows Server 2019\n>>> > VS 2019\n>>> > building with ninja\n>>>\n>>> I don't think this is sufficient detail to provide you with advice / fix\n>>> problems / whatnot. Please provide complete logs of configuring and\n>>> building.\n>>>\n>>\n>> I built perl from source and it worked.\n>>\n>> Dave\n>>\n>>\n>>\n>>>\n>>> - Andres\n>>>\n>>\n\nOn Tue, May 28, 2024 at 8:50 PM Imran Zaheer <[email protected]> wrote:\nHiI was facing the same error when using active state perl for compiling \npostgres with meson on windows. I used active state perl because it was \nworking fine for pg compilations until pg-16. 
Using choco strawberry perl \nsolved my problem.\n\nThanksImran Zaheer\nOn Wed, 29 May 2024 at 00:22, Dave Cramer <[email protected]> wrote:On Thu, 7 Dec 2023 at 14:34, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-12-07 14:16:52 -0500, Dave Cramer wrote:\n> On Thu, 7 Dec 2023 at 13:53, Andres Freund <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On 2023-12-07 12:54:27 -0500, Dave Cramer wrote:\n> > > state-exec: run failed: cannot create new executor meta: cannot get\n> > > matching bin by path: no matching binary by path\n> > >\n> > \"C:\\\\Users\\\\Administrator\\\\AppData\\\\Local\\\\activestate\\\\cache\\\\b9117b06\\\\exec\\\\perl.EXE\"\n> > > state-exec: Not user serviceable; Please contact support for assistance.\n> > >\n> > > anyone seen this or have a fix ?\n> > I am facing the same issue while compiling the pg17. I have resolved this issue by installing perl from \"https://strawberryperl.com/\". In addition to installing the strawberry's perl, I had to adjust the environment variable PATH values. That is, I had to give higher priority to strawberry's perl bin paths. (Please note that I haven't removed activestate's perl, or mingw's perl. So whenever, I will need to build pg16 or lower, I have to readjust activestate's perl bin path in the environment variable.)\n> > I've not seen that before. Please provide a bit more detail. Compiler,\n> > building with ninja or msbuild/visual studio, when exactly you're\n> > encountering\n> > the issue, ...\n> >\n> > Windows Server 2019\n> VS 2019\n> building with ninja\n\nI don't think this is sufficient detail to provide you with advice / fix\nproblems / whatnot. Please provide complete logs of configuring and building.I built perl from source and it worked.Dave \n\n- Andres", "msg_date": "Tue, 16 Jul 2024 15:47:18 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": false, "msg_subject": "Re: errors building on windows using meson" } ]
[ { "msg_contents": "Hi,\n\nPostgresql seems to be missing upcasting when doing INT range and\nmulti-range operation, for example when checking if an int4 is inside\nan int8 range.\nSome non working example are the following\n\n SELECT 2::INT4 <@ '[1, 4)'::INT8RANGE\n -- ERROR: operator does not exist: integer <@ int8range\n\n SELECT 1::INT4 <@ '{[1, 4),[6,19)}'::INT8MULTIRANGE\n -- ERROR: operator does not exist: integer <@ int8multirange\n\n SELECT 1::INT2 <@ '{[1, 4),[6,19)}'::INT4MULTIRANGE\n -- ERROR: operator does not exist: smallint <@ int4multirange\n\n SELECT '[2, 3]'::INT4RANGE <@ '[1, 42)'::INT8RANGE\n -- ERROR: operator does not exist: int4range <@ int8range\n\n SELECT 2::INT8 <@ '[1, 4)'::INT4RANGE\n -- ERROR: operator does not exist: bigint <@ int4range\n\netc.\n\nIn all these cases the smaller integer type can be upcasted to the\nlarger integer type.\n\nPosted here since it doesn't seem like a bug, just a missing feature.\n\nThanks for reading\n Federico\n\n\n", "msg_date": "Thu, 7 Dec 2023 21:21:11 +0100", "msg_from": "Federico <[email protected]>", "msg_from_op": true, "msg_subject": "Improve upcasting for INT range and multi range types" }, { "msg_contents": "On Fri, Dec 8, 2023 at 4:21 AM Federico <[email protected]> wrote:\n>\n> Hi,\n>\n> Postgresql seems to be missing upcasting when doing INT range and\n> multi-range operation, for example when checking if an int4 is inside\n> an int8 range.\n> Some non working example are the following\n>\n> SELECT 2::INT4 <@ '[1, 4)'::INT8RANGE\n> -- ERROR: operator does not exist: integer <@ int8range\n\nselect oprname,\n oprleft::regtype,\n oprright::regtype,\n oprcode\nfrom pg_operator\nwhere oprname = '<@';\n\nlook at the results, you can see related info is:\n oprname | oprleft | oprright | oprcode\n---------+------------+---------------+------------------------------\n <@ | anyelement | anyrange | elem_contained_by_range\n <@ | anyelement | anymultirange | elem_contained_by_multirange\n\nSELECT 2::INT4 <@ '[1, 4)'::INT8RANGE\nIt actually first does an operator sanity check, transforms\nanyelement, anyrange to the detailed non-polymorphic data type.\n then calls the function elem_contained_by_range.\nbut it failed at the first step.\n\nper doc https://www.postgresql.org/docs/current/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC\nSimilarly, if there are positions declared anyrange and others\ndeclared anyelement or anyarray, the actual range type in the anyrange\npositions must be a range whose subtype is the same type appearing in\nthe anyelement positions and the same as the element type of the\nanyarray positions. If there are positions declared anymultirange,\ntheir actual multirange type must contain ranges matching parameters\ndeclared anyrange and base elements matching parameters declared\nanyelement and anyarray.\n\nBased on my interpretation, I don't think SELECT 2::INT4 <@ '[1,\n4)'::INT8RANGE is doable.\n\n\n", "msg_date": "Wed, 13 Dec 2023 12:10:03 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve upcasting for INT range and multi range types" }, { "msg_contents": "jian he <[email protected]> writes:\n> Based on my interpretation, I don't think SELECT 2::INT4 <@ '[1,\n> 4)'::INT8RANGE is doable.\n\nYeah, it would require a considerable expansion of the scope of\nwhat can be matched by a polymorphic operator. I'm afraid that\nthe negative consequences (mainly, \"ambiguous operator\" failures\nbecause more than one thing can be matched) would outweigh the\nbenefits. 
It is kind of annoying though that the system can't\ndo the \"obvious\" right thing here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Dec 2023 23:16:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve upcasting for INT range and multi range types" }, { "msg_contents": "Hi,\n\nThanks for the reply. I suspected that there were technical reasons\nthat prevented the obvious right thing to be done.\n\nWould adding overloads to the functions and operators be something\nthat could be considered as an acceptable solution?\nI've tried a very naive solution and it seems to work (there are for\nsure better options to declare the function overloads):\n\n begin;\n\n create function elem_contained_by_range(int4, int8range) returns\nboolean as $$ select elem_contained_by_range($1::int8, $2) $$ LANGUAGE\nSQL;\n create function elem_contained_by_range(int8, int4range) returns\nboolean as $$ select elem_contained_by_range($1, $2::text::int8range)\n$$ LANGUAGE SQL;\n\n create operator <@(\n LEFTARG = int4,\n RIGHTARG = int8range,\n FUNCTION = elem_contained_by_range,\n RESTRICT = rangesel,\n JOIN = contjoinsel,\n HASHES, MERGES\n );\n create operator <@(\n LEFTARG = int8,\n RIGHTARG = int4range,\n FUNCTION = elem_contained_by_range,\n RESTRICT = rangesel,\n JOIN = contjoinsel,\n HASHES, MERGES\n );\n\n select 2::int4 <@ '[1,9)'::int8range;\n select 2::int8 <@ '[1,9)'::int4range;\n\n rollback;\n\nThe major drawback is that every combination operator - type would\nneed its own overload creating a large number of them.\n\nAs a side note it seems that int4range cannot be casted automatically\nto int8range.\n\nBest regards,\n Federico\n\nOn Wed, 13 Dec 2023 at 05:16, Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > Based on my interpretation, I don't think SELECT 2::INT4 <@ '[1,\n> > 4)'::INT8RANGE is doable.\n>\n> Yeah, it would require a considerable expansion of the scope of\n> what can be matched by a polymorphic operator. I'm afraid that\n> the negative consequences (mainly, \"ambiguous operator\" failures\n> because more than one thing can be matched) would outweigh the\n> benefits. It is kind of annoying though that the system can't\n> do the \"obvious\" right thing here.\n>\n> regards, tom lane\n\n\n", "msg_date": "Thu, 14 Dec 2023 14:21:24 +0100", "msg_from": "Federico <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve upcasting for INT range and multi range types" } ]
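As a concrete illustration of the workarounds that already exist today for the cases discussed above: the element side can be cast up to the range's subtype explicitly, and a range can be converted through text, as the wrapper functions sketched above do. The statements below are illustrative only and assume a stock installation with no custom operators defined:

    -- Cast the element up to the range's subtype explicitly:
    SELECT 2::int4::int8 <@ '[1, 4)'::int8range;                      -- true
    SELECT 1::int2::int4 <@ '{[1, 4),[6,19)}'::int4multirange;        -- true

    -- There is no direct int4range -> int8range cast, but going through
    -- text works, exactly as in the proposed wrapper functions:
    SELECT '[2, 3]'::int4range::text::int8range <@ '[1, 42)'::int8range;  -- true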
[ { "msg_contents": "Moving this to a new thread...\n\nOn Thu, Dec 07, 2023 at 07:15:28AM -0500, Joe Conway wrote:\n> On 12/6/23 21:56, Nathan Bossart wrote:\n>> On Wed, Dec 06, 2023 at 03:20:46PM -0500, Tom Lane wrote:\n>> > If Nathan's perf results hold up elsewhere, it seems like some\n>> > micro-optimization around the text-pushing (appendStringInfoString)\n>> > might be more useful than caching. The 7% spent in cache lookups\n>> > could be worth going after later, but it's not the top of the list.\n>> \n>> Hah, it turns out my benchmark of 110M integers really stresses the\n>> JSONTYPE_NUMERIC path in datum_to_json_internal(). That particular path\n>> calls strlen() twice: once for IsValidJsonNumber(), and once in\n>> appendStringInfoString(). If I save the result from IsValidJsonNumber()\n>> and give it to appendBinaryStringInfo() instead, the COPY goes ~8% faster.\n>> It's probably worth giving datum_to_json_internal() a closer look in a new\n>> thread.\n> \n> Yep, after looking through that code I was going to make the point that your\n> 11 integer test was over indexing on that one type. I am sure there are\n> other micro-optimizations to be made here, but I also think that it is\n> outside the scope of the COPY TO JSON patch.\n\nHere's a patch that removes a couple of strlen() calls that showed up\nprominently in perf for a COPY TO (FORMAT json) on 110M integers. On my\nlaptop, I see a 20% speedup from ~23.6s to ~18.9s for this test.\n\nI plan to test the other types as well, and I'd also like to look into the\ncaching mentioned above if/when COPY TO (FORMAT json) is committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Dec 2023 17:12:51 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "micro-optimizing json.c" }, { "msg_contents": "On Thu, 2023-12-07 at 17:12 -0600, Nathan Bossart wrote:\n> Here's a patch that removes a couple of strlen() calls that showed up\n> prominently in perf for a COPY TO (FORMAT json) on 110M integers.  On\n> my\n> laptop, I see a 20% speedup from ~23.6s to ~18.9s for this test.\n\nNice improvement. The use of (len = ...) in a conditional is slightly\nout of the ordinary, but it makes the conditionals a bit simpler and\nyou have a comment, so it's fine with me.\n\nI wonder, if there were an efficient cast from numeric to text, then\nperhaps you could avoid the strlen() entirely? Maybe there's a way to\nuse a static buffer to even avoid the palloc() in get_str_from_var()?\nNot sure these are worth the effort; just brainstorming.\n\nIn any case, +1 to your simple change.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 07 Dec 2023 15:43:47 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> Nice improvement. The use of (len = ...) in a conditional is slightly\n> out of the ordinary, but it makes the conditionals a bit simpler and\n> you have a comment, so it's fine with me.\n\nYeah. It's a little not per project style, but I don't see a nice\nway to do it differently, granting that we don't want to put the\nstrlen() ahead of the !key_scalar test. More straightforward\ncoding would end up with two else-path calls of escape_json,\nwhich doesn't seem all that much more straightforward.\n\n> I wonder, if there were an efficient cast from numeric to text, then\n> perhaps you could avoid the strlen() entirely?\n\nHmm ... 
I think that might not be the way to think about it. What\nI'm wondering is why we need a test as expensive as IsValidJsonNumber\nin the first place, given that we know this is a numeric data type's\noutput. ISTM we only need to reject \"Inf\"/\"-Inf\" and \"NaN\", which\nsurely does not require a full parse. Skip over a sign, check for\n\"I\"/\"N\", and you're done.\n\n... and for that matter, why does quoting of Inf/NaN require\nthat we apply something as expensive as escape_json? Several other\npaths in this switch have no hesitation about assuming that they\ncan just plaster double quotes around what was emitted. How is\nthat safe for timestamps but not Inf/NaN?\n\n> In any case, +1 to your simple change.\n\nIf we end up not using IsValidJsonNumber then this strlen hackery\nwould become irrelevant, so maybe that idea should be looked at first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 19:40:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Fri, 8 Dec 2023 at 12:13, Nathan Bossart <[email protected]> wrote:\n> Here's a patch that removes a couple of strlen() calls that showed up\n> prominently in perf for a COPY TO (FORMAT json) on 110M integers. On my\n> laptop, I see a 20% speedup from ~23.6s to ~18.9s for this test.\n\n+ seplen = use_line_feeds ? sizeof(\",\\n \") - 1 : sizeof(\",\") - 1;\n\nMost modern compilers will be fine with just:\n\nseplen = strlen(sep);\n\nI had to go back to clang 3.4.1 and GCC 4.1.2 to see the strlen() call\nwith that code [1].\n\nWith:\n\n if (needsep)\n- appendStringInfoString(result, sep);\n+ appendBinaryStringInfo(result, sep, seplen);\n\nI might be neater to get rid of the if condition and have:\n\nsep = use_line_feeds ? \",\\n \" : \",\";\nseplen = strlen(sep);\nslen = 0;\n\n...\nfor (int i = 0; i < tupdesc->natts; i++)\n{\n ...\n appendBinaryStringInfo(result, sep, slen);\n slen = seplen;\n ...\n}\n\nDavid\n\n[1] https://godbolt.org/z/8dq8a88bP\n\n\n", "msg_date": "Fri, 8 Dec 2023 16:11:52 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I might be neater to get rid of the if condition and have:\n> [ calls of appendBinaryStringInfo with len 0 ]\n\nHmm, if we are trying to micro-optimize, I seriously doubt that\nthat's an improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 22:19:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Thu, Dec 07, 2023 at 07:40:30PM -0500, Tom Lane wrote:\n> Hmm ... I think that might not be the way to think about it. What\n> I'm wondering is why we need a test as expensive as IsValidJsonNumber\n> in the first place, given that we know this is a numeric data type's\n> output. ISTM we only need to reject \"Inf\"/\"-Inf\" and \"NaN\", which\n> surely does not require a full parse. Skip over a sign, check for\n> \"I\"/\"N\", and you're done.\n> \n> ... and for that matter, why does quoting of Inf/NaN require\n> that we apply something as expensive as escape_json? Several other\n> paths in this switch have no hesitation about assuming that they\n> can just plaster double quotes around what was emitted. 
How is\n> that safe for timestamps but not Inf/NaN?\n\nI did both of these in v2, although I opted to test that the first\ncharacter after the optional '-' was a digit instead of testing that it was\n_not_ an 'I' or 'N'. I think that should be similar performance-wise, and\nmaybe it's a bit more future-proof in case someone invents some new\nnotation for a numeric data type (/shrug). In any case, this seems to\nspeed up my test by another half a second or so.\n\nI think there are some similar improvements that we can make for\nJSONTYPE_BOOL and JSONTYPE_CAST, but I haven't tested them yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Dec 2023 21:20:28 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I did both of these in v2, although I opted to test that the first\n> character after the optional '-' was a digit instead of testing that it was\n> _not_ an 'I' or 'N'.\n\nYeah, I thought about that too after sending my message. This version\nLGTM, although maybe the comment could be slightly more verbose with\nexplicit reference to Inf/NaN as being the cases we need to quote.\n\n> I think there are some similar improvements that we can make for\n> JSONTYPE_BOOL and JSONTYPE_CAST, but I haven't tested them yet.\n\nI am suspicious of using\n\n\tappendStringInfo(result, \"\\\"%s\\\"\", ...);\n\nin each of these paths; snprintf is not a terribly cheap thing.\nIt might be worth expanding that to appendStringInfoChar/\nappendStringInfoString/appendStringInfoChar.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 22:28:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Fri, Dec 08, 2023 at 04:11:52PM +1300, David Rowley wrote:\n> + seplen = use_line_feeds ? sizeof(\",\\n \") - 1 : sizeof(\",\") - 1;\n> \n> Most modern compilers will be fine with just:\n> \n> seplen = strlen(sep);\n> \n> I had to go back to clang 3.4.1 and GCC 4.1.2 to see the strlen() call\n> with that code [1].\n\nHm. I tried this first, but my compiler (gcc 9.4.0 on this machine) was\nstill doing the strlen()...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 7 Dec 2023 21:32:06 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Thu, Dec 07, 2023 at 10:28:50PM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I did both of these in v2, although I opted to test that the first\n>> character after the optional '-' was a digit instead of testing that it was\n>> _not_ an 'I' or 'N'.\n> \n> Yeah, I thought about that too after sending my message. This version\n> LGTM, although maybe the comment could be slightly more verbose with\n> explicit reference to Inf/NaN as being the cases we need to quote.\n\nDone.\n\n>> I think there are some similar improvements that we can make for\n>> JSONTYPE_BOOL and JSONTYPE_CAST, but I haven't tested them yet.\n> \n> I am suspicious of using\n> \n> \tappendStringInfo(result, \"\\\"%s\\\"\", ...);\n> \n> in each of these paths; snprintf is not a terribly cheap thing.\n> It might be worth expanding that to appendStringInfoChar/\n> appendStringInfoString/appendStringInfoChar.\n\nWFM. 
I'll tackle JSONTYPE_BOOL and JSONTYPE_CAST next...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 7 Dec 2023 22:02:09 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Thu, Dec 07, 2023 at 10:28:50PM -0500, Tom Lane wrote:\n>> Yeah, I thought about that too after sending my message. This version\n>> LGTM, although maybe the comment could be slightly more verbose with\n>> explicit reference to Inf/NaN as being the cases we need to quote.\n\n> Done.\n\nThis version works for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Dec 2023 23:10:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Fri, Dec 8, 2023 at 10:32 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Dec 08, 2023 at 04:11:52PM +1300, David Rowley wrote:\n> > + seplen = use_line_feeds ? sizeof(\",\\n \") - 1 : sizeof(\",\") - 1;\n> >\n> > Most modern compilers will be fine with just:\n> >\n> > seplen = strlen(sep);\n> >\n> > I had to go back to clang 3.4.1 and GCC 4.1.2 to see the strlen() call\n> > with that code [1].\n>\n> Hm. I tried this first, but my compiler (gcc 9.4.0 on this machine) was\n> still doing the strlen()...\n\nThis is less verbose and still compiles with constants:\n\nuse_line_feeds ? strlen(\",\\n \") : strlen(\",\");\n\n\n", "msg_date": "Fri, 8 Dec 2023 11:51:15 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Fri, Dec 08, 2023 at 11:51:15AM +0700, John Naylor wrote:\n> This is less verbose and still compiles with constants:\n> \n> use_line_feeds ? strlen(\",\\n \") : strlen(\",\");\n\nThis one worked on my machine. I've committed the patch with that change.\nThanks everyone for the reviews!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Dec 2023 13:45:25 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "Here are a couple more easy micro-optimizations in nearby code. I've split\nthem into individual patches for review, but I'll probably just combine\nthem into one patch before committing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Dec 2023 14:37:08 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Here are a couple more easy micro-optimizations in nearby code. I've split\n> them into individual patches for review, but I'll probably just combine\n> them into one patch before committing.\n\nLGTM\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 08 Dec 2023 17:56:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: micro-optimizing json.c" }, { "msg_contents": "On Fri, Dec 08, 2023 at 05:56:20PM -0500, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Here are a couple more easy micro-optimizations in nearby code. I've split\n>> them into individual patches for review, but I'll probably just combine\n>> them into one patch before committing.\n> \n> LGTM\n\nCommitted. 
Thanks for reviewing!\n\nFor the record, I did think about changing appendStringInfoString() into a\nmacro or an inline function so that any calls with a string literal would\nbenefit from this sort of optimization, but I was on-the-fence about it\nbecause it requires some special knowledge, i.e., you have to know to\nprovide string literals to remove the runtime calls to strlen(). Perhaps\nthis is worth further exploration...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Dec 2023 10:41:35 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: micro-optimizing json.c" } ]
[ { "msg_contents": "I found a few places where access/xlog_internal.h was apparently \nincluded unnecessarily. In some of those places, a more specific header \nfile (that somehow came in via access/xlog_internal.h) can be used \ninstead. The *.h file change passes headerscheck.", "msg_date": "Fri, 8 Dec 2023 12:44:38 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Remove some unnecessary includes of \"access/xlog_internal.h\"" }, { "msg_contents": "On 08/12/2023 13:44, Peter Eisentraut wrote:\n> I found a few places where access/xlog_internal.h was apparently\n> included unnecessarily. In some of those places, a more specific header\n> file (that somehow came in via access/xlog_internal.h) can be used\n> instead. The *.h file change passes headerscheck.\n\n+1\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 8 Dec 2023 14:36:05 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove some unnecessary includes of \"access/xlog_internal.h\"" } ]
[ { "msg_contents": "Hi Team,\n\nI hope this email finds you well. We are currently in the process of\nmigrating from PostgreSQL 13.2 to PostgreSQL 15, and we've encountered some\nissues during the restoration process.\n\nError Details:\n\n 1. pg_restore: error: could not execute query: ERROR: column reference\n \"wal_records\" is ambiguous\n 2. pg_restore: error: could not execute query: ERROR: relation\n \"metric_helpers.pg_stat_statements\" does not exist\n\nExtensions Used:\n\n - pg_stat_kcache | 2.2.0 | public | Kernel statistics gathering\n - pg_stat_statements | 1.8 | public | Track planning and execution\n statistics of all SQL statements executed\n - pg_trgm | 1.5 | public | Text similarity measurement and index\n searching based on trigrams\n - plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n - set_user | 2.0 | public | Similar to SET ROLE but with added logging\n - timescaledb | 2.2.1 | public | Enables scalable inserts and complex\n queries for time-series data\n - uuid-ossp | 1.1 | public | Generate universally unique identifiers\n (UUIDs)\n - pg_cron | 1.3 | public | Job scheduler for PostgreSQL\n\nCommands Used:\n\n 1. kubectl exec basebackup-qa-cluster-1 -n cnpg-test -c postgres --\n pg_dump -Fc -d eksqademo > eksqademo.dump\n 2. kubectl exec -i qa-test-cluster-1 -c postgres -n demo -- pg_restore\n --no-owner -d eksqademo --verbose < eksqademo.dump\n\nCould you please provide some guidance on why these errors are occurring\nand how we can resolve them? Any help would be greatly appreciated.\n\nThank you,\n\n-charan\n\nHi Team,I hope this email finds you well. We are currently in the process of migrating from PostgreSQL 13.2 to PostgreSQL 15, and we've encountered some issues during the restoration process.Error Details:pg_restore: error: could not execute query: ERROR: column reference \"wal_records\" is ambiguouspg_restore: error: could not execute query: ERROR: relation \"metric_helpers.pg_stat_statements\" does not existExtensions Used:pg_stat_kcache | 2.2.0 | public | Kernel statistics gatheringpg_stat_statements | 1.8 | public | Track planning and execution statistics of all SQL statements executedpg_trgm | 1.5 | public | Text similarity measurement and index searching based on trigramsplpgsql | 1.0 | pg_catalog | PL/pgSQL procedural languageset_user | 2.0 | public | Similar to SET ROLE but with added loggingtimescaledb | 2.2.1 | public | Enables scalable inserts and complex queries for time-series datauuid-ossp | 1.1 | public | Generate universally unique identifiers (UUIDs)pg_cron | 1.3 | public | Job scheduler for PostgreSQLCommands Used:kubectl exec basebackup-qa-cluster-1 -n cnpg-test -c postgres -- pg_dump -Fc -d eksqademo > eksqademo.dumpkubectl exec -i qa-test-cluster-1 -c postgres -n demo -- pg_restore --no-owner -d eksqademo --verbose < eksqademo.dumpCould you please provide some guidance on why these errors are occurring and how we can resolve them? Any help would be greatly appreciated.Thank you,-charan", "msg_date": "Fri, 8 Dec 2023 17:34:56 +0530", "msg_from": "Charan K <[email protected]>", "msg_from_op": true, "msg_subject": "Assistance Needed: PostgreSQL Migration Errors 13.2 to 15" }, { "msg_contents": "Charan K <[email protected]> writes:\n> I hope this email finds you well. We are currently in the process of\n> migrating from PostgreSQL 13.2 to PostgreSQL 15, and we've encountered some\n> issues during the restoration process.\n\n> Error Details:\n\n> 1. 
pg_restore: error: could not execute query: ERROR: column reference\n> \"wal_records\" is ambiguous\n\nCould you show us the statement causing this error?\n\n> 2. pg_restore: error: could not execute query: ERROR: relation\n> \"metric_helpers.pg_stat_statements\" does not exist\n\nPresumably this is just cascading from the first error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Dec 2023 11:00:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assistance Needed: PostgreSQL Migration Errors 13.2 to 15" } ]
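The thread above ends at the request for the failing statement, so no root cause is established here. As a purely illustrative diagnostic step (not a confirmed fix), comparing extension versions on the PostgreSQL 15 target and listing what actually exists in the metric_helpers schema can narrow things down, since both errors involve extension-related objects. The queries below only read catalog and metadata views:

    -- On the PostgreSQL 15 target: which extension versions are installed vs. available?
    SELECT name, installed_version, default_version
    FROM pg_available_extensions
    WHERE name IN ('pg_stat_statements', 'pg_stat_kcache');

    -- What does the metric_helpers schema actually contain after the restore?
    SELECT n.nspname, c.relname, c.relkind
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'metric_helpers';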
[ { "msg_contents": "Hello hackers,\n\nWhile studying bug #18158, I've come to the conclusion that the existing\ntesting infrastructure is unable to detect abnormal situations. of some\nkind.\n\nJust a simple example:\nWith Assert(0) injected in walsender (say:\nsed \"s/WalSndDone(send_data)/Assert(0)/\" -i src/backend/replication/walsender.c\n), I observe the following:\n$ make -s check -C src/test/recovery PROVE_TESTS=\"t/012*\"\n\nt/012_subtransactions.pl .. ok\nAll tests successful.\n\n(The same with meson:\n1/1 postgresql:recovery / recovery/012_subtransactions OK                3.24s   12 subtests passed)\n\nBut:\n$ grep TRAP src/test/recovery/tmp_check/log/*\nsrc/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line: \n2528, PID: 373729\nsrc/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line: \n2528, PID: 373750\nsrc/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line: \n2528, PID: 373821\nsrc/test/recovery/tmp_check/log/012_subtransactions_standby.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line: \n2528, PID: 373786\n\nsrc/test/recovery/tmp_check/log/012_subtransactions_primary.log contains:\n...\n2023-12-09 03:23:20.210 UTC [375933] LOG:  server process (PID 375975) was terminated by signal 6: Aborted\n2023-12-09 03:23:20.210 UTC [375933] DETAIL:  Failed process was running: START_REPLICATION 0/3000000 TIMELINE 3\n2023-12-09 03:23:20.210 UTC [375933] LOG:  terminating any other active server processes\n2023-12-09 03:23:20.210 UTC [375933] LOG:  abnormal database system shutdown\n2023-12-09 03:23:20.211 UTC [375933] LOG:  database system is shut down\n...\n\nSo the shutdown definitely was considered abnormal, but we missed that.\n\nWith log_min_messages = DEBUG3, I can see in the log:\n2023-12-09 03:26:50.145 UTC [377844] LOG:  abnormal database system shutdown\n2023-12-09 03:26:50.145 UTC [377844] DEBUG:  shmem_exit(1): 0 before_shmem_exit callbacks to make\n2023-12-09 03:26:50.145 UTC [377844] DEBUG:  shmem_exit(1): 5 on_shmem_exit callbacks to make\n2023-12-09 03:26:50.145 UTC [377844] DEBUG:  cleaning up orphaned dynamic shared memory with ID 2898643884\n2023-12-09 03:26:50.145 UTC [377844] DEBUG:  cleaning up dynamic shared memory control segment with ID 3446966170\n2023-12-09 03:26:50.146 UTC [377844] DEBUG:  proc_exit(1): 2 callbacks to make\n2023-12-09 03:26:50.146 UTC [377844] LOG:  database system is shut down\n2023-12-09 03:26:50.146 UTC [377844] DEBUG:  exit(1)\n2023-12-09 03:26:50.146 UTC [377844] DEBUG:  shmem_exit(-1): 0 before_shmem_exit callbacks to make\n2023-12-09 03:26:50.146 UTC [377844] DEBUG:  shmem_exit(-1): 0 on_shmem_exit callbacks to make\n2023-12-09 03:26:50.146 UTC [377844] DEBUG:  proc_exit(-1): 0 callbacks to make\n\nThe postmaster process exits with exit code 1, but pg_ctl can't get the\ncode and just reports that stop was completed successfully.\n\nShould this be improved? And if yes, how it can be done?\nMaybe postmaster shouldn't remove it's postmaster.pid when it exits\nabnormally to let pg_ctl know of it?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 9 Dec 2023 07:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "How abnormal server shutdown could be detected by tests?" 
}, { "msg_contents": "On Sat, Dec 9, 2023 at 9:30 AM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> While studying bug #18158, I've come to the conclusion that the existing\n> testing infrastructure is unable to detect abnormal situations. of some\n> kind.\n>\n> Just a simple example:\n> With Assert(0) injected in walsender (say:\n> sed \"s/WalSndDone(send_data)/Assert(0)/\" -i src/backend/replication/walsender.c\n> ), I observe the following:\n> $ make -s check -C src/test/recovery PROVE_TESTS=\"t/012*\"\n>\n> t/012_subtransactions.pl .. ok\n> All tests successful.\n>\n> (The same with meson:\n> 1/1 postgresql:recovery / recovery/012_subtransactions OK 3.24s 12 subtests passed)\n>\n> But:\n> $ grep TRAP src/test/recovery/tmp_check/log/*\n> src/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line:\n> 2528, PID: 373729\n> src/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line:\n> 2528, PID: 373750\n> src/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line:\n> 2528, PID: 373821\n> src/test/recovery/tmp_check/log/012_subtransactions_standby.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line:\n> 2528, PID: 373786\n>\n> src/test/recovery/tmp_check/log/012_subtransactions_primary.log contains:\n> ...\n> 2023-12-09 03:23:20.210 UTC [375933] LOG: server process (PID 375975) was terminated by signal 6: Aborted\n> 2023-12-09 03:23:20.210 UTC [375933] DETAIL: Failed process was running: START_REPLICATION 0/3000000 TIMELINE 3\n> 2023-12-09 03:23:20.210 UTC [375933] LOG: terminating any other active server processes\n> 2023-12-09 03:23:20.210 UTC [375933] LOG: abnormal database system shutdown\n> 2023-12-09 03:23:20.211 UTC [375933] LOG: database system is shut down\n> ...\n>\n> So the shutdown definitely was considered abnormal, but we missed that.\n>\n> With log_min_messages = DEBUG3, I can see in the log:\n> 2023-12-09 03:26:50.145 UTC [377844] LOG: abnormal database system shutdown\n> 2023-12-09 03:26:50.145 UTC [377844] DEBUG: shmem_exit(1): 0 before_shmem_exit callbacks to make\n> 2023-12-09 03:26:50.145 UTC [377844] DEBUG: shmem_exit(1): 5 on_shmem_exit callbacks to make\n> 2023-12-09 03:26:50.145 UTC [377844] DEBUG: cleaning up orphaned dynamic shared memory with ID 2898643884\n> 2023-12-09 03:26:50.145 UTC [377844] DEBUG: cleaning up dynamic shared memory control segment with ID 3446966170\n> 2023-12-09 03:26:50.146 UTC [377844] DEBUG: proc_exit(1): 2 callbacks to make\n> 2023-12-09 03:26:50.146 UTC [377844] LOG: database system is shut down\n> 2023-12-09 03:26:50.146 UTC [377844] DEBUG: exit(1)\n> 2023-12-09 03:26:50.146 UTC [377844] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make\n> 2023-12-09 03:26:50.146 UTC [377844] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make\n> 2023-12-09 03:26:50.146 UTC [377844] DEBUG: proc_exit(-1): 0 callbacks to make\n>\n> The postmaster process exits with exit code 1, but pg_ctl can't get the\n> code and just reports that stop was completed successfully.\n>\n\nFor what it's worth, there is another thread which stated the similar problem:\nhttps://www.postgresql.org/message-id/flat/2366244.1651681550%40sss.pgh.pa.us\n\n> Should this be improved? 
And if yes, how it can be done?\n> Maybe postmaster shouldn't remove it's postmaster.pid when it exits\n> abnormally to let pg_ctl know of it?\n>\n\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 12 Dec 2023 14:14:09 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How abnormal server shutdown could be detected by tests?" }, { "msg_contents": "Hello Shveta,\n\n12.12.2023 11:44, shveta malik wrote:\n>\n>> The postmaster process exits with exit code 1, but pg_ctl can't get the\n>> code and just reports that stop was completed successfully.\n>>\n> For what it's worth, there is another thread which stated the similar problem:\n> https://www.postgresql.org/message-id/flat/2366244.1651681550%40sss.pgh.pa.us\n>\n\nThank you for the reference!\nSo I refreshed a first part of the question Tom Lane raised before...\n\nI've made a quick experiment with leaving postmaster.pid intact in case of\nabnormal shutdown:\n@@ -1113,6 +1113,7 @@ UnlinkLockFiles(int status, Datum arg)\n      {\n          char       *curfile = (char *) lfirst(l);\n\n+if (strcmp(curfile, DIRECTORY_LOCK_FILE) != 0 || status == 0)\n          unlink(curfile);\n          /* Should we complain if the unlink fails? */\n      }\n\nand `make check-world` passed for me with no failure.\n(In the meantime, the assertion failure forced as above is detected.)\n\nThough there is a minor issue with a couple of tests. Namely,\n003_recovery_targets.pl does the following:\n# wait for the error message in the standby log\nforeach my $i (0 .. 10 * $PostgreSQL::Test::Utils::timeout_default)\n{\n     $logfile = slurp_file($node_primary->logfile());\n     $res = ($logfile =~\n         qr/FATAL: .* recovery ended before configured recovery target was reached/);\n     if ($res) {\n         last;\n     }\n     usleep(100_000);\n}\nok($res,\n     'recovery end before target reached is a fatal error');\n\nWith postmaster.pid left after unclean shutdown, the test waits for 300\nseconds by default and then completes successfully.\n\nIf rewrite that loop as follows:\n# wait for the error message in the standby log\nforeach my $i (0 .. 10 * $PostgreSQL::Test::Utils::timeout_default)\n{\n     $logfile = slurp_file($node_primary->logfile());\n     $res = ($logfile =~\n         qr/FATAL: .* recovery ended before configured recovery target was reached/);\n     if ($res) {\n         last;\n     }\n     usleep(100_000);\n}\nok($res,\n     'recovery end before target reached is a fatal error');\n\nthe test completes as quickly as before.\n(standby.log is only 2kb, so rereading it isn't a big deal, IMO)\n\nSo maybe it's the way to go?\n\nAnother way I can think of is sending some signal to pg_ctl in case\npostmaster terminates with status 0. Though I think it would complicate\nthings a little as it allows for three different states:\npostmaster.pid preserved (in case postmaster killed with -9),\npostmaster.pid removed and the signal received/not received.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 12 Dec 2023 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How abnormal server shutdown could be detected by tests?" } ]
[ { "msg_contents": "Hello fellow Hackers,\n\nDoes anyone know why we have decided that the wal_keep_size,\nmax_slot_wal_keep_size GUCs \"can only be set in the postgresql.conf\nfile or on the server command line.\" [1]?\n\nIt does not seem fundamentally needed , as they are \"kind of\nguidance\", especially the second one.\n\nThe first one - wal_keep_size - could be something that is directly\nrelied on in some code paths, so setting it in a live database could\ninvolve some two-step process, where you first set the value and then\nwait all current transactions to finish before you do any operations\ndepending on the new value, like removing the wal files no more kept\nbecause of earlier larger value. moving it up should need no extra\naction. Moving it up then down immediately after could cause some\ninteresting race conditions when you move it down lower than it was in\nthe beginning, so \"wait for all transactions to finish\" should apply\nin all cases\n\nFor the second one - max_slot_wal_keep_size - I can not immediately\ncome up with a scenario where just setting it could cause any\nunexpected consequences. If you set it to a value below a current slot\nvalue you *do* expect the slot to be invalidated. if you set it to a\nlarger than current value, then infringing slots get more time to\ncorrect themselves. Both behaviours would be much more useful if you\ndid not have to restart the whole server to make adjustments.\n\n-\n[1] https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-WAL-KEEP-SIZE\n\nBest Regards\nHannu\n\n\n", "msg_date": "Sat, 9 Dec 2023 12:32:22 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Why are wal_keep_size,\n max_slot_wal_keep_size requiring server restart?" }, { "msg_contents": "Hello Hannu,\n\nOn 2023-Dec-09, Hannu Krosing wrote:\n\n> Does anyone know why we have decided that the wal_keep_size,\n> max_slot_wal_keep_size GUCs \"can only be set in the postgresql.conf\n> file or on the server command line.\" [1]?\n\nI think you misread that boilerplate text. If a GUC can be set in the\npostgresql.conf file, then you can change it there and do a reload.\nAny change will take effect then. No need for a restart.\n\nVariables that require a restart say \"This parameter can only be set at\nserver start.\"\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n\n", "msg_date": "Sat, 9 Dec 2023 13:31:32 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why are wal_keep_size, max_slot_wal_keep_size requiring server\n restart?" } ]
[ { "msg_contents": "Dear Postgres Team,\r\n\r\nWe are contacting you today to get your feedback on a service degradation, that occurred, after upgrading related postgres databases to db-Version 15.\r\n\r\nWe upgraded several Live- and Non-Live-Landscapes to db-Version 15, coming from db-version 11. One AWS-Live-Landscape showed an increasing CPU-Usage from 5 % to 100% with the effect, that Lifecycle Operations on customer side were experiencing intermittent availability issues several times and with a max period of 186 minutes of degradation.\r\n\r\nThe Root Cause of this behavior, as aligned with AWS RDS Support, has been a new feature-set coding (parallel_feature_query) with Postgres Version 15, that shows a different behavior due to related parameter (max_parallel_workers_per_gather).\r\nThis parameter sets the maximum number of workers, that can be started by a single Gather or Gather Merge node. Parallel workers are taken from the pool of processes established by max_worker_processes, limited by max_parallel_workers. The default value is 2. Setting this value to 0 disabled parallel query execution and resolved the issue.\r\n\r\nRemaining question now is, what has to be done to move related Live-Landscapes back to the default parameter value (2) without creating the same impact again.\r\n\r\nDid you see such behavior with other customers ?\r\nWhat is your suggestion and recommended way-forward to enable parallel-worker setup again ?\r\n\r\nThanks and kind regards\r\n\r\n\r\nAxel Ritthaler\r\n\r\nBTP Development Manager, BTP FP CF WDF\r\n\r\nSAP SE Dietmar-Hopp-Allee 16, 69190 Walldorf, Germany\r\n\r\nE: [email protected]<mailto:[email protected]>\r\n\r\nM: +4915153858987<tel:%20+4915153858987> T: +496227774698<tel:%20+496227774698>\r\n\r\n\r\nPlease consider the impact on the environment before printing this e-mail.\r\n\r\n\r\n[cid:[email protected]]\r\n\r\n\r\nPflichtangaben/Mandatory Disclosure Statement: www.sap.com/impressum\r\n\r\nDiese E-Mail kann Betriebs- oder Geschäftsgeheimnisse oder sonstige vertrauliche Informationen enthalten. Sollten Sie diese E-Mail irrtümlich erhalten haben, ist Ihnen eine Kenntnisnahme des Inhalts, eine Vervielfältigung oder Weitergabe der E-Mail ausdrücklich untersagt. Bitte benachrichtigen Sie uns und vernichten Sie die empfangene E-Mail. Vielen Dank.\r\n\r\nThis e-mail may contain trade secrets or privileged, undisclosed, or otherwise confidential information. If you have received this e-mail in error, you are hereby notified that any review, copying, or distribution of it is strictly prohibited. Please inform us immediately and destroy the original transmittal. Thank you for your cooperation.", "msg_date": "Sat, 9 Dec 2023 12:26:53 +0000", "msg_from": "\"Ritthaler, Axel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres db Update to Version 15" }, { "msg_contents": "On Sun, 10 Dec 2023 at 04:10, Ritthaler, Axel <[email protected]> wrote:\n> The Root Cause of this behavior, as aligned with AWS RDS Support, has been a new feature-set coding (parallel_feature_query) with Postgres Version 15, that shows a different behavior due to related parameter (max_parallel_workers_per_gather).\n\nWhat is parallel_feature_query? 
No version of PostgreSQL has a\nsetting by that name.\n\n> Remaining question now is, what has to be done to move related Live-Landscapes back to the default parameter value (2) without creating the same impact again.\n\nYou'll need to identify the query or queries causing the problem.\nWe've likely made many more query shapes parallelizable in PG15\ncompared to PG11. So it does not seem unusual that PG15 will be able\nto paralleize more of your queries than what PG11 could do. That\ncould lead to parallel plans not getting the workers they desire due\nto workers being busy with other queries.\n\n> What is your suggestion and recommended way-forward to enable parallel-worker setup again ?\n\nIdentify the queries causing the problem. Then determine if the plan\nhas changed since PG11. You can then check all the release notes\nstarting with PG12 to see if anything is mentioned about why the plan\nmight have changed. e.g. something in the query is parallelizable in\nthis version that wasn't in PG11.\n\nOne thing to keep in mind is that PostgreSQL does not opt to\nparallelize the cheapest serial plan. It will attempt to find the\ncheapest plan with or without parallel workers. The difference here\nis that it's optimized for speed rather than resource usage. I'm not\nsure if this is a factor in your issue, but it may be something to\nkeep in mind while investigating.\n\nDavid\n\n\n", "msg_date": "Mon, 11 Dec 2023 15:32:06 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres db Update to Version 15" } ]
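To make the advice above concrete, here is an illustrative SQL sketch of how the affected statements could be identified and how parallelism could be re-enabled selectively rather than switched off server-wide. The database and role names are placeholders, pg_stat_statements must be installed for the first query, and none of this is a prescribed fix for the specific case above:

    -- Find the statements consuming the most execution time after the upgrade:
    SELECT queryid, calls, mean_exec_time, total_exec_time, left(query, 80) AS query
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 20;

    -- Parallelism can be limited per database or per role instead of globally,
    -- so it can be re-enabled gradually while the offending plans are examined:
    ALTER DATABASE app_db SET max_parallel_workers_per_gather = 0;
    ALTER ROLE reporting_user SET max_parallel_workers_per_gather = 2;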
[ { "msg_contents": "Hi,\n\nAFAICS you can't use unconstify()/unvolatize() in a static inline\nfunction in a .h file, or in a .cpp file, because\n__builtin_types_compatible_p is only available in C, not C++. Seems\nlike a reasonable thing to want to be able to do, no? I'm not\nimmediately sure what the right fix is; would #if\ndefined(HAVE__BUILTIN_TYPES_COMPATIBLE_P) && !defined(__cplusplus)\naround the relevant versions of constify()/unvolatize() be too easy?\n\nHAVE__BUILTIN_TYPES_COMPATIBLE_P is also tested in relptr.h, but only\nfor further preprocessor stuff, not in functions that the compiler\nwill see, so cpluspluscheck doesn't have anything to reject, and\nnothing will break unless someone writing C++ code actually tries to\nuse relptr_access(). I think we can live with that one?\n\n\n", "msg_date": "Mon, 11 Dec 2023 13:42:48 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "unconstify()/unvolatize() vs g++/clang++" }, { "msg_contents": "On 11.12.23 01:42, Thomas Munro wrote:\n> AFAICS you can't use unconstify()/unvolatize() in a static inline\n> function in a .h file, or in a .cpp file, because\n> __builtin_types_compatible_p is only available in C, not C++. Seems\n> like a reasonable thing to want to be able to do, no? I'm not\n> immediately sure what the right fix is; would #if\n> defined(HAVE__BUILTIN_TYPES_COMPATIBLE_P) && !defined(__cplusplus)\n> around the relevant versions of constify()/unvolatize() be too easy?\n\nThat seems right to me.\n\nIf you are slightly more daring, you can write an alternative definition \nin C++ using const_cast?\n\n\n\n", "msg_date": "Mon, 11 Dec 2023 10:17:51 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: unconstify()/unvolatize() vs g++/clang++" }, { "msg_contents": "On Mon, Dec 11, 2023 at 10:17 PM Peter Eisentraut <[email protected]> wrote:\n> If you are slightly more daring, you can write an alternative definition\n> in C++ using const_cast?\n\nOh, yeah, right, that works for my case. I can't think of any\nreasons not to do this, but IANAC++L.", "msg_date": "Mon, 11 Dec 2023 23:32:52 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unconstify()/unvolatize() vs g++/clang++" }, { "msg_contents": "On Mon, Dec 11, 2023 at 11:32 PM Thomas Munro <[email protected]> wrote:\n> On Mon, Dec 11, 2023 at 10:17 PM Peter Eisentraut <[email protected]> wrote:\n> > If you are slightly more daring, you can write an alternative definition\n> > in C++ using const_cast?\n>\n> Oh, yeah, right, that works for my case. I can't think of any\n> reasons not to do this, but IANAC++L.\n\nAnd pushed. Thanks!\n\n\n", "msg_date": "Tue, 12 Dec 2023 09:49:34 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: unconstify()/unvolatize() vs g++/clang++" } ]
[ { "msg_contents": "Hi,\n\nThis is not exhaustive, I just noticed in passing that we don't need these.", "msg_date": "Mon, 11 Dec 2023 13:57:39 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Some useless includes in llvmjit_inline.cpp" }, { "msg_contents": "On Mon, Dec 11, 2023 at 6:28 AM Thomas Munro <[email protected]> wrote:\n>\n> Hi,\n>\n> This is not exhaustive, I just noticed in passing that we don't need these.\n\nI was able to compile the changes with \"--with-llvm\" successfully, and\nthe changes look good to me.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Wed, 13 Dec 2023 16:57:29 +0530", "msg_from": "Shubham Khanna <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some useless includes in llvmjit_inline.cpp" } ]
[ { "msg_contents": "Hello,\n\nI recently encountered a case where partial indexes were surprisingly not\nbeing used. The issue is that predtest doesn't understand how boolean\nvalues and IS <boolean> expressions relate.\n\nFor example if I have:\n\ncreate table foo(i int, bar boolean);\ncreate index on foo(i) where bar is true;\n\nthen this query:\n\nselect * from foo where i = 1 and bar;\n\ndoesn't use the partial index.\n\nAttached is a patch that solves that issue. It also teaches predtest about\nquite a few more cases involving BooleanTest expressions (e.g., how they\nrelate to NullTest expressions). One thing I could imagine being an\nobjection is that not all of these warrant cycles in planning. If that\nturns out to be the case there's not a particularly clear line in my mind\nabout where to draw that line.\n\nAs noted in a TODO in the patch itself, I think it may be worth refactoring\nthe test_predtest module to run the \"x, y\" case as well as the \"y, x\" case\nwith a single call so as to eliminate a lot of repetition in\nclause/expression test cases. If reviewers agree that's desirable, then I\ncould do that as a precursor.\n\nRegards,\nJames Coleman", "msg_date": "Mon, 11 Dec 2023 14:59:46 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> Attached is a patch that solves that issue. It also teaches predtest about\n> quite a few more cases involving BooleanTest expressions (e.g., how they\n> relate to NullTest expressions). One thing I could imagine being an\n> objection is that not all of these warrant cycles in planning. If that\n> turns out to be the case there's not a particularly clear line in my mind\n> about where to draw that line.\n\nI don't have an objection in principle to adding more smarts to\npredtest.c. However, we should be wary of slowing down cases where\nno BooleanTests are present to be optimized. I wonder if it could\nhelp to use a switch on nodeTag rather than a series of if(IsA())\ntests. (I'd be inclined to rewrite the inner if-then-else chains\nas switches too, really. You get some benefit from the compiler\nnoticing whether you've covered all the enum values.)\n\nI note you've actively broken the function's ability to cope with\nNULL input pointers. Maybe we don't need it to, but I'm not going\nto accept a patch that just side-swipes that case without any\njustification.\n\nAnother way in which the patch needs more effort is that you've\nnot bothered to update the large comment block atop the function.\nPerhaps, rather than hoping people will notice comments that are\npotentially offscreen from what they're modifying, we should relocate\nthose comment paras to be adjacent to the relevant parts of the\nfunction?\n\nI've not gone through the patch in detail to see whether I believe\nthe proposed proof rules. It would help to have more comments\njustifying them.\n\n> As noted in a TODO in the patch itself, I think it may be worth refactoring\n> the test_predtest module to run the \"x, y\" case as well as the \"y, x\" case\n> with a single call so as to eliminate a lot of repetition in\n> clause/expression test cases. If reviewers agree that's desirable, then I\n> could do that as a precursor.\n\nI think that's actively undesirable. It is not typically the case that\na proof rule for A => B also works in the other direction, so this would\nencourage wasting cycles in the tests. 
I fear it might also cause\nconfusion about which direction a proof rule is supposed to work in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Dec 2023 13:36:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "Thanks for taking a look!\n\nOn Wed, Dec 13, 2023 at 1:36 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > Attached is a patch that solves that issue. It also teaches predtest about\n> > quite a few more cases involving BooleanTest expressions (e.g., how they\n> > relate to NullTest expressions). One thing I could imagine being an\n> > objection is that not all of these warrant cycles in planning. If that\n> > turns out to be the case there's not a particularly clear line in my mind\n> > about where to draw that line.\n>\n> I don't have an objection in principle to adding more smarts to\n> predtest.c. However, we should be wary of slowing down cases where\n> no BooleanTests are present to be optimized. I wonder if it could\n> help to use a switch on nodeTag rather than a series of if(IsA())\n> tests. (I'd be inclined to rewrite the inner if-then-else chains\n> as switches too, really. You get some benefit from the compiler\n> noticing whether you've covered all the enum values.)\n\nI think I could take this on; would you prefer it as a patch in this\nseries? Or as a new patch thread?\n\n> I note you've actively broken the function's ability to cope with\n> NULL input pointers. Maybe we don't need it to, but I'm not going\n> to accept a patch that just side-swipes that case without any\n> justification.\n\nI should have explained that. I don't think I've broken it:\n\n1. predicate_implied_by_simple_clause() is only ever called by\npredicate_implied_by_recurse()\n2. predicate_implied_by_recurse() starts with:\n pclass = predicate_classify(predicate, &pred_info);\n3. predicate_classify(Node *clause, PredIterInfo info) starts off with:\n Assert(clause != NULL);\n\nI believe this means we are currently guaranteed by the caller to\nreceive a non-NULL pointer, but I could be missing something.\n\nThe same argument (just substituting the equivalent \"refute\" function\nnames) applies to predicate_refuted_by_simple_clause().\n\n> Another way in which the patch needs more effort is that you've\n> not bothered to update the large comment block atop the function.\n> Perhaps, rather than hoping people will notice comments that are\n> potentially offscreen from what they're modifying, we should relocate\n> those comment paras to be adjacent to the relevant parts of the\n> function?\n\nSplitting up that block comment makes sense to me.\n\n> I've not gone through the patch in detail to see whether I believe\n> the proposed proof rules. It would help to have more comments\n> justifying them.\n\nMost of them are sufficiently simple -- e.g., X IS TRUE implies X --\nthat I don't think there's a lot to say in justification. 
In some\ncases I've noted the cases that force only strong or weak implication.\n\nThere are a few cases, though, (e.g., \"X is unknown weakly implies X\nis not true\") that, reading over this again, don't immediately strike\nme as obvious, so I'll expand on those.\n\n> > As noted in a TODO in the patch itself, I think it may be worth refactoring\n> > the test_predtest module to run the \"x, y\" case as well as the \"y, x\" case\n> > with a single call so as to eliminate a lot of repetition in\n> > clause/expression test cases. If reviewers agree that's desirable, then I\n> > could do that as a precursor.\n>\n> I think that's actively undesirable. It is not typically the case that\n> a proof rule for A => B also works in the other direction, so this would\n> encourage wasting cycles in the tests. I fear it might also cause\n> confusion about which direction a proof rule is supposed to work in.\n\nThat makes sense in the general case.\n\nBoolean expressions seem like a special case in that regard: (subject\nto what it looks like) would you be OK with a wrapping function that\ndoes both directions (with output that shows which direction is being\ntested) used only for the cases where we do want to check both\ndirections?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Wed, 13 Dec 2023 19:35:01 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> On Wed, Dec 13, 2023 at 1:36 PM Tom Lane <[email protected]> wrote:\n>> I don't have an objection in principle to adding more smarts to\n>> predtest.c. However, we should be wary of slowing down cases where\n>> no BooleanTests are present to be optimized. I wonder if it could\n>> help to use a switch on nodeTag rather than a series of if(IsA())\n>> tests. (I'd be inclined to rewrite the inner if-then-else chains\n>> as switches too, really. You get some benefit from the compiler\n>> noticing whether you've covered all the enum values.)\n\n> I think I could take this on; would you prefer it as a patch in this\n> series? Or as a new patch thread?\n\nNo, keep it in the same thread (and make a CF entry, if you didn't\nalready). It might be best to make a series of 2 patches, first\njust refactoring what's there per this discussion, and then a\nsecond one to add BooleanTest logic.\n\n>> I note you've actively broken the function's ability to cope with\n>> NULL input pointers. Maybe we don't need it to, but I'm not going\n>> to accept a patch that just side-swipes that case without any\n>> justification.\n\n> [ all callers have previously used predicate_classify ]\n\nOK, fair enough. The checks for nulls are probably from ancient\nhabit, but I agree we could remove 'em here.\n\n>> Perhaps, rather than hoping people will notice comments that are\n>> potentially offscreen from what they're modifying, we should relocate\n>> those comment paras to be adjacent to the relevant parts of the\n>> function?\n\n> Splitting up that block comment makes sense to me.\n\nDone, let's make it so.\n\n>> I've not gone through the patch in detail to see whether I believe\n>> the proposed proof rules. It would help to have more comments\n>> justifying them.\n\n> Most of them are sufficiently simple -- e.g., X IS TRUE implies X --\n> that I don't think there's a lot to say in justification. 
In some\n> cases I've noted the cases that force only strong or weak implication.\n\nYeah, it's the strong-vs-weak distinction that makes me cautious here.\nOne's high-school-algebra instinct for what's obviously true tends to\nnot think about NULL/UNKNOWN, and you do have to consider that.\n\n>>> As noted in a TODO in the patch itself, I think it may be worth refactoring\n>>> the test_predtest module to run the \"x, y\" case as well as the \"y, x\" case\n\n>> I think that's actively undesirable. It is not typically the case that\n>> a proof rule for A => B also works in the other direction, so this would\n>> encourage wasting cycles in the tests. I fear it might also cause\n>> confusion about which direction a proof rule is supposed to work in.\n\n> That makes sense in the general case.\n\n> Boolean expressions seem like a special case in that regard: (subject\n> to what it looks like) would you be OK with a wrapping function that\n> does both directions (with output that shows which direction is being\n> tested) used only for the cases where we do want to check both\n> directions?\n\nUsing a wrapper where appropriate would remove the inefficiency\nconcern, but I still worry whether it will promote confusion about\nwhich direction we're proving things in. You'll need to be very clear\nabout the labeling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Dec 2023 16:38:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "On Thu, Dec 14, 2023 at 4:38 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > On Wed, Dec 13, 2023 at 1:36 PM Tom Lane <[email protected]> wrote:\n> >> I don't have an objection in principle to adding more smarts to\n> >> predtest.c. However, we should be wary of slowing down cases where\n> >> no BooleanTests are present to be optimized. I wonder if it could\n> >> help to use a switch on nodeTag rather than a series of if(IsA())\n> >> tests. (I'd be inclined to rewrite the inner if-then-else chains\n> >> as switches too, really. You get some benefit from the compiler\n> >> noticing whether you've covered all the enum values.)\n>\n> > I think I could take this on; would you prefer it as a patch in this\n> > series? Or as a new patch thread?\n>\n> No, keep it in the same thread (and make a CF entry, if you didn't\n> already). It might be best to make a series of 2 patches, first\n> just refactoring what's there per this discussion, and then a\n> second one to add BooleanTest logic.\n\nCF entry is already created; I'll keep it here then.\n\n> >> I note you've actively broken the function's ability to cope with\n> >> NULL input pointers. Maybe we don't need it to, but I'm not going\n> >> to accept a patch that just side-swipes that case without any\n> >> justification.\n>\n> > [ all callers have previously used predicate_classify ]\n>\n> OK, fair enough. The checks for nulls are probably from ancient\n> habit, but I agree we could remove 'em here.\n>\n> >> Perhaps, rather than hoping people will notice comments that are\n> >> potentially offscreen from what they're modifying, we should relocate\n> >> those comment paras to be adjacent to the relevant parts of the\n> >> function?\n>\n> > Splitting up that block comment makes sense to me.\n>\n> Done, let's make it so.\n>\n> >> I've not gone through the patch in detail to see whether I believe\n> >> the proposed proof rules. 
It would help to have more comments\n> >> justifying them.\n>\n> > Most of them are sufficiently simple -- e.g., X IS TRUE implies X --\n> > that I don't think there's a lot to say in justification. In some\n> > cases I've noted the cases that force only strong or weak implication.\n>\n> Yeah, it's the strong-vs-weak distinction that makes me cautious here.\n> One's high-school-algebra instinct for what's obviously true tends to\n> not think about NULL/UNKNOWN, and you do have to consider that.\n>\n> >>> As noted in a TODO in the patch itself, I think it may be worth refactoring\n> >>> the test_predtest module to run the \"x, y\" case as well as the \"y, x\" case\n>\n> >> I think that's actively undesirable. It is not typically the case that\n> >> a proof rule for A => B also works in the other direction, so this would\n> >> encourage wasting cycles in the tests. I fear it might also cause\n> >> confusion about which direction a proof rule is supposed to work in.\n>\n> > That makes sense in the general case.\n>\n> > Boolean expressions seem like a special case in that regard: (subject\n> > to what it looks like) would you be OK with a wrapping function that\n> > does both directions (with output that shows which direction is being\n> > tested) used only for the cases where we do want to check both\n> > directions?\n>\n> Using a wrapper where appropriate would remove the inefficiency\n> concern, but I still worry whether it will promote confusion about\n> which direction we're proving things in. You'll need to be very clear\n> about the labeling.\n\nI've not yet applied all of your feedback, but I wanted to get an\ninitial read on your thoughts on how using switch statements ends up\nlooking. Attached is a single (pure refactor) patch that converts the\nvarious if/else levels that check things like node tag and\nboolean/null test type into switch statements. I've retained 'default'\nkeyword usages for now for simplicity (my intuition is that we\ngenerally prefer to list out all options for compiler safety benefits,\nthough I'm not 100% sure that's useful in the outer node tag check\nsince it's unlikely that someone adding a new node would modify\nthis...).\n\nMy big question is: are you comfortable with the indentation explosion\nthis creates? IMO it's a lot wordier, but it is also more obvious what\nthe structural goal is. I'm not sure how we want to make the right\ntrade-off though.\n\nOnce there's agreement on this part, I'll add back a second patch\napplying my changes on top of the refactor as well as apply other\nfeedback (e.g., splitting up the block comment).\n\nRegards,\nJames Coleman", "msg_date": "Fri, 22 Dec 2023 10:00:50 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> I've not yet applied all of your feedback, but I wanted to get an\n> initial read on your thoughts on how using switch statements ends up\n> looking. Attached is a single (pure refactor) patch that converts the\n> various if/else levels that check things like node tag and\n> boolean/null test type into switch statements. 
I've retained 'default'\n> keyword usages for now for simplicity (my intuition is that we\n> generally prefer to list out all options for compiler safety benefits,\n> though I'm not 100% sure that's useful in the outer node tag check\n> since it's unlikely that someone adding a new node would modify\n> this...).\n\n> My big question is: are you comfortable with the indentation explosion\n> this creates? IMO it's a lot wordier, but it is also more obvious what\n> the structural goal is. I'm not sure how we want to make the right\n> trade-off though.\n\nYeah, I see what you mean. Also, I'd wanted to shove most of\nthe text in the function header in-line and get rid of the short\nrestatements of those paras. I carried that through just for\npredicate_implied_by_simple_clause, as attached. The structure is\ndefinitely clearer, but we end up with an awful lot of indentation,\nwhich makes the comments less readable than I'd like. (I did some\nminor rewording to make them flow better.)\n\nOn balance I think this is probably better than what we have, but\nmaybe we'd be best off to avoid doubly nested switches? I think\nthere's a good argument for the outer switch on nodeTag, but\nmaybe we're getting diminishing returns from an inner switch.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 22 Dec 2023 14:48:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "On Fri, Dec 22, 2023 at 2:48 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > I've not yet applied all of your feedback, but I wanted to get an\n> > initial read on your thoughts on how using switch statements ends up\n> > looking. Attached is a single (pure refactor) patch that converts the\n> > various if/else levels that check things like node tag and\n> > boolean/null test type into switch statements. I've retained 'default'\n> > keyword usages for now for simplicity (my intuition is that we\n> > generally prefer to list out all options for compiler safety benefits,\n> > though I'm not 100% sure that's useful in the outer node tag check\n> > since it's unlikely that someone adding a new node would modify\n> > this...).\n>\n> > My big question is: are you comfortable with the indentation explosion\n> > this creates? IMO it's a lot wordier, but it is also more obvious what\n> > the structural goal is. I'm not sure how we want to make the right\n> > trade-off though.\n>\n> Yeah, I see what you mean. Also, I'd wanted to shove most of\n> the text in the function header in-line and get rid of the short\n> restatements of those paras. I carried that through just for\n> predicate_implied_by_simple_clause, as attached. The structure is\n> definitely clearer, but we end up with an awful lot of indentation,\n> which makes the comments less readable than I'd like. (I did some\n> minor rewording to make them flow better.)\n>\n> On balance I think this is probably better than what we have, but\n> maybe we'd be best off to avoid doubly nested switches? I think\n> there's a good argument for the outer switch on nodeTag, but\n> maybe we're getting diminishing returns from an inner switch.\n>\n> regards, tom lane\n>\n\nApologies for the long delay.\n\nAttached is a new patch series.\n\n0001 does the initial pure refactor. 0003 makes a lot of modifications\nto what we can prove about implication and refutation. 
Finally, 0003\nisn't intended to be committed, but attempts to validate more\nholistically that none of the changes creates any invalid proofs\nbeyond the mostly happy-path tests added in 0004.\n\nI ended up not tackling changing how test_predtest tests run for now.\nThat's plausibly still useful, and I'd be happy to add that if you\ngenerally agree with the direction of the patch and with that\nabstraction being useful.\n\nI added some additional verifications to the test_predtest module to\nprevent additional obvious flaws.\n\nRegards,\nJames Coleman", "msg_date": "Wed, 17 Jan 2024 19:34:36 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> 0001 does the initial pure refactor. 0003 makes a lot of modifications\n> to what we can prove about implication and refutation. Finally, 0003\n> isn't intended to be committed, but attempts to validate more\n> holistically that none of the changes creates any invalid proofs\n> beyond the mostly happy-path tests added in 0004.\n\n> I ended up not tackling changing how test_predtest tests run for now.\n> That's plausibly still useful, and I'd be happy to add that if you\n> generally agree with the direction of the patch and with that\n> abstraction being useful.\n\n> I added some additional verifications to the test_predtest module to\n> prevent additional obvious flaws.\n\nI looked through 0001 and made some additional cosmetic changes,\nmostly to get comments closer to the associated code; I also\nran pgindent on it (see v5-0001 attached). That seems pretty\ncommittable to me at this point. I also like your 0002 additions to\ntest_predtest.c (although why the mixture of ERROR and WARNING?\nISTM they should all be WARNING, so we can press on with the test).\n\nOne other thought is that maybe separating out\npredicate_implied_not_null_by_clause should be part of 0001?\n\nI'm less excited about the rest of v4-0002.\n\n@@ -740,6 +747,16 @@ predicate_refuted_by_recurse(Node *clause, Node *predicate,\n !weak))\n return true;\n \n+ /*\n+ * Because weak refutation expands the allowed outcomes for B\n+ * from \"false\" to \"false or null\", we can additionally prove\n+ * weak refutation in the case that strong refutation is proven.\n+ */\n+ if (weak && not_arg &&\n+ predicate_implied_by_recurse(predicate, not_arg,\n+ true))\n+ return true;\n+\n switch (pclass)\n {\n case CLASS_AND:\n\nI don't buy this bit at all. If the prior recursive call fails to\nprove weak refutation in a case where strong refutation holds, how is\nthat not a bug lower down? 
Moreover, in order to mask such a bug,\nyou're doubling the time taken by failed proofs, which is an\nunfortunate thing --- we don't like spending a lot of time on\nsomething that fails to improve the plan.\n\n@@ -1138,32 +1155,114 @@ predicate_implied_by_simple_clause(Expr *predicate, Node *clause,\n Assert(list_length(op->args) == 2);\n rightop = lsecond(op->args);\n \n- /*\n- * We might never see a null Const here, but better check\n- * anyway\n- */\n- if (rightop && IsA(rightop, Const) &&\n- !((Const *) rightop)->constisnull)\n+ if (rightop && IsA(rightop, Const))\n {\n+ Const *constexpr = (Const *) rightop;\n Node *leftop = linitial(op->args);\n \n- if (DatumGetBool(((Const *) rightop)->constvalue))\n- {\n- /* X = true implies X */\n- if (equal(predicate, leftop))\n- return true;\n- }\n+ if (constexpr->constisnull)\n+ return false;\n+\n+ if (DatumGetBool(constexpr->constvalue))\n+ return equal(predicate, leftop);\n else\n- {\n- /* X = false implies NOT X */\n- if (is_notclause(predicate) &&\n- equal(get_notclausearg(predicate), leftop))\n- return true;\n- }\n+ return is_notclause(predicate) &&\n+ equal(get_notclausearg(predicate), leftop);\n }\n }\n }\n break;\n\nI don't understand what this bit is doing ... and the fact that\nthe patch removes all the existing comments and adds none isn't\nhelping that. What it seems to mostly be doing is adding early\n\"return false\"s, which I'm not sure is a good thing, because\nit seems possible that operator_predicate_proof could apply here.\n\n+ case IS_UNKNOWN:\n+ /*\n+ * When the clause is in the form \"foo IS UNKNOWN\" then\n+ * we can prove weak implication of a predicate that\n+ * is strict for \"foo\" and negated. This doesn't work\n+ * for strong implication since if \"foo\" is \"null\" then\n+ * the predicate will evaluate to \"null\" rather than\n+ * \"true\".\n+ */\n\nThe phrasing of this comment seems randomly inconsistent with others\nmaking similar arguments.\n\n+ case IS_TRUE:\n /*\n- * If the predicate is of the form \"foo IS NOT NULL\",\n- * and we are considering strong implication, we can\n- * conclude that the predicate is implied if the\n- * clause is strict for \"foo\", i.e., it must yield\n- * false or NULL when \"foo\" is NULL. In that case\n- * truth of the clause ensures that \"foo\" isn't NULL.\n- * (Again, this is a safe conclusion because \"foo\"\n- * must be immutable.) This doesn't work for weak\n- * implication, though. Also, \"row IS NOT NULL\" does\n- * not act in the simple way we have in mind.\n+ * X implies X is true\n+ *\n+ * We can only prove strong implication here since\n+ * `null is true` is false rather than null.\n */\n\nThis hardly seems like an improvement on the comment. (Also, here and\nelsewhere, could we avoid using two different types of quotes?)\n\n+ /* X is unknown weakly implies X is not true */\n+ if (weak && clausebtest->booltesttype == IS_UNKNOWN &&\n+ equal(clausebtest->arg, predbtest->arg))\n+ return true;\n\nMaybe I'm confused, but why is it only weak?\n\n+ /*\n+ * When we know what the predicate is in the form\n+ * \"foo IS UNKNOWN\" then we can prove strong and\n+ * weak refutation together. This is because the\n+ * limits imposed by weak refutation (allowing\n+ * \"false\" instead of just \"null\") is equivalently\n+ * helpful since \"foo\" being \"false\" also refutes\n+ * the predicate. Hence we pass weak=false here\n+ * always.\n+ */\n\nThis comment doesn't make sense to me either.\n \n+ /* TODO: refactor this into switch statements also? 
*/\n\nLet's drop the TODO comments.\n\n+ /*\n+ * We can recurse into \"not foo\" without any additional processing because\n+ * \"not (null)\" evaluates to null. That doesn't work for allow_false,\n+ * however, since \"not (false)\" is true rather than null.\n+ */\n+ if (is_notclause(clause) &&\n+ clause_is_strict_for((Node *) get_notclausearg(clause), subexpr, false))\n+ return true;\n\nNot exactly convinced by this. The way the comment is written, I'd\nexpect to not call clause_is_strict_for at all if allow_false. If\nit's okay to call it anyway and pass allow_false = false, you need\nto defend that, which this comment isn't doing.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 22 Jan 2024 12:57:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "Thanks for the feedback.\n\nOn Mon, Jan 22, 2024 at 12:57 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > 0001 does the initial pure refactor. 0003 makes a lot of modifications\n> > to what we can prove about implication and refutation. Finally, 0003\n> > isn't intended to be committed, but attempts to validate more\n> > holistically that none of the changes creates any invalid proofs\n> > beyond the mostly happy-path tests added in 0004.\n>\n> > I ended up not tackling changing how test_predtest tests run for now.\n> > That's plausibly still useful, and I'd be happy to add that if you\n> > generally agree with the direction of the patch and with that\n> > abstraction being useful.\n>\n> > I added some additional verifications to the test_predtest module to\n> > prevent additional obvious flaws.\n>\n> I looked through 0001 and made some additional cosmetic changes,\n> mostly to get comments closer to the associated code; I also\n> ran pgindent on it (see v5-0001 attached). That seems pretty\n> committable to me at this point.\n\nGreat.\n\n> I also like your 0002 additions to\n> test_predtest.c (although why the mixture of ERROR and WARNING?\n> ISTM they should all be WARNING, so we can press on with the test).\n\nMy reasoning is that one is a major error in something larger than\npredtest, while the other is clearly \"your code change isn't\naccurate\". The surrounding code seems to be drawing a distinction also\n(it uses both ERROR and WARNING), and so I was trying to parallel that\nappropriately.\n\nI'm fine with making both WARNING though.\n\nBut does that also mean we should make other such cases WARNING as\nwell? For example, the query not returning two boolean columns doesn't\nreally seem like a reason to break subsequent tests.\n\nI haven't changed this yet pending these questions.\n\n> One other thought is that maybe separating out\n> predicate_implied_not_null_by_clause should be part of 0001?\n\nWould you prefer to commit a refactor along with some functionality\nchanges? 
Or one patch with the pure refactor and then a second patch\nwith the predicate_implied_not_null_by_clause changes?\n\n> I'm less excited about the rest of v4-0002.\n>\n> @@ -740,6 +747,16 @@ predicate_refuted_by_recurse(Node *clause, Node *predicate,\n> !weak))\n> return true;\n>\n> + /*\n> + * Because weak refutation expands the allowed outcomes for B\n> + * from \"false\" to \"false or null\", we can additionally prove\n> + * weak refutation in the case that strong refutation is proven.\n> + */\n> + if (weak && not_arg &&\n> + predicate_implied_by_recurse(predicate, not_arg,\n> + true))\n> + return true;\n> +\n> switch (pclass)\n> {\n> case CLASS_AND:\n>\n> I don't buy this bit at all. If the prior recursive call fails to\n> prove weak refutation in a case where strong refutation holds, how is\n> that not a bug lower down?\n\nThis is one of the last additions I made while authoring the most\nrecent version of the patch, and at first I thought it suggested a bug\nlower down also.\n\nHowever the cases proven by these lines (\"x is not false\" is weakly\nrefuted by \"not x\", \"x is false\", and \"x = false\") correctly do not\nhave their not arg (\"x\") strongly implied by \"x is not false\" since\nboth \"x is null\" and \"x is true\" would have to imply \"x\", which\nobviously doesn't hold. These aren't cases we're handling directly in\npredicate_refuted_by_simple_clause.\n\nThis is caused by the asymmetry between implication and refutation\nthat I noted in my addition to the comments nearer the top of the\nfile:\n\n+ * A notable difference between implication and refutation proofs is that\n+ * strong/weak refutations don't vary the input of A (both must be true) but\n+ * vary the allowed outcomes of B (false vs. non-truth), while for implications\n+ * we vary both A (truth vs. non-falsity) and B (truth vs. non-falsity).\n\nPut another way in the comments I added in test_predtest.c:\n\n+ /* Because weak refutation proofs are a strict subset of strong refutation\n+ * proofs (since for \"A => B\" \"A\" is always true) we ought never\nhave strong\n+ * refutation hold when weak refutation does not.\n+ *\n+ * We can't make the same assertion for implication since moving\nfrom strong\n+ * to weak implication expands the allowed values of \"A\" from\ntrue to either\n+ * true or NULL.\n\nWe could decide to handle this particular failing case explicitly in\npredicate_refuted_by_simple_clause as opposed to inferring it by\nwhether or not implication by the not-arg holds, but I suspect that\nleaves us open to other cases we should be to prove refutation for but\ndon't.\n\nAlternatively (to avoid unnecessary CPU burn) we could modify\npredicate_implied_by_recurse (and functionals called by it) to have a\nargument beyond \"weak = true/false\" Ie.g., an enum that allows for\nsomething like \"WEAK\", \"STRONG\", and \"EITHER\". 
That's a bigger change,\nso I didn't want to do that right away unless there was agreement on\nthat direction.\n\nI haven't changed this yet pending this discussion.\n\n> Moreover, in order to mask such a bug,\n> you're doubling the time taken by failed proofs, which is an\n> unfortunate thing --- we don't like spending a lot of time on\n> something that fails to improve the plan.\n\nSee above.\n\n> @@ -1138,32 +1155,114 @@ predicate_implied_by_simple_clause(Expr *predicate, Node *clause,\n> Assert(list_length(op->args) == 2);\n> rightop = lsecond(op->args);\n>\n> - /*\n> - * We might never see a null Const here, but better check\n> - * anyway\n> - */\n> - if (rightop && IsA(rightop, Const) &&\n> - !((Const *) rightop)->constisnull)\n> + if (rightop && IsA(rightop, Const))\n> {\n> + Const *constexpr = (Const *) rightop;\n> Node *leftop = linitial(op->args);\n>\n> - if (DatumGetBool(((Const *) rightop)->constvalue))\n> - {\n> - /* X = true implies X */\n> - if (equal(predicate, leftop))\n> - return true;\n> - }\n> + if (constexpr->constisnull)\n> + return false;\n> +\n> + if (DatumGetBool(constexpr->constvalue))\n> + return equal(predicate, leftop);\n> else\n> - {\n> - /* X = false implies NOT X */\n> - if (is_notclause(predicate) &&\n> - equal(get_notclausearg(predicate), leftop))\n> - return true;\n> - }\n> + return is_notclause(predicate) &&\n> + equal(get_notclausearg(predicate), leftop);\n> }\n> }\n> }\n> break;\n>\n> I don't understand what this bit is doing ... and the fact that\n> the patch removes all the existing comments and adds none isn't\n> helping that. What it seems to mostly be doing is adding early\n> \"return false\"s, which I'm not sure is a good thing, because\n> it seems possible that operator_predicate_proof could apply here.\n\nI was mostly bringing it in line with the style I have elsewhere in\nthe patch by pulling out the Const* into a variable to avoid repeated\ncasting.\n\nThat being said, you're right that I didn't catch in the many\nrevisions along the way that I'd added unnecessary early returns and\nlost the comments. Fixed both of those in the next version.\n\n> + case IS_UNKNOWN:\n> + /*\n> + * When the clause is in the form \"foo IS UNKNOWN\" then\n> + * we can prove weak implication of a predicate that\n> + * is strict for \"foo\" and negated. This doesn't work\n> + * for strong implication since if \"foo\" is \"null\" then\n> + * the predicate will evaluate to \"null\" rather than\n> + * \"true\".\n> + */\n>\n> The phrasing of this comment seems randomly inconsistent with others\n> making similar arguments.\n\nChanged.\n\n> + case IS_TRUE:\n> /*\n> - * If the predicate is of the form \"foo IS NOT NULL\",\n> - * and we are considering strong implication, we can\n> - * conclude that the predicate is implied if the\n> - * clause is strict for \"foo\", i.e., it must yield\n> - * false or NULL when \"foo\" is NULL. In that case\n> - * truth of the clause ensures that \"foo\" isn't NULL.\n> - * (Again, this is a safe conclusion because \"foo\"\n> - * must be immutable.) This doesn't work for weak\n> - * implication, though. Also, \"row IS NOT NULL\" does\n> - * not act in the simple way we have in mind.\n> + * X implies X is true\n> + *\n> + * We can only prove strong implication here since\n> + * `null is true` is false rather than null.\n> */\n>\n> This hardly seems like an improvement on the comment. (Also, here and\n> elsewhere, could we avoid using two different types of quotes?)\n\nI think the git diff is confusing here. 
The old comment was about a\npredicate \"foo IS NOT NULL\", but the new comment is about a predicate\n\"foo IS TRUE\".\n\nI did fix the usage of backticks though.\n\n> + /* X is unknown weakly implies X is not true */\n> + if (weak && clausebtest->booltesttype == IS_UNKNOWN &&\n> + equal(clausebtest->arg, predbtest->arg))\n> + return true;\n>\n> Maybe I'm confused, but why is it only weak?\n\nYou're not confused; this seems like a mistake (same with the IS NOT\nFALSE below it).\n\n> + /*\n> + * When we know what the predicate is in the form\n> + * \"foo IS UNKNOWN\" then we can prove strong and\n> + * weak refutation together. This is because the\n> + * limits imposed by weak refutation (allowing\n> + * \"false\" instead of just \"null\") is equivalently\n> + * helpful since \"foo\" being \"false\" also refutes\n> + * the predicate. Hence we pass weak=false here\n> + * always.\n> + */\n>\n> This comment doesn't make sense to me either.\n\nI rewrote the comment in the attached revision; let me know if that helps.\n\n> + /* TODO: refactor this into switch statements also? */\n>\n> Let's drop the TODO comments.\n\nThis one was meant to be a question for you in review: do we want to\nmake that change? Or are we content to leave it as-is?\n\nEither way, removed.\n\n> + /*\n> + * We can recurse into \"not foo\" without any additional processing because\n> + * \"not (null)\" evaluates to null. That doesn't work for allow_false,\n> + * however, since \"not (false)\" is true rather than null.\n> + */\n> + if (is_notclause(clause) &&\n> + clause_is_strict_for((Node *) get_notclausearg(clause), subexpr, false))\n> + return true;\n>\n> Not exactly convinced by this. The way the comment is written, I'd\n> expect to not call clause_is_strict_for at all if allow_false. If\n> it's okay to call it anyway and pass allow_false = false, you need\n> to defend that, which this comment isn't doing.\n\nI updated the comment to clarify. The restriction on allow_false\n(always passing along false on the recursion case) is already\ndocumented as a requirement in the function comment, but I wanted the\ncomment here to explain why that was necessary here, since in my\nopinion it's not immediately obvious reading the function comment why\nsuch a restriction would necessarily hold true for all recursion\ncases.\n\nRegards,\nJames Coleman", "msg_date": "Wed, 24 Jan 2024 20:08:47 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> [ v6 patchset ]\n\nI went ahead and committed 0001 after one more round of review\n\nstatements; my bad). I also added the changes in test_predtest.c from\n0002. I attach a rebased version of 0002, as well as 0003 which isn't\nchanged, mainly to keep the cfbot happy.\n\nI'm still not happy with what you did in predicate_refuted_by_recurse:\nit feels wrong and rather expensively so. There has to be a better\nway. Maybe strong vs. weak isn't quite the right formulation for\nrefutation tests?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 25 Mar 2024 17:53:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "I wrote:\n> I went ahead and committed 0001 after one more round of review\n> \n> statements; my bad). I also added the changes in test_predtest.c from\n> 0002. 
I attach a rebased version of 0002, as well as 0003 which isn't\n> changed, mainly to keep the cfbot happy.\n\n[ squint.. ] Apparently I managed to hit ^K right before sending this\nemail. The missing line was meant to be more or less\n\n> which found a couple of missing \"break\"\n\nNot too important, but perhaps future readers of the archives will\nbe confused.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 Mar 2024 23:45:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:45 PM Tom Lane <[email protected]> wrote:\n>\n> I wrote:\n> > I went ahead and committed 0001 after one more round of review\n> >\n> > statements; my bad). I also added the changes in test_predtest.c from\n> > 0002. I attach a rebased version of 0002, as well as 0003 which isn't\n> > changed, mainly to keep the cfbot happy.\n>\n> [ squint.. ] Apparently I managed to hit ^K right before sending this\n> email. The missing line was meant to be more or less\n>\n> > which found a couple of missing \"break\"\n>\n> Not too important, but perhaps future readers of the archives will\n> be confused.\n\nI was wondering myself :) so thanks for clarifying.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Mon, 1 Apr 2024 08:05:02 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "On Mon, Mar 25, 2024 at 5:53 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > [ v6 patchset ]\n>\n> I went ahead and committed 0001 after one more round of review\n>\n> statements; my bad). I also added the changes in test_predtest.c from\n> 0002. I attach a rebased version of 0002, as well as 0003 which isn't\n> changed, mainly to keep the cfbot happy.\n>\n> I'm still not happy with what you did in predicate_refuted_by_recurse:\n> it feels wrong and rather expensively so. There has to be a better\n> way. Maybe strong vs. weak isn't quite the right formulation for\n> refutation tests?\n\nPossibly. Earlier I'd mused that:\n\n> Alternatively (to avoid unnecessary CPU burn) we could modify\n> predicate_implied_by_recurse (and functionals called by it) to have a\n> argument beyond \"weak = true/false\" Ie.g., an enum that allows for\n> something like \"WEAK\", \"STRONG\", and \"EITHER\". That's a bigger change,\n> so I didn't want to do that right away unless there was agreement on\n> that direction.\n\nI'm going to try implementing that and see how I feel about what it\nlooks like in practice.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Mon, 1 Apr 2024 08:06:42 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" }, { "msg_contents": "On Mon, Apr 1, 2024 at 8:06 AM James Coleman <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 5:53 PM Tom Lane <[email protected]> wrote:\n> >\n> > James Coleman <[email protected]> writes:\n> > > [ v6 patchset ]\n> >\n> > I went ahead and committed 0001 after one more round of review\n> >\n> > statements; my bad). I also added the changes in test_predtest.c from\n> > 0002. I attach a rebased version of 0002, as well as 0003 which isn't\n> > changed, mainly to keep the cfbot happy.\n> >\n> > I'm still not happy with what you did in predicate_refuted_by_recurse:\n> > it feels wrong and rather expensively so. 
There has to be a better\n> > way. Maybe strong vs. weak isn't quite the right formulation for\n> > refutation tests?\n>\n> Possibly. Earlier I'd mused that:\n>\n> > Alternatively (to avoid unnecessary CPU burn) we could modify\n> > predicate_implied_by_recurse (and functionals called by it) to have a\n> > argument beyond \"weak = true/false\" Ie.g., an enum that allows for\n> > something like \"WEAK\", \"STRONG\", and \"EITHER\". That's a bigger change,\n> > so I didn't want to do that right away unless there was agreement on\n> > that direction.\n>\n> I'm going to try implementing that and see how I feel about what it\n> looks like in practice.\n\nAttached is v8 which does this. Note that I kept the patch 0001 as\nbefore and inserted a new 0002 to show exactly what's changed from the\npreviously version -- I wouldn't expect that to be committed\nseparately, of course. With this change we only need to recurse a\nsingle time and can check for both strong and weak refutation when\neither will do for proving refutation of the \"NOT x\" construct.\n\nRegards,\nJames Coleman", "msg_date": "Fri, 5 Apr 2024 20:43:33 -0400", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Teach predtest about IS [NOT] <boolean> proofs" } ]
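Aside, not part of the archived thread above: the strong-versus-weak distinction the reviewers keep returning to rests on SQL's three-valued boolean semantics, where NULL IS TRUE yields false while NOT NULL yields NULL. The following query is purely illustrative (the VALUES alias t(x) is invented here and nothing below comes from the patches under discussion); it can be run on any PostgreSQL release to make those outcomes visible:

    SELECT x,
           (x IS TRUE)      AS is_true,
           (x IS NOT FALSE) AS is_not_false,
           (NOT x)          AS not_x
    FROM (VALUES (true), (false), (NULL::boolean)) AS t(x);

For x = NULL this returns is_true = false, is_not_false = true and not_x = NULL, which is why "X implies X IS TRUE" can only be accepted as a strong implication (weak implication would need x IS TRUE to stay non-false when x is NULL); the x = false row, where NOT x is true rather than NULL, likewise shows why the recursion through NOT in clause_is_strict_for has to drop allow_false, as discussed in the thread.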
[ { "msg_contents": "Hi,\n\nA customer found what looks like a sort regression while testing his code from\nv11 on a higher version. We hunt this regression down to commit 586b98fdf1aae,\nintroduced in v12.\n\nConsider the following test case:\n\n createdb -l fr_FR.utf8 -T template0 reg\n psql reg <<<\"\n BEGIN;\n CREATE TABLE IF NOT EXISTS reg\n (\n id bigint NOT NULL,\n reg bytea NOT NULL\n );\n \n INSERT INTO reg VALUES\n (1, convert_to( 'aaa', 'UTF8')),\n (2, convert_to( 'aa}', 'UTF8'));\n \n SELECT id FROM reg ORDER BY convert_from(reg, 'UTF8');\"\n\nIn parent commit 68f6f2b7395fe, it results:\n\n id \n ────\n 2\n 1\n\nAnd in 586b98fdf1aae:\n\n id \n ────\n 1\n 2\n\nLooking at the plan, the sort node are different:\n\n* 68f6f2b7395fe: Sort Key: (convert_from(reg, 'UTF8'::name))\n* 586b98fdf1aae: Sort Key: (convert_from(reg, 'UTF8'::name)) COLLATE \"C\"\n\nIt looks like since 586b98fdf1aae, the result type collation of \"convert_from\"\nis forced to \"C\", like the patch does for type \"name\", instead of the \"default\"\ncollation for type \"text\".\n\nLooking at hints in the header comment of function \"exprCollation\", I poked\naround and found that the result collation wrongly follow the input collation\nin this case. With 586b98fdf1aae:\n\n -- 2nd parameter type resolved as \"name\" so collation forced to \"C\"\n SELECT id FROM reg ORDER BY convert_from(reg, 'UTF8');\n -- 1\n -- 2\n\n -- Collation of 2nd parameter is forced to something else\n SELECT id FROM reg ORDER BY convert_from(reg, 'UTF8' COLLATE \\\"default\\\");\n -- 2\n -- 1\n -- Sort\n -- Sort Key: (convert_from(reg, 'UTF8'::name COLLATE \"default\"))\n -- -> Seq Scan on reg\n\nIt seems because the second parameter type is \"name\", the result collation\nbecome \"C\" instead of being the collation associated with \"text\" type:\n\"default\".\n\nI couldn't find anything explaining this behavior in the changelog. It looks\nlike a regression to me, but if this is actually expected, maybe this deserve\nsome documentation patch?\n\nRegards,\n\n\n", "msg_date": "Mon, 11 Dec 2023 21:09:51 +0100", "msg_from": "Jehan-Guillaume de Rorthais <[email protected]>", "msg_from_op": true, "msg_subject": "Sorting regression of text function result since commit\n 586b98fdf1aae" }, { "msg_contents": "Jehan-Guillaume de Rorthais <[email protected]> writes:\n> It looks like since 586b98fdf1aae, the result type collation of \"convert_from\"\n> is forced to \"C\", like the patch does for type \"name\", instead of the \"default\"\n> collation for type \"text\".\n\nWell, convert_from() inherits its result collation from the input,\nper the normal rules for collation assignment [1].\n\n> Looking at hints in the header comment of function \"exprCollation\", I poked\n> around and found that the result collation wrongly follow the input collation\n> in this case.\n\nIt's not \"wrong\", it's what the SQL standard requires.\n\n> I couldn't find anything explaining this behavior in the changelog. It looks\n> like a regression to me, but if this is actually expected, maybe this deserve\n> some documentation patch?\n\nThe v12 release notes do say\n\n Type name now behaves much like a domain over type text that has\n default collation “C”.\n\nYou'd have similar results from an expression involving such a domain,\nI believe.\n\nI'm less than excited about patching the v12 release notes four\nyears later. 
Maybe, if this point had come up in a more timely\nfashion, we'd have mentioned it --- but it's hardly possible to\ncover every potential implication of such a change in the\nrelease notes.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/collation.html#COLLATION-CONCEPTS\n\n\n", "msg_date": "Mon, 11 Dec 2023 15:43:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting regression of text function result since commit\n 586b98fdf1aae" }, { "msg_contents": "On Mon, 11 Dec 2023 15:43:12 -0500\nTom Lane <[email protected]> wrote:\n\n> Jehan-Guillaume de Rorthais <[email protected]> writes:\n> > It looks like since 586b98fdf1aae, the result type collation of\n> > \"convert_from\" is forced to \"C\", like the patch does for type \"name\",\n> > instead of the \"default\" collation for type \"text\". \n> \n> Well, convert_from() inherits its result collation from the input,\n> per the normal rules for collation assignment [1].\n> \n> > Looking at hints in the header comment of function \"exprCollation\", I poked\n> > around and found that the result collation wrongly follow the input\n> > collation in this case. \n> \n> It's not \"wrong\", it's what the SQL standard requires.\n\nMh, OK. This is at least a surprising behavior. Having a non-data related\nargument impacting the result collation seems counter-intuitive. But I\nunderstand this is by standard, no need to discuss it.\n\n> > I couldn't find anything explaining this behavior in the changelog. It looks\n> > like a regression to me, but if this is actually expected, maybe this\n> > deserve some documentation patch? \n> \n> The v12 release notes do say\n> \n> Type name now behaves much like a domain over type text that has\n> default collation “C”.\n\nSure, and I saw it, but reading at this entry, I couldn't guess this could have\nsuch implication on text result from a function call. That's why I hunt for\nthe precise commit and was surprise to find this was the actual change.\n\n> You'd have similar results from an expression involving such a domain,\n> I believe.\n> \n> I'm less than excited about patching the v12 release notes four\n> years later. Maybe, if this point had come up in a more timely\n> fashion, we'd have mentioned it --- but it's hardly possible to\n> cover every potential implication of such a change in the\n> release notes.\n\nThis could have been documented in the collation concept page, as a trap to be\naware of. A link from the release note to such a small paragraph would have\nbeen enough to warn devs this might have implications when mixed with other\ncollatable types. But I understand we can not document all the traps paving the\nway to the standard anyway.\n\nThank you for your explanation!\n\nRegards,\n\n\n", "msg_date": "Tue, 12 Dec 2023 11:52:46 +0100", "msg_from": "Jehan-Guillaume de Rorthais <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorting regression of text function result since commit\n 586b98fdf1aae" } ]
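A follow-up sketch, not something proposed in the thread itself (it simply reuses the reg table and data from the first message): since a COLLATE clause can be attached to any collatable expression, the override can also be placed on the text result of convert_from() rather than on its 'UTF8' argument, which reads a little less surprisingly:

    SELECT id
    FROM reg
    ORDER BY convert_from(reg, 'UTF8') COLLATE "default";

Either spelling replaces the "C" collation contributed by the name-typed second argument under the post-586b98fdf1aae rules, so on a database created with fr_FR.utf8 the query again returns ids 2, 1 as it did on v11.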