[
{
"msg_contents": "While working on some other patches, I found myself wanting to use the\nfollowing command to vacuum the catalogs in all databases in a cluster:\n\n\tvacuumdb --all --schema pg_catalog\n\nHowever, this presently fails with the following error:\n\n\tcannot vacuum specific schema(s) in all databases\n\nAFAICT there no technical reason to block this, and the resulting behavior\nfeels intuitive to me, so I wrote 0001 to allow it. 0002 allows specifying\ntables to process in all databases in clusterdb, and 0003 allows specifying\ntables, indexes, schemas, or the system catalogs to process in all\ndatabases in reindexdb.\n\nI debated also allowing users to specify different types of objects in the\nsame command (e.g., \"vacuumdb --schema myschema --table mytable\"), but it\nlooked like this would require a more substantial rewrite, and I didn't\nfeel that the behavior was intuitive. For the example I just gave, does\nthe user expect us to process both the \"myschema\" schema and the \"mytable\"\ntable, or does the user want us to process the \"mytable\" table in the\n\"myschema\" schema? In vacuumdb, this is already blocked, but reindexdb\naccepts combinations of tables, schemas, and indexes (yet disallows\nspecifying --system along with other types of objects). Since this is\ninconsistent with vacuumdb and IMO ambiguous, I've restricted such\ncombinations in 0003.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 28 Jun 2023 16:24:02 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuumdb/clusterdb/reindexdb: allow specifying objects to process in\n all databases"
},
{
"msg_contents": "At Wed, 28 Jun 2023 16:24:02 -0700, Nathan Bossart <[email protected]> wrote in \n> While working on some other patches, I found myself wanting to use the\n> following command to vacuum the catalogs in all databases in a cluster:\n> \n> \tvacuumdb --all --schema pg_catalog\n> \n> However, this presently fails with the following error:\n> \n> \tcannot vacuum specific schema(s) in all databases\n> \n> AFAICT there no technical reason to block this, and the resulting behavior\n> feels intuitive to me, so I wrote 0001 to allow it. 0002 allows specifying\n> tables to process in all databases in clusterdb, and 0003 allows specifying\n> tables, indexes, schemas, or the system catalogs to process in all\n> databases in reindexdb.\n\nIt seems like useful.\n\n> I debated also allowing users to specify different types of objects in the\n> same command (e.g., \"vacuumdb --schema myschema --table mytable\"), but it\n> looked like this would require a more substantial rewrite, and I didn't\n> feel that the behavior was intuitive. For the example I just gave, does\n> the user expect us to process both the \"myschema\" schema and the \"mytable\"\n> table, or does the user want us to process the \"mytable\" table in the\n> \"myschema\" schema? In vacuumdb, this is already blocked, but reindexdb\n\nI think spcyfying the two at once is inconsistent if we maintain the\ncurrent behavior of those options.\n\nIt seems to me that that change clearly modifies the functionality of\nthe options. As a result, those options look like restriction\nfilters. For example, \"vacuumdb -s s1_* -t t1\" will vacuum all table\nnamed \"t1\" in all schemas matches \"s1_*\".\n\n> accepts combinations of tables, schemas, and indexes (yet disallows\n> specifying --system along with other types of objects). Since this is\n> inconsistent with vacuumdb and IMO ambiguous, I've restricted such\n> combinations in 0003.\n> \n> Thoughts?\n\nWhile I think this is useful, primarily for system catalogs, I'm not\nentirely convinced about its practicality to user objects. It's\ndifficult for me to imagine that a situation where all databases share\nthe same schema would be major.\n\nAssuming this is used for user objects, it may be necessary to safely\nexclude databases that lack the specified schema or table, provided\nthe object present in at least one other database. But the exclusion\nshould be done with printing some warnings. It could also be\nnecessary to safely move to the next object when reindex or cluster\noperation fails on a single object due to missing prerequisite\nsituations. But I don't think we might want to add such complexity to\nthese \"script\" tools.\n\nSo.. an alternative path might be to introduce a new option like\n--syscatalog to specify system catalogs as the only option that can be\ncombined with --all. In doing so, we can leave the --table and\n--schema options untouched.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 29 Jun 2023 14:16:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Thu, Jun 29, 2023 at 02:16:26PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 28 Jun 2023 16:24:02 -0700, Nathan Bossart <[email protected]> wrote in \n>> I debated also allowing users to specify different types of objects in the\n>> same command (e.g., \"vacuumdb --schema myschema --table mytable\"), but it\n>> looked like this would require a more substantial rewrite, and I didn't\n>> feel that the behavior was intuitive. For the example I just gave, does\n>> the user expect us to process both the \"myschema\" schema and the \"mytable\"\n>> table, or does the user want us to process the \"mytable\" table in the\n>> \"myschema\" schema? In vacuumdb, this is already blocked, but reindexdb\n> \n> I think spcyfying the two at once is inconsistent if we maintain the\n> current behavior of those options.\n> \n> It seems to me that that change clearly modifies the functionality of\n> the options. As a result, those options look like restriction\n> filters. For example, \"vacuumdb -s s1_* -t t1\" will vacuum all table\n> named \"t1\" in all schemas matches \"s1_*\".\n\nSorry, I'm not following. I intentionally avoided allowing combinations of\n--schema and --table in the patches I sent. This is the current behavior\nof vacuumdb. Are you suggesting that they should be treated as restriction\nfilters?\n\n> While I think this is useful, primarily for system catalogs, I'm not\n> entirely convinced about its practicality to user objects. It's\n> difficult for me to imagine that a situation where all databases share\n> the same schema would be major.\n> \n> Assuming this is used for user objects, it may be necessary to safely\n> exclude databases that lack the specified schema or table, provided\n> the object present in at least one other database. But the exclusion\n> should be done with printing some warnings. It could also be\n> necessary to safely move to the next object when reindex or cluster\n> operation fails on a single object due to missing prerequisite\n> situations. But I don't think we might want to add such complexity to\n> these \"script\" tools.\n\nPerhaps we could add something like a --skip-missing option.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 13:56:38 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "At Thu, 29 Jun 2023 13:56:38 -0700, Nathan Bossart <[email protected]> wrote in \n> Thanks for taking a look.\n> \n> On Thu, Jun 29, 2023 at 02:16:26PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 28 Jun 2023 16:24:02 -0700, Nathan Bossart <[email protected]> wrote in \n> >> I debated also allowing users to specify different types of objects in the\n> >> same command (e.g., \"vacuumdb --schema myschema --table mytable\"), but it\n> >> looked like this would require a more substantial rewrite, and I didn't\n> >> feel that the behavior was intuitive. For the example I just gave, does\n> >> the user expect us to process both the \"myschema\" schema and the \"mytable\"\n> >> table, or does the user want us to process the \"mytable\" table in the\n> >> \"myschema\" schema? In vacuumdb, this is already blocked, but reindexdb\n> > \n> > I think spcyfying the two at once is inconsistent if we maintain the\n> > current behavior of those options.\n> > \n> > It seems to me that that change clearly modifies the functionality of\n> > the options. As a result, those options look like restriction\n> > filters. For example, \"vacuumdb -s s1_* -t t1\" will vacuum all table\n> > named \"t1\" in all schemas matches \"s1_*\".\n> \n> Sorry, I'm not following. I intentionally avoided allowing combinations of\n> --schema and --table in the patches I sent. This is the current behavior\n> of vacuumdb. Are you suggesting that they should be treated as restriction\n> filters?\n\nNo. I'm not suggesting. Just saying that they would look appear to\nwork as a restriction filters if those two options can be specified at\nonce.\n\n> > While I think this is useful, primarily for system catalogs, I'm not\n> > entirely convinced about its practicality to user objects. It's\n> > difficult for me to imagine that a situation where all databases share\n> > the same schema would be major.\n> > \n> > Assuming this is used for user objects, it may be necessary to safely\n> > exclude databases that lack the specified schema or table, provided\n> > the object present in at least one other database. But the exclusion\n> > should be done with printing some warnings. It could also be\n> > necessary to safely move to the next object when reindex or cluster\n> > operation fails on a single object due to missing prerequisite\n> > situations. But I don't think we might want to add such complexity to\n> > these \"script\" tools.\n> \n> Perhaps we could add something like a --skip-missing option.\n\nBut isn't it a bit too complicated for the gain?\n\nI don't have a strong objection if we're fine with just allowing\n\"--all --schema=xxx\", knowing that it will works cleanly only for\nsystem catalogs.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 30 Jun 2023 12:05:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 12:05:17PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 29 Jun 2023 13:56:38 -0700, Nathan Bossart <[email protected]> wrote in \n>> Sorry, I'm not following. I intentionally avoided allowing combinations of\n>> --schema and --table in the patches I sent. This is the current behavior\n>> of vacuumdb. Are you suggesting that they should be treated as restriction\n>> filters?\n> \n> No. I'm not suggesting. Just saying that they would look appear to\n> work as a restriction filters if those two options can be specified at\n> once.\n\nGot it, thanks for clarifying.\n\n>> Perhaps we could add something like a --skip-missing option.\n> \n> But isn't it a bit too complicated for the gain?\n> \n> I don't have a strong objection if we're fine with just allowing\n> \"--all --schema=xxx\", knowing that it will works cleanly only for\n> system catalogs.\n\nOkay. I haven't scoped out what would be required to support a\n--skip-missing option, but it doesn't sound too terribly complicated to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 22:13:32 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 23 Oct 2023 15:25:42 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 03:25:42PM -0500, Nathan Bossart wrote:\n> rebased\n\nI saw that this thread was referenced elsewhere [0], so I figured I'd take\na fresh look. From a quick glance, I'd say 0001 and 0002 are pretty\nreasonable and could probably be committed for v17. 0003 probably requires\nsome more attention. If there is indeed interest in these changes, I'll\ntry to spend some more time on it.\n\n[0] https://postgr.es/m/E0D2F0CE-D27C-49B1-902B-AD8D2427F07E%40yandex-team.ru\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 4 Mar 2024 20:22:25 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "On Tue, 5 Mar 2024 at 02:22, Nathan Bossart <[email protected]> wrote:\n>\n> I saw that this thread was referenced elsewhere [0], so I figured I'd take\n> a fresh look. From a quick glance, I'd say 0001 and 0002 are pretty\n> reasonable and could probably be committed for v17.\n>\n\nI'm not sure how useful these changes are, but I don't really object.\nYou need to update the synopsis section of the docs though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 5 Mar 2024 23:20:13 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to process\n in all databases"
},
{
"msg_contents": "On Tue, Mar 05, 2024 at 11:20:13PM +0000, Dean Rasheed wrote:\n> I'm not sure how useful these changes are, but I don't really object.\n> You need to update the synopsis section of the docs though.\n\nThanks for taking a look. I updated the synopsis sections in v3.\n\nI also spent some more time on the reindexdb patch (0003). I previously\nhad decided to restrict combinations of tables, schemas, and indexes\nbecause I felt it was \"ambiguous and inconsistent with vacuumdb,\" but\nlooking closer, I think that's the wrong move. reindexdb already supports\nsuch combinations, which it interprets to mean it should reindex each\nlisted object. So, I removed that change in v3.\n\nEven though reindexdb allows combinations of tables, schema, and indexes,\nit doesn't allow combinations of \"system catalogs\" and other objects, and\nit's not clear why. In v3, I've removed this restriction, which ended up\nsimplifying the 0003 patch a bit. Like combinations of tables, schemas,\nand indexes, reindexdb will now interpret combinations that include\n--system to mean it should reindex each listed object as well as the system\ncatalogs.\n\nIdeally, we'd allow similar combinations in vacuumdb, but I believe that\nwould require a much more invasive patch, and I've already spent far more\ntime on this change than I wanted to.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 6 Mar 2024 16:22:51 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 22:22, Nathan Bossart <[email protected]> wrote:\n>\n> Thanks for taking a look. I updated the synopsis sections in v3.\n\nOK, that looks good. The vacuumdb synopsis in particular looks a lot\nbetter now that \"-N | --exclude-schema\" is on its own line, because it\nwas hard to read previously, and easy to mistakenly think that -n\ncould be combined with -N.\n\nIf I'm nitpicking, \"[--verbose | -v]\" in the clusterdb synopsis should\nbe replaced with \"[option...]\", like the other commands, because there\nare other general-purpose options like --quiet and --echo.\n\n> I also spent some more time on the reindexdb patch (0003). I previously\n> had decided to restrict combinations of tables, schemas, and indexes\n> because I felt it was \"ambiguous and inconsistent with vacuumdb,\" but\n> looking closer, I think that's the wrong move. reindexdb already supports\n> such combinations, which it interprets to mean it should reindex each\n> listed object. So, I removed that change in v3.\n\nMakes sense.\n\n> Even though reindexdb allows combinations of tables, schema, and indexes,\n> it doesn't allow combinations of \"system catalogs\" and other objects, and\n> it's not clear why. In v3, I've removed this restriction, which ended up\n> simplifying the 0003 patch a bit. Like combinations of tables, schemas,\n> and indexes, reindexdb will now interpret combinations that include\n> --system to mean it should reindex each listed object as well as the system\n> catalogs.\n\nOK, that looks useful, especially given that most people will still\nprobably use this against a single database, and it's making that more\nflexible.\n\nI think this is good to go.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 8 Mar 2024 09:33:19 +0000",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to process\n in all databases"
},
{
"msg_contents": "On Fri, Mar 08, 2024 at 09:33:19AM +0000, Dean Rasheed wrote:\n> If I'm nitpicking, \"[--verbose | -v]\" in the clusterdb synopsis should\n> be replaced with \"[option...]\", like the other commands, because there\n> are other general-purpose options like --quiet and --echo.\n\nGood catch. I fixed that in v4. We could probably back-patch this\nparticular change, but since it's been this way for a while, I don't think\nit's terribly important to do so.\n\n> I think this is good to go.\n\nThanks. In v4, I've added a first draft of the commit messages, and I am\nplanning to commit this early next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 8 Mar 2024 16:03:22 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
},
{
"msg_contents": "On Fri, Mar 08, 2024 at 04:03:22PM -0600, Nathan Bossart wrote:\n> On Fri, Mar 08, 2024 at 09:33:19AM +0000, Dean Rasheed wrote:\n>> I think this is good to go.\n> \n> Thanks. In v4, I've added a first draft of the commit messages, and I am\n> planning to commit this early next week.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 11 Mar 2024 15:48:10 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuumdb/clusterdb/reindexdb: allow specifying objects to\n process in all databases"
}
]
[
{
"msg_contents": "Hi all,\n\nWhile working on a different patch, I have noted three code paths that\ncall changeDependencyFor() but don't check that they do not return\nerrors. In all the three cases (support function, extension/schema\nand object/schema), it seems to me that only one dependency update is\nexpected.\n\nI am adding that to the next CF. Thoughts or comments about the\nattached?\n--\nMichael",
"msg_date": "Thu, 29 Jun 2023 08:36:30 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On 29/06/2023 02:36, Michael Paquier wrote:\n> Hi all,\n> \n> While working on a different patch, I have noted three code paths that\n> call changeDependencyFor() but don't check that they do not return\n> errors. In all the three cases (support function, extension/schema\n> and object/schema), it seems to me that only one dependency update is\n> expected.\n\nMakes sense.\n\n> \t/* update dependencies to point to the new schema */\n\nSuggest: \"update dependency ...\" in singular, as there should be only one.\n\n> \tif (changeDependencyFor(ExtensionRelationId, extensionOid,\n> \t\t\t\t\t\t\tNamespaceRelationId, oldNspOid, nspOid) != 1)\n> \t\telog(ERROR, \"failed to change schema dependency for extension %s\",\n> \t\t\t NameStr(extForm->extname));\n\nThe error messages like \"failed to change schema dependency for \nextension\" don't conform to the usual error message style. \"could not \nchange schema dependency for extension\" would be more conformant. I see \nthat you copy-pasted that from existing messages, and we have a bunch of \nother \"failed to\" messages in the repository too, so I'm OK with leaving \nit as it is for now. Or maybe change the wording of all the \nchangeDependencyFor() callers now, and consider all the other \"failed \nto\" messages separately later.\n\nIf changeDependencyFor() returns >= 2, the message is a bit misleading. \nThat's what the existing callers did too, so maybe that's fine.\n\nI can hit the above error with the attached test case. That seems wrong, \nalthough I don't know if it means that the check is wrong or it exposed \na long-standing bug.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 29 Jun 2023 10:06:35 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On 2023-Jun-29, Heikki Linnakangas wrote:\n\n> I can hit the above error with the attached test case. That seems wrong,\n> although I don't know if it means that the check is wrong or it exposed a\n> long-standing bug.\n\n> +CREATE SCHEMA test_func_dep1;\n> +CREATE SCHEMA test_func_dep2;\n> +CREATE EXTENSION test_ext_req_schema1 SCHEMA test_func_dep1;\n> +ALTER FUNCTION test_func_dep1.dep_req1() SET SCHEMA test_func_dep2;\n> +\n> +ALTER EXTENSION test_ext_req_schema1 SET SCHEMA test_func_dep2;\n> +\n> +DROP EXTENSION test_ext_req_schema1 CASCADE;\n\nHmm, shouldn't we disallow moving the function to another schema, if the\nfunction's schema was originally determined at extension creation time?\nI'm not sure we really want to allow moving objects of an extension to a\ndifferent schema.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Tue, 4 Jul 2023 18:52:03 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Hmm, shouldn't we disallow moving the function to another schema, if the\n> function's schema was originally determined at extension creation time?\n> I'm not sure we really want to allow moving objects of an extension to a\n> different schema.\n\nWhy not? I do not believe that an extension's objects are required\nto all be in the same schema.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jul 2023 14:40:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 02:40:04PM -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Hmm, shouldn't we disallow moving the function to another schema, if the\n>> function's schema was originally determined at extension creation time?\n>> I'm not sure we really want to allow moving objects of an extension to a\n>> different schema.\n> \n> Why not? I do not believe that an extension's objects are required\n> to all be in the same schema.\n\nYes, I don't see what we would gain by putting restrictions regarding\nwhich schema an object is located in, depending on which schema an\nextension uses.\n--\nMichael",
"msg_date": "Wed, 5 Jul 2023 14:10:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 10:06:35AM +0300, Heikki Linnakangas wrote:\n> The error messages like \"failed to change schema dependency for extension\"\n> don't conform to the usual error message style. \"could not change schema\n> dependency for extension\" would be more conformant. I see that you\n> copy-pasted that from existing messages, and we have a bunch of other\n> \"failed to\" messages in the repository too, so I'm OK with leaving it as it\n> is for now. Or maybe change the wording of all the changeDependencyFor()\n> callers now, and consider all the other \"failed to\" messages separately\n> later.\n\nI'm OK to change the messages for all changeDependencyFor() now that\nthese are being touched. I am counting 7 of them.\n\n> If changeDependencyFor() returns >= 2, the message is a bit misleading.\n> That's what the existing callers did too, so maybe that's fine.\n> \n> I can hit the above error with the attached test case. That seems wrong,\n> although I don't know if it means that the check is wrong or it exposed a\n> long-standing bug.\n\nComing back to this one, I think that my check and you have found an\nold bug in AlterExtensionNamespace() where the sequence of objects\nmanipulated breaks the namespace OIDs used to change the normal\ndependency of the extension when calling changeDependencyFor(). The\ncheck I have added looks actually correct to me because there should\nbe always have one 'n' pg_depend entry to change between the extension\nand its schema, and we should always change it.\n\nA little bit of debugging is showing me that at the stage of \"ALTER\nEXTENSION test_ext_req_schema1 SET SCHEMA test_func_dep3;\", oldNspOid\nis set to the OID of test_func_dep2, and nspOid is the OID of\ntest_func_dep3. So the new OID is correct, but the old one points to\nthe schema test_func_dep2 used by the function because it is the first\nobject it has been picked up while scanning pg_depend, and not the\nschema test_func_dep1 used by the extension. This causes the command\nto fail to update the schema dependency between the schema and the\nextension.\n\nThe origin of the confusing comes to the handling of oldNspOid, in my\nopinion. I don't quite see why it is necessary to save the old OID of\nthe namespace from the object scanned while we know the previous\nschema used by the extension thanks to its pg_extension entry.\n\nAlso, note that there is a check in AlterExtensionNamespace() to\nprevent the command from happening if an object is not in the same\nschema as the extension, but it fails to trigger here. I have written\na couple of extra queries to show the difference.\n\nPlease find attached a patch to fix this issue with ALTER EXTENSION\n.. SET SCHEMA, and the rest. The patch does everything discussed, but\nit had better be split into two patches for different branches. Here\nare my thoughts:\n- Fix and backpatch the ALTER EXTENSION business, *without* the new\nsanity check for changeDependencyFor() in AlterExtensionNamespace(),\nwith its regression test.\n- Add all the sanity checks and reword the error messages related to\nchangeDependencyFor() only on HEAD.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 5 Jul 2023 15:34:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On 29.06.23 01:36, Michael Paquier wrote:\n> While working on a different patch, I have noted three code paths that\n> call changeDependencyFor() but don't check that they do not return\n> errors. In all the three cases (support function, extension/schema\n> and object/schema), it seems to me that only one dependency update is\n> expected.\n\nWhy can't changeDependencyFor() raise the error itself?\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:41:49 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-05 14:10:42 +0900, Michael Paquier wrote:\n> On Tue, Jul 04, 2023 at 02:40:04PM -0400, Tom Lane wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >> Hmm, shouldn't we disallow moving the function to another schema, if the\n> >> function's schema was originally determined at extension creation time?\n> >> I'm not sure we really want to allow moving objects of an extension to a\n> >> different schema.\n> > \n> > Why not? I do not believe that an extension's objects are required\n> > to all be in the same schema.\n> \n> Yes, I don't see what we would gain by putting restrictions regarding\n> which schema an object is located in, depending on which schema an\n> extension uses.\n\nWell, it adds an exploitation opportunity. If other functions in the extension\nreference the original location (explicitly or via search_path), somebody else\ncan create a function there, which might be called from a more privileged\ncontext. Obviously permissions limit the likelihood of this being a real\nissue.\n\nI also don't think pg_dump will dump the changed schema, which means a\ndump/restore leads to a different schema - IMO something to avoid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:09:20 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "The patch looks fine and passes all the tests. I am using Arch Linux on an x86_64 system.\r\nThe patch does not cause any unnecessary bugs and does not make any non trivial changes to the source code.\r\nI believe it is ready to be committed!\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 07 Jul 2023 18:12:48 +0000",
"msg_from": "Akshat Jaimini <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 06:41:49PM +0200, Peter Eisentraut wrote:\n> On 29.06.23 01:36, Michael Paquier wrote:\n>> While working on a different patch, I have noted three code paths that\n>> call changeDependencyFor() but don't check that they do not return\n>> errors. In all the three cases (support function, extension/schema\n>> and object/schema), it seems to me that only one dependency update is\n>> expected.\n> \n> Why can't changeDependencyFor() raise the error itself?\n\nThere is appeal in that, but I can't really get excited for any\nout-of-core callers of this routine. Even if you would not lose much\nerror context, it would not be completely flexible if the number of\ndependencies to switch is a variable number.\n--\nMichael",
"msg_date": "Sat, 8 Jul 2023 08:47:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 10:09:20AM -0700, Andres Freund wrote:\n> Well, it adds an exploitation opportunity. If other functions in the extension\n> reference the original location (explicitly or via search_path), somebody else\n> can create a function there, which might be called from a more privileged\n> context. Obviously permissions limit the likelihood of this being a real\n> issue.\n\nYeah..\n\n> I also don't think pg_dump will dump the changed schema, which means a\n> dump/restore leads to a different schema - IMO something to avoid.\n\nYes, you're right here. The function dumped is restored in the same\nschema as the extension. For instance:\npsql postgres << EOF\nCREATE SCHEMA test_func_dep1;\nCREATE SCHEMA test_func_dep2;\nCREATE EXTENSION test_ext_req_schema1 SCHEMA test_func_dep1;\nALTER FUNCTION test_func_dep1.dep_req1() SET SCHEMA test_func_dep2;\nEOF\npg_dump -f dump.sql postgres\ncreatedb popo\npsql -f dump.sql popo\npsql -c '\\dx+ test_ext_req_schema1' popo\n\nObjects in extension \"test_ext_req_schema1\"\n Object description \n------------------------------------\n function test_func_dep1.dep_req1()\n(1 row)\n\nI am honestly not sure how much restrictions we should have here, as\nthis could hurt anybody relying on the existing behavior, as well, if\nthere are any. (It seems that that schema modification restrictions\nfor extension objects would need to go through\nExecAlterObjectSchemaStmt().)\n\nAnyway, I think that I'll just go ahead and fix the SET SCHEMA bug, as\nthat's wrong as it stands. Regarding these restrictions, perhaps\nsomething could be done on HEAD, though it impacts usability IMO.\n--\nMichael",
"msg_date": "Sat, 8 Jul 2023 13:41:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Fri, Jul 07, 2023 at 06:12:48PM +0000, Akshat Jaimini wrote:\n> I believe it is ready to be committed!\n\nOkay, thanks. Please note that I have backpatched the bug and added\nthe checks for the callers of changeDependencyFor() on HEAD on top of\nthe bugfix.\n--\nMichael",
"msg_date": "Mon, 10 Jul 2023 13:10:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Jul 06, 2023 at 10:09:20AM -0700, Andres Freund wrote:\n>> I also don't think pg_dump will dump the changed schema, which means a\n>> dump/restore leads to a different schema - IMO something to avoid.\n\n> Yes, you're right here. The function dumped is restored in the same\n> schema as the extension.\n\nActually, I think the given example demonstrates pilot error rather\nthan a bug. The user has altered properties of an extension member\nobject locally within the database, but has not changed the extension's\ninstallation script to match. The fact that after restore, the object\ndoes again match the script is intended behavior. We've made some\nexceptions to that rule for permissions, but not anything else.\nI don't see a reason to consider the objects' schema assignments\ndifferently from other properties for this purpose.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jul 2023 10:51:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On 2023-Jul-10, Tom Lane wrote:\n\n> Michael Paquier <[email protected]> writes:\n> > On Thu, Jul 06, 2023 at 10:09:20AM -0700, Andres Freund wrote:\n> >> I also don't think pg_dump will dump the changed schema, which means a\n> >> dump/restore leads to a different schema - IMO something to avoid.\n> \n> > Yes, you're right here. The function dumped is restored in the same\n> > schema as the extension.\n> \n> Actually, I think the given example demonstrates pilot error rather\n> than a bug.\n\nWell, if this is pilot error, why don't we throw an error ourselves?\n\n> The user has altered properties of an extension member\n> object locally within the database, but has not changed the extension's\n> installation script to match.\n\nIf I were developing an extension and decided, down the line, to have\nsome objects in another schema, I would certainly increment the\nextension's version number and have a new script to move the object. I\nwould never expect the user to do an ALTER directly (and it makes no\nsense for me as an extension developer to do it manually, either.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:55:06 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Jul-10, Tom Lane wrote:\n>> The user has altered properties of an extension member\n>> object locally within the database, but has not changed the extension's\n>> installation script to match.\n\n> If I were developing an extension and decided, down the line, to have\n> some objects in another schema, I would certainly increment the\n> extension's version number and have a new script to move the object. I\n> would never expect the user to do an ALTER directly (and it makes no\n> sense for me as an extension developer to do it manually, either.)\n\nIt's certainly poor practice, but I could see doing it early in an\nextension's development (while you're still working towards 1.0).\n\nISTR that we discussed forbidding such changes way back when the\nextension mechanism was invented, and decided against it on the\ngrounds that (a) it'd be nanny-ism, (b) we'd have to add checks in an\nawful lot of places and it'd be easy to miss some, and (c) forbidding\nsuperusers from doing anything they want is generally not our style.\nWe could reconsider that now, but I think we'd probably land on the\nsame place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jul 2023 11:04:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 11:04:48AM -0400, Tom Lane wrote:\n> ISTR that we discussed forbidding such changes way back when the\n> extension mechanism was invented, and decided against it on the\n> grounds that (a) it'd be nanny-ism, (b) we'd have to add checks in an\n> awful lot of places and it'd be easy to miss some,\n\nThe namepace modifications depending on the object types are quite\ncentralized lately, FWIW. And that was the case in 9.3 as well since\nwe have ExecAlterObjectSchemaStmt(). It would be easy to miss a new\ncode path if somebody introduces a new object type that needs its own\nupdate path, but based on the last 15 years of experience on the\nmatter, that would be unlikely? Adding a note at the top of\nExecAlterObjectSchemaStmt() would make that even harder to miss.\n\n> and (c) forbidding\n> superusers from doing anything they want is generally not our style.\n\nYeah.\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 10:40:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add more sanity checks around callers of changeDependencyFor()"
}
]
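A minimal sketch of the sanity-check pattern settled on in this thread, shown for illustration only. It is not the committed patch; the surrounding variable names (extensionOid, oldNspOid, nspOid, extForm) are assumptions copied from the snippets quoted above, and the message text uses the "could not" wording suggested by Heikki.

	/*
	 * changeDependencyFor() returns the number of pg_depend rows it updated.
	 * Exactly one 'n' entry between the extension and its schema is expected,
	 * so any other result is reported as an internal error.
	 */
	if (changeDependencyFor(ExtensionRelationId, extensionOid,
							NamespaceRelationId, oldNspOid, nspOid) != 1)
		elog(ERROR, "could not change schema dependency for extension %s",
			 NameStr(extForm->extname));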
[
{
"msg_contents": "While working on the invalid parameterized join path issue [1], I\nnoticed that we can simplify the codes for checking parameterized\npartial paths in try_partial_hashjoin/mergejoin_path, with the help of\nmacro PATH_REQ_OUTER.\n\n- if (inner_path->param_info != NULL)\n- {\n- Relids inner_paramrels =\ninner_path->param_info->ppi_req_outer;\n-\n- if (!bms_is_empty(inner_paramrels))\n- return;\n- }\n+ if (!bms_is_empty(PATH_REQ_OUTER(inner_path)))\n+ return;\n\nAlso there is a comment there that is not correct.\n\n * If the inner path is parameterized, the parameterization must be fully\n * satisfied by the proposed outer path.\n\nThis is true for nestloop but not for hashjoin/mergejoin.\n\nBesides, I wonder if it'd be better that we verify that the outer input\npath for a partial join path should not have any parameterization\ndependency.\n\nAttached is a patch for all these changes.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAJKUy5g2uZRrUDZJ8p-%3DgiwcSHVUn0c9nmdxPSY0jF0Ov8VoEA%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Thu, 29 Jun 2023 11:23:09 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trivial revise for the check of parameterized partial paths"
},
{
"msg_contents": "On Thu, 29 Jun 2023 at 08:53, Richard Guo <[email protected]> wrote:\n>\n> While working on the invalid parameterized join path issue [1], I\n> noticed that we can simplify the codes for checking parameterized\n> partial paths in try_partial_hashjoin/mergejoin_path, with the help of\n> macro PATH_REQ_OUTER.\n>\n> - if (inner_path->param_info != NULL)\n> - {\n> - Relids inner_paramrels = inner_path->param_info->ppi_req_outer;\n> -\n> - if (!bms_is_empty(inner_paramrels))\n> - return;\n> - }\n> + if (!bms_is_empty(PATH_REQ_OUTER(inner_path)))\n> + return;\n>\n> Also there is a comment there that is not correct.\n>\n> * If the inner path is parameterized, the parameterization must be fully\n> * satisfied by the proposed outer path.\n>\n> This is true for nestloop but not for hashjoin/mergejoin.\n>\n> Besides, I wonder if it'd be better that we verify that the outer input\n> path for a partial join path should not have any parameterization\n> dependency.\n>\n> Attached is a patch for all these changes.\n\nI'm seeing that there has been no activity in this thread for nearly 7\nmonths, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 21 Jan 2024 18:06:03 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trivial revise for the check of parameterized partial paths"
},
{
"msg_contents": "On Sun, Jan 21, 2024 at 8:36 PM vignesh C <[email protected]> wrote:\n\n> I'm seeing that there has been no activity in this thread for nearly 7\n> months, I'm planning to close this in the current commitfest unless\n> someone is planning to take it forward.\n\n\nThis patch fixes the wrong comments in try_partial_hashjoin_path, and\nalso simplifies and enhances the checks for parameterized partial paths.\nI think it's worth to be moved forward.\n\nI've rebased the patch over current master, added a commit message\ndescribing what it does, and updated the comment a little bit in the v2\npatch.\n\nThanks\nRichard",
"msg_date": "Thu, 25 Jan 2024 15:21:26 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trivial revise for the check of parameterized partial paths"
},
{
"msg_contents": "On Thu, Jan 25, 2024 at 3:21 PM Richard Guo <[email protected]> wrote:\n> On Sun, Jan 21, 2024 at 8:36 PM vignesh C <[email protected]> wrote:\n>> I'm seeing that there has been no activity in this thread for nearly 7\n>> months, I'm planning to close this in the current commitfest unless\n>> someone is planning to take it forward.\n>\n>\n> This patch fixes the wrong comments in try_partial_hashjoin_path, and\n> also simplifies and enhances the checks for parameterized partial paths.\n> I think it's worth to be moved forward.\n>\n> I've rebased the patch over current master, added a commit message\n> describing what it does, and updated the comment a little bit in the v2\n> patch.\n\nI've pushed this patch.\n\nThanks\nRichard\n\n\n",
"msg_date": "Tue, 30 Jul 2024 15:42:30 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trivial revise for the check of parameterized partial paths"
}
]
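A minimal sketch of the simplified parameterization check discussed in this thread, for illustration only. It is not the committed hunk, and whether the outer-path condition is expressed as an Assert is an assumption here.

	/*
	 * PATH_REQ_OUTER(path) yields the path's required outer rels, or NULL
	 * when the path carries no parameterization, so the explicit param_info
	 * test can be dropped.  A partial join's outer input path must be
	 * completely unparameterized, and a parameterized inner path is not
	 * usable for a partial hash or merge join.
	 */
	Assert(bms_is_empty(PATH_REQ_OUTER(outer_path)));

	if (!bms_is_empty(PATH_REQ_OUTER(inner_path)))
		return;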
[
{
"msg_contents": "Background\n==========\nPostgreSQL has an amazing variety of routines for accessing files. Consider just the “open file” routines.\n PathNameOpenFile, OpenTemporaryFile, BasicOpenFile, open, fopen, BufFileCreateFileSet,\n BufFileOpenFileSet, AllocateFile, OpenTransientFile, FileSetCreate, FileSetOpen, mdcreate, mdopen,\n Smgr_open,\n\nOn the downside, “amazing variety” also means somewhat confusing and difficult to add new features.\nSomeday, we’d like to add encryption or compression to the various PostgreSql files.\nTo do that, we need to bring all the relevant files into a common file API where we can implement\nthe new features.\n\nGoals of Patch\n=============\n1)Unify file access so most of “the other” files can go through a common interface, allowing new features\nlike checksums, encryption or compression to be added transparently. 2) Do it in a way which doesn’t\nchange the logic of current code. 3)Convert a reasonable set of callers to use the new interface.\n\nNote the focus is on the “other” files. The buffer cache and the WAL have similar needs,\nbut they are being done in a separate project. (yes, the two projects are coordinating)\n\nPatch 0001. Create a common file API.\n===============================\nCurrrently, PostgreSQL files feed into three funnels. 1) system file descriptors (read/write/open),\n2) C library buffered files (fread/fwri;te/fopn), and 3) virtual file descriptors (FileRead/FileWrite/PathNameOpenFile).\nOf these three, virtual file descriptors (VFDs) are the most common. They are also the\nonly funnel which is implemented by PostgresSql.\n\nDecision: Choose VFDs as the common interface.\n\nProblem: VFDs are random access only.\nSolution: Add sequential read/write code on top of VFDs. (FileReadSeq, FileWriteSeq, FileSeek, FileTell, O_APPEND)\n\nProblem: VFDs have minimal error handling (based on errno.)\nSolution: Add an “ferror” style interface (FileError, FileEof, FileErrorCode, FileErrorMsg)\n\nProblem: Must maintain compatibility with existing error handling code.\nSolution: save and restore errno to minimize changes to existing code.\n\nPatch 0002. Update code to use the common file API\n===========================================\nThe second patch alters callers so they use VFDs rather than system or C library files.\nIt doesn’t modify all callers, but it does capture many of the files which need\nto be encrypted or compressed. This is definitely WIP.\n\nFuture (not too far away)\n=====================\nLooking ahead, there will be another set of patches which inject buffering and encryption into\nthe VFD interface. The future patches will build on the current work and introduce new “oflags”\nto enable encryption and buffering.\n\nCompression is also a possibility, but currently lower priority and a bit tricky for random access files.\nLet us know if you have a use case.",
"msg_date": "Thu, 29 Jun 2023 07:50:17 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unified File API"
},
{
"msg_contents": "On Thu, 29 Jun 2023 at 13:20, John Morris <[email protected]> wrote:\n>\n> Background\n>\n> ==========\n>\n> PostgreSQL has an amazing variety of routines for accessing files. Consider just the “open file” routines.\n> PathNameOpenFile, OpenTemporaryFile, BasicOpenFile, open, fopen, BufFileCreateFileSet,\n>\n> BufFileOpenFileSet, AllocateFile, OpenTransientFile, FileSetCreate, FileSetOpen, mdcreate, mdopen,\n>\n> Smgr_open,\n>\n>\n>\n> On the downside, “amazing variety” also means somewhat confusing and difficult to add new features.\n> Someday, we’d like to add encryption or compression to the various PostgreSql files.\n> To do that, we need to bring all the relevant files into a common file API where we can implement\n> the new features.\n>\n>\n>\n> Goals of Patch\n>\n> =============\n>\n> 1)Unify file access so most of “the other” files can go through a common interface, allowing new features\n> like checksums, encryption or compression to be added transparently. 2) Do it in a way which doesn’t\n> change the logic of current code. 3)Convert a reasonable set of callers to use the new interface.\n>\n>\n>\n> Note the focus is on the “other” files. The buffer cache and the WAL have similar needs,\n> but they are being done in a separate project. (yes, the two projects are coordinating)\n>\n> Patch 0001. Create a common file API.\n>\n> ===============================\n>\n> Currrently, PostgreSQL files feed into three funnels. 1) system file descriptors (read/write/open),\n> 2) C library buffered files (fread/fwri;te/fopn), and 3) virtual file descriptors (FileRead/FileWrite/PathNameOpenFile).\n> Of these three, virtual file descriptors (VFDs) are the most common. They are also the\n> only funnel which is implemented by PostgresSql.\n>\n>\n>\n> Decision: Choose VFDs as the common interface.\n>\n>\n>\n> Problem: VFDs are random access only.\n>\n> Solution: Add sequential read/write code on top of VFDs. (FileReadSeq, FileWriteSeq, FileSeek, FileTell, O_APPEND)\n>\n>\n>\n> Problem: VFDs have minimal error handling (based on errno.)\n>\n> Solution: Add an “ferror” style interface (FileError, FileEof, FileErrorCode, FileErrorMsg)\n>\n>\n>\n> Problem: Must maintain compatibility with existing error handling code.\n>\n> Solution: save and restore errno to minimize changes to existing code.\n>\n>\n>\n> Patch 0002. Update code to use the common file API\n>\n> ===========================================\n>\n> The second patch alters callers so they use VFDs rather than system or C library files.\n> It doesn’t modify all callers, but it does capture many of the files which need\n> to be encrypted or compressed. This is definitely WIP.\n>\n>\n>\n> Future (not too far away)\n>\n> =====================\n>\n> Looking ahead, there will be another set of patches which inject buffering and encryption into\n> the VFD interface. 
The future patches will build on the current work and introduce new “oflags”\n>\n> to enable encryption and buffering.\n>\n>\n> Compression is also a possibility, but currently lower priority and a bit tricky for random access files.\n> Let us know if you have a use case.\n\nCFbot shows few compilation warnings/error at [1]:\n[15:54:06.825] ../src/backend/storage/file/fd.c:2420:11: warning:\nunused variable 'save_errno' [-Wunused-variable]\n[15:54:06.825] int ret, save_errno;\n[15:54:06.825] ^\n[15:54:06.825] ../src/backend/storage/file/fd.c:4026:29: error: use of\nundeclared identifier 'MAXIMUM_VFD'\n[15:54:06.825] Assert(file >= 0 && file < MAXIMUM_VFD);\n[15:54:06.825] ^\n[15:54:06.825] 1 warning and 1 error generated.\n\n[1] - https://cirrus-ci.com/task/6552527404007424\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 22:58:30 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unified File API"
},
{
"msg_contents": "On Sat, 6 Jan 2024 at 22:58, vignesh C <[email protected]> wrote:\n>\n> On Thu, 29 Jun 2023 at 13:20, John Morris <[email protected]> wrote:\n> >\n> > Background\n> >\n> > ==========\n> >\n> > PostgreSQL has an amazing variety of routines for accessing files. Consider just the “open file” routines.\n> > PathNameOpenFile, OpenTemporaryFile, BasicOpenFile, open, fopen, BufFileCreateFileSet,\n> >\n> > BufFileOpenFileSet, AllocateFile, OpenTransientFile, FileSetCreate, FileSetOpen, mdcreate, mdopen,\n> >\n> > Smgr_open,\n> >\n> >\n> >\n> > On the downside, “amazing variety” also means somewhat confusing and difficult to add new features.\n> > Someday, we’d like to add encryption or compression to the various PostgreSql files.\n> > To do that, we need to bring all the relevant files into a common file API where we can implement\n> > the new features.\n> >\n> >\n> >\n> > Goals of Patch\n> >\n> > =============\n> >\n> > 1)Unify file access so most of “the other” files can go through a common interface, allowing new features\n> > like checksums, encryption or compression to be added transparently. 2) Do it in a way which doesn’t\n> > change the logic of current code. 3)Convert a reasonable set of callers to use the new interface.\n> >\n> >\n> >\n> > Note the focus is on the “other” files. The buffer cache and the WAL have similar needs,\n> > but they are being done in a separate project. (yes, the two projects are coordinating)\n> >\n> > Patch 0001. Create a common file API.\n> >\n> > ===============================\n> >\n> > Currrently, PostgreSQL files feed into three funnels. 1) system file descriptors (read/write/open),\n> > 2) C library buffered files (fread/fwri;te/fopn), and 3) virtual file descriptors (FileRead/FileWrite/PathNameOpenFile).\n> > Of these three, virtual file descriptors (VFDs) are the most common. They are also the\n> > only funnel which is implemented by PostgresSql.\n> >\n> >\n> >\n> > Decision: Choose VFDs as the common interface.\n> >\n> >\n> >\n> > Problem: VFDs are random access only.\n> >\n> > Solution: Add sequential read/write code on top of VFDs. (FileReadSeq, FileWriteSeq, FileSeek, FileTell, O_APPEND)\n> >\n> >\n> >\n> > Problem: VFDs have minimal error handling (based on errno.)\n> >\n> > Solution: Add an “ferror” style interface (FileError, FileEof, FileErrorCode, FileErrorMsg)\n> >\n> >\n> >\n> > Problem: Must maintain compatibility with existing error handling code.\n> >\n> > Solution: save and restore errno to minimize changes to existing code.\n> >\n> >\n> >\n> > Patch 0002. Update code to use the common file API\n> >\n> > ===========================================\n> >\n> > The second patch alters callers so they use VFDs rather than system or C library files.\n> > It doesn’t modify all callers, but it does capture many of the files which need\n> > to be encrypted or compressed. This is definitely WIP.\n> >\n> >\n> >\n> > Future (not too far away)\n> >\n> > =====================\n> >\n> > Looking ahead, there will be another set of patches which inject buffering and encryption into\n> > the VFD interface. 
The future patches will build on the current work and introduce new “oflags”\n> >\n> > to enable encryption and buffering.\n> >\n> >\n> > Compression is also a possibility, but currently lower priority and a bit tricky for random access files.\n> > Let us know if you have a use case.\n>\n> CFbot shows few compilation warnings/error at [1]:\n> [15:54:06.825] ../src/backend/storage/file/fd.c:2420:11: warning:\n> unused variable 'save_errno' [-Wunused-variable]\n> [15:54:06.825] int ret, save_errno;\n> [15:54:06.825] ^\n> [15:54:06.825] ../src/backend/storage/file/fd.c:4026:29: error: use of\n> undeclared identifier 'MAXIMUM_VFD'\n> [15:54:06.825] Assert(file >= 0 && file < MAXIMUM_VFD);\n> [15:54:06.825] ^\n> [15:54:06.825] 1 warning and 1 error generated.\n\n\nWith no update to the thread and the compilation still failing I'm\nmarking this as returned with feedback. Please feel free to resubmit\nto the next CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:56:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unified File API"
}
]
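A hypothetical usage sketch of the sequential VFD interface proposed in this thread. FileReadSeq, FileError and FileErrorMsg come from the uncommitted patch, so their exact signatures are assumptions; PathNameOpenFile and FileClose are existing fd.c functions, and the file name and process_chunk() consumer are invented for illustration.

	File		f;
	char		buf[8192];
	int			nread;

	f = PathNameOpenFile("some/data/file", O_RDONLY | PG_BINARY);
	if (f < 0)
		elog(ERROR, "could not open file: %m");

	/* assumed signature: read the next chunk at the file's current position */
	while ((nread = FileReadSeq(f, buf, sizeof(buf))) > 0)
		process_chunk(buf, nread);	/* hypothetical consumer */

	if (FileError(f))
		elog(ERROR, "could not read file: %s", FileErrorMsg(f));

	FileClose(f);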
[
{
"msg_contents": "Hi,\n\nIt's documented that a failed REINDEX can leave behind a transient\nindex, and I'm not going to speculate on all the conditions that could\nlead to this situation. However, cancelling a REINDEX CONCURRENTLY\nwill reliably leave behind the index it was building (<index\nname>_ccnew).\n\nDoesn't a cancellation instruct the process that the user has made a\ndecision regarding the fate of the new version of the index? Is there\na situation where the incomplete transient index might need to be\ninspected following a cancellation?\n\nBecause if not, why not get it to tidy up after itself? If the\nprocess crashed, fair enough, but it just doesn't sit well that\nleftover artifacts of an aborted operation aren't tidied up,\nespecially since subsequent attempts to REINDEX will find these\ninvalid transient versions and attempt to REINDEX them. Why should\nthe user need to know about them and take manual action in the case of\na cancellation?\n\nI get the feeling that this is deliberate, and perhaps an attempt to\nmitigate locking issues, or some other explanation, but the rationale\nisn't immediately apparent to me if this is the case.\n\nThanks\n\nThom\n\n\n",
"msg_date": "Thu, 29 Jun 2023 10:13:47 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On 6/29/23 11:13, Thom Brown wrote:\n> I get the feeling that this is deliberate, and perhaps an attempt to\n> mitigate locking issues, or some other explanation, but the rationale\n> isn't immediately apparent to me if this is the case.\n\nI have always assumed the reason is that there might be other \ntransactions using the index so if we are going to drop it on rollback \nwe might get stuck forever waiting for an exclusive lock on the index. \nHow do you get around that? Rollback being stuck waiting forever is \ncertainly not a nice behavior.\n\nAndreas\n\n\n\n",
"msg_date": "Thu, 29 Jun 2023 13:04:25 +0200",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "Andreas Karlsson <[email protected]> writes:\n> On 6/29/23 11:13, Thom Brown wrote:\n>> I get the feeling that this is deliberate, and perhaps an attempt to\n>> mitigate locking issues, or some other explanation, but the rationale\n>> isn't immediately apparent to me if this is the case.\n\n> I have always assumed the reason is that there might be other \n> transactions using the index so if we are going to drop it on rollback \n> we might get stuck forever waiting for an exclusive lock on the index. \n> How do you get around that? Rollback being stuck waiting forever is \n> certainly not a nice behavior.\n\nRight. The whole point of CONCURRENTLY is to never take an exclusive\nlock. But once we reach the stage where the index is open for other\ntransactions to insert into, it's difficult to back out in a nice way.\n\nNow that we have DROP INDEX CONCURRENTLY, you could imagine switching\ninto that code path --- but that *also* involves waiting for other\ntransactions, so you still have the problem that the transaction may\nappear to be stuck and not responding to cancel.\n\n(IIRC, cancelling DROP INDEX CONCURRENTLY also leads to a messy\nsituation, in that the index is still there but might not be fully\nfunctional.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jun 2023 07:17:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "ALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it by having a COMPLETE option you can run later in case things got stuck the first time around. I suppose we could do something similar, where the server automatically does the needful, whatever that is.\n\nALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it by having a COMPLETE option you can run later in case things got stuck the first time around. I suppose we could do something similar, where the server automatically does the needful, whatever that is.",
"msg_date": "Thu, 29 Jun 2023 15:45:16 +0200",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On Thu, 29 Jun 2023, 14:45 Álvaro Herrera, <[email protected]> wrote:\n\n> ALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it\n> by having a COMPLETE option you can run later in case things got stuck the\n> first time around. I suppose we could do something similar, where the\n> server automatically does the needful, whatever that is.\n>\n\nSo there doesn't appear to be provision for deferred activities.\n\nCould, perhaps, the fact that it is an invalid index that has no locks on\nit, and is dependent on the table mean it could be removed by a VACUUM?\n\nI just don't like the idea of the user needing to remove broken things.\n\nThom\n\nOn Thu, 29 Jun 2023, 14:45 Álvaro Herrera, <[email protected]> wrote:ALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it by having a COMPLETE option you can run later in case things got stuck the first time around. I suppose we could do something similar, where the server automatically does the needful, whatever that is.So there doesn't appear to be provision for deferred activities.Could, perhaps, the fact that it is an invalid index that has no locks on it, and is dependent on the table mean it could be removed by a VACUUM?I just don't like the idea of the user needing to remove broken things.Thom",
"msg_date": "Sat, 1 Jul 2023 17:39:07 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On 2023-Jul-01, Thom Brown wrote:\n\n> On Thu, 29 Jun 2023, 14:45 Álvaro Herrera, <[email protected]> wrote:\n> \n> > ALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it\n> > by having a COMPLETE option you can run later in case things got stuck the\n> > first time around. I suppose we could do something similar, where the\n> > server automatically does the needful, whatever that is.\n> \n> So there doesn't appear to be provision for deferred activities.\n\nThere is not.\n\n> Could, perhaps, the fact that it is an invalid index that has no locks on\n> it, and is dependent on the table mean it could be removed by a VACUUM?\n\nWell, I definitely agree that it would be useful to have *something*\nthat automatically removes debris (I'm not sure VACUUM is the best place\nto do it. Perhaps we could have autovacuum check for it, and do it\nseparately of vacuum proper.)\n\nOn the whole, the reason we don't have such a mechanism AFAIK is that\nnobody has presented a credible implementation for it. There was a push\nto use UNDO to remove orphan files; if we had that, we could also use it\nto implement cleanup of dead indexes and partially-detached partitions.\nHowever, that project crashed and burned a long time ago and has seen no\nresurrection as yet.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n",
"msg_date": "Mon, 3 Jul 2023 19:46:27 +0200",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 07:46:27PM +0200, Alvaro Herrera wrote:\n> On 2023-Jul-01, Thom Brown wrote:\n>> On Thu, 29 Jun 2023, 14:45 Álvaro Herrera, <[email protected]> wrote:\n>>> ALTER TABLE DETACH CONCURRENTLY had to deal with this also, and it did it\n>>> by having a COMPLETE option you can run later in case things got stuck the\n>>> first time around. I suppose we could do something similar, where the\n>>> server automatically does the needful, whatever that is.\n\nI could imagine a code path for manual and automatic operations for\nREINDEX (?) at table level and at database level, but using this\nkeyword would be strange, as well. CONCURRENTLY cannot work on system\nindexes so SYSTEM does not make sense, and index level is no different\nthan a DROP.\n\n> Well, I definitely agree that it would be useful to have *something*\n> that automatically removes debris (I'm not sure VACUUM is the best place\n> to do it. Perhaps we could have autovacuum check for it, and do it\n> separately of vacuum proper.)\n\nBeing able to reuse some of the worker/launcher parts from autovacuum\ncould make things easier for a bgworker implementation, perhaps?\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 07:48:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On 2023-Jul-04, Michael Paquier wrote:\n\n> On Mon, Jul 03, 2023 at 07:46:27PM +0200, Alvaro Herrera wrote:\n\n> > Perhaps we could have autovacuum check for it, and do it\n> > separately of vacuum proper.)\n> \n> Being able to reuse some of the worker/launcher parts from autovacuum\n> could make things easier for a bgworker implementation, perhaps?\n\nTBH I don't understand what you are thinking about.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I can see support will not be a problem. 10 out of 10.\" (Simon Wittber)\n (http://archives.postgresql.org/pgsql-general/2004-12/msg00159.php)\n\n\n",
"msg_date": "Tue, 4 Jul 2023 18:59:57 +0200",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
},
{
"msg_contents": "On 03.07.23 19:46, Álvaro Herrera wrote:\n> Well, I definitely agree that it would be useful to have*something*\n> that automatically removes debris\n\nYeah, like \"undo\".\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:14:53 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does a cancelled REINDEX CONCURRENTLY need to be messy?"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nplan_create_index_workers[1] does not consider the amount of tuples \r\nexisting in TOAST pages when determining the number of parallel workers \r\nto use for a build. The estimation comes from estimate_rel_size[2], \r\nwhich in this case, will just take the value from rel->rd_rel->relpages.\r\n\r\nWe probably don't notice this much with B-trees, given a B-tree is \r\ntypically used for data that does not require toasting. However, this \r\nbecomes more visible when working on custom index access methods that \r\nimplement their own parallel build strategy.\r\n\r\nFor example, pgvector[3] provides its own data types and index access \r\nmethod for indexing vector data. Vectors can get quite large fairly \r\nquickly, e.g. a 768-dimensional vector takes up 8 + 4*768 = 3080 bytes \r\non disk, which quickly clears the default TOAST tuple threshold.\r\n\r\nIn a recent patch proposal to allow for building indexes in parallel[4], \r\nI performed a few experiments on how many parallel workers would be \r\nspawned when indexing 1,000,000 (1MM) 768-dim vectors, both with \r\nEXTEDNED (default) and PLAIN storage. In all cases, I allowed for leader \r\nparticipation, but the leader is not considered in \r\nplan_create_index_workers.\r\n\r\nWith EXTENDED, plan_create_index_workers recommended 2 workers. The \r\nbuild time was ~2x faster than the serial build.\r\n\r\nWith PLAIN, plan_create_index_workers recommended 4 workers. The build \r\ntime was **~3X faster** than the serial build.\r\n\r\n(I've been doing more detailed, less hand-waivy performance testing, but \r\nI wanted to provide directional numbers here)\r\n\r\nIt seems like we're leaving some performance for columns with TOASTed \r\ndata that require indexing, so I wanted to propose allowing the pages in \r\nTOASTed tables to be considered when we're trying to index a column with \r\nTOASTed attributes.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/optimizer/plan/planner.c;hb=refs/heads/master#l6734\r\n[2] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/optimizer/util/plancat.c;hb=refs/heads/master#l1117\r\n[3] https://github.com/pgvector/pgvector\r\n[4] https://github.com/pgvector/pgvector/commits/parallel-index-build",
"msg_date": "Thu, 29 Jun 2023 10:12:54 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "plan_create_index_workers doesn't account for TOAST"
},
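A quick way to see what estimate_rel_size is working from is to compare the heap's relpages with the relpages of its TOAST relation; a sketch assuming a table named items in database mydb (names are placeholders, and relpages is only refreshed by VACUUM/ANALYZE):

```bash
psql -d mydb -Xc "
SELECT c.relname,
       c.relpages AS heap_pages,
       t.relname  AS toast_rel,
       t.relpages AS toast_pages
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'items';"
```

When most of the column data has been moved out of line, heap_pages is the small number that currently drives the parallel-worker estimate, which matches the behaviour described above.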
{
"msg_contents": "On 6/29/23 10:12 AM, Jonathan S. Katz wrote:\r\n> Hi,\r\n> \r\n> plan_create_index_workers[1] does not consider the amount of tuples \r\n> existing in TOAST pages when determining the number of parallel workers \r\n> to use for a build. The estimation comes from estimate_rel_size[2], \r\n> which in this case, will just take the value from rel->rd_rel->relpages.\r\n> \r\n> We probably don't notice this much with B-trees, given a B-tree is \r\n> typically used for data that does not require toasting. However, this \r\n> becomes more visible when working on custom index access methods that \r\n> implement their own parallel build strategy.\r\n> \r\n> For example, pgvector[3] provides its own data types and index access \r\n> method for indexing vector data. Vectors can get quite large fairly \r\n> quickly, e.g. a 768-dimensional vector takes up 8 + 4*768 = 3080 bytes \r\n> on disk, which quickly clears the default TOAST tuple threshold.\r\n> \r\n> In a recent patch proposal to allow for building indexes in parallel[4], \r\n> I performed a few experiments on how many parallel workers would be \r\n> spawned when indexing 1,000,000 (1MM) 768-dim vectors, both with \r\n> EXTEDNED (default) and PLAIN storage. In all cases, I allowed for leader \r\n> participation, but the leader is not considered in \r\n> plan_create_index_workers.\r\n> \r\n> With EXTENDED, plan_create_index_workers recommended 2 workers. The \r\n> build time was ~2x faster than the serial build.\r\n> \r\n> With PLAIN, plan_create_index_workers recommended 4 workers. The build \r\n> time was **~3X faster** than the serial build.\r\n> \r\n> (I've been doing more detailed, less hand-waivy performance testing, but \r\n> I wanted to provide directional numbers here)\r\n> \r\n> It seems like we're leaving some performance for columns with TOASTed \r\n> data that require indexing, so I wanted to propose allowing the pages in \r\n> TOASTed tables to be considered when we're trying to index a column with \r\n> TOASTed attributes.\r\n\r\nJust to add to this: there is a lever to get more parallel workers by \r\nsetting \"min_parallel_table_scan_size\" to a lower value, which does help \r\nin this case. However, it does mask the fact that a large chunk of the \r\ndata required to build the index exists in the TOAST table, which is not \r\nintuitive to a user who rarely has to use tuning parameters.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 29 Jun 2023 15:51:10 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: plan_create_index_workers doesn't account for TOAST"
}
] |
[
{
"msg_contents": "Hi!\n\n(posting this to -hackers rather than to -docs since it seems a deeper\nproblem than just adjusting the docs)\n\nI recently observed a case with standby corruption after upgrading pg12 to\npg14, which was presented in the form of XX001 errors on the new cluster's\nstandby nodes. e.g.:\n ERROR: missing chunk number 0 for toast value 3228893903 in\npg_toast_79504413\n\nComparing the content of the data directory and checking files with md5sum,\nI noticed that some files for some TOAST index have different content on\nnew standby nodes compared to the new primary – and interesting was the\nfact all standbys had the same content. Just different compared to the\nprimary.\n\nWe used the \"rsync --size-only\" snippet from the docs\nhttps://www.postgresql.org/docs/current/pgupgrade.html to upgrade standbys.\n\nWith \"--size-only\", 1 GiB files for tables and indexes obviously cannot be\nreliably synchronized. In our case, we perform additional steps involving\nlogical replication, advancing primary to certain LSN position -- and\nduring that, we keep standbys down. This explains the increased corruption\nrisks. But I think these risks are present for those who just follow the\nsteps in the docs as is, and probably some fixes or improvements are needed\nhere.\n\nThe main question: why do we consider \"rsync --size-only\" as reliable in\nthe general case? May standby corruption happen if we use follow steps from\nhttps://www.postgresql.org/docs/current/pgupgrade.html?\n\nConsidering several general situations:\n1. For streaming replication:\n a. if we shut down the primary first, based on the code in walsender.c\ndefining how shutdown even is handled, replicas should receive all the\nchanges?\n b. if shut down standbys first (might be preferred if we run cluster\nunder Patroni control, to avoid unnecessary failover), then some changes\nfrom the primary won't be received by standbys – and we do have standby\ncorruption risks\n2. For replication based on WAL shipping, I don't think we can guarantee\nthat all changes are propagated to standbys.\n\nThe docs also have this:\n\n> 9. Prepare for standby server upgrades\n> If you are upgrading standby servers using methods outlined in section\nStep 11, verify that the old standby servers are caught up by running\npg_controldata against the old primary and standby clusters. Verify that\nthe “Latest checkpoint location” values match in all clusters. (There will\nbe a mismatch if old standby servers were shut down before the old primary\nor if the old standby servers are still running.) Also, make sure wal_level\nis not set to minimal in the postgresql.conf file on the new primary\ncluster.\n\n– admitting that there might be mismatch. 
But if there is mismatch, rsync\n--size-only is not going to help synchronize properly, right?\n\nI was thinking about how to improve here, some ideas:\n- \"rsync --checksum\" doesn't seem to be a good idea, it's, unfortunately,\nvery, very slow, though it would be the most reliable approach (but since\nit's slow, I guess it's not worth even mentioning, crossing this out)\n- we could remove \"--size-only\" and rely on default rsync behavior –\nchecking size and modification time; but how reliable would it be in\ngeneral case?\n- make the step verifying “Latest checkpoint location” *after* shutting\ndown all nodes as mandatory, with instructions on how to avoid mismatch:\ne.g., shut down primary first, disabling automated failover software, if\nany, then run pg_controldata on standbys while they are running, and on\nprimary while it's already shut down (probably, different instructions are\nneeded for WAL shipping and streaming cases)\n- probably, we should always run \"rsync --checksum\" for pg_wal\n- I think, it's time to provide a snippet to run \"rsync\" in multiple\nthreads. A lot of installations today have many vCPUs and fast SSDs, and\nrunning single-threaded rsync seems to be very slow (especially if we do\nneed to move away from \"--size-only\"). If it makes sense, I could come up\nwith some patch proposal for the docs\n- it's probably time to implement support for standby upgrade in\npg_upgrade itself, finding some way to take care of standbys and moving\naway from the need to run rsync or to rebuild standby nodes? Although, this\nis just a raw idea without a proper proposal yet.\n\nDoes this make sense or I'm missing something and the current docs describe\na reliable process? (As I said, we have deviated from the process, to\ninvolve logical replication, so I'm not 100% sure I'm right suspecting the\noriginal procedure in having standby corruption risks.)\n\nThanks,\nNikolay Samokhvalov\nFounder, Postgres.ai",
"msg_date": "Thu, 29 Jun 2023 10:50:12 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade instructions involving \"rsync --size-only\" might lead to\n standby corruption?"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 1:50 PM Nikolay Samokhvalov <[email protected]> wrote:\n> Does this make sense or I'm missing something and the current docs describe a reliable process? (As I said, we have deviated from the process, to involve logical replication, so I'm not 100% sure I'm right suspecting the original procedure in having standby corruption risks.)\n\nI'm very suspicious about this section of the documentation. It\ndoesn't explain why --size-only is used or why --no-inc-recursive is\nused.\n\n> > 9. Prepare for standby server upgrades\n> > If you are upgrading standby servers using methods outlined in section Step 11, verify that the old standby servers are caught up by running pg_controldata against the old primary and standby clusters. Verify that the “Latest checkpoint location” values match in all clusters. (There will be a mismatch if old standby servers were shut down before the old primary or if the old standby servers are still running.) Also, make sure wal_level is not set to minimal in the postgresql.conf file on the new primary cluster.\n>\n> – admitting that there might be mismatch. But if there is mismatch, rsync --size-only is not going to help synchronize properly, right?\n\nI think the idea is that you shouldn't use the procedure in this case.\nBut honestly I don't think it's probably a good idea to use this\nprocedure at all. It's not clear enough under what circumstances, if\nany, it's safe to use, and there's not really any way to know if\nyou've done it correctly. You couldn't pay me enough to recommend this\nprocedure to anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 14:38:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
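For context, the documentation snippet being questioned here is along these lines (directory and host names are placeholders; the paths shown in the actual docs differ):

```bash
# Run on the old primary, after pg_upgrade --link, with the servers stopped.
rsync --archive --delete --hard-links --size-only --no-inc-recursive \
      /var/lib/postgresql/old_data /var/lib/postgresql/new_data \
      standby.example.com:/var/lib/postgresql
```

As discussed below, --hard-links is what lets the standby's new cluster reuse the files it already has, and --size-only is what makes rsync skip comparing file contents.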
{
"msg_contents": "On Thu, Jun 29, 2023 at 02:38:58PM -0400, Robert Haas wrote:\n> On Thu, Jun 29, 2023 at 1:50 PM Nikolay Samokhvalov <[email protected]> wrote:\n> > Does this make sense or I'm missing something and the current docs describe a reliable process? (As I said, we have deviated from the process, to involve logical replication, so I'm not 100% sure I'm right suspecting the original procedure in having standby corruption risks.)\n> \n> I'm very suspicious about this section of the documentation. It\n> doesn't explain why --size-only is used or why --no-inc-recursive is\n> used.\n\nI think --size-only was chosen only because it is the minimal comparison\noption.\n \n> > > 9. Prepare for standby server upgrades\n> > > If you are upgrading standby servers using methods outlined in section Step 11, verify that the old standby servers are caught up by running pg_controldata against the old primary and standby clusters. Verify that the “Latest checkpoint location” values match in all clusters. (There will be a mismatch if old standby servers were shut down before the old primary or if the old standby servers are still running.) Also, make sure wal_level is not set to minimal in the postgresql.conf file on the new primary cluster.\n> >\n> > – admitting that there might be mismatch. But if there is mismatch, rsync --size-only is not going to help synchronize properly, right?\n> \n> I think the idea is that you shouldn't use the procedure in this case.\n> But honestly I don't think it's probably a good idea to use this\n> procedure at all. It's not clear enough under what circumstances, if\n> any, it's safe to use, and there's not really any way to know if\n> you've done it correctly. You couldn't pay me enough to recommend this\n> procedure to anyone.\n\nI think it would be good to revisit all the steps outlined in that\nprocedure and check which ones are still valid or need adjusting. It is\nvery possible the original steps have bugs or that new Postgres features\nadded since the steps were created don't work with these steps. I think\nwe need to bring Stephen Frost into the discussion, so I have CC'ed him.\n\nFrankly, I didn't think the documented procedure would work either, but\npeople say it does, so it is in the docs. I do think it is overdue for\na re-analysis.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 13:41:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 1:41 PM Bruce Momjian <[email protected]> wrote:\n> I think --size-only was chosen only because it is the minimal comparison\n> option.\n\nI think it's worse than that. I think that the procedure relies on\nusing the --size-only option to intentionally trick rsync into\nthinking that files are identical when they're not.\n\nSay we have a file like base/23246/78901 on the primary. Unless\nwal_log_hints=on, the standby version is very likely different, but\nonly in ways that don't matter to WAL replay. So the procedure aims to\ntrick rsync into hard-linking the version of that file that exists on\nthe standby in the old cluster into the new cluster on the standby,\ninstead of copying the slightly-different version from the master,\nthus making the upgrade very fast. If rsync actually checksummed the\nfiles, it would realize that they're different and copy the file from\nthe original primary, which the person who wrote this procedure does\nnot want.\n\nThat's kind of a crazy thing for us to be documenting. I think we\nreally ought to consider removing from this documentation. If somebody\nwants to write a reliable tool for this to ship as part of PostgreSQL,\nwell and good. But this procedure has no real sanity checks and is\nbased on very fragile assumptions. That doesn't seem suitable for\nend-user use.\n\nI'm not quite clear on how Nikolay got into trouble here. I don't\nthink I understand under exactly what conditions the procedure is\nreliable and under what conditions it isn't. But there is no way in\nheck I would ever advise anyone to use this procedure on a database\nthey actually care about. This is a great party trick or something to\nshow off in a lightning talk at PGCon, not something you ought to be\ndoing with valuable data that you actually care about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:16:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
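One way to see the hard-link relationship this trick depends on is to check the inode and link count of the same user table's file in the old and new clusters on the primary after pg_upgrade --link; the paths below are placeholders, since database OIDs and relfilenodes vary between installations:

```bash
# Both files should report the same inode and a link count of 2.
stat -c '%i %h %n' \
     /var/lib/postgresql/old_data/base/16384/16385 \
     /var/lib/postgresql/new_data/base/16401/16385
```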
{
"msg_contents": "On Fri, Jun 30, 2023 at 04:16:31PM -0400, Robert Haas wrote:\n> On Fri, Jun 30, 2023 at 1:41 PM Bruce Momjian <[email protected]> wrote:\n> > I think --size-only was chosen only because it is the minimal comparison\n> > option.\n> \n> I think it's worse than that. I think that the procedure relies on\n> using the --size-only option to intentionally trick rsync into\n> thinking that files are identical when they're not.\n> \n> Say we have a file like base/23246/78901 on the primary. Unless\n> wal_log_hints=on, the standby version is very likely different, but\n> only in ways that don't matter to WAL replay. So the procedure aims to\n> trick rsync into hard-linking the version of that file that exists on\n> the standby in the old cluster into the new cluster on the standby,\n> instead of copying the slightly-different version from the master,\n> thus making the upgrade very fast. If rsync actually checksummed the\n> files, it would realize that they're different and copy the file from\n> the original primary, which the person who wrote this procedure does\n> not want.\n\nWhat is the problem with having different hint bits between the two\nservers?\n\n> That's kind of a crazy thing for us to be documenting. I think we\n> really ought to consider removing from this documentation. If somebody\n> wants to write a reliable tool for this to ship as part of PostgreSQL,\n> well and good. But this procedure has no real sanity checks and is\n> based on very fragile assumptions. That doesn't seem suitable for\n> end-user use.\n> \n> I'm not quite clear on how Nikolay got into trouble here. I don't\n> think I understand under exactly what conditions the procedure is\n> reliable and under what conditions it isn't. But there is no way in\n> heck I would ever advise anyone to use this procedure on a database\n> they actually care about. This is a great party trick or something to\n> show off in a lightning talk at PGCon, not something you ought to be\n> doing with valuable data that you actually care about.\n\nWell, it does get used, and if we remove it perhaps we can have it on\nour wiki and point to it from our docs.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:33:03 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 14:33 Bruce Momjian <[email protected]> wrote:\n\n> On Fri, Jun 30, 2023 at 04:16:31PM -0400, Robert Haas wrote:\n> > I'm not quite clear on how Nikolay got into trouble here. I don't\n> > think I understand under exactly what conditions the procedure is\n> > reliable and under what conditions it isn't. But there is no way in\n> > heck I would ever advise anyone to use this procedure on a database\n> > they actually care about. This is a great party trick or something to\n> > show off in a lightning talk at PGCon, not something you ought to be\n> > doing with valuable data that you actually care about.\n>\n> Well, it does get used, and if we remove it perhaps we can have it on\n> our wiki and point to it from our docs.\n\n\nIn my case, we performed some additional writes on the primary before\nrunning \"pg_upgrade -k\" and we did it *after* we shut down all the\nstandbys. So those changes were not replicated and then \"rsync --size-only\"\nignored them. (By the way, that cluster has wal_log_hints=on to allow\nPatroni run pg_rewind when needed.)\n\nBut this can happen with anyone who follows the procedure from the docs as\nis and doesn't do any additional steps, because in step 9 \"Prepare for\nstandby server upgrades\":\n\n1) there is no requirement to follow specific order to shut down the nodes\n - \"Streaming replication and log-shipping standby servers can remain\nrunning until a later step\" should probably be changed to a\nrequirement-like \"keep them running\"\n\n2) checking the latest checkpoint position with pg_controldata now looks\nlike a thing that is good to do, but with uncertainty purpose -- it does\nnot seem to be used to support any decision\n - \"There will be a mismatch if old standby servers were shut down before\nthe old primary or if the old standby servers are still running\" should\nprobably be rephrased saying that if there is mismatch, it's a big problem\n\nSo following the steps as is, if some writes on the primary are not\nreplicated (due to whatever reason) before execution of pg_upgrade -k +\nrsync --size-only, then those writes are going to be silently lost on\nstandbys.\n\nI wonder, if we ensure that standbys are fully caught up before upgrading\nthe primary, if we check the latest checkpoint positions, are we good to\nuse \"rsync --size-only\", or there are still some concerns? It seems so to\nme, but maybe I'm missing something.\n\n> --\n\nThanks,\nNikolay Samokhvalov\nFounder, Postgres.ai\n\nOn Fri, Jun 30, 2023 at 14:33 Bruce Momjian <[email protected]> wrote:On Fri, Jun 30, 2023 at 04:16:31PM -0400, Robert Haas wrote: \n> I'm not quite clear on how Nikolay got into trouble here. I don't\n> think I understand under exactly what conditions the procedure is\n> reliable and under what conditions it isn't. But there is no way in\n> heck I would ever advise anyone to use this procedure on a database\n> they actually care about. This is a great party trick or something to\n> show off in a lightning talk at PGCon, not something you ought to be\n> doing with valuable data that you actually care about.\n\nWell, it does get used, and if we remove it perhaps we can have it on\nour wiki and point to it from our docs.In my case, we performed some additional writes on the primary before running \"pg_upgrade -k\" and we did it *after* we shut down all the standbys. So those changes were not replicated and then \"rsync --size-only\" ignored them. 
(By the way, that cluster has wal_log_hints=on to allow Patroni run pg_rewind when needed.)But this can happen with anyone who follows the procedure from the docs as is and doesn't do any additional steps, because in step 9 \"Prepare for standby server upgrades\":1) there is no requirement to follow specific order to shut down the nodes - \"Streaming replication and log-shipping standby servers can remain running until a later step\" should probably be changed to a requirement-like \"keep them running\"2) checking the latest checkpoint position with pg_controldata now looks like a thing that is good to do, but with uncertainty purpose -- it does not seem to be used to support any decision - \"There will be a mismatch if old standby servers were shut down before the old primary or if the old standby servers are still running\" should probably be rephrased saying that if there is mismatch, it's a big problemSo following the steps as is, if some writes on the primary are not replicated (due to whatever reason) before execution of pg_upgrade -k + rsync --size-only, then those writes are going to be silently lost on standbys.I wonder, if we ensure that standbys are fully caught up before upgrading the primary, if we check the latest checkpoint positions, are we good to use \"rsync --size-only\", or there are still some concerns? It seems so to me, but maybe I'm missing something.-- Thanks,Nikolay SamokhvalovFounder, Postgres.ai",
"msg_date": "Fri, 30 Jun 2023 15:18:03 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
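A sketch of the check being proposed in this message, with placeholder data directories and host names; the point is simply not to proceed while the "Latest checkpoint location" values differ:

```bash
# Stop the old primary first, so its shutdown checkpoint reaches the standbys;
# stop the standbys only after they have replayed everything.
pg_ctl -D /var/lib/postgresql/old_data stop -m fast

# Compare "Latest checkpoint location" across all nodes; the values must match
# before running pg_upgrade -k and the rsync step.
pg_controldata /var/lib/postgresql/old_data | grep 'Latest checkpoint location'
ssh standby1 "pg_controldata /var/lib/postgresql/old_data | grep 'Latest checkpoint location'"
```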
{
"msg_contents": "On Fri, Jun 30, 2023 at 03:18:03PM -0700, Nikolay Samokhvalov wrote:\n> I wonder, if we ensure that standbys are fully caught up before upgrading the\n> primary, if we check the latest checkpoint positions, are we good to use \"rsync\n> --size-only\", or there are still some concerns? It seems so to me, but maybe\n> I'm missing something.\n\nYes, I think you are correct.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:36:01 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "Fast upgrade of highly available cluster is a vital part of being industry-acceptable solution for any data management system. Because the cluster is required to be highly available.\n\nWithout this documented technique upgrade of 1Tb cluster would last many hours, not seconds.\nThere are industry concerns about scalability beyond tens of terabytes per cluster, but such downtime would significantly lower that boundary.\n\n> On 1 Jul 2023, at 01:16, Robert Haas <[email protected]> wrote:\n> \n> If somebody\n> wants to write a reliable tool for this to ship as part of PostgreSQL,\n> well and good.\n\nIMV that's a good idea. We could teach pg_upgrade or some new tool to do that reliably. The tricky part is that the tool must stop-start standby remotely...\n\n\nBest regards, Andrey Borodin.\nFast upgrade of highly available cluster is a vital part of being industry-acceptable solution for any data management system. Because the cluster is required to be highly available.Without this documented technique upgrade of 1Tb cluster would last many hours, not seconds.There are industry concerns about scalability beyond tens of terabytes per cluster, but such downtime would significantly lower that boundary.On 1 Jul 2023, at 01:16, Robert Haas <[email protected]> wrote: If somebodywants to write a reliable tool for this to ship as part of PostgreSQL,well and good.IMV that's a good idea. We could teach pg_upgrade or some new tool to do that reliably. The tricky part is that the tool must stop-start standby remotely...Best regards, Andrey Borodin.",
"msg_date": "Sat, 1 Jul 2023 14:02:23 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "Greetings,\n\n* Nikolay Samokhvalov ([email protected]) wrote:\n> On Fri, Jun 30, 2023 at 14:33 Bruce Momjian <[email protected]> wrote:\n> > On Fri, Jun 30, 2023 at 04:16:31PM -0400, Robert Haas wrote:\n> > > I'm not quite clear on how Nikolay got into trouble here. I don't\n> > > think I understand under exactly what conditions the procedure is\n> > > reliable and under what conditions it isn't. But there is no way in\n> > > heck I would ever advise anyone to use this procedure on a database\n> > > they actually care about. This is a great party trick or something to\n> > > show off in a lightning talk at PGCon, not something you ought to be\n> > > doing with valuable data that you actually care about.\n> >\n> > Well, it does get used, and if we remove it perhaps we can have it on\n> > our wiki and point to it from our docs.\n\nI was never a fan of having it actually documented because it's a pretty\ncomplex and involved process that really requires someone doing it have\na strong understanding of how PG works.\n\n> In my case, we performed some additional writes on the primary before\n> running \"pg_upgrade -k\" and we did it *after* we shut down all the\n> standbys. So those changes were not replicated and then \"rsync --size-only\"\n> ignored them. (By the way, that cluster has wal_log_hints=on to allow\n> Patroni run pg_rewind when needed.)\n\nThat's certainly going to cause problems..\n\n> But this can happen with anyone who follows the procedure from the docs as\n> is and doesn't do any additional steps, because in step 9 \"Prepare for\n> standby server upgrades\":\n> \n> 1) there is no requirement to follow specific order to shut down the nodes\n> - \"Streaming replication and log-shipping standby servers can remain\n> running until a later step\" should probably be changed to a\n> requirement-like \"keep them running\"\n\nAgreed that it would be good to clarify that the primary should be shut\ndown first, to make sure everything written by the primary has been\nreplicated to all of the replicas.\n\n> 2) checking the latest checkpoint position with pg_controldata now looks\n> like a thing that is good to do, but with uncertainty purpose -- it does\n> not seem to be used to support any decision\n> - \"There will be a mismatch if old standby servers were shut down before\n> the old primary or if the old standby servers are still running\" should\n> probably be rephrased saying that if there is mismatch, it's a big problem\n\nYes, it's absolutely a big problem and that's the point of the check.\nSlightly surprised that we need to explicitly say \"if they don't match\nthen you need to figure out what you did wrong and don't move forward\nuntil you get everything shut down and with matching values\", but that's\nalso why it isn't a great idea to try and do this without a solid\nunderstanding of how PG works.\n\n> So following the steps as is, if some writes on the primary are not\n> replicated (due to whatever reason) before execution of pg_upgrade -k +\n> rsync --size-only, then those writes are going to be silently lost on\n> standbys.\n\nYup.\n\n> I wonder, if we ensure that standbys are fully caught up before upgrading\n> the primary, if we check the latest checkpoint positions, are we good to\n> use \"rsync --size-only\", or there are still some concerns? 
It seems so to\n> me, but maybe I'm missing something.\n\nI've seen a lot of success with it.\n\nUltimately, when I was describing this process, it was always with the\nidea that it would be performed by someone quite familiar with the\ninternals of PG or, ideally, could be an outline of how an interested PG\nhacker could write a tool to do it. Hard to say, but I do feel like\nhaving it documented has actually reduced the interest in writing a tool\nto do it, which, if that's the case, is quite unfortunate.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 7 Jul 2023 09:31:33 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 6:31 AM Stephen Frost <[email protected]> wrote:\n\n> * Nikolay Samokhvalov ([email protected]) wrote:\n> > But this can happen with anyone who follows the procedure from the docs\n> as\n> > is and doesn't do any additional steps, because in step 9 \"Prepare for\n> > standby server upgrades\":\n> >\n> > 1) there is no requirement to follow specific order to shut down the\n> nodes\n> > - \"Streaming replication and log-shipping standby servers can remain\n> > running until a later step\" should probably be changed to a\n> > requirement-like \"keep them running\"\n>\n> Agreed that it would be good to clarify that the primary should be shut\n> down first, to make sure everything written by the primary has been\n> replicated to all of the replicas.\n>\n\nThanks!\n\nHere is a patch to fix the existing procedure description.\n\nI agree with Andrey – without it, we don't have any good way to upgrade\nlarge clusters in short time. Default rsync mode (without \"--size-only\")\ntakes a lot of time too, if the load is heavy.\n\nWith these adjustments, can \"rsync --size-only\" remain in the docs as the\n*fast* and safe method to upgrade standbys, or there are still some\nconcerns related to corruption risks?",
"msg_date": "Mon, 10 Jul 2023 13:36:39 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jul 10, 2023 at 01:36:39PM -0700, Nikolay Samokhvalov wrote:\n> On Fri, Jul 7, 2023 at 6:31 AM Stephen Frost <[email protected]> wrote:\n> > * Nikolay Samokhvalov ([email protected]) wrote:\n> > > But this can happen with anyone who follows the procedure from the docs\n> > as\n> > > is and doesn't do any additional steps, because in step 9 \"Prepare for\n> > > standby server upgrades\":\n> > >\n> > > 1) there is no requirement to follow specific order to shut down the\n> > nodes\n> > > - \"Streaming replication and log-shipping standby servers can remain\n> > > running until a later step\" should probably be changed to a\n> > > requirement-like \"keep them running\"\n> >\n> > Agreed that it would be good to clarify that the primary should be shut\n> > down first, to make sure everything written by the primary has been\n> > replicated to all of the replicas.\n> \n> Thanks!\n> \n> Here is a patch to fix the existing procedure description.\n\nThanks for that!\n\n> I agree with Andrey – without it, we don't have any good way to upgrade\n> large clusters in short time. Default rsync mode (without \"--size-only\")\n> takes a lot of time too, if the load is heavy.\n> \n> With these adjustments, can \"rsync --size-only\" remain in the docs as the\n> *fast* and safe method to upgrade standbys, or there are still some\n> concerns related to corruption risks?\n\nI hope somebody can answer that definitively, but I read Stephen's mail\nto indicate that this procedure should be safe in principle (if you know\nwhat you are doing).\n\n> From: Nikolay Samokhvalov <[email protected]>\n> Date: Mon, 10 Jul 2023 20:07:18 +0000\n> Subject: [PATCH] Improve major upgrade docs\n\nMaybe mention standby here, like \"Improve major upgrade documentation\nfor standby servers\".\n\n> +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> @@ -380,22 +380,28 @@ NET STOP postgresql-&majorversion;\n> </para>\n> \n> <para>\n> - Streaming replication and log-shipping standby servers can\n> + Streaming replication and log-shipping standby servers must\n> remain running until a later step.\n> </para>\n> </step>\n> \n> - <step>\n> + <step id=\"pgupgrade-step-prepare-standbys\">\n>\n> <para>\n> - If you are upgrading standby servers using methods outlined in section <xref\n> - linkend=\"pgupgrade-step-replicas\"/>, verify that the old standby\n> - servers are caught up by running <application>pg_controldata</application>\n> - against the old primary and standby clusters. Verify that the\n> - <quote>Latest checkpoint location</quote> values match in all clusters.\n> - (There will be a mismatch if old standby servers were shut down\n> - before the old primary or if the old standby servers are still running.)\n> + If you are upgrading standby servers using methods outlined in \n> + <xref linkend=\"pgupgrade-step-replicas\"/>, \n\nYou dropped the \"section\" before the xref, I think that should be kept\naround.\n\n> + ensure that they were running when \n> + you shut down the primaries in the previous step, so all the latest changes \n\nYou talk of primaries in plural here, that is a bit weird for PostgreSQL\ndocumentation.\n\n> + and the shutdown checkpoint record were received. You can verify this by running \n> + <application>pg_controldata</application> against the old primary and standby \n> + clusters. The <quote>Latest checkpoint location</quote> values must match in all \n> + nodes. A mismatch might occur if old standby servers were shut down before \n> + the old primary. 
To fix a mismatch, start all old servers and return to the \n> + previous step; proceeding with mismatched \n> + <quote>Latest checkpoint location</quote> may lead to standby corruption.\n> + </para>\n> +\n> + <para>\n> Also, make sure <varname>wal_level</varname> is not set to\n> <literal>minimal</literal> in the <filename>postgresql.conf</filename> file on the\n> new primary cluster.\n> @@ -497,7 +503,6 @@ pg_upgrade.exe\n> linkend=\"warm-standby\"/>) standby servers, you can follow these steps to\n> quickly upgrade them. You will not be running <application>pg_upgrade</application> on\n> the standby servers, but rather <application>rsync</application> on the primary.\n> - Do not start any servers yet.\n> </para>\n> \n> <para>\n> @@ -508,6 +513,15 @@ pg_upgrade.exe\n> is running.\n> </para>\n> \n> + <para>\n> + Before running rsync, to avoid standby corruption, it is absolutely\n> + critical to ensure that both primaries are shut down and standbys \n> + have received the last changes (see <xref linkend=\"pgupgrade-step-prepare-standbys\"/>). \n\nI think this should be something like \"ensure both that the primary is\nshut down and that the standbys have received all the changes\".\n\n> + Standbys can be running at this point or fully stopped.\n\n\"or be fully stopped.\" I think.\n\n> + If they \n> + are still running, you can stop, upgrade, and start them one by one; this\n> + can be useful to keep the cluster open for read-only transactions.\n> + </para>\n\nMaybe this is clear from the context, but \"upgrade\" in the above should\nmaybe more explicitly refer to the rsync method or else people might\nthink one can run pg_upgrade on them after all?\n\n\nMichael\n\n\n",
"msg_date": "Mon, 10 Jul 2023 23:02:54 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 2:02 PM Michael Banck <[email protected]> wrote:\n\n> Thanks for that!\n>\n\nThanks for the fast review.\n\n\n>\n> > I agree with Andrey – without it, we don't have any good way to upgrade\n> > large clusters in short time. Default rsync mode (without \"--size-only\")\n> > takes a lot of time too, if the load is heavy.\n> >\n> > With these adjustments, can \"rsync --size-only\" remain in the docs as the\n> > *fast* and safe method to upgrade standbys, or there are still some\n> > concerns related to corruption risks?\n>\n> I hope somebody can answer that definitively, but I read Stephen's mail\n> to indicate that this procedure should be safe in principle (if you know\n> what you are doing).\n>\n\nright, this is my understanding too\n\n\n> > +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> > @@ -380,22 +380,28 @@ NET STOP postgresql-&majorversion;\n> > </para>\n> >\n> > <para>\n> > - Streaming replication and log-shipping standby servers can\n> > + Streaming replication and log-shipping standby servers must\n> > remain running until a later step.\n> > </para>\n> > </step>\n> >\n> > - <step>\n> > + <step id=\"pgupgrade-step-prepare-standbys\">\n> >\n> > <para>\n> > - If you are upgrading standby servers using methods outlined in\n> section <xref\n> > - linkend=\"pgupgrade-step-replicas\"/>, verify that the old standby\n> > - servers are caught up by running\n> <application>pg_controldata</application>\n> > - against the old primary and standby clusters. Verify that the\n> > - <quote>Latest checkpoint location</quote> values match in all\n> clusters.\n> > - (There will be a mismatch if old standby servers were shut down\n> > - before the old primary or if the old standby servers are still\n> running.)\n> > + If you are upgrading standby servers using methods outlined in\n> > + <xref linkend=\"pgupgrade-step-replicas\"/>,\n>\n> You dropped the \"section\" before the xref, I think that should be kept\n> around.\n>\n\nSeems to be a problem in discussing source code that looks quite different\nthan the final result.\n\nIn the result – the docs – we currently have \"section Step 9\", looking\nweird. I still think it's good to remove it. We also have \"in Step 17\nbelow\" (without the extra word \"section\") in a different place on the same\npage.\n\n\n>\n> > + ensure that they were\n> running when\n> > + you shut down the primaries in the previous step, so all the\n> latest changes\n>\n> You talk of primaries in plural here, that is a bit weird for PostgreSQL\n> documentation.\n>\n\nThe same docs already discuss two primaries (\"8. Stop both primaries\"), but\nI agree it might look confusing if you read only a part of the doc, jumping\ninto middle of it, like I do all the time when using the docs in \"check the\nreference\" style.\n\nI agree with this comment, but it tells me we need even more improvements\nof this doc, beyond my original goal – e.g., I don't like section 8 saying\n\"Make sure both database servers\", it should be \"both primaries\".\n\n\n>\n> > + and the shutdown checkpoint record were received. You can verify\n> this by running\n> > + <application>pg_controldata</application> against the old primary\n> and standby\n> > + clusters. The <quote>Latest checkpoint location</quote> values\n> must match in all\n> > + nodes. A mismatch might occur if old standby servers were shut\n> down before\n> > + the old primary. 
To fix a mismatch, start all old servers and\n> return to the\n> > + previous step; proceeding with mismatched\n> > + <quote>Latest checkpoint location</quote> may lead to standby\n> corruption.\n> > + </para>\n> > +\n> > + <para>\n> > Also, make sure <varname>wal_level</varname> is not set to\n> > <literal>minimal</literal> in the\n> <filename>postgresql.conf</filename> file on the\n> > new primary cluster.\n> > @@ -497,7 +503,6 @@ pg_upgrade.exe\n> > linkend=\"warm-standby\"/>) standby servers, you can follow these\n> steps to\n> > quickly upgrade them. You will not be running\n> <application>pg_upgrade</application> on\n> > the standby servers, but rather <application>rsync</application>\n> on the primary.\n> > - Do not start any servers yet.\n> > </para>\n> >\n> > <para>\n> > @@ -508,6 +513,15 @@ pg_upgrade.exe\n> > is running.\n> > </para>\n> >\n> > + <para>\n> > + Before running rsync, to avoid standby corruption, it is absolutely\n> > + critical to ensure that both primaries are shut down and standbys\n> > + have received the last changes (see <xref\n> linkend=\"pgupgrade-step-prepare-standbys\"/>).\n>\n> I think this should be something like \"ensure both that the primary is\n> shut down and that the standbys have received all the changes\".\n>\n\nWell, we have two primary servers – old and new. I tried to clarify it in\nthe new version.\n\n\n>\n> > + Standbys can be running at this point or fully stopped.\n>\n> \"or be fully stopped.\" I think.\n>\n> > + If they\n> > + are still running, you can stop, upgrade, and start them one by\n> one; this\n> > + can be useful to keep the cluster open for read-only transactions.\n> > + </para>\n>\n> Maybe this is clear from the context, but \"upgrade\" in the above should\n> maybe more explicitly refer to the rsync method or else people might\n> think one can run pg_upgrade on them after all?\n>\n\nMaybe. It will require changes in other parts of this doc.\nThinking (here:\nhttps://gitlab.com/postgres/postgres/-/merge_requests/18/diffs)\n\nMeanwhile, attached is v2\n\nthanks for the comments",
"msg_date": "Mon, 10 Jul 2023 14:37:24 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jul 10, 2023 at 02:37:24PM -0700, Nikolay Samokhvalov wrote:\n> On Mon, Jul 10, 2023 at 2:02 PM Michael Banck <[email protected]> wrote:\n> > > +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> > > @@ -380,22 +380,28 @@ NET STOP postgresql-&majorversion;\n> > > </para>\n> > >\n> > > <para>\n> > > - Streaming replication and log-shipping standby servers can\n> > > + Streaming replication and log-shipping standby servers must\n> > > remain running until a later step.\n> > > </para>\n> > > </step>\n> > >\n> > > - <step>\n> > > + <step id=\"pgupgrade-step-prepare-standbys\">\n> > >\n> > > <para>\n> > > - If you are upgrading standby servers using methods outlined in\n> > section <xref\n> > > - linkend=\"pgupgrade-step-replicas\"/>, verify that the old standby\n> > > - servers are caught up by running\n> > <application>pg_controldata</application>\n> > > - against the old primary and standby clusters. Verify that the\n> > > - <quote>Latest checkpoint location</quote> values match in all\n> > clusters.\n> > > - (There will be a mismatch if old standby servers were shut down\n> > > - before the old primary or if the old standby servers are still\n> > running.)\n> > > + If you are upgrading standby servers using methods outlined in\n> > > + <xref linkend=\"pgupgrade-step-replicas\"/>,\n> >\n> > You dropped the \"section\" before the xref, I think that should be kept\n> > around.\n> \n> Seems to be a problem in discussing source code that looks quite different\n> than the final result.\n> \n> In the result – the docs – we currently have \"section Step 9\", looking\n> weird. I still think it's good to remove it. We also have \"in Step 17\n> below\" (without the extra word \"section\") in a different place on the same\n> page.\n\nOk.\n \n> > > + ensure that they were\n> > running when\n> > > + you shut down the primaries in the previous step, so all the\n> > latest changes\n> >\n> > You talk of primaries in plural here, that is a bit weird for PostgreSQL\n> > documentation.\n> \n> The same docs already discuss two primaries (\"8. Stop both primaries\"), but\n> I agree it might look confusing if you read only a part of the doc, jumping\n> into middle of it, like I do all the time when using the docs in \"check the\n> reference\" style.\n\n[...]\n\n> > I think this should be something like \"ensure both that the primary is\n> > shut down and that the standbys have received all the changes\".\n> \n> Well, we have two primary servers – old and new. I tried to clarify it in\n> the new version.\n\nYeah sorry about that, I think I should have first have coffee and/or\nslept over this review before sending it.\n\nMaybe one reason why I was confused is beause I consider a \"primary\"\nmore like a full server/VM, not necessarily a database instance (though\none could of course have a primary/seconday pair on the same host).\nPossibly \"primary instances\" or something might be clearer, but I think\n\nI should re-read the whole section first before commenting further.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 12 Jul 2023 11:23:06 +0200",
"msg_from": "Michael Banck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 02:37:24PM -0700, Nikolay Samokhvalov wrote:\n> Maybe. It will require changes in other parts of this doc.\n> Thinking (here: https://gitlab.com/postgres/postgres/-/merge_requests/18/diffs)\n> \n> Meanwhile, attached is v2\n> \n> thanks for the comments\n\nI looked over this issue thoroughly and I think I see the cause of the\nconfusion. In step 8 we say:\n\n\t8. Stop both servers\n\tStreaming replication and log-shipping standby servers can remain\n\t ---\n\trunning until a later step.\n\nOf course this has to be \"must\" and it would be good to explain why,\nwhich I have done in the attached patch.\n\nSecondly, in step 9 we say \"verify the LSNs\", but have a parenthetical\nsentence that explains why they might not match:\n\n\t(There will be a mismatch if old standby servers were shut down before\n\tthe old primary or if the old standby servers are still running.)\n\nPeople might take that to mean that it is okay if this is the reason\nthey don't match, which is incorrect. Better to tell them to keep the\nstreaming replication and log-shipping servers running so we don't need\nthat sentence.\n\nThe instructions are already long so I am hesitant to add more text\nwithout a clear purpose.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 7 Sep 2023 13:52:45 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
},
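A hedged aside on the LSN check discussed above: the "Latest checkpoint location" that pg_controldata reports is also exposed at the SQL level by the stock function pg_control_checkpoint(), so the same control-file data can be read from psql while a server is still up. This is only a convenience cross-check, not part of the documented procedure, which compares pg_controldata output taken against the stopped clusters.

    -- read the control file's checkpoint information via SQL (server must be running)
    SELECT checkpoint_lsn, redo_lsn, timeline_id
    FROM pg_control_checkpoint();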
{
"msg_contents": "On Thu, Sep 7, 2023 at 01:52:45PM -0400, Bruce Momjian wrote:\n> On Mon, Jul 10, 2023 at 02:37:24PM -0700, Nikolay Samokhvalov wrote:\n> > Maybe. It will require changes in other parts of this doc.\n> > Thinking (here: https://gitlab.com/postgres/postgres/-/merge_requests/18/diffs)\n> > \n> > Meanwhile, attached is v2\n> > \n> > thanks for the comments\n> \n> I looked over this issue thoroughly and I think I see the cause of the\n> confusion. In step 8 we say:\n> \n> \t8. Stop both servers\n> \tStreaming replication and log-shipping standby servers can remain\n> \t ---\n> \trunning until a later step.\n> \n> Of course this has to be \"must\" and it would be good to explain why,\n> which I have done in the attached patch.\n> \n> Secondly, in step 9 we say \"verify the LSNs\", but have a parenthetical\n> sentence that explains why they might not match:\n> \n> \t(There will be a mismatch if old standby servers were shut down before\n> \tthe old primary or if the old standby servers are still running.)\n> \n> People might take that to mean that it is okay if this is the reason\n> they don't match, which is incorrect. Better to tell them to keep the\n> streaming replication and log-shipping servers running so we don't need\n> that sentence.\n> \n> The instructions are already long so I am hesitant to add more text\n> without a clear purpose.\n\nPatch applied back to PG 11.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Sep 2023 18:54:25 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade instructions involving \"rsync --size-only\" might lead\n to standby corruption?"
}
]
[
{
"msg_contents": "I hope this email finds you well. I am excited to share that I have\nextended the functionality of the `pg_buffercache` extension by\nimplementing buffer invalidation capability, as requested by some\nPostgreSQL contributors for improved testing scenarios.\n\nThis marks my first time submitting a patch to pgsql-hackers, and I am\neager to receive your expert feedback on the changes made. Your\ninsights are invaluable, and any review or comments you provide will\nbe greatly appreciated.\n\nThe primary objective of this enhancement is to enable explicit buffer\ninvalidation within the `pg_buffercache` extension. By doing so, we\ncan simulate scenarios where buffers are invalidated and observe the\nresulting behavior in PostgreSQL.\n\nAs part of this patch, a new function or mechanism has been introduced\nto facilitate buffer invalidation. I would like to hear your thoughts\non whether this approach provides a good user interface for this\nfunctionality. Additionally, I seek your evaluation of the buffer\nlocking protocol employed in the extension to ensure its correctness\nand efficiency.\n\nPlease note that I plan to add comprehensive documentation once the\ndetails of this enhancement are agreed upon. This documentation will\nserve as a valuable resource for users and contributors alike. I\nbelieve that your expertise will help uncover any potential issues and\nopportunities for further improvement.\n\nI have attached the patch file to this email for your convenience.\nYour valuable time and consideration in reviewing this extension are\nsincerely appreciated.\n\nThank you for your continued support and guidance. I am looking\nforward to your feedback and collaboration in enhancing the PostgreSQL\necosystem.\n\nThe working of the extension:\n\n1. Creating the extension pg_buffercache and then call select query on\na table and note the buffer to be cleared.\npgbench=# create extension pg_buffercache;\nCREATE EXTENSION\npgbench=# select count(*) from pgbench_accounts;\n count\n--------\n 100000\n(1 row)\n\npgbench=# SELECT *\nFROM pg_buffercache\nWHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n bufferid | relfilenode | reltablespace | reldatabase | relforknumber\n| relblocknumber | isdirty | usagecount | pinning_backends\n----------+-------------+---------------+-------------+---------------+----------------+---------+------------+------------------\n 233 | 16397 | 1663 | 16384 | 0\n| 0 | f | 1 | 0\n 234 | 16397 | 1663 | 16384 | 0\n| 1 | f | 1 | 0\n 235 | 16397 | 1663 | 16384 | 0\n| 2 | f | 1 | 0\n 236 | 16397 | 1663 | 16384 | 0\n| 3 | f | 1 | 0\n 237 | 16397 | 1663 | 16384 | 0\n| 4 | f | 1 | 0\n\n\n2. Clearing a single buffer by entering the bufferid.\npgbench=# SELECT count(*)\nFROM pg_buffercache\nWHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n count\n-------\n 1660\n(1 row)\n\npgbench=# select pg_buffercache_invalidate(233);\n pg_buffercache_invalidate\n---------------------------\n t\n(1 row)\n\npgbench=# SELECT count(*)\nFROM pg_buffercache\nWHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n count\n-------\n 1659\n(1 row)\n\n3. 
Clearing the entire buffer for a relation using the function.\npgbench=# SELECT count(*)\nFROM pg_buffercache\nWHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n count\n-------\n 1659\n(1 row)\n\npgbench=# select count(pg_buffercache_invalidate(bufferid)) from\npg_buffercache where relfilenode =\npg_relation_filenode('pgbench_accounts'::regclass);\n count\n-------\n 1659\n(1 row)\n\npgbench=# SELECT count(*)\nFROM pg_buffercache\nWHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n count\n-------\n 0\n(1 row)\n\n\nBest regards,\nPalak",
"msg_date": "Fri, 30 Jun 2023 16:16:50 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n<[email protected]> wrote:\n> pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('pgbench_accounts'::regclass);\n\nHi Palak,\n\nThanks for working on this! I think this will be very useful for\ntesting existing workloads but also for testing future work on\nprefetching with AIO (and DIO), work on putting SLRUs (or anything\nelse) into the buffer pool, nearby proposals for caching buffer\nmapping information, etc etc.\n\nPalak and I talked about this idea a bit last week (stimulated by a\nrecent thread[1], but the topic has certainly come up before), and we\ndiscussed some different ways one could specify which pages are\ndropped. For example, perhaps the pg_prewarm extension could have an\n'unwarm' option instead. I personally thought the buffer ID-based\napproach was quite good because it's extremely simple, while giving\nthe user the full power of SQL to say which buffers. Half a table?\nVisibility map? Everything? Root page of an index? I think that's\nprobably better than something that requires more code and\ncomplication but is less flexible in the end. It feels like the right\nlevel of rawness for something primarily of interest to hackers and\nadvanced users. I don't think it matters that there is a window\nbetween selecting a buffer ID and invalidating it, for the intended\nuse cases. That's my vote, anyway, let's see if others have other\nideas...\n\nWe also talked a bit about how one might control the kernel page cache\nin more fine-grained ways for testing purposes, but it seems like the\npgfincore project has that covered with its pgfadvise_willneed() and\npgfadvise_dontneed(). IMHO that project could use more page-oriented\noperations (instead of just counts and coarse grains operations) but\nthat's something that could be material for patches to send to the\nextension maintainers. This work, in contrast, is more tangled up\nwith bufmgr.c internals, so it feels like this feature belongs in a\ncore contrib module.\n\nSome initial thoughts on the patch:\n\nI wonder if we should include a simple exercise in\ncontrib/pg_buffercache/sql/pg_buffercache.sql. One problem is that\nit's not guaranteed to succeed in general. It doesn't wait for pins\nto go away, and it doesn't retry cleaning dirty buffers after one\nattempt, it just returns false, which I think is probably the right\napproach, but it makes the behaviour too non-deterministic for simple\ntests. Perhaps it's enough to include an exercise where we call it a\nfew times to hit a couple of cases, but not verify what effect it has.\n\nIt should be restricted by role, but I wonder which role it should be.\nTesting for superuser is now out of fashion.\n\nWhere the Makefile mentions 1.4--1.5.sql, the meson.build file needs\nto do the same. That's because PostgreSQL is currently in transition\nfrom autoconf/gmake to meson/ninja[2], so for now we have to maintain\nboth build systems. That's why it fails to build in some CI tasks[3].\nYou can enable CI in your own GitHub account if you want to run test\nbuilds on several operating systems, see [4] for info.\n\n[1] https://www.postgresql.org/message-id/flat/CAFSGpE3y_oMK1uHhcHxGxBxs%2BKrjMMdGrE%2B6HHOu0vttVET0UQ%40mail.gmail.com\n[2] https://wiki.postgresql.org/wiki/Meson\n[3] http://cfbot.cputube.org/palak-chaturvedi.html\n[4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/tools/ci/README;hb=HEAD\n\n\n",
"msg_date": "Sat, 1 Jul 2023 10:09:12 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
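A rough sketch of the kind of targeted eviction described above, assuming the pg_buffercache_invalidate(bufferid) function proposed in this thread; the pg_buffercache view columns and fork numbers (0 = main, 1 = free space map, 2 = visibility map) are standard, but the function itself exists only in the posted patch.

    -- evict only the visibility-map fork of a relation
    SELECT count(pg_buffercache_invalidate(bufferid))
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass)
      AND relforknumber = 2;          -- 2 = visibility map fork

    -- evict only the first half of the main fork ("half a table")
    SELECT count(pg_buffercache_invalidate(bufferid))
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass)
      AND relforknumber = 0           -- 0 = main fork
      AND relblocknumber < (SELECT relpages / 2 FROM pg_class
                            WHERE oid = 'pgbench_accounts'::regclass);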
{
"msg_contents": "Hi Thomas,\nThank you for your suggestions. I have added the sql in the meson\nbuild as well.\n\nOn Sat, 1 Jul 2023 at 03:39, Thomas Munro <[email protected]> wrote:\n>\n> On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n> <[email protected]> wrote:\n> > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> > pg_buffercache where relfilenode =\n> > pg_relation_filenode('pgbench_accounts'::regclass);\n>\n> Hi Palak,\n>\n> Thanks for working on this! I think this will be very useful for\n> testing existing workloads but also for testing future work on\n> prefetching with AIO (and DIO), work on putting SLRUs (or anything\n> else) into the buffer pool, nearby proposals for caching buffer\n> mapping information, etc etc.\n>\n> Palak and I talked about this idea a bit last week (stimulated by a\n> recent thread[1], but the topic has certainly come up before), and we\n> discussed some different ways one could specify which pages are\n> dropped. For example, perhaps the pg_prewarm extension could have an\n> 'unwarm' option instead. I personally thought the buffer ID-based\n> approach was quite good because it's extremely simple, while giving\n> the user the full power of SQL to say which buffers. Half a table?\n> Visibility map? Everything? Root page of an index? I think that's\n> probably better than something that requires more code and\n> complication but is less flexible in the end. It feels like the right\n> level of rawness for something primarily of interest to hackers and\n> advanced users. I don't think it matters that there is a window\n> between selecting a buffer ID and invalidating it, for the intended\n> use cases. That's my vote, anyway, let's see if others have other\n> ideas...\n>\n> We also talked a bit about how one might control the kernel page cache\n> in more fine-grained ways for testing purposes, but it seems like the\n> pgfincore project has that covered with its pgfadvise_willneed() and\n> pgfadvise_dontneed(). IMHO that project could use more page-oriented\n> operations (instead of just counts and coarse grains operations) but\n> that's something that could be material for patches to send to the\n> extension maintainers. This work, in contrast, is more tangled up\n> with bufmgr.c internals, so it feels like this feature belongs in a\n> core contrib module.\n>\n> Some initial thoughts on the patch:\n>\n> I wonder if we should include a simple exercise in\n> contrib/pg_buffercache/sql/pg_buffercache.sql. One problem is that\n> it's not guaranteed to succeed in general. It doesn't wait for pins\n> to go away, and it doesn't retry cleaning dirty buffers after one\n> attempt, it just returns false, which I think is probably the right\n> approach, but it makes the behaviour too non-deterministic for simple\n> tests. Perhaps it's enough to include an exercise where we call it a\n> few times to hit a couple of cases, but not verify what effect it has.\n>\n> It should be restricted by role, but I wonder which role it should be.\n> Testing for superuser is now out of fashion.\n>\n> Where the Makefile mentions 1.4--1.5.sql, the meson.build file needs\n> to do the same. That's because PostgreSQL is currently in transition\n> from autoconf/gmake to meson/ninja[2], so for now we have to maintain\n> both build systems. 
That's why it fails to build in some CI tasks[3].\n> You can enable CI in your own GitHub account if you want to run test\n> builds on several operating systems, see [4] for info.\n>\n> [1] https://www.postgresql.org/message-id/flat/CAFSGpE3y_oMK1uHhcHxGxBxs%2BKrjMMdGrE%2B6HHOu0vttVET0UQ%40mail.gmail.com\n> [2] https://wiki.postgresql.org/wiki/Meson\n> [3] http://cfbot.cputube.org/palak-chaturvedi.html\n> [4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/tools/ci/README;hb=HEAD",
"msg_date": "Mon, 3 Jul 2023 13:56:29 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 4:26 PM Palak Chaturvedi\n<[email protected]> wrote:\n>\n> Hi Thomas,\n> Thank you for your suggestions. I have added the sql in the meson\n> build as well.\n>\n> On Sat, 1 Jul 2023 at 03:39, Thomas Munro <[email protected]> wrote:\n> >\n> > On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n> > <[email protected]> wrote:\n> > > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> > > pg_buffercache where relfilenode =\n> > > pg_relation_filenode('pgbench_accounts'::regclass);\n> >\n> > Hi Palak,\n> >\n> > Thanks for working on this! I think this will be very useful for\n> > testing existing workloads but also for testing future work on\n> > prefetching with AIO (and DIO), work on putting SLRUs (or anything\n> > else) into the buffer pool, nearby proposals for caching buffer\n> > mapping information, etc etc.\n> >\n> > Palak and I talked about this idea a bit last week (stimulated by a\n> > recent thread[1], but the topic has certainly come up before), and we\n> > discussed some different ways one could specify which pages are\n> > dropped. For example, perhaps the pg_prewarm extension could have an\n> > 'unwarm' option instead. I personally thought the buffer ID-based\n> > approach was quite good because it's extremely simple, while giving\n> > the user the full power of SQL to say which buffers. Half a table?\n> > Visibility map? Everything? Root page of an index? I think that's\n> > probably better than something that requires more code and\n> > complication but is less flexible in the end. It feels like the right\n> > level of rawness for something primarily of interest to hackers and\n> > advanced users. I don't think it matters that there is a window\n> > between selecting a buffer ID and invalidating it, for the intended\n> > use cases. That's my vote, anyway, let's see if others have other\n> > ideas...\n> >\n> > We also talked a bit about how one might control the kernel page cache\n> > in more fine-grained ways for testing purposes, but it seems like the\n> > pgfincore project has that covered with its pgfadvise_willneed() and\n> > pgfadvise_dontneed(). IMHO that project could use more page-oriented\n> > operations (instead of just counts and coarse grains operations) but\n> > that's something that could be material for patches to send to the\n> > extension maintainers. This work, in contrast, is more tangled up\n> > with bufmgr.c internals, so it feels like this feature belongs in a\n> > core contrib module.\n> >\n> > Some initial thoughts on the patch:\n> >\n> > I wonder if we should include a simple exercise in\n> > contrib/pg_buffercache/sql/pg_buffercache.sql. One problem is that\n> > it's not guaranteed to succeed in general. It doesn't wait for pins\n> > to go away, and it doesn't retry cleaning dirty buffers after one\n> > attempt, it just returns false, which I think is probably the right\n> > approach, but it makes the behaviour too non-deterministic for simple\n> > tests. Perhaps it's enough to include an exercise where we call it a\n> > few times to hit a couple of cases, but not verify what effect it has.\n> >\n> > It should be restricted by role, but I wonder which role it should be.\n> > Testing for superuser is now out of fashion.\n> >\n> > Where the Makefile mentions 1.4--1.5.sql, the meson.build file needs\n> > to do the same. That's because PostgreSQL is currently in transition\n> > from autoconf/gmake to meson/ninja[2], so for now we have to maintain\n> > both build systems. 
That's why it fails to build in some CI tasks[3].\n> > You can enable CI in your own GitHub account if you want to run test\n> > builds on several operating systems, see [4] for info.\n> >\n> > [1] https://www.postgresql.org/message-id/flat/CAFSGpE3y_oMK1uHhcHxGxBxs%2BKrjMMdGrE%2B6HHOu0vttVET0UQ%40mail.gmail.com\n> > [2] https://wiki.postgresql.org/wiki/Meson\n> > [3] http://cfbot.cputube.org/palak-chaturvedi.html\n> > [4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/tools/ci/README;hb=HEAD\n\nnewbie question:\nquote from: https://www.interdb.jp/pg/pgsql08.html\n>\n> Pinned: When the corresponding buffer pool slot stores a page and any PostgreSQL processes are accessing the page (i.e. refcount and usage_count are greater than or equal to 1), the state of this buffer descriptor is pinned.\n> Unpinned: When the corresponding buffer pool slot stores a page but no PostgreSQL processes are accessing the page (i.e. usage_count is greater than or equal to 1, but refcount is 0), the state of this buffer descriptor is unpinned.\n\n\nSo do you need to check BUF_STATE_GET_REFCOUNT(buf_state) and\nBUF_STATE_GET_USAGECOUNT(state)?\n\n\n",
"msg_date": "Mon, 3 Jul 2023 23:46:26 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "hi,\nI don't think we need to check the usage count. Because we are\nclearing all the buffers that are not pinned.\nChecking the usage count is for buffer replacement since we are not\nreplacing it does not matter.\nOn Mon, 3 Jul 2023 at 21:16, jian he <[email protected]> wrote:\n>\n> On Mon, Jul 3, 2023 at 4:26 PM Palak Chaturvedi\n> <[email protected]> wrote:\n> >\n> > Hi Thomas,\n> > Thank you for your suggestions. I have added the sql in the meson\n> > build as well.\n> >\n> > On Sat, 1 Jul 2023 at 03:39, Thomas Munro <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n> > > <[email protected]> wrote:\n> > > > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> > > > pg_buffercache where relfilenode =\n> > > > pg_relation_filenode('pgbench_accounts'::regclass);\n> > >\n> > > Hi Palak,\n> > >\n> > > Thanks for working on this! I think this will be very useful for\n> > > testing existing workloads but also for testing future work on\n> > > prefetching with AIO (and DIO), work on putting SLRUs (or anything\n> > > else) into the buffer pool, nearby proposals for caching buffer\n> > > mapping information, etc etc.\n> > >\n> > > Palak and I talked about this idea a bit last week (stimulated by a\n> > > recent thread[1], but the topic has certainly come up before), and we\n> > > discussed some different ways one could specify which pages are\n> > > dropped. For example, perhaps the pg_prewarm extension could have an\n> > > 'unwarm' option instead. I personally thought the buffer ID-based\n> > > approach was quite good because it's extremely simple, while giving\n> > > the user the full power of SQL to say which buffers. Half a table?\n> > > Visibility map? Everything? Root page of an index? I think that's\n> > > probably better than something that requires more code and\n> > > complication but is less flexible in the end. It feels like the right\n> > > level of rawness for something primarily of interest to hackers and\n> > > advanced users. I don't think it matters that there is a window\n> > > between selecting a buffer ID and invalidating it, for the intended\n> > > use cases. That's my vote, anyway, let's see if others have other\n> > > ideas...\n> > >\n> > > We also talked a bit about how one might control the kernel page cache\n> > > in more fine-grained ways for testing purposes, but it seems like the\n> > > pgfincore project has that covered with its pgfadvise_willneed() and\n> > > pgfadvise_dontneed(). IMHO that project could use more page-oriented\n> > > operations (instead of just counts and coarse grains operations) but\n> > > that's something that could be material for patches to send to the\n> > > extension maintainers. This work, in contrast, is more tangled up\n> > > with bufmgr.c internals, so it feels like this feature belongs in a\n> > > core contrib module.\n> > >\n> > > Some initial thoughts on the patch:\n> > >\n> > > I wonder if we should include a simple exercise in\n> > > contrib/pg_buffercache/sql/pg_buffercache.sql. One problem is that\n> > > it's not guaranteed to succeed in general. It doesn't wait for pins\n> > > to go away, and it doesn't retry cleaning dirty buffers after one\n> > > attempt, it just returns false, which I think is probably the right\n> > > approach, but it makes the behaviour too non-deterministic for simple\n> > > tests. 
Perhaps it's enough to include an exercise where we call it a\n> > > few times to hit a couple of cases, but not verify what effect it has.\n> > >\n> > > It should be restricted by role, but I wonder which role it should be.\n> > > Testing for superuser is now out of fashion.\n> > >\n> > > Where the Makefile mentions 1.4--1.5.sql, the meson.build file needs\n> > > to do the same. That's because PostgreSQL is currently in transition\n> > > from autoconf/gmake to meson/ninja[2], so for now we have to maintain\n> > > both build systems. That's why it fails to build in some CI tasks[3].\n> > > You can enable CI in your own GitHub account if you want to run test\n> > > builds on several operating systems, see [4] for info.\n> > >\n> > > [1] https://www.postgresql.org/message-id/flat/CAFSGpE3y_oMK1uHhcHxGxBxs%2BKrjMMdGrE%2B6HHOu0vttVET0UQ%40mail.gmail.com\n> > > [2] https://wiki.postgresql.org/wiki/Meson\n> > > [3] http://cfbot.cputube.org/palak-chaturvedi.html\n> > > [4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/tools/ci/README;hb=HEAD\n>\n> newbie question:\n> quote from: https://www.interdb.jp/pg/pgsql08.html\n> >\n> > Pinned: When the corresponding buffer pool slot stores a page and any PostgreSQL processes are accessing the page (i.e. refcount and usage_count are greater than or equal to 1), the state of this buffer descriptor is pinned.\n> > Unpinned: When the corresponding buffer pool slot stores a page but no PostgreSQL processes are accessing the page (i.e. usage_count is greater than or equal to 1, but refcount is 0), the state of this buffer descriptor is unpinned.\n>\n>\n> So do you need to check BUF_STATE_GET_REFCOUNT(buf_state) and\n> BUF_STATE_GET_USAGECOUNT(state)?\n\n\n",
"msg_date": "Tue, 4 Jul 2023 11:38:04 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
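For experimenting with the behaviour described above, the pin and dirty state of each buffer is already visible in the pg_buffercache view, so the buffers an invalidation attempt is likely to skip can be inspected first; a small sketch, again assuming the patch's pg_buffercache_invalidate(bufferid):

    -- inspect dirtiness, usage counts and pin counts first
    SELECT bufferid, relblocknumber, isdirty, usagecount, pinning_backends
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass)
    ORDER BY relblocknumber
    LIMIT 10;

    -- then only attempt invalidation where no backend holds a pin
    SELECT count(pg_buffercache_invalidate(bufferid))
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass)
      AND pinning_backends = 0;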
{
"msg_contents": "\nOn Mon, 03 Jul 2023 at 16:26, Palak Chaturvedi <[email protected]> wrote:\n> Hi Thomas,\n> Thank you for your suggestions. I have added the sql in the meson\n> build as well.\n>\n> On Sat, 1 Jul 2023 at 03:39, Thomas Munro <[email protected]> wrote:\n>>\n>> On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n>> <[email protected]> wrote:\n>> > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n>> > pg_buffercache where relfilenode =\n>> > pg_relation_filenode('pgbench_accounts'::regclass);\n>>\n>> Hi Palak,\n>>\n>> Thanks for working on this! I think this will be very useful for\n>> testing existing workloads but also for testing future work on\n>> prefetching with AIO (and DIO), work on putting SLRUs (or anything\n>> else) into the buffer pool, nearby proposals for caching buffer\n>> mapping information, etc etc.\n>>\n>> Palak and I talked about this idea a bit last week (stimulated by a\n>> recent thread[1], but the topic has certainly come up before), and we\n>> discussed some different ways one could specify which pages are\n>> dropped. For example, perhaps the pg_prewarm extension could have an\n>> 'unwarm' option instead. I personally thought the buffer ID-based\n>> approach was quite good because it's extremely simple, while giving\n>> the user the full power of SQL to say which buffers. Half a table?\n>> Visibility map? Everything? Root page of an index? I think that's\n>> probably better than something that requires more code and\n>> complication but is less flexible in the end. It feels like the right\n>> level of rawness for something primarily of interest to hackers and\n>> advanced users. I don't think it matters that there is a window\n>> between selecting a buffer ID and invalidating it, for the intended\n>> use cases. That's my vote, anyway, let's see if others have other\n>> ideas...\n>>\n>> We also talked a bit about how one might control the kernel page cache\n>> in more fine-grained ways for testing purposes, but it seems like the\n>> pgfincore project has that covered with its pgfadvise_willneed() and\n>> pgfadvise_dontneed(). IMHO that project could use more page-oriented\n>> operations (instead of just counts and coarse grains operations) but\n>> that's something that could be material for patches to send to the\n>> extension maintainers. This work, in contrast, is more tangled up\n>> with bufmgr.c internals, so it feels like this feature belongs in a\n>> core contrib module.\n>>\n>> Some initial thoughts on the patch:\n>>\n>> I wonder if we should include a simple exercise in\n>> contrib/pg_buffercache/sql/pg_buffercache.sql. One problem is that\n>> it's not guaranteed to succeed in general. It doesn't wait for pins\n>> to go away, and it doesn't retry cleaning dirty buffers after one\n>> attempt, it just returns false, which I think is probably the right\n>> approach, but it makes the behaviour too non-deterministic for simple\n>> tests. Perhaps it's enough to include an exercise where we call it a\n>> few times to hit a couple of cases, but not verify what effect it has.\n>>\n>> It should be restricted by role, but I wonder which role it should be.\n>> Testing for superuser is now out of fashion.\n>>\n>> Where the Makefile mentions 1.4--1.5.sql, the meson.build file needs\n>> to do the same. That's because PostgreSQL is currently in transition\n>> from autoconf/gmake to meson/ninja[2], so for now we have to maintain\n>> both build systems. 
That's why it fails to build in some CI tasks[3].\n>> You can enable CI in your own GitHub account if you want to run test\n>> builds on several operating systems, see [4] for info.\n>>\n>> [1] https://www.postgresql.org/message-id/flat/CAFSGpE3y_oMK1uHhcHxGxBxs%2BKrjMMdGrE%2B6HHOu0vttVET0UQ%40mail.gmail.com\n>> [2] https://wiki.postgresql.org/wiki/Meson\n>> [3] http://cfbot.cputube.org/palak-chaturvedi.html\n>> [4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/tools/ci/README;hb=HEAD\n\nI think, zero is not a valid buffer identifier. See src/include/storage/buf.h.\n\n+\tbufnum = PG_GETARG_INT32(0);\n+\tif (bufnum < 0 || bufnum > NBuffers)\n+\t{\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t errmsg(\"buffernum is not valid\")));\n+\n+\t}\n\nIf we use SELECT pg_buffercache_invalidate(0), it will crash.\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Tue, 04 Jul 2023 16:50:33 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "the following will also crash. no idea why.\nbegin;\n select count(*) from onek;\n select relpages from pg_class where relname = 'onek'; --queryA\n\n SELECT count(*) FROM pg_buffercache WHERE relfilenode =\npg_relation_filenode('onek'::regclass); --queryB\n\n insert into onek values(default);\n\n select count(pg_buffercache_invalidate(bufferid)) from\n pg_buffercache where relfilenode =\npg_relation_filenode('onek'::regclass);\n\n---------------------------------\nqueryA returns 35, queryB returns 37.\n----------------------------------\ncrash info:\ntest_dev=*# insert into onek values(default);\nINSERT 0 1\ntest_dev=*# select count(pg_buffercache_invalidate(bufferid)) from\n pg_buffercache where relfilenode =\npg_relation_filenode('onek'::regclass);\nTRAP: failed Assert(\"resarr->nitems < resarr->maxitems\"), File:\n\"../../Desktop/pg_sources/main/postgres/src/backend/utils/resowner/resowner.c\",\nLine: 275, PID: 1533312\npostgres: jian test_dev [local]\nSELECT(ExceptionalCondition+0xa1)[0x55fc8f8d14e1]\npostgres: jian test_dev [local] SELECT(+0x9e7ab3)[0x55fc8f915ab3]\npostgres: jian test_dev [local]\nSELECT(ResourceOwnerRememberBuffer+0x1d)[0x55fc8f91696d]\npostgres: jian test_dev [local] SELECT(+0x78ab17)[0x55fc8f6b8b17]\npostgres: jian test_dev [local]\nSELECT(TryInvalidateBuffer+0x6d)[0x55fc8f6c507d]\n/home/jian/postgres/pg16_test/lib/pg_buffercache.so(pg_buffercache_invalidate+0x3d)[0x7f2361837abd]\npostgres: jian test_dev [local] SELECT(+0x57eebc)[0x55fc8f4acebc]\npostgres: jian test_dev [local]\nSELECT(ExecInterpExprStillValid+0x3c)[0x55fc8f4a6e2c]\npostgres: jian test_dev [local] SELECT(+0x5a0f16)[0x55fc8f4cef16]\npostgres: jian test_dev [local] SELECT(+0x5a3588)[0x55fc8f4d1588]\npostgres: jian test_dev [local] SELECT(+0x58f747)[0x55fc8f4bd747]\npostgres: jian test_dev [local]\nSELECT(standard_ExecutorRun+0x1f0)[0x55fc8f4b29f0]\npostgres: jian test_dev [local] SELECT(ExecutorRun+0x46)[0x55fc8f4b2d16]\npostgres: jian test_dev [local] SELECT(+0x7eb3b0)[0x55fc8f7193b0]\npostgres: jian test_dev [local] SELECT(PortalRun+0x1eb)[0x55fc8f71b7ab]\npostgres: jian test_dev [local] SELECT(+0x7e8cf4)[0x55fc8f716cf4]\npostgres: jian test_dev [local] SELECT(PostgresMain+0x134f)[0x55fc8f71869f]\npostgres: jian test_dev [local] SELECT(+0x70f80c)[0x55fc8f63d80c]\npostgres: jian test_dev [local]\nSELECT(PostmasterMain+0x1758)[0x55fc8f63f278]\npostgres: jian test_dev [local] SELECT(main+0x27e)[0x55fc8f27067e]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f2361629d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f2361629e40]\npostgres: jian test_dev [local] SELECT(_start+0x25)[0x55fc8f272bb5]\n2023-07-04 16:56:13.088 CST [1532822] LOG: server process (PID 1533312)\nwas terminated by signal 6: Aborted\n2023-07-04 16:56:13.088 CST [1532822] DETAIL: Failed process was running:\nselect count(pg_buffercache_invalidate(bufferid)) from\n pg_buffercache where relfilenode =\npg_relation_filenode('onek'::regclass);\n2023-07-04 16:56:13.088 CST [1532822] LOG: terminating any other active\nserver processes\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: 2023-07-04\n16:56:13.091 CST [1533381] FATAL: the database system is in recovery mode\nFailed.\nThe connection to the server was lost. Attempting reset: Failed.\n\nthe following will also crash. 
no idea why.",
"msg_date": "Tue, 4 Jul 2023 17:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "\nOn Tue, 04 Jul 2023 at 17:00, jian he <[email protected]> wrote:\n> the following will also crash. no idea why.\n> begin;\n> select count(*) from onek;\n> select relpages from pg_class where relname = 'onek'; --queryA\n>\n> SELECT count(*) FROM pg_buffercache WHERE relfilenode =\n> pg_relation_filenode('onek'::regclass); --queryB\n>\n> insert into onek values(default);\n>\n> select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('onek'::regclass);\n>\n> ---------------------------------\n> queryA returns 35, queryB returns 37.\n> ----------------------------------\n> crash info:\n> test_dev=*# insert into onek values(default);\n> INSERT 0 1\n> test_dev=*# select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('onek'::regclass);\n> TRAP: failed Assert(\"resarr->nitems < resarr->maxitems\"), File:\n> \"../../Desktop/pg_sources/main/postgres/src/backend/utils/resowner/resowner.c\",\n> Line: 275, PID: 1533312\n\nAccording to the comments of ResourceArrayAdd(), the caller must have previously\ndone ResourceArrayEnlarge(). I tried to call ResourceOwnerEnlargeBuffers() before\nPinBuffer_Locked(), so it can avoid this crash.\n\n\t\tif ((buf_state & BM_DIRTY) == BM_DIRTY)\n\t\t{\n+\t\t\t/* make sure we can handle the pin */\n+\t\t\tResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n+\n\t\t\t/*\n\t\t\t * Try once to flush the dirty buffer.\n\t\t\t */\n\t\t\tPinBuffer_Locked(bufHdr);\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Tue, 04 Jul 2023 17:45:47 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 5:45 PM Japin Li <[email protected]> wrote:\n\n>\n> On Tue, 04 Jul 2023 at 17:00, jian he <[email protected]> wrote:\n> > the following will also crash. no idea why.\n> > begin;\n> > select count(*) from onek;\n> > select relpages from pg_class where relname = 'onek'; --queryA\n> >\n> > SELECT count(*) FROM pg_buffercache WHERE relfilenode =\n> > pg_relation_filenode('onek'::regclass); --queryB\n> >\n> > insert into onek values(default);\n> >\n> > select count(pg_buffercache_invalidate(bufferid)) from\n> > pg_buffercache where relfilenode =\n> > pg_relation_filenode('onek'::regclass);\n> >\n> > ---------------------------------\n> > queryA returns 35, queryB returns 37.\n> > ----------------------------------\n> > crash info:\n> > test_dev=*# insert into onek values(default);\n> > INSERT 0 1\n> > test_dev=*# select count(pg_buffercache_invalidate(bufferid)) from\n> > pg_buffercache where relfilenode =\n> > pg_relation_filenode('onek'::regclass);\n> > TRAP: failed Assert(\"resarr->nitems < resarr->maxitems\"), File:\n> >\n> \"../../Desktop/pg_sources/main/postgres/src/backend/utils/resowner/resowner.c\",\n> > Line: 275, PID: 1533312\n>\n> According to the comments of ResourceArrayAdd(), the caller must have\n> previously\n> done ResourceArrayEnlarge(). I tried to call ResourceOwnerEnlargeBuffers()\n> before\n> PinBuffer_Locked(), so it can avoid this crash.\n>\n> if ((buf_state & BM_DIRTY) == BM_DIRTY)\n> {\n> + /* make sure we can handle the pin */\n> + ResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> +\n> /*\n> * Try once to flush the dirty buffer.\n> */\n> PinBuffer_Locked(bufHdr);\n>\n> --\n> Regrads,\n> Japin Li.\n>\n\n\nthanks. tested flush pg_catalog, public schema, now, both works as pitched.\n\nOn Tue, Jul 4, 2023 at 5:45 PM Japin Li <[email protected]> wrote:\nOn Tue, 04 Jul 2023 at 17:00, jian he <[email protected]> wrote:\n> the following will also crash. no idea why.\n> begin;\n> select count(*) from onek;\n> select relpages from pg_class where relname = 'onek'; --queryA\n>\n> SELECT count(*) FROM pg_buffercache WHERE relfilenode =\n> pg_relation_filenode('onek'::regclass); --queryB\n>\n> insert into onek values(default);\n>\n> select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('onek'::regclass);\n>\n> ---------------------------------\n> queryA returns 35, queryB returns 37.\n> ----------------------------------\n> crash info:\n> test_dev=*# insert into onek values(default);\n> INSERT 0 1\n> test_dev=*# select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('onek'::regclass);\n> TRAP: failed Assert(\"resarr->nitems < resarr->maxitems\"), File:\n> \"../../Desktop/pg_sources/main/postgres/src/backend/utils/resowner/resowner.c\",\n> Line: 275, PID: 1533312\n\nAccording to the comments of ResourceArrayAdd(), the caller must have previously\ndone ResourceArrayEnlarge(). I tried to call ResourceOwnerEnlargeBuffers() before\nPinBuffer_Locked(), so it can avoid this crash.\n\n if ((buf_state & BM_DIRTY) == BM_DIRTY)\n {\n+ /* make sure we can handle the pin */\n+ ResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n+\n /*\n * Try once to flush the dirty buffer.\n */\n PinBuffer_Locked(bufHdr);\n\n-- \nRegrads,\nJapin Li.\nthanks. tested flush pg_catalog, public schema, now, both works as pitched.",
"msg_date": "Tue, 4 Jul 2023 19:53:39 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Sat, Jul 1, 2023 at 6:09 AM Thomas Munro <[email protected]> wrote:\n>\n>\n> It should be restricted by role, but I wonder which role it should be.\n> Testing for superuser is now out of fashion.\n>\n\nas pg_buffercache/pg_buffercache--1.2--1.3.sql. You need pg_maintain\nprivilege to use pg_buffercache.\nThe following query works on a single user. Obviously you need a role who\ncan gain pg_monitor privilege.\n\nbegin;\ncreate role test login nosuperuser;\ngrant select, insert on onek to test;\ngrant pg_monitor to test;\nset role test;\nselect count(*) from onek;\ninsert into onek values(default);\n(SELECT count(*) FROM pg_buffercache WHERE relfilenode =\npg_relation_filenode('onek'::regclass))\nexcept\n(\nselect count(pg_buffercache_invalidate(bufferid))\nfrom pg_buffercache where relfilenode =\npg_relation_filenode('onek'::regclass)\n);\n\nrollback;\n\nOn Sat, Jul 1, 2023 at 6:09 AM Thomas Munro <[email protected]> wrote:>>> It should be restricted by role, but I wonder which role it should be.> Testing for superuser is now out of fashion.>as pg_buffercache/pg_buffercache--1.2--1.3.sql. You need pg_maintain privilege to use pg_buffercache.The following query works on a single user. Obviously you need a role who can gain pg_monitor privilege.begin;create role test login nosuperuser;grant select, insert on onek to test;grant pg_monitor to test;set role test;select count(*) from onek;insert into onek values(default);(SELECT count(*) FROM pg_buffercache WHERE relfilenode = pg_relation_filenode('onek'::regclass))except(select count(pg_buffercache_invalidate(bufferid)) from pg_buffercache where relfilenode = pg_relation_filenode('onek'::regclass));rollback;",
"msg_date": "Wed, 5 Jul 2023 09:14:59 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "+1 for the idea. It's going to be more useful to test and understand\nthe buffer management of PostgreSQL and it can be used to explicitly\nfree up the buffers if there are any such requirements.\n\nI had a quick look over the patch. Following are the comments.\n\nFirst, The TryInvalidateBuffer() tries to flush the buffer if it is\ndirty and then tries to invalidate it if it meets the requirement.\nInstead of directly doing this can we provide an option to the caller\nto mention whether to invalidate the dirty buffers or not. For\nexample, TryInvalidateBuffer(Buffer bufnum, bool force), if the force\nis set to FALSE, then ignore invalidating dirty buffers. Otherwise,\nflush the dirty buffer and try to invalidate.\n\nSecond, In TryInvalidateBuffer(), it first checks if the reference\ncount is greater than zero and then checks for dirty buffers. Will\nthere be a scenario where the buffer is dirty and its reference count\nis zero? Can you please provide more information on this or adjust the\ncode accordingly.\n\n> +/*\n> +Try Invalidating a buffer using bufnum.\n> +If the buffer is invalid, the function returns false.\n> +The function checks for dirty buffer and flushes the dirty buffer before invalidating.\n> +If the buffer is still dirty it returns false.\n> +*/\n> +bool\n\nThe star(*) and space are missing here. Please refer to the style of\nfunction comments and change accordingly.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Jun 30, 2023 at 4:17 PM Palak Chaturvedi\n<[email protected]> wrote:\n>\n> I hope this email finds you well. I am excited to share that I have\n> extended the functionality of the `pg_buffercache` extension by\n> implementing buffer invalidation capability, as requested by some\n> PostgreSQL contributors for improved testing scenarios.\n>\n> This marks my first time submitting a patch to pgsql-hackers, and I am\n> eager to receive your expert feedback on the changes made. Your\n> insights are invaluable, and any review or comments you provide will\n> be greatly appreciated.\n>\n> The primary objective of this enhancement is to enable explicit buffer\n> invalidation within the `pg_buffercache` extension. By doing so, we\n> can simulate scenarios where buffers are invalidated and observe the\n> resulting behavior in PostgreSQL.\n>\n> As part of this patch, a new function or mechanism has been introduced\n> to facilitate buffer invalidation. I would like to hear your thoughts\n> on whether this approach provides a good user interface for this\n> functionality. Additionally, I seek your evaluation of the buffer\n> locking protocol employed in the extension to ensure its correctness\n> and efficiency.\n>\n> Please note that I plan to add comprehensive documentation once the\n> details of this enhancement are agreed upon. This documentation will\n> serve as a valuable resource for users and contributors alike. I\n> believe that your expertise will help uncover any potential issues and\n> opportunities for further improvement.\n>\n> I have attached the patch file to this email for your convenience.\n> Your valuable time and consideration in reviewing this extension are\n> sincerely appreciated.\n>\n> Thank you for your continued support and guidance. I am looking\n> forward to your feedback and collaboration in enhancing the PostgreSQL\n> ecosystem.\n>\n> The working of the extension:\n>\n> 1. 
Creating the extension pg_buffercache and then call select query on\n> a table and note the buffer to be cleared.\n> pgbench=# create extension pg_buffercache;\n> CREATE EXTENSION\n> pgbench=# select count(*) from pgbench_accounts;\n> count\n> --------\n> 100000\n> (1 row)\n>\n> pgbench=# SELECT *\n> FROM pg_buffercache\n> WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> bufferid | relfilenode | reltablespace | reldatabase | relforknumber\n> | relblocknumber | isdirty | usagecount | pinning_backends\n> ----------+-------------+---------------+-------------+---------------+----------------+---------+------------+------------------\n> 233 | 16397 | 1663 | 16384 | 0\n> | 0 | f | 1 | 0\n> 234 | 16397 | 1663 | 16384 | 0\n> | 1 | f | 1 | 0\n> 235 | 16397 | 1663 | 16384 | 0\n> | 2 | f | 1 | 0\n> 236 | 16397 | 1663 | 16384 | 0\n> | 3 | f | 1 | 0\n> 237 | 16397 | 1663 | 16384 | 0\n> | 4 | f | 1 | 0\n>\n>\n> 2. Clearing a single buffer by entering the bufferid.\n> pgbench=# SELECT count(*)\n> FROM pg_buffercache\n> WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> count\n> -------\n> 1660\n> (1 row)\n>\n> pgbench=# select pg_buffercache_invalidate(233);\n> pg_buffercache_invalidate\n> ---------------------------\n> t\n> (1 row)\n>\n> pgbench=# SELECT count(*)\n> FROM pg_buffercache\n> WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> count\n> -------\n> 1659\n> (1 row)\n>\n> 3. Clearing the entire buffer for a relation using the function.\n> pgbench=# SELECT count(*)\n> FROM pg_buffercache\n> WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> count\n> -------\n> 1659\n> (1 row)\n>\n> pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> pg_buffercache where relfilenode =\n> pg_relation_filenode('pgbench_accounts'::regclass);\n> count\n> -------\n> 1659\n> (1 row)\n>\n> pgbench=# SELECT count(*)\n> FROM pg_buffercache\n> WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> count\n> -------\n> 0\n> (1 row)\n>\n>\n> Best regards,\n> Palak\n\n\n",
"msg_date": "Wed, 5 Jul 2023 17:53:04 +0530",
"msg_from": "Nitin Jadhav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
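If the force flag suggested above were also exposed at the SQL level, calls might look roughly like the following; this two-argument form is purely hypothetical here and is not what the posted patch implements.

    -- hypothetical: leave dirty buffers alone
    SELECT count(pg_buffercache_invalidate(bufferid, false))
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);

    -- hypothetical: flush dirty buffers first, then invalidate
    SELECT count(pg_buffercache_invalidate(bufferid, true))
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);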
{
"msg_contents": "Hey Nitin,\n>Will\n>there be a scenario where the buffer is dirty and its reference count\n>is zero?\nThere might be a buffer that has been dirtied but is not pinned or\nbeing used currently by a process. So checking the refcount and then\ndirty buffers helps.\n>First, The TryInvalidateBuffer() tries to flush the buffer if it is\ndirty and then tries to invalidate it if it meets the requirement.\nInstead of directly doing this can we provide an option to the caller\nto mention whether to invalidate the dirty buffers or not.\nYes that can be implemented with a default value of force. Will\nimplement it in the next patch.\n\nOn Wed, 5 Jul 2023 at 17:53, Nitin Jadhav <[email protected]> wrote:\n>\n> +1 for the idea. It's going to be more useful to test and understand\n> the buffer management of PostgreSQL and it can be used to explicitly\n> free up the buffers if there are any such requirements.\n>\n> I had a quick look over the patch. Following are the comments.\n>\n> First, The TryInvalidateBuffer() tries to flush the buffer if it is\n> dirty and then tries to invalidate it if it meets the requirement.\n> Instead of directly doing this can we provide an option to the caller\n> to mention whether to invalidate the dirty buffers or not. For\n> example, TryInvalidateBuffer(Buffer bufnum, bool force), if the force\n> is set to FALSE, then ignore invalidating dirty buffers. Otherwise,\n> flush the dirty buffer and try to invalidate.\n>\n> Second, In TryInvalidateBuffer(), it first checks if the reference\n> count is greater than zero and then checks for dirty buffers. Will\n> there be a scenario where the buffer is dirty and its reference count\n> is zero? Can you please provide more information on this or adjust the\n> code accordingly.\n>\n> > +/*\n> > +Try Invalidating a buffer using bufnum.\n> > +If the buffer is invalid, the function returns false.\n> > +The function checks for dirty buffer and flushes the dirty buffer before invalidating.\n> > +If the buffer is still dirty it returns false.\n> > +*/\n> > +bool\n>\n> The star(*) and space are missing here. Please refer to the style of\n> function comments and change accordingly.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Fri, Jun 30, 2023 at 4:17 PM Palak Chaturvedi\n> <[email protected]> wrote:\n> >\n> > I hope this email finds you well. I am excited to share that I have\n> > extended the functionality of the `pg_buffercache` extension by\n> > implementing buffer invalidation capability, as requested by some\n> > PostgreSQL contributors for improved testing scenarios.\n> >\n> > This marks my first time submitting a patch to pgsql-hackers, and I am\n> > eager to receive your expert feedback on the changes made. Your\n> > insights are invaluable, and any review or comments you provide will\n> > be greatly appreciated.\n> >\n> > The primary objective of this enhancement is to enable explicit buffer\n> > invalidation within the `pg_buffercache` extension. By doing so, we\n> > can simulate scenarios where buffers are invalidated and observe the\n> > resulting behavior in PostgreSQL.\n> >\n> > As part of this patch, a new function or mechanism has been introduced\n> > to facilitate buffer invalidation. I would like to hear your thoughts\n> > on whether this approach provides a good user interface for this\n> > functionality. 
Additionally, I seek your evaluation of the buffer\n> > locking protocol employed in the extension to ensure its correctness\n> > and efficiency.\n> >\n> > Please note that I plan to add comprehensive documentation once the\n> > details of this enhancement are agreed upon. This documentation will\n> > serve as a valuable resource for users and contributors alike. I\n> > believe that your expertise will help uncover any potential issues and\n> > opportunities for further improvement.\n> >\n> > I have attached the patch file to this email for your convenience.\n> > Your valuable time and consideration in reviewing this extension are\n> > sincerely appreciated.\n> >\n> > Thank you for your continued support and guidance. I am looking\n> > forward to your feedback and collaboration in enhancing the PostgreSQL\n> > ecosystem.\n> >\n> > The working of the extension:\n> >\n> > 1. Creating the extension pg_buffercache and then call select query on\n> > a table and note the buffer to be cleared.\n> > pgbench=# create extension pg_buffercache;\n> > CREATE EXTENSION\n> > pgbench=# select count(*) from pgbench_accounts;\n> > count\n> > --------\n> > 100000\n> > (1 row)\n> >\n> > pgbench=# SELECT *\n> > FROM pg_buffercache\n> > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > bufferid | relfilenode | reltablespace | reldatabase | relforknumber\n> > | relblocknumber | isdirty | usagecount | pinning_backends\n> > ----------+-------------+---------------+-------------+---------------+----------------+---------+------------+------------------\n> > 233 | 16397 | 1663 | 16384 | 0\n> > | 0 | f | 1 | 0\n> > 234 | 16397 | 1663 | 16384 | 0\n> > | 1 | f | 1 | 0\n> > 235 | 16397 | 1663 | 16384 | 0\n> > | 2 | f | 1 | 0\n> > 236 | 16397 | 1663 | 16384 | 0\n> > | 3 | f | 1 | 0\n> > 237 | 16397 | 1663 | 16384 | 0\n> > | 4 | f | 1 | 0\n> >\n> >\n> > 2. Clearing a single buffer by entering the bufferid.\n> > pgbench=# SELECT count(*)\n> > FROM pg_buffercache\n> > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > count\n> > -------\n> > 1660\n> > (1 row)\n> >\n> > pgbench=# select pg_buffercache_invalidate(233);\n> > pg_buffercache_invalidate\n> > ---------------------------\n> > t\n> > (1 row)\n> >\n> > pgbench=# SELECT count(*)\n> > FROM pg_buffercache\n> > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > count\n> > -------\n> > 1659\n> > (1 row)\n> >\n> > 3. Clearing the entire buffer for a relation using the function.\n> > pgbench=# SELECT count(*)\n> > FROM pg_buffercache\n> > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > count\n> > -------\n> > 1659\n> > (1 row)\n> >\n> > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> > pg_buffercache where relfilenode =\n> > pg_relation_filenode('pgbench_accounts'::regclass);\n> > count\n> > -------\n> > 1659\n> > (1 row)\n> >\n> > pgbench=# SELECT count(*)\n> > FROM pg_buffercache\n> > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > count\n> > -------\n> > 0\n> > (1 row)\n> >\n> >\n> > Best regards,\n> > Palak\n\n\n",
"msg_date": "Tue, 11 Jul 2023 18:08:56 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Can you please review the new patch of the extension with implemented\nforce variable.\n\nOn Tue, 11 Jul 2023 at 18:08, Palak Chaturvedi\n<[email protected]> wrote:\n>\n> Hey Nitin,\n> >Will\n> >there be a scenario where the buffer is dirty and its reference count\n> >is zero?\n> There might be a buffer that has been dirtied but is not pinned or\n> being used currently by a process. So checking the refcount and then\n> dirty buffers helps.\n> >First, The TryInvalidateBuffer() tries to flush the buffer if it is\n> dirty and then tries to invalidate it if it meets the requirement.\n> Instead of directly doing this can we provide an option to the caller\n> to mention whether to invalidate the dirty buffers or not.\n> Yes that can be implemented with a default value of force. Will\n> implement it in the next patch.\n>\n> On Wed, 5 Jul 2023 at 17:53, Nitin Jadhav <[email protected]> wrote:\n> >\n> > +1 for the idea. It's going to be more useful to test and understand\n> > the buffer management of PostgreSQL and it can be used to explicitly\n> > free up the buffers if there are any such requirements.\n> >\n> > I had a quick look over the patch. Following are the comments.\n> >\n> > First, The TryInvalidateBuffer() tries to flush the buffer if it is\n> > dirty and then tries to invalidate it if it meets the requirement.\n> > Instead of directly doing this can we provide an option to the caller\n> > to mention whether to invalidate the dirty buffers or not. For\n> > example, TryInvalidateBuffer(Buffer bufnum, bool force), if the force\n> > is set to FALSE, then ignore invalidating dirty buffers. Otherwise,\n> > flush the dirty buffer and try to invalidate.\n> >\n> > Second, In TryInvalidateBuffer(), it first checks if the reference\n> > count is greater than zero and then checks for dirty buffers. Will\n> > there be a scenario where the buffer is dirty and its reference count\n> > is zero? Can you please provide more information on this or adjust the\n> > code accordingly.\n> >\n> > > +/*\n> > > +Try Invalidating a buffer using bufnum.\n> > > +If the buffer is invalid, the function returns false.\n> > > +The function checks for dirty buffer and flushes the dirty buffer before invalidating.\n> > > +If the buffer is still dirty it returns false.\n> > > +*/\n> > > +bool\n> >\n> > The star(*) and space are missing here. Please refer to the style of\n> > function comments and change accordingly.\n> >\n> > Thanks & Regards,\n> > Nitin Jadhav\n> >\n> > On Fri, Jun 30, 2023 at 4:17 PM Palak Chaturvedi\n> > <[email protected]> wrote:\n> > >\n> > > I hope this email finds you well. I am excited to share that I have\n> > > extended the functionality of the `pg_buffercache` extension by\n> > > implementing buffer invalidation capability, as requested by some\n> > > PostgreSQL contributors for improved testing scenarios.\n> > >\n> > > This marks my first time submitting a patch to pgsql-hackers, and I am\n> > > eager to receive your expert feedback on the changes made. Your\n> > > insights are invaluable, and any review or comments you provide will\n> > > be greatly appreciated.\n> > >\n> > > The primary objective of this enhancement is to enable explicit buffer\n> > > invalidation within the `pg_buffercache` extension. By doing so, we\n> > > can simulate scenarios where buffers are invalidated and observe the\n> > > resulting behavior in PostgreSQL.\n> > >\n> > > As part of this patch, a new function or mechanism has been introduced\n> > > to facilitate buffer invalidation. 
I would like to hear your thoughts\n> > > on whether this approach provides a good user interface for this\n> > > functionality. Additionally, I seek your evaluation of the buffer\n> > > locking protocol employed in the extension to ensure its correctness\n> > > and efficiency.\n> > >\n> > > Please note that I plan to add comprehensive documentation once the\n> > > details of this enhancement are agreed upon. This documentation will\n> > > serve as a valuable resource for users and contributors alike. I\n> > > believe that your expertise will help uncover any potential issues and\n> > > opportunities for further improvement.\n> > >\n> > > I have attached the patch file to this email for your convenience.\n> > > Your valuable time and consideration in reviewing this extension are\n> > > sincerely appreciated.\n> > >\n> > > Thank you for your continued support and guidance. I am looking\n> > > forward to your feedback and collaboration in enhancing the PostgreSQL\n> > > ecosystem.\n> > >\n> > > The working of the extension:\n> > >\n> > > 1. Creating the extension pg_buffercache and then call select query on\n> > > a table and note the buffer to be cleared.\n> > > pgbench=# create extension pg_buffercache;\n> > > CREATE EXTENSION\n> > > pgbench=# select count(*) from pgbench_accounts;\n> > > count\n> > > --------\n> > > 100000\n> > > (1 row)\n> > >\n> > > pgbench=# SELECT *\n> > > FROM pg_buffercache\n> > > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > > bufferid | relfilenode | reltablespace | reldatabase | relforknumber\n> > > | relblocknumber | isdirty | usagecount | pinning_backends\n> > > ----------+-------------+---------------+-------------+---------------+----------------+---------+------------+------------------\n> > > 233 | 16397 | 1663 | 16384 | 0\n> > > | 0 | f | 1 | 0\n> > > 234 | 16397 | 1663 | 16384 | 0\n> > > | 1 | f | 1 | 0\n> > > 235 | 16397 | 1663 | 16384 | 0\n> > > | 2 | f | 1 | 0\n> > > 236 | 16397 | 1663 | 16384 | 0\n> > > | 3 | f | 1 | 0\n> > > 237 | 16397 | 1663 | 16384 | 0\n> > > | 4 | f | 1 | 0\n> > >\n> > >\n> > > 2. Clearing a single buffer by entering the bufferid.\n> > > pgbench=# SELECT count(*)\n> > > FROM pg_buffercache\n> > > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > > count\n> > > -------\n> > > 1660\n> > > (1 row)\n> > >\n> > > pgbench=# select pg_buffercache_invalidate(233);\n> > > pg_buffercache_invalidate\n> > > ---------------------------\n> > > t\n> > > (1 row)\n> > >\n> > > pgbench=# SELECT count(*)\n> > > FROM pg_buffercache\n> > > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > > count\n> > > -------\n> > > 1659\n> > > (1 row)\n> > >\n> > > 3. Clearing the entire buffer for a relation using the function.\n> > > pgbench=# SELECT count(*)\n> > > FROM pg_buffercache\n> > > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > > count\n> > > -------\n> > > 1659\n> > > (1 row)\n> > >\n> > > pgbench=# select count(pg_buffercache_invalidate(bufferid)) from\n> > > pg_buffercache where relfilenode =\n> > > pg_relation_filenode('pgbench_accounts'::regclass);\n> > > count\n> > > -------\n> > > 1659\n> > > (1 row)\n> > >\n> > > pgbench=# SELECT count(*)\n> > > FROM pg_buffercache\n> > > WHERE relfilenode = pg_relation_filenode('pgbench_accounts'::regclass);\n> > > count\n> > > -------\n> > > 0\n> > > (1 row)\n> > >\n> > >\n> > > Best regards,\n> > > Palak",
"msg_date": "Tue, 11 Jul 2023 18:39:36 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Hi,\n\nI wanted this feature a couple times before...\n\nOn 2023-07-03 13:56:29 +0530, Palak Chaturvedi wrote:\n> +PG_FUNCTION_INFO_V1(pg_buffercache_invalidate);\n> +Datum\n> +pg_buffercache_invalidate(PG_FUNCTION_ARGS)\n\n\nI don't think \"invalidating\" is the right terminology. Note that we already\nhave InvalidateBuffer() - but it's something we can't allow users to do, as it\nthrows away dirty buffer contents (it's used for things like dropping a\ntable).\n\nHow about using \"discarding\" for this functionality?\n\n\n\nUsing the buffer ID as the identifier doesn't seem great, because what that\nbuffer is used for, could have changed since the buffer ID has been acquired\n(via the pg_buffercache view presumably)?\n\nMy suspicion is that the usual usecase for this would be to drop all buffers\nthat can be dropped?\n\n\n> +\tif (bufnum < 0 || bufnum > NBuffers)\n> +\t{\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"buffernum is not valid\")));\n> +\n> +\t}\n> +\n> +\tresult = TryInvalidateBuffer(bufnum);\n> +\tPG_RETURN_BOOL(result);\n> +}\n\nI think this should be restricted to superuser by default (by revoking\npermissions from PUBLIC). We allow normal users to use pg_prewarm(...), true -\nbut we perform an ACL check on the relation, so it can only be used for\nrelations you have access too. This function could be used to affect\nperformance of other users quite substantially.\n\n\n\n\n> +/*\n> +Try Invalidating a buffer using bufnum.\n> +If the buffer is invalid, the function returns false.\n> +The function checks for dirty buffer and flushes the dirty buffer before invalidating.\n> +If the buffer is still dirty it returns false.\n> +*/\n> +bool\n> +TryInvalidateBuffer(Buffer bufnum)\n> +{\n> +\tBufferDesc *bufHdr = GetBufferDescriptor(bufnum - 1);\n> +\tuint32\t\tbuf_state;\n> +\n> +\tReservePrivateRefCountEntry();\n> +\n> +\tbuf_state = LockBufHdr(bufHdr);\n> +\tif ((buf_state & BM_VALID) == BM_VALID)\n> +\t{\n> +\t\t/*\n> +\t\t * The buffer is pinned therefore cannot invalidate.\n> +\t\t */\n> +\t\tif (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n> +\t\t{\n> +\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> +\t\t\treturn false;\n> +\t\t}\n> +\t\tif ((buf_state & BM_DIRTY) == BM_DIRTY)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Try once to flush the dirty buffer.\n> +\t\t\t */\n> +\t\t\tPinBuffer_Locked(bufHdr);\n> +\t\t\tLWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED);\n> +\t\t\tFlushBuffer(bufHdr, NULL, IOOBJECT_RELATION, IOCONTEXT_NORMAL);\n> +\t\t\tLWLockRelease(BufferDescriptorGetContentLock(bufHdr));\n> +\t\t\tUnpinBuffer(bufHdr);\n> +\t\t\tbuf_state = LockBufHdr(bufHdr);\n> +\t\t\tif (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n> +\t\t\t{\n> +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> +\t\t\t\treturn false;\n> +\t\t\t}\n> +\n> +\t\t\t/*\n> +\t\t\t * If its dirty again or not valid anymore give up.\n> +\t\t\t */\n> +\n> +\t\t\tif ((buf_state & (BM_DIRTY | BM_VALID)) != (BM_VALID))\n> +\t\t\t{\n> +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> +\t\t\t\treturn false;\n> +\t\t\t}\n> +\n> +\t\t}\n> +\n> +\t\tInvalidateBuffer(bufHdr);\n\nI'm wary of using InvalidateBuffer() here, it's typically used for different\npurposes, including throwing valid contents away. That seems a bit scary.\n\nI think you should be able to just use InvalidateVictimBuffer()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jul 2023 17:45:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:45 PM Andres Freund <[email protected]> wrote:\n> I don't think \"invalidating\" is the right terminology. Note that we already\n> have InvalidateBuffer() - but it's something we can't allow users to do, as it\n> throws away dirty buffer contents (it's used for things like dropping a\n> table).\n>\n> How about using \"discarding\" for this functionality?\n\n+1\n\n> Using the buffer ID as the identifier doesn't seem great, because what that\n> buffer is used for, could have changed since the buffer ID has been acquired\n> (via the pg_buffercache view presumably)?\n>\n> My suspicion is that the usual usecase for this would be to drop all buffers\n> that can be dropped?\n\nWell the idea was to be able to drop less than everything. Instead of\nhaving to bike-shed what the user interface should look like to\nspecify what subset of everything you want to drop, you can just write\nSQL queries (mostly likely involving the pg_buffercache view, indeed).\nIt's true that buffer IDs can change underneath your feet between\nSELECT and discard, but the whole concept is inherently racy like\nthat. Suppose we instead had pg_unwarm('my_table') or whatever\ninstead. Immediately after it runs and before it even returns, some\nblocks of my_table can finish up coming back into the pool. It's also\ninteresting to be able to kick individual pages out when testing code\nthat caches buffers IDs for ReadRecentBuffer(), and other buffer-pool\nwork. Hence desire to not try to be clever at all here, and just come\nup with the absolute bare minimum thing that can kick buffers out by\nID and leave the rest up to hackers/experts who are willing and able\nto write queries to supply them. You can still drop everything that\ncan be dropped -- generate_series. Or whatever you want.\n\n\n",
"msg_date": "Wed, 19 Jul 2023 13:26:30 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": " Hello\n\nI had a look at the patch and tested it on CI bot, it compiles and tests fine both autoconf and meson. I noticed that the v2 patch contains the v1 patch file as well. Not sure if intended but put there my accident.\n\n> I don't think \"invalidating\" is the right terminology. Note that we already\n > have InvalidateBuffer() - but it's something we can't allow users to do, as it\n > throws away dirty buffer contents (it's used for things like dropping a\n > table).\n >\n > How about using \"discarding\" for this functionality?\n\nI think \"invalidating\" is the right terminology here, it is exactly what the feature is doing, it tries to invalidate a buffer ID by calling InvalidateBuffer() routine inside buffer manager and calls FlushBuffer() before invalidating if marked dirty. \n\nThe problem here is that InvalidateBuffer() could be dangerous because it allows a user to invalidate buffer that may have data in other tables not owned by the current user, \n\nI think it all comes down to the purpose of this feature. Based on the description in this email thread, I feel like this feature should be categorized as a developer-only feature, to be used by PG developer to experiment and observe some development works by invalidating one more more specific buffers..... If this is the case, it may be helpful to add a \"DEVELOPER_OPTIONS\" in GUC, which allows or disallows the TryInvalidateBuffer() to run or to return error if user does not have this developer option enabled.\n\nIf the purpose of this feature is for general users, then it would make sense to have something like pg_unwarm (exactly opposite of pg_prewarm) that takes table name (instead of buffer ID) and drop all buffers associated with that table name. There will be permission checks as well so a user cannot pg_unwarm a table owned by someone else. User in this case won't be able to invalidate a particular buffer, but he/she should not have to as a regular user anyway.\n\nthanks!\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\[email protected]\nwww.highgo.ca\n\n\n\n",
"msg_date": "Fri, 28 Jul 2023 14:25:04 -0700",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Hii,\nThanks for your feedback. We have decided to add a role for the\nextension to solve that problem.\nAnd concerning to pg_unwarm table I think we can create a new function\nto do that but I think a general user would not require to clear a\ntable from buffercache.\nWe can use bufferid and where statements to do the same if a\nsuperuser/(specific role) requests it.\n\nThanks.\n\nOn Sat, 29 Jul 2023 at 02:55, Cary Huang <[email protected]> wrote:\n>\n> Hello\n>\n> I had a look at the patch and tested it on CI bot, it compiles and tests fine both autoconf and meson. I noticed that the v2 patch contains the v1 patch file as well. Not sure if intended but put there my accident.\n>\n> > I don't think \"invalidating\" is the right terminology. Note that we already\n> > have InvalidateBuffer() - but it's something we can't allow users to do, as it\n> > throws away dirty buffer contents (it's used for things like dropping a\n> > table).\n> >\n> > How about using \"discarding\" for this functionality?\n>\n> I think \"invalidating\" is the right terminology here, it is exactly what the feature is doing, it tries to invalidate a buffer ID by calling InvalidateBuffer() routine inside buffer manager and calls FlushBuffer() before invalidating if marked dirty.\n>\n> The problem here is that InvalidateBuffer() could be dangerous because it allows a user to invalidate buffer that may have data in other tables not owned by the current user,\n>\n> I think it all comes down to the purpose of this feature. Based on the description in this email thread, I feel like this feature should be categorized as a developer-only feature, to be used by PG developer to experiment and observe some development works by invalidating one more more specific buffers..... If this is the case, it may be helpful to add a \"DEVELOPER_OPTIONS\" in GUC, which allows or disallows the TryInvalidateBuffer() to run or to return error if user does not have this developer option enabled.\n>\n> If the purpose of this feature is for general users, then it would make sense to have something like pg_unwarm (exactly opposite of pg_prewarm) that takes table name (instead of buffer ID) and drop all buffers associated with that table name. There will be permission checks as well so a user cannot pg_unwarm a table owned by someone else. User in this case won't be able to invalidate a particular buffer, but he/she should not have to as a regular user anyway.\n>\n> thanks!\n>\n> Cary Huang\n> -------------\n> HighGo Software Inc. (Canada)\n> [email protected]\n> www.highgo.ca\n>\n\n\n",
"msg_date": "Tue, 1 Aug 2023 10:08:52 +0530",
"msg_from": "Palak Chaturvedi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Le 01/07/2023 à 00:09, Thomas Munro a écrit :\n> On Fri, Jun 30, 2023 at 10:47 PM Palak Chaturvedi\n> <[email protected]> wrote:\n\n> We also talked a bit about how one might control the kernel page cache\n> in more fine-grained ways for testing purposes, but it seems like the\n> pgfincore project has that covered with its pgfadvise_willneed() and\n> pgfadvise_dontneed(). IMHO that project could use more page-oriented\n> operations (instead of just counts and coarse grains operations) but\n> that's something that could be material for patches to send to the\n> extension maintainers. This work, in contrast, is more tangled up\n> with bufmgr.c internals, so it feels like this feature belongs in a\n> core contrib module.\n\nPrecisely what pgfincore is doing/offering already.\nHappy to propose to postgresql tree if there are interest. Next step for \npgfincore is to add cachestat() syscall and evaluates benefits for \nPostgreSQL cost estimators of this new call.\n\nHere an example to achieve the warm/unwarm, each bit is a PostgreSQL \npage, so here we warm cache with the first 3 and remove the last 3 from \ncache (system cache, not shared buffers).\n\n-- Loading and Unloading\ncedric=# select * from pgfadvise_loader('pgbench_accounts', 0, true, \ntrue, B'111000');\n relpath | os_page_size | os_pages_free | pages_loaded | \npages_unloaded\n------------------+--------------+---------------+--------------+----------------\n base/11874/16447 | 4096 | 408376 | 3 | \n 3\n\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n",
"msg_date": "Wed, 22 Nov 2023 11:04:59 +0100",
"msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Hi Palak,\n\nI did a quick review of the patch:\n\n+CREATE FUNCTION pg_buffercache_invalidate(IN int, IN bool default true)\n+RETURNS bool\n+AS 'MODULE_PATHNAME', 'pg_buffercache_invalidate'\n+LANGUAGE C PARALLEL SAFE;\n\n--> Not enforced anywhere, but you can also add a comment to the \nfunction, for end users...\n\n+PG_FUNCTION_INFO_V1(pg_buffercache_invalidate);\n+Datum\n+pg_buffercache_invalidate(PG_FUNCTION_ARGS)\n+{\n+ Buffer bufnum;\n\n\"Buffer blocknum\" is not correct in this context I believe. Buffer is \nwhen you have to manage Local buffer too (negative number).\nHere uint32 is probably the good choice at the end, as used in \npg_buffercache in other places.\n\nAlso in this extension bufferid is used, not buffernum.\n\n+ bufnum = PG_GETARG_INT32(0);\n\n+ if (bufnum <= 0 || bufnum > NBuffers)\n\nmaybe have a look at pageinspect and its PG_GETARG_UINT32.\n\n\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"buffernum is not valid\")));\n\nhttps://www.postgresql.org/docs/16/error-style-guide.html let me think \nthat message like 'buffernum is not valid' can be enhanced: out of \nrange, cannot be negative or exceed number of shared buffers.... ? Maybe \nadd the value to the message.\n\n+\n+ }\n+\n+ /*\n+ * Check whether to force invalidate the dirty buffer. The default \nvalue of force is true.\n+ */\n+\n+ force = PG_GETARG_BOOL(1);\n\nI think you also need to test PG_ARGISNULL with force parameter.\n\n+/*\n+ * Try Invalidating a buffer using bufnum.\n+ * If the buffer is invalid, the function returns false.\n+ * The function checks for dirty buffer and flushes the dirty buffer \nbefore invalidating.\n+ * If the buffer is still dirty it returns false.\n+ */\n+bool\n+TryInvalidateBuffer(Buffer bufnum, bool force)\n+{\n+ BufferDesc *bufHdr = GetBufferDescriptor(bufnum - 1);\n\nthis is not safe, GetBufferDescriptor() accepts uint, but can receive \nnegative here. 
Use uint32 and bufferid.\n\n+ uint32 buf_state;\n+ ReservePrivateRefCountEntry();\n+\n+ buf_state = LockBufHdr(bufHdr);\n+ if ((buf_state & BM_VALID) == BM_VALID)\n+ {\n+ /*\n+ * The buffer is pinned therefore cannot invalidate.\n+ */\n+ if (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n+ {\n+ UnlockBufHdr(bufHdr, buf_state);\n+ return false;\n+ }\n+ if ((buf_state & BM_DIRTY) == BM_DIRTY)\n+ {\n+ /*\n+ * If the buffer is dirty and the user has not asked to \nclear the dirty buffer return false.\n+ * Otherwise clear the dirty buffer.\n+ */\n+ if(!force){\n+ return false;\n\nprobably need to unlockbuffer here too.\n\n+ }\n+ /*\n+ * Try once to flush the dirty buffer.\n+ */\n+ ResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n+ PinBuffer_Locked(bufHdr);\n+ LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), \nLW_SHARED);\n+ FlushBuffer(bufHdr, NULL, IOOBJECT_RELATION, IOCONTEXT_NORMAL);\n+ LWLockRelease(BufferDescriptorGetContentLock(bufHdr));\n+ UnpinBuffer(bufHdr);\n\nI am unsure of this area (the code is correct, but I wonder why there is \nno static code for this part -from pin to unpin- in PostgreSQL), and \nmaybe better to go with FlushOneBuffer() ?\nAlso it is probably required to account for the shared buffer eviction \nin some pg_stat* view or table.\nNot sure how disk syncing is handled after this sequence nor if it's \nimportant ?\n\n\n+ buf_state = LockBufHdr(bufHdr);\n+ if (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n+ {\n+ UnlockBufHdr(bufHdr, buf_state);\n+ return false;\n+ }\n+\n+ /*\n+ * If its dirty again or not valid anymore give up.\n+ */\n+\n+ if ((buf_state & (BM_DIRTY | BM_VALID)) != (BM_VALID))\n+ {\n+ UnlockBufHdr(bufHdr, buf_state);\n+ return false;\n+ }\n+\n+ }\n+\n+ InvalidateBuffer(bufHdr);\n+ return true;\n+ }\n+ else\n+ {\n+ UnlockBufHdr(bufHdr, buf_state);\n+ return false;\n+ }\n\n\nMaybe safe to remove the else {} ...\nMaybe more tempting to start the big if with the following instead less \nnested...):\n+ if ((buf_state & BM_VALID) != BM_VALID)\n+ {\n+ UnlockBufHdr(bufHdr, buf_state);\n+ return false;\n+ }\n\nDoc and test are absent.\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:25:03 +0100",
"msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "\n\n\n\n\nOn 1/3/24 10:25 AM, Cédric Villemain\n wrote:\n\nHi\n Palak,\n \n\n I did a quick review of the patch:\n \n\n +CREATE FUNCTION pg_buffercache_invalidate(IN int, IN bool default\n true)\n \n +RETURNS bool\n \n +AS 'MODULE_PATHNAME', 'pg_buffercache_invalidate'\n \n +LANGUAGE C PARALLEL SAFE;\n \n\n --> Not enforced anywhere, but you can also add a comment to\n the function, for end users...\n \n\nThe arguments should also have names...\n\n + force = PG_GETARG_BOOL(1);\n \n\n I think you also need to test PG_ARGISNULL with force parameter.\n \n\n Actually, that's true for the first argument as well. Or, just mark\n the function as STRICT.\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n",
"msg_date": "Wed, 3 Jan 2024 17:15:11 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Hi Palak,\n\nthere is currently even more interest in your patch as it should help \nbuilding tests for on-going development around cache/read \nmanagement/effects.\n\nDo you expect to be able to follow-up in the coming future ?\n\nThank you,\nCédric\n\nOn 04/01/2024 00:15, Jim Nasby wrote:\n> On 1/3/24 10:25 AM, Cédric Villemain wrote:\n>> Hi Palak,\n>>\n>> I did a quick review of the patch:\n>>\n>> +CREATE FUNCTION pg_buffercache_invalidate(IN int, IN bool default true)\n>> +RETURNS bool\n>> +AS 'MODULE_PATHNAME', 'pg_buffercache_invalidate'\n>> +LANGUAGE C PARALLEL SAFE;\n>>\n>> --> Not enforced anywhere, but you can also add a comment to the \n>> function, for end users...\n> \n> The arguments should also have names...\n> \n>>\n>> + force = PG_GETARG_BOOL(1);\n>>\n>> I think you also need to test PG_ARGISNULL with force parameter.\n> Actually, that's true for the first argument as well. Or, just mark the \n> function as STRICT.\n> \n> -- \n> Jim Nasby, Data Architect, Austin TX\n> \n\n-- \n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n",
"msg_date": "Sun, 14 Jan 2024 14:36:26 +0100",
"msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "[Sorry to those who received this message twice -- the first time got\nbounced by the list because of a defunct email address in the CC\nlist.]\n\nHere is a rebase of Palak's v2 patch. I didn't change anything except\nfor the required resource manager API change, a pgindent run, and\nremoval of a stray file, and there is still some feedback to be\naddressed before we can get this in, but I wanted to fix the bitrot\nand re-open this CF item because this is very useful work. It's\nessential for testing the prefetching-related stuff happening in\nvarious other threads, where you want to be able to get the buffer\npool into various interesting states.",
"msg_date": "Tue, 27 Feb 2024 11:41:23 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Quite an interesting patch, in my opinion. I've decided to work on it a\nbit, did some refactoring (sorry) and add\nbasic tests. Also, I try to take into account as much as possible notes on\nthe patch, mentioned by Cédric Villemain.\n\n> and maybe better to go with FlushOneBuffer() ?\nIt's a good idea, but I'm not sure at the moment. I'll try to dig some\ndeeper into it. At least, FlushOneBuffer does\nnot work for a local buffers. So, we have to decide whatever\npg_buffercache_invalidate should or should not\nwork for local buffers. For now, I don't see why it should not. Maybe I\nmiss something?\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 7 Mar 2024 20:20:11 +0300",
"msg_from": "Maxim Orlov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 6:20 AM Maxim Orlov <[email protected]> wrote:\n> Quite an interesting patch, in my opinion. I've decided to work on it a bit, did some refactoring (sorry) and add\n> basic tests. Also, I try to take into account as much as possible notes on the patch, mentioned by Cédric Villemain.\n\nThanks! Unfortunately I don't think it's possible to include a\nregression test that looks at the output, because it'd be\nnon-deterministic. Any other backend could pin or dirty the buffer\nyou try to evict, changing the behaviour.\n\n> > and maybe better to go with FlushOneBuffer() ?\n> It's a good idea, but I'm not sure at the moment. I'll try to dig some deeper into it. At least, FlushOneBuffer does\n> not work for a local buffers. So, we have to decide whatever pg_buffercache_invalidate should or should not\n> work for local buffers. For now, I don't see why it should not. Maybe I miss something?\n\nI think it's OK to ignore local buffers for now. pg_buffercache\ngenerally doesn't support/show them so I don't feel inclined to\nsupport them for this. I removed a few traces of local support.\n\nIt didn't seem appropriate to use the pg_monitor role for this, so I\nmade it superuser-only. I don't think it makes much sense to use this\non any kind of production system so I don't think we need a new role\nfor it, and existing roles don't seem too appropriate. pageinspect et\nal use the same approach.\n\nI added a VOLATILE qualifier to the function.\n\nI added some documentation.\n\nI changed the name to pg_buffercache_evict().\n\nI got rid of the 'force' flag which was used to say 'I only want to\nevict this buffer it is clean'. I don't really see the point in that,\nwe might as well keep it simple. You could filter buffers on\n\"isdirty\" if you want.\n\nI added comments to scare anyone off using EvictBuffer() for anything\nmuch, and marking it as something for developer convenience. (I am\naware of an experimental patch that uses this same function as part of\na buffer pool resizing operation, but that has other infrastructure to\nmake that safe and would adjust those remarks accordingly.)\n\nI wondered whether it should really be testing for BM_TAG_VALID\nrather than BM_VALID. Arguably, but it doesn't seem important for\nnow. The distinction would arise if someone had tried to read in a\nbuffer, got an I/O error and abandoned ship, leaving a buffer with a\nvalid tag but not valid contents. Anyone who tries to ReadBuffer() it\nwill then try to read it again, but in the meantime this function\nwon't be able to evict it (it'll just return false). Doesn't seem\nthat obvious to me that this obscure case needs to be handled. That\ndoesn't happen *during* a non-error case, because then it's pinned and\nwe already return false in this code for pins.\n\nI contemplated whether InvalidateBuffer() or InvalidateVictimBuffer()\nwould be better here and realised that Andres's intuition was probably\nright when he suggested the latter up-thread. 
It is designed with the\nright sort of arbitrary concurrent activity in mind, where the former\nassumes things about locking and dropping, which could get us into\ntrouble if not now maybe in the future.\n\nI ran the following diabolical buffer blaster loop while repeatedly\nrunning installcheck:\n\ndo\n$$\nbegin\n loop\n perform pg_buffercache_evict(bufferid)\n from pg_buffercache\n where random() <= 0.25;\n end loop;\nEnd;\n$$;\n\nThe only ill-effect was a hot laptop.\n\nThoughts, objections, etc?\n\nVery simple example of use:\n\ncreate or replace function uncache_relation(name text)\nreturns boolean\nbegin atomic;\n select bool_and(pg_buffercache_evict(bufferid))\n from pg_buffercache\n where reldatabase = (select oid\n from pg_database\n where datname = current_database())\n and relfilenode = pg_relation_filenode(name);\nend;\n\nMore interesting for those of us hacking on streaming I/O stuff was\nthe ability to evict just parts of things and see how the I/O merging\nand I/O depth react.",
"msg_date": "Thu, 4 Apr 2024 16:22:17 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On second thoughts, I think the original \"invalidate\" terminology was\nfine, no need to invent a new term.\n\nI thought of a better name for the bufmgr.c function though:\nInvalidateUnpinnedBuffer(). That name seemed better to me after I\nfestooned it with warnings about why exactly it's inherently racy and\nonly for testing use.\n\nI suppose someone could propose an additional function\npg_buffercache_invalidate(db, tbspc, rel, fork, blocknum) that would\nbe slightly better in the sense that it couldn't accidentally evict\nsome innocent block that happened to replace the real target just\nbefore it runs, but I don't think it matters much for this purpose and\nit would still be racy on return (vacuum decides to load your block\nback in) so I don't think it's worth bothering with.\n\nSo this is the version I plan to commit.",
"msg_date": "Sun, 7 Apr 2024 11:07:58 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Sat, Apr 6, 2024 at 7:08 PM Thomas Munro <[email protected]> wrote:\n>\n> On second thoughts, I think the original \"invalidate\" terminology was\n> fine, no need to invent a new term.\n>\n> I thought of a better name for the bufmgr.c function though:\n> InvalidateUnpinnedBuffer(). That name seemed better to me after I\n> festooned it with warnings about why exactly it's inherently racy and\n> only for testing use.\n>\n> I suppose someone could propose an additional function\n> pg_buffercache_invalidate(db, tbspc, rel, fork, blocknum) that would\n> be slightly better in the sense that it couldn't accidentally evict\n> some innocent block that happened to replace the real target just\n> before it runs, but I don't think it matters much for this purpose and\n> it would still be racy on return (vacuum decides to load your block\n> back in) so I don't think it's worth bothering with.\n>\n> So this is the version I plan to commit.\n\nI've reviewed v6. I think you should mention in the docs that it only\nworks for shared buffers -- so specifically not buffers containing\nblocks of temp tables.\n\nIn the function pg_buffercache_invalidate(), why not use the\nBufferIsValid() function?\n\n- if (buf < 1 || buf > NBuffers)\n+ if (!BufferIsValid(buf) || buf > NBuffers)\n\nI thought the below would be more clear for the comment above\nInvalidateUnpinnedBuffer().\n\n- * Returns true if the buffer was valid and it has now been made invalid.\n- * Returns false if the wasn't valid, or it couldn't be evicted due to a pin,\n- * or if the buffer becomes dirty again while we're trying to write it out.\n+ * Returns true if the buffer was valid and has now been made invalid. Returns\n+ * false if it wasn't valid, if it couldn't be evicted due to a pin, or if the\n+ * buffer becomes dirty again while we're trying to write it out.\n\nSome of that probably applies for the docs too (i.e. you have some\nsimilar wording in the docs). There is actually one typo in your\nversion, so even if you don't adopt my suggestion, you should fix that\ntypo.\n\nI didn't notice anything else out of place. I tried it and it worked\nas expected. I'm excited to have this feature!\n\nI didn't read through this whole thread, but was there any talk of\nadding other functions to let me invalidate a bunch of buffers at once\nor even some options -- like invalidate every 3rd buffer or whatever?\n(Not the concern of this patch, but just wondering because that would\nbe a useful future enhancement IMO).\n\n- Melanie\n\n\n",
"msg_date": "Sun, 7 Apr 2024 19:53:22 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-07 11:07:58 +1200, Thomas Munro wrote:\n> I thought of a better name for the bufmgr.c function though:\n> InvalidateUnpinnedBuffer(). That name seemed better to me after I\n> festooned it with warnings about why exactly it's inherently racy and\n> only for testing use.\n\nI still dislike that, fwiw, due to the naming similarity to\nInvalidateBuffer(), which throws away dirty buffer contents too. Which\nobviously isn't acceptable from \"userspace\". I'd just name it\npg_buffercache_evict() - given that the commit message's first paragraph uses\n\"it is useful to be able to evict arbitrary blocks\" that seems to describe\nthings at least as well as \"invalidate\"?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 17:10:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 12:10 PM Andres Freund <[email protected]> wrote:\n> On 2024-04-07 11:07:58 +1200, Thomas Munro wrote:\n> > I thought of a better name for the bufmgr.c function though:\n> > InvalidateUnpinnedBuffer(). That name seemed better to me after I\n> > festooned it with warnings about why exactly it's inherently racy and\n> > only for testing use.\n>\n> I still dislike that, fwiw, due to the naming similarity to\n> InvalidateBuffer(), which throws away dirty buffer contents too. Which\n> obviously isn't acceptable from \"userspace\". I'd just name it\n> pg_buffercache_evict() - given that the commit message's first paragraph uses\n> \"it is useful to be able to evict arbitrary blocks\" that seems to describe\n> things at least as well as \"invalidate\"?\n\nAlright, sold. I'll go with EvictUnpinnedBuffer() in bufmgr.c and\npg_buffercache_evict() in the contrib module.\n\n\n",
"msg_date": "Mon, 8 Apr 2024 16:30:34 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Mon, Apr 8, 2024 at 11:53 AM Melanie Plageman\n<[email protected]> wrote:\n> I've reviewed v6. I think you should mention in the docs that it only\n> works for shared buffers -- so specifically not buffers containing\n> blocks of temp tables.\n\nThanks for looking! The whole pg_buffercache extension is for working\nwith shared buffers only, as mentioned at the top. I have tried to\nimprove that paragraph though, as it only mentioned examining them.\n\n> In the function pg_buffercache_invalidate(), why not use the\n> BufferIsValid() function?\n>\n> - if (buf < 1 || buf > NBuffers)\n> + if (!BufferIsValid(buf) || buf > NBuffers)\n\nIt doesn't check the range (it has assertions, not errors).\n\n> I thought the below would be more clear for the comment above\n> InvalidateUnpinnedBuffer().\n>\n> - * Returns true if the buffer was valid and it has now been made invalid.\n> - * Returns false if the wasn't valid, or it couldn't be evicted due to a pin,\n> - * or if the buffer becomes dirty again while we're trying to write it out.\n> + * Returns true if the buffer was valid and has now been made invalid. Returns\n> + * false if it wasn't valid, if it couldn't be evicted due to a pin, or if the\n> + * buffer becomes dirty again while we're trying to write it out.\n\nFixed.\n\n> Some of that probably applies for the docs too (i.e. you have some\n> similar wording in the docs). There is actually one typo in your\n> version, so even if you don't adopt my suggestion, you should fix that\n> typo.\n\nYeah, thanks, improved similarly there.\n\n> I didn't notice anything else out of place. I tried it and it worked\n> as expected. I'm excited to have this feature!\n\nThanks!\n\n> I didn't read through this whole thread, but was there any talk of\n> adding other functions to let me invalidate a bunch of buffers at once\n> or even some options -- like invalidate every 3rd buffer or whatever?\n> (Not the concern of this patch, but just wondering because that would\n> be a useful future enhancement IMO).\n\nTBH I tried to resist people steering in that direction because you\ncan also just define a SQL function to do that built on this, and if\nyou had specialised functions they'd never be quite right. IMHO we\nsucceeded in minimising the engineering and maximising flexibility,\n'cause it's for hackers. Crude, but already able to express a wide\nrange of stuff by punting the problem to SQL.\n\nThanks to Palak for the patch. Pushed.\n\n\n",
"msg_date": "Mon, 8 Apr 2024 17:02:58 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On 07.04.2024 02:07, Thomas Munro wrote:\n\n> So this is the version I plan to commit.\n>\n> +bool\n> +EvictUnpinnedBuffer(Buffer buf)\n> +{\n> ...\n> + /* This will return false if it becomes dirty or someone else pins it. */\n> + result = InvalidateVictimBuffer(desc);\n> +\n> + UnpinBuffer(desc);\n> +\n> + return result;\n> +}\n\n\nHi, Thomas!\n\nShould not we call at the end the StrategyFreeBuffer() function to add \ntarget buffer to freelist and not miss it after invalidation?\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n\n\n\nOn 07.04.2024 02:07, Thomas Munro wrote:\n\n\n\nSo this is the version I plan to commit.\n\n+bool\n+EvictUnpinnedBuffer(Buffer buf)\n+{\n...\n+ /* This will return false if it becomes dirty or someone else pins it. */\n+ result = InvalidateVictimBuffer(desc);\n+\n+ UnpinBuffer(desc);\n+\n+ return result;\n+}\n\n\n\n\nHi, Thomas!\nShould not we call at the end the StrategyFreeBuffer() function\n to add target buffer to freelist and not miss it after\n invalidation?\n\n\n\n-- \nBest regards,\nMaksim Milyutin",
"msg_date": "Sun, 14 Apr 2024 21:16:15 +0300",
"msg_from": "Maksim Milyutin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On 14.04.2024 21:16, Maksim Milyutin wrote:\n\n> On 07.04.2024 02:07, Thomas Munro wrote:\n>\n>> So this is the version I plan to commit.\n>>\n>> +bool\n>> +EvictUnpinnedBuffer(Buffer buf)\n>> +{\n>> ...\n>> + /* This will return false if it becomes dirty or someone else pins it. */\n>> + result = InvalidateVictimBuffer(desc);\n>> +\n>> + UnpinBuffer(desc);\n>> +\n>> + return result;\n>> +}\n>\n>\n> Hi, Thomas!\n>\n> Should not we call at the end the StrategyFreeBuffer() function to add \n> target buffer to freelist and not miss it after invalidation?\n>\n\nHello everyone!\n\nPlease take a look at this issue, current implementation of \nEvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost \npermanently and will not be reused again\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n\n\n\nOn 14.04.2024 21:16, Maksim Milyutin wrote:\n\n\nOn 07.04.2024 02:07, Thomas Munro wrote:\n\n\n\nSo this is the version I plan to commit.\n\n+bool\n+EvictUnpinnedBuffer(Buffer buf)\n+{\n...\n+ /* This will return false if it becomes dirty or someone else pins it. */\n+ result = InvalidateVictimBuffer(desc);\n+\n+ UnpinBuffer(desc);\n+\n+ return result;\n+}\n\n\n\n\nHi, Thomas!\nShould not we call at the end the StrategyFreeBuffer() function\n to add target buffer to freelist and not miss it after\n invalidation?\n\n\n\nHello everyone!\nPlease take a look at this issue, current implementation of\n EvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost\n permanently and will not be reused again\n\n\n\n \n-- \nBest regards,\nMaksim Milyutin",
"msg_date": "Mon, 29 Apr 2024 21:47:41 +0300",
"msg_from": "Maksim Milyutin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Tue, Apr 30, 2024 at 6:47 AM Maksim Milyutin <[email protected]> wrote:\n>> Should not we call at the end the StrategyFreeBuffer() function to add target buffer to freelist and not miss it after invalidation?\n\n> Please take a look at this issue, current implementation of EvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost permanently and will not be reused again\n\nHi Maksim,\n\nOops, thanks, will fix.\n\n\n",
"msg_date": "Tue, 30 Apr 2024 07:17:34 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On Tue, Apr 30, 2024 at 7:17 AM Thomas Munro <[email protected]> wrote:\n> On Tue, Apr 30, 2024 at 6:47 AM Maksim Milyutin <[email protected]> wrote:\n> >> Should not we call at the end the StrategyFreeBuffer() function to add target buffer to freelist and not miss it after invalidation?\n>\n> > Please take a look at this issue, current implementation of EvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost permanently and will not be reused again\n\nI don't think that's true: it is not lost permanently, it'll be found\nby the regular clock hand. Perhaps it should be put on the freelist\nso it can be found again quickly, but I'm not sure that's a bug, is\nit? If it were true, even basic testing eg select\ncount(pg_buffercache_evict(bufferid)) from pg_buffercache would leave\nthe system non-functional, but it doesn't, the usual CLOCK algorithm\njust does its thing.\n\n\n",
"msg_date": "Tue, 30 Apr 2024 08:59:15 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
},
{
"msg_contents": "On 29.04.2024 23:59, Thomas Munro wrote:\n> On Tue, Apr 30, 2024 at 7:17 AM Thomas Munro<[email protected]> wrote:\n>> On Tue, Apr 30, 2024 at 6:47 AM Maksim Milyutin<[email protected]> wrote:\n>>>> Should not we call at the end the StrategyFreeBuffer() function to add target buffer to freelist and not miss it after invalidation?\n>>> Please take a look at this issue, current implementation of EvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost permanently and will not be reused again\n> I don't think that's true: it is not lost permanently, it'll be found\n> by the regular clock hand. Perhaps it should be put on the freelist\n> so it can be found again quickly, but I'm not sure that's a bug, is\n> it?\n\n\nYeah, you are right. Thanks for clarification.\n\nCLOCK algorithm will reuse it eventually but being of evicted cleared \nbuffer in freelist might greatly restrict the time of buffer allocation \nwhen all others buffers were in use.\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n\n\n\n\n\nOn 29.04.2024 23:59, Thomas Munro\n wrote:\n\n\nOn Tue, Apr 30, 2024 at 7:17 AM Thomas Munro <[email protected]> wrote:\n\n\nOn Tue, Apr 30, 2024 at 6:47 AM Maksim Milyutin <[email protected]> wrote:\n\n\n\nShould not we call at the end the StrategyFreeBuffer() function to add target buffer to freelist and not miss it after invalidation?\n\n\n\n\n\n\nPlease take a look at this issue, current implementation of EvictUnpinnedBuffer() IMO is erroneous - evicted buffers are lost permanently and will not be reused again\n\n\n\n\nI don't think that's true: it is not lost permanently, it'll be found\nby the regular clock hand. Perhaps it should be put on the freelist\nso it can be found again quickly, but I'm not sure that's a bug, is\nit?\n\n\n\nYeah, you are right. Thanks for clarification.\nCLOCK algorithm will reuse it eventually but being of evicted\n cleared buffer in freelist might greatly restrict the time of\n buffer allocation when all others buffers were in use.\n\n\n\n-- \nBest regards,\nMaksim Milyutin",
"msg_date": "Tue, 30 Apr 2024 09:59:07 +0300",
"msg_from": "Maksim Milyutin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extension Enhancement: Buffer Invalidation in pg_buffercache"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAt Neon, we've been working on removing the file system dependency\nfrom PostgreSQL and replacing it with a distributed storage layer. For\nnow, we've seen most success in this by replacing the implementation\nof the smgr API, but it did require some core modifications like those\nproposed early last year by Anastasia [0].\n\nAs mentioned in the previous thread, there are several reasons why you\nwould want to use a non-default storage manager: storage-level\ncompression, encryption, and disk limit quotas [0]; offloading of cold\nrelation data was also mentioned [1].\n\nIn the thread on Anastasia's patch, Yura Sokolov mentioned that\ninstead of a hook-based smgr extension, a registration-based smgr\nwould be preferred, with integration into namespaces. Please find\nattached an as of yet incomplete patch that starts to do that.\n\nThe patch is yet incomplete (as it isn't derived from Anastasia's\npatch), but I would like comments on this regardless, as this is a\nfairly fundamental component of PostgreSQL that is being modified, and\nit is often better to get comments early in the development cycle. One\nsignificant issue that I've seen so far are that catcache is not\nguaranteed to be available in all backends that need to do smgr\noperations, and I've not yet found a good solution.\n\nChanges compared to HEAD:\n- smgrsw is now dynamically allocated and grows as new storage\nmanagers are loaded (during shared_preload_libraries)\n- CREATE TABLESPACE has new optional syntax USING smgrname (option [, ...])\n- tablespace storage is (planned) fully managed by smgr through some\nnew smgr apis\n\nChanges compared to Anastasia's patch:\n- extensions do not get to hook and replace the api of the smgr code\ndirectly - they are hidden behind the smgr registry.\n\nSuccesses:\n- 0001 passes tests (make check-world)\n- 0002 builds without warnings (make)\n\nTODO:\n- fix dependency failures when catcache is unavailable\n- tablespace redo is currently broken with 0002\n- fix tests for 0002\n- ensure that pg_dump etc. works with the new tablespace storage manager options\n\nLooking forward to any comments, suggestions and reviews.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n[0] https://www.postgresql.org/message-id/CAP4vRV6JKXyFfEOf%3Dn%2Bv5RGsZywAQ3CTM8ESWvgq%2BS87Tmgx_g%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Fri, 30 Jun 2023 14:26:44 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 14:26:44 +0200, Matthias van de Meent wrote:\n> diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c\n> index 4c49393fc5..8685b9fde6 100644\n> --- a/src/backend/postmaster/postmaster.c\n> +++ b/src/backend/postmaster/postmaster.c\n> @@ -1002,6 +1002,11 @@ PostmasterMain(int argc, char *argv[])\n> \t */\n> \tApplyLauncherRegister();\n> \n> +\t/*\n> +\t * Register built-in managers that are not part of static arrays\n> +\t */\n> +\tregister_builtin_dynamic_managers();\n> +\n> \t/*\n> \t * process any libraries that should be preloaded at postmaster start\n> \t */\n\nThat doesn't strike me as a good place to initialize this, we'll need it in\nmultiple places that way. How about putting it into BaseInit()?\n\n\n> -static const f_smgr smgrsw[] = {\n> +static f_smgr *smgrsw;\n\nThis adds another level of indirection. I would rather limit the number of\nregisterable smgrs than do that.\n\n\n\n> +SMgrId\n> +smgr_register(const f_smgr *smgr, Size smgrrelation_size)\n> +{\n\n> +\tMemoryContextSwitchTo(old);\n> +\n> +\tpg_compiler_barrier();\n\nHuh, what's that about?\n\n\n> @@ -59,14 +63,8 @@ typedef struct SMgrRelationData\n> \t * Fields below here are intended to be private to smgr.c and its\n> \t * submodules. Do not touch them from elsewhere.\n> \t */\n> -\tint\t\t\tsmgr_which;\t\t/* storage manager selector */\n> -\n> -\t/*\n> -\t * for md.c; per-fork arrays of the number of open segments\n> -\t * (md_num_open_segs) and the segments themselves (md_seg_fds).\n> -\t */\n> -\tint\t\t\tmd_num_open_segs[MAX_FORKNUM + 1];\n> -\tstruct _MdfdVec *md_seg_fds[MAX_FORKNUM + 1];\n> +\tSMgrId\t\tsmgr_which;\t\t/* storage manager selector */\n> +\tint\t\t\tsmgrrelation_size;\t/* size of this struct, incl. smgr-specific data */\n\nIt looked to me like you determined this globally - why do we need it in every\nentry then?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:50:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "> Subject: [PATCH v1 1/2] Expose f_smgr to extensions for manual implementation\n\n From what I can see, all the md* APIs that were exposed in md.h can now\nbe made static in md.c. The only other references to those APIs were in\nsmgr.c.\n\n> Subject: [PATCH v1 2/2] Prototype: Allow tablespaces to specify which SMGR\n> they use\n\n> -typedef uint8 SMgrId;\n> +/*\n> + * volatile ID of the smgr. Across various configurations IDs may vary,\n> + * true identity is the name of each smgr.\n> + */\n> +typedef int SMgrId;\n> \n> -#define MaxSMgrId UINT8_MAX\n> +#define MaxSMgrId INT_MAX\n\nIn a future revision of this patch, seems worthwhile to just start as\nint instead of a uint8 to avoid this song and dance. Maybe int8 instead\nof int?\n\n> +static SMgrId recent_smgrid = -1;\n\nYou could use InvalidSmgrId here.\n\n> +void smgrvalidatetspopts(const char *smgrname, List *opts)\n> +{\n> + SMgrId smgrid = get_smgr_by_name(smgrname, false);\n> +\n> + smgrsw[smgrid].smgr_validate_tspopts(opts);\n> +}\n> +\n> +void smgrcreatetsp(const char *smgrname, Oid tsp, List *opts, bool isredo)\n> +{\n> + SMgrId smgrid = get_smgr_by_name(smgrname, false);\n> +\n> + smgrsw[smgrid].smgr_create_tsp(tsp, opts, isredo);\n> +}\n> +\n> +void smgrdroptsp(const char *smgrname, Oid tsp, bool isredo)\n> +{\n> + SMgrId smgrid = get_smgr_by_name(smgrname, false);\n> +\n> + smgrsw[smgrid].smgr_drop_tsp(tsp, isredo);\n> +}\n\nDo you not need to check if smgrid is the InvalidSmgrId? I didn't see\nany other validation anywhere.\n\n> + char *smgr;\n> + List *smgropts; /* list of DefElem nodes */\n\nsmgrname would probably work better alongside tablespacename in that\nstruct.\n\n> @@ -221,7 +229,7 @@ mdexists(SMgrRelation reln, ForkNumber forknum)\n> if (!InRecovery)\n> mdclose(reln, forknum);\n> \n> - return (mdopenfork(reln, forknum, EXTENSION_RETURN_NULL) != NULL);\n> + return (mdopenfork(mdreln, forknum, EXTENSION_RETURN_NULL) != NULL);\n> }\n\nWas this a victim of a bad rebase? Seems like it belongs in the previous\npatch.\n\n> +void mddroptsp(Oid tsp, bool isredo)\n> +{\n> +\n> +}\n\nSome functions in this file have the return type on the previous line.\n\nThis is a pretty slick patchset. Excited to read more dicussion and how\nit evolves.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:57:34 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Found these warnings while compiling while only 0001 is applied.\n\n[1166/2337] Compiling C object src/backend/postgres_lib.a.p/storage_smgr_md.c.o\n../src/backend/storage/smgr/md.c: In function ‘mdexists’:\n../src/backend/storage/smgr/md.c:224:28: warning: passing argument 1 of ‘mdopenfork’ from incompatible pointer type [-Wincompatible-pointer-types]\n 224 | return (mdopenfork(reln, forknum, EXTENSION_RETURN_NULL) != NULL);\n | ^~~~\n | |\n | SMgrRelation {aka SMgrRelationData *}\n../src/backend/storage/smgr/md.c:167:43: note: expected ‘MdSMgrRelation’ {aka ‘MdSMgrRelationData *’} but argument is of type ‘SMgrRelation’ {aka ‘SMgrRelationData *’}\n 167 | static MdfdVec *mdopenfork(MdSMgrRelation reln, ForkNumber forknum, int behavior);\n | ~~~~~~~~~~~~~~~^~~~\n../src/backend/storage/smgr/md.c: In function ‘mdcreate’:\n../src/backend/storage/smgr/md.c:287:40: warning: passing argument 1 of ‘register_dirty_segment’ from incompatible pointer type [-Wincompatible-pointer-types]\n 287 | register_dirty_segment(reln, forknum, mdfd);\n | ^~~~\n | |\n | SMgrRelation {aka SMgrRelationData *}\n../src/backend/storage/smgr/md.c:168:51: note: expected ‘MdSMgrRelation’ {aka ‘MdSMgrRelationData *’} but argument is of type ‘SMgrRelation’ {aka ‘SMgrRelationData *’}\n 168 | static void register_dirty_segment(MdSMgrRelation reln, ForkNumber forknum,\n\nHere is a diff to be applied to 0001 which fixes the warnings that get \ngenerated when compiling. I did see that one of the warnings gets fixed \n0002 (the mdexists() one). I am assuming that change was just missed \nwhile rebasing the patchset or something. I did not see a fix for\nmdcreate() in 0002 however.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 19 Sep 2023 15:54:53 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Sorry for double-posting, I accidentally replied to Matthias, not the\nmailing list :(\n\n---------- Forwarded message ---------\nFrom: Kirill Reshke <[email protected]>\nDate: Mon, 4 Dec 2023 at 19:46\nSubject: Re: Extensible storage manager API - SMGR hook Redux\nTo: Matthias van de Meent <[email protected]>\n\n\nHi!\n\nOn Fri, 30 Jun 2023 at 15:27, Matthias van de Meent <\[email protected]> wrote:\n\n> Hi hackers,\n>\n> At Neon, we've been working on removing the file system dependency\n> from PostgreSQL and replacing it with a distributed storage layer. For\n> now, we've seen most success in this by replacing the implementation\n> of the smgr API, but it did require some core modifications like those\n> proposed early last year by Anastasia [0].\n>\n> As mentioned in the previous thread, there are several reasons why you\n> would want to use a non-default storage manager: storage-level\n> compression, encryption, and disk limit quotas [0]; offloading of cold\n> relation data was also mentioned [1].\n>\n> In the thread on Anastasia's patch, Yura Sokolov mentioned that\n> instead of a hook-based smgr extension, a registration-based smgr\n> would be preferred, with integration into namespaces. Please find\n> attached an as of yet incomplete patch that starts to do that.\n>\n> The patch is yet incomplete (as it isn't derived from Anastasia's\n> patch), but I would like comments on this regardless, as this is a\n> fairly fundamental component of PostgreSQL that is being modified, and\n> it is often better to get comments early in the development cycle. One\n> significant issue that I've seen so far are that catcache is not\n> guaranteed to be available in all backends that need to do smgr\n> operations, and I've not yet found a good solution.\n>\n> Changes compared to HEAD:\n> - smgrsw is now dynamically allocated and grows as new storage\n> managers are loaded (during shared_preload_libraries)\n> - CREATE TABLESPACE has new optional syntax USING smgrname (option [, ...])\n> - tablespace storage is (planned) fully managed by smgr through some\n> new smgr apis\n>\n> Changes compared to Anastasia's patch:\n> - extensions do not get to hook and replace the api of the smgr code\n> directly - they are hidden behind the smgr registry.\n>\n> Successes:\n> - 0001 passes tests (make check-world)\n> - 0002 builds without warnings (make)\n>\n> TODO:\n> - fix dependency failures when catcache is unavailable\n> - tablespace redo is currently broken with 0002\n> - fix tests for 0002\n> - ensure that pg_dump etc. works with the new tablespace storage manager\n> options\n>\n> Looking forward to any comments, suggestions and reviews.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech/)\n>\n>\n> [0]\n> https://www.postgresql.org/message-id/CAP4vRV6JKXyFfEOf%3Dn%2Bv5RGsZywAQ3CTM8ESWvgq%2BS87Tmgx_g%40mail.gmail.com\n> [1]\n> https://www.postgresql.org/message-id/[email protected]\n\n\nSo, 0002 patch uses the `get_tablespace` function, which searches Catalog\nto tablespace SMGR id. I wonder how `smgr_redo` would work with it?\nIs it possible to query the system catalog during crash recovery? As far as\ni understand the answer is \"no\", correct me if I'm wrong.\nFurthermore, why do we only allow tablespace to have its own SMGR\nimplementation, can we have per-relation SMGR? 
Maybe we can do it in a way\nsimilar to custom RMGR (meaning, write SMGR OID into WAL and use it in\ncrash recovery etc.)?",
"msg_date": "Mon, 4 Dec 2023 19:50:41 +0300",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "On Mon, 4 Dec 2023 at 17:51, Kirill Reshke <[email protected]> wrote:\n>\n> So, 0002 patch uses the `get_tablespace` function, which searches Catalog to tablespace SMGR id. I wonder how `smgr_redo` would work with it?\n\nThat's a very good point I hadn't considered in detail yet. Quite\nclearly, the current code is wrong in assuming that the catalog is\naccessible, and it should probably be stored in a way similar to\npg_filenode.map in a file managed outside the buffer pool.\n\n> Is it possible to query the system catalog during crash recovery? As far as i understand the answer is \"no\", correct me if I'm wrong.\n\nYes, you're correct, we can't access buffers like this during\nrecovery. That's going to need some more effort.\n\n> Furthermore, why do we only allow tablespace to have its own SMGR implementation, can we have per-relation SMGR? Maybe we can do it in a way similar to custom RMGR (meaning, write SMGR OID into WAL and use it in crash recovery etc.)?\n\nAMs (and by extension, their RMGRs) that use Postgres' buffer pool\nhave control over how they want to layout their blocks and files, but\ngenerally don't care about where those blocks and files are located,\nas long as they _can_ be retrieved.\n\nTablespaces, however, describe 'drives' or 'storage pools' in which\nthe tables/relations are stored, which to me seems to be the more\nlogical place to configure the SMGR abstraction of how and where to\nstore the actual data, as SMGRs manage the low-level relation block IO\n(= file accesses), and tablespaces manage where files are stored.\n\nNote that nothing prevents you from using one tablespace (thus\ndifferent SMGR) per relation, apart from bloated catalogs and the\nsuperuser permissions required for creating those tablespaces. It'd be\ndifficult to manage, but not impossible.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 4 Dec 2023 20:21:12 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "On Mon, 4 Dec 2023 at 22:21, Matthias van de Meent <\[email protected]> wrote:\n\n> On Mon, 4 Dec 2023 at 17:51, Kirill Reshke <[email protected]> wrote:\n> >\n> > So, 0002 patch uses the `get_tablespace` function, which searches\n> Catalog to tablespace SMGR id. I wonder how `smgr_redo` would work with it?\n>\n> That's a very good point I hadn't considered in detail yet. Quite\n> clearly, the current code is wrong in assuming that the catalog is\n> accessible, and it should probably be stored in a way similar to\n> pg_filenode.map in a file managed outside the buffer pool.\n>\n> Hmm, pg_filenode.map is a nice idea. So, simply maintain TableSpaceOId ->\nsmgr id mapping in a separate file and update the whole file on any\nchanges, right?\nLooks reasonable to me, but it is clear that this solution can be really\nslow in some patterns, like if we create many-many tablespaces(the way you\nsuggested it in the per-relation SMGR feature). Maybe we can store data in\nfiles somehow separately, and only update one chunk per operation.\n\nAnyway, if we use a `pg_filenode.map` - like solution, we need to reuse its\ncode infrasture, right? For example, it seems that code that calculates\nchecksums can be reused.\nSo, we need to refactor code here, define something like FileMap API maybe.\nOr is it not really worth it? We can just write similar code twice.\n\nOn Mon, 4 Dec 2023 at 22:21, Matthias van de Meent <[email protected]> wrote:On Mon, 4 Dec 2023 at 17:51, Kirill Reshke <[email protected]> wrote:\n>\n> So, 0002 patch uses the `get_tablespace` function, which searches Catalog to tablespace SMGR id. I wonder how `smgr_redo` would work with it?\n\nThat's a very good point I hadn't considered in detail yet. Quite\nclearly, the current code is wrong in assuming that the catalog is\naccessible, and it should probably be stored in a way similar to\npg_filenode.map in a file managed outside the buffer pool.\nHmm, pg_filenode.map is a nice idea. So, simply maintain TableSpaceOId -> smgr id mapping in a separate file and update the whole file on any changes, right? Looks reasonable to me, but it is clear that this solution can be really slow in some patterns, like if we create many-many tablespaces(the way you suggested it in the per-relation SMGR feature). Maybe we can store data in files somehow separately, and only update one chunk per operation. Anyway, if we use a `pg_filenode.map` - like solution, we need to reuse its code infrasture, right? For example, it seems that code that calculates checksums can be reused. So, we need to refactor code here, define something like FileMap API maybe. Or is it not really worth it? We can just write similar code twice.",
"msg_date": "Tue, 5 Dec 2023 00:02:59 +0300",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "On Mon, 4 Dec 2023 at 22:03, Kirill Reshke <[email protected]> wrote:\n>\n> On Mon, 4 Dec 2023 at 22:21, Matthias van de Meent <[email protected]> wrote:\n>>\n>> On Mon, 4 Dec 2023 at 17:51, Kirill Reshke <[email protected]> wrote:\n>> >\n>> > So, 0002 patch uses the `get_tablespace` function, which searches Catalog to tablespace SMGR id. I wonder how `smgr_redo` would work with it?\n>>\n>> That's a very good point I hadn't considered in detail yet. Quite\n>> clearly, the current code is wrong in assuming that the catalog is\n>> accessible, and it should probably be stored in a way similar to\n>> pg_filenode.map in a file managed outside the buffer pool.\n>>\n> Hmm, pg_filenode.map is a nice idea. So, simply maintain TableSpaceOId -> smgr id mapping in a separate file and update the whole file on any changes, right?\n> Looks reasonable to me, but it is clear that this solution can be really slow in some patterns, like if we create many-many tablespaces(the way you suggested it in the per-relation SMGR feature). Maybe we can store data in files somehow separately, and only update one chunk per operation.\n\nYes, but that's a later issue... I'm not sure many-many tablespaces is\nactually a good thing. There are already very few reasons to store\ntables in more than just the default tablespace. For temporary\nrelations, there is indeed a guc to automatically put them into one\ntablespace; and I can see a similar thing being useful for temporary\nrelations, too. Then there I can see high-performant local disks vs\nlower-performant (but cheaper) local disks also as something\nreasonable. But that only gets us to ~6 tablespaces, assuming separate\ntablespaces for each combination of (normal, temp, unlogged) * (fast,\ncheap). I'm not sure there are many other reasons to add tablespaces,\nlet alone making one for each table.\n\nNote that you can select which tablespace a table is stored in, so I\nsee very little reason to actually do something about large numbers of\ntablespaces being prohibitively expensive performance-wise.\n\nWhy do you want to have a whole new storage configuration for each of\nyour relations?\n\n> Anyway, if we use a `pg_filenode.map` - like solution, we need to reuse its code infrasture, right? For example, it seems that code that calculates checksums can be reused.\n> So, we need to refactor code here, define something like FileMap API maybe. Or is it not really worth it? We can just write similar code twice.\n\nI'm not sure about that. I really doubt we'll need things that are\nthat similar: right now, the tablespace->smgr mapping could be\nconsidered to be implied by the symlinks in /pg_tblspc/. Non-MD\ntablespaces could add a file <oid>.tblspc that detail their\nconfiguration, which would also fix the issue of spcoid->smgr mapping.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 4 Dec 2023 22:30:36 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Thought I would show off what is possible with this patchset.\n\nHeikki, a couple of months ago in our internal Slack, said:\n\n> [I would like] a debugging tool that checks that we're not missing any \n> fsyncs. I bumped into a few missing fsync bugs with unlogged tables \n> lately and a tool like that would've been very helpful.\n\nMy task was to create such a tool, and off I went. I started with the \nstorage manager extension patch that Matthias sent to the list last \nyear[0].\n\nAndres, in another thread[1], said:\n\n> I've been thinking that we need a validation layer for fsyncs, it's too hard\n> to get right without testing, and crash testing is not likel enough to catch\n> problems quickly / resource intensive.\n>\n> My thought was that we could keep a shared hash table of all files created /\n> dirtied at the fd layer, with the filename as key and the value the current\n> LSN. We'd delete files from it when they're fsynced. At checkpoints we go\n> through the list and see if there's any files from before the redo that aren't\n> yet fsynced. All of this in assert builds only, of course.\n\nI took this idea and ran with it. I call it the fsync_checker™️. It is an \nextension that prints relations that haven't been fsynced prior to \na CHECKPOINT. Note that this idea doesn't work in practice because \nrelations might not be fsynced, but they might be WAL-logged, like in \nthe case of createdb. See log_smgrcreate(). I can't think of an easy way \nto solve this problem looking at the codebase as it stands.\n\nHere is a description of the patches:\n\n0001:\n\nThis is essentially just the patch that Matthias posted earlier, but \nrebased and adjusted a little bit so storage managers can \"inherit\" from \nother storage managers.\n\n0002:\n\nThis is an extension of 0001, which allows for extensions to set \na global storage manager. This is pretty hacky, and if it was going to \nbe pulled into mainline, it would need some better protection. For \ninstance, only one extension should be able to set the global storage \nmanager. We wouldn't want extensions stepping over each other, etc.\n\n0003:\n\nAdds a hook for extensions to inspect a checkpoint before it actually \noccurs. The purpose for the fsync_checker is so that I can iterate over\nall the relations the extension tracks to find files that haven't been\nsynced prior to the completion of the checkpoint.\n\n0004:\n\nThis is the actual fsync_checker extension itself. It must be preloaded.\n\nHopefully this is a good illustration of how the initial patch could be \nused, even though it isn't perfect.\n\n[0]: https://www.postgresql.org/message-id/CAEze2WgMySu2suO_TLvFyGY3URa4mAx22WeoEicnK%3DPCNWEMrA%40mail.gmail.com\n[1]: https://www.postgresql.org/message-id/20220127182838.ba3434dp2pe5vcia%40alap3.anarazel.de\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Fri, 12 Jan 2024 13:57:22 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Hi,\n\n> Thought I would show off what is possible with this patchset.\n>\n> [...]\n\nJust wanted to let you know that cfbot doesn't seem to be entirely\nhappy with the patch [1]. Please consider submitting an updated\nversion.\n\nBest regards,\nAleksander Alekseev (wearing co-CFM hat)\n\n[1]: http://cfbot.cputube.org/\n\n\n",
"msg_date": "Fri, 15 Mar 2024 17:15:42 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
},
{
"msg_contents": "Hi,\n\nI reviewed the discussion and took a look at the patch sets. It seems\nlike many things are combined here. Based on the subject, I initially\nthought it aimed to provide the infrastructure to easily extend\nstorage managers. This would allow anyone to create their own storage\nmanagers using this infrastructure. While it addresses this, it also\nincludes additional features like fsync_checker, which I believe\nshould be a separate feature. Even though it might use the same\ninfrastructure, it appears to be a different functionality. I think we\nshould focus solely on providing the infrastructure here.\n\nWe need to decide on our approach—whether to use a hook-based method\nor a registration-based method—and I believe this requires further\ndiscussion.\n\nThe hook-based approach is simple and works well for anyone writing\ntheir own storage manager. However, it has its limitations as we can\neither use the default storage manager or a custom-built one for all\nthe work load, but we cannot choose between multiple storage managers.\nOn the other hand, the registration-based approach allows choosing\nbetween multiple storage managers based on the workload, though it\nrequires a lot of changes.\n\nAre we planning to support other storage managers in PostgreSQL in the\nnear future? If not, it is better to go with the hook-based approach.\nOtherwise, the registration-based approach is preferable as it offers\nmore flexibility to users and enhances PostgreSQL’s functionality.\n\nCould you please share your thoughts on this? Also, let me know if\nthis topic has already been discussed and if any conclusions were\nreached.\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\nOn Sat, Jan 13, 2024 at 1:27 AM Tristan Partin <[email protected]> wrote:\n>\n> Thought I would show off what is possible with this patchset.\n>\n> Heikki, a couple of months ago in our internal Slack, said:\n>\n> > [I would like] a debugging tool that checks that we're not missing any\n> > fsyncs. I bumped into a few missing fsync bugs with unlogged tables\n> > lately and a tool like that would've been very helpful.\n>\n> My task was to create such a tool, and off I went. I started with the\n> storage manager extension patch that Matthias sent to the list last\n> year[0].\n>\n> Andres, in another thread[1], said:\n>\n> > I've been thinking that we need a validation layer for fsyncs, it's too hard\n> > to get right without testing, and crash testing is not likel enough to catch\n> > problems quickly / resource intensive.\n> >\n> > My thought was that we could keep a shared hash table of all files created /\n> > dirtied at the fd layer, with the filename as key and the value the current\n> > LSN. We'd delete files from it when they're fsynced. At checkpoints we go\n> > through the list and see if there's any files from before the redo that aren't\n> > yet fsynced. All of this in assert builds only, of course.\n>\n> I took this idea and ran with it. I call it the fsync_checker™️. It is an\n> extension that prints relations that haven't been fsynced prior to\n> a CHECKPOINT. Note that this idea doesn't work in practice because\n> relations might not be fsynced, but they might be WAL-logged, like in\n> the case of createdb. See log_smgrcreate(). 
I can't think of an easy way\n> to solve this problem looking at the codebase as it stands.\n>\n> Here is a description of the patches:\n>\n> 0001:\n>\n> This is essentially just the patch that Matthias posted earlier, but\n> rebased and adjusted a little bit so storage managers can \"inherit\" from\n> other storage managers.\n>\n> 0002:\n>\n> This is an extension of 0001, which allows for extensions to set\n> a global storage manager. This is pretty hacky, and if it was going to\n> be pulled into mainline, it would need some better protection. For\n> instance, only one extension should be able to set the global storage\n> manager. We wouldn't want extensions stepping over each other, etc.\n>\n> 0003:\n>\n> Adds a hook for extensions to inspect a checkpoint before it actually\n> occurs. The purpose for the fsync_checker is so that I can iterate over\n> all the relations the extension tracks to find files that haven't been\n> synced prior to the completion of the checkpoint.\n>\n> 0004:\n>\n> This is the actual fsync_checker extension itself. It must be preloaded.\n>\n> Hopefully this is a good illustration of how the initial patch could be\n> used, even though it isn't perfect.\n>\n> [0]: https://www.postgresql.org/message-id/CAEze2WgMySu2suO_TLvFyGY3URa4mAx22WeoEicnK%3DPCNWEMrA%40mail.gmail.com\n> [1]: https://www.postgresql.org/message-id/20220127182838.ba3434dp2pe5vcia%40alap3.anarazel.de\n>\n> --\n> Tristan Partin\n> Neon (https://neon.tech)\n\n\n",
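To make the hook-based versus registration-based question raised above concrete, a rough sketch of the two shapes follows. None of these names come from the patches; smgr_register and smgr_lookup are invented for illustration. With a hook there is a single global callback table that an extension overwrites, while with registration several managers coexist and each tablespace (or relation) selects one by id.

    /* Hook style: a single global pointer; the last extension to set it wins. */
    typedef struct f_smgr f_smgr;          /* table of smgr callbacks */
    static const f_smgr *smgr_hook = NULL;

    /* Registration style: a small registry; callers pick a manager by id. */
    #define MAX_SMGRS 8

    static struct
    {
        const char    *name;
        const f_smgr  *callbacks;
    } smgr_registry[MAX_SMGRS];
    static int nsmgrs = 0;

    static int
    smgr_register(const char *name, const f_smgr *callbacks)
    {
        if (nsmgrs >= MAX_SMGRS)
            return -1;
        smgr_registry[nsmgrs].name = name;
        smgr_registry[nsmgrs].callbacks = callbacks;
        return nsmgrs++;            /* the id handed back to callers */
    }

    static const f_smgr *
    smgr_lookup(int smgrid)
    {
        return (smgrid >= 0 && smgrid < nsmgrs) ? smgr_registry[smgrid].callbacks : NULL;
    }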
"msg_date": "Sat, 21 Sep 2024 23:54:58 +0530",
"msg_from": "Nitin Jadhav <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Extensible storage manager API - SMGR hook Redux"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen I read the documents and coding of SPI, [1]\nI found that the following the SPI_start_transaction does not support\ntransaciton_mode(ISOLATION LEVEL, READ WRITE/READ ONLY) like BEGIN \ncommand. [2]\nIs there a reason for this?\n\nI would like to be able to set transaciton_mode in \nSPI_start_transaction.\nWhat do you think?\n\n[1]\nhttps://www.postgresql.org/docs/devel/spi-spi-start-transaction.html\n\n[2]\nhttps://www.postgresql.org/docs/devel/sql-begin.html\n\nRegards,\n-- \nSeino Yuki\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 30 Jun 2023 23:15:54 +0900",
"msg_from": "Seino Yuki <[email protected]>",
"msg_from_op": true,
"msg_subject": "SPI isolation changes"
},
{
"msg_contents": "On 30/06/2023 17:15, Seino Yuki wrote:\n> Hi,\n> \n> When I read the documents and coding of SPI, [1]\n> I found that the following the SPI_start_transaction does not support\n> transaciton_mode(ISOLATION LEVEL, READ WRITE/READ ONLY) like BEGIN\n> command. [2]\n> Is there a reason for this?\n\nPer the documentation for SPI_start_transaction that you linked to:\n\n\"SPI_start_transaction does nothing, and exists only for code \ncompatibility with earlier PostgreSQL releases.\"\n\nI haven't tested it, but perhaps you can do \"SET TRANSACTION ISOLATION \nLEVEL\" in the new transaction after calling SPI_commit() though. Or \"SET \nDEFAULT TRANSACTION ISOLATION LEVEL\" before committing.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:26:57 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI isolation changes"
},
{
"msg_contents": "Thanks for the reply!\n\nOn 2023-06-30 23:26, Heikki Linnakangas wrote:\n> On 30/06/2023 17:15, Seino Yuki wrote:\n>> Hi,\n>> \n>> When I read the documents and coding of SPI, [1]\n>> I found that the following the SPI_start_transaction does not support\n>> transaciton_mode(ISOLATION LEVEL, READ WRITE/READ ONLY) like BEGIN\n>> command. [2]\n>> Is there a reason for this?\n> \n> Per the documentation for SPI_start_transaction that you linked to:\n> \n> \"SPI_start_transaction does nothing, and exists only for code\n> compatibility with earlier PostgreSQL releases.\"\n> \n> I haven't tested it, but perhaps you can do \"SET TRANSACTION ISOLATION\n> LEVEL\" in the new transaction after calling SPI_commit() though. Or\n> \"SET DEFAULT TRANSACTION ISOLATION LEVEL\" before committing.\n\nI understand that too.\nHowever, I thought SPI_start_transaction was the function equivalent to \nBEGIN (or START TRANSACTION).\nTherefore, I did not understand why the same option could not be \nspecified.\n\nI also thought that using SPI_start_transaction would be more readable\nthan using SPI_commit/SPI_rollback to implicitly start a transaction.\nWhat do you think?\n\nRegards,\n-- \nSeino Yuki\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 30 Jun 2023 23:56:48 +0900",
"msg_from": "Seino Yuki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SPI isolation changes"
},
{
"msg_contents": "Seino Yuki <[email protected]> writes:\n> I also thought that using SPI_start_transaction would be more readable\n> than using SPI_commit/SPI_rollback to implicitly start a transaction.\n> What do you think?\n\nI think you're trying to get us to undo commit 2e517818f, which\nis not going to happen. See the threads that led up to that:\n\nDiscussion: https://postgr.es/m/[email protected]\nDiscussion: https://postgr.es/m/[email protected]\n\nIt looks to me like you can just change the transaction property\nsettings immediately after SPI_start_transaction if you want to.\nCompare this bit in SnapBuildExportSnapshot:\n\n\tStartTransactionCommand();\n\n\t/* There doesn't seem to a nice API to set these */\n\tXactIsoLevel = XACT_REPEATABLE_READ;\n\tXactReadOnly = true;\n\nAlso look at the implementation of SPI_commit_and_chain,\nparticularly RestoreTransactionCharacteristics.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 11:06:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI isolation changes"
},
{
"msg_contents": "On 2023-07-01 00:06, Tom Lane wrote:\n> Seino Yuki <[email protected]> writes:\n>> I also thought that using SPI_start_transaction would be more readable\n>> than using SPI_commit/SPI_rollback to implicitly start a transaction.\n>> What do you think?\n> \n> I think you're trying to get us to undo commit 2e517818f, which\n> is not going to happen. See the threads that led up to that:\n> \n> Discussion:\n> https://postgr.es/m/[email protected]\n> Discussion: https://postgr.es/m/[email protected]\n> \n> It looks to me like you can just change the transaction property\n> settings immediately after SPI_start_transaction if you want to.\n> Compare this bit in SnapBuildExportSnapshot:\n> \n> \tStartTransactionCommand();\n> \n> \t/* There doesn't seem to a nice API to set these */\n> \tXactIsoLevel = XACT_REPEATABLE_READ;\n> \tXactReadOnly = true;\n> \n> Also look at the implementation of SPI_commit_and_chain,\n> particularly RestoreTransactionCharacteristics.\n> \n> \t\t\tregards, tom lane\n\nThanks for sharing past threads.\nI was understand how SPI_start_transaction went no-operation.\n\nI also understand how to set the transaction property.\nHowever, it was a little disappointing that the transaction property\ncould not be changed only by SPI commands.\n\nOf course, executing SET TRANSACTION ISOLATION LEVEL with SPI_execute \nwill result in error.\n---\nSPI_execute(\"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\", false, 0);\n\n(Log Output)\nERROR: SET TRANSACTION ISOLATION LEVEL must be called before any query\nCONTEXT: SQL statement \"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\"\n---\n\nThanks for answering.\n\n\n",
"msg_date": "Sat, 01 Jul 2023 01:21:35 +0900",
"msg_from": "Seino Yuki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SPI isolation changes"
},
{
"msg_contents": "Seino Yuki <[email protected]> writes:\n> Of course, executing SET TRANSACTION ISOLATION LEVEL with SPI_execute \n> will result in error.\n> ---\n> SPI_execute(\"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\", false, 0);\n\n> (Log Output)\n> ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any query\n> CONTEXT: SQL statement \"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\"\n\nEven if you just did SPI_commit? That *should* fail if you just do\nit right off the bat in a SPI-using procedure, because you're already\nwithin the transaction that called the procedure. But I think it\nwill work if you do SPI_commit followed by this SPI_execute.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 12:47:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SPI isolation changes"
},
{
"msg_contents": "On 2023-07-01 01:47, Tom Lane wrote:\n> Seino Yuki <[email protected]> writes:\n>> Of course, executing SET TRANSACTION ISOLATION LEVEL with SPI_execute\n>> will result in error.\n>> ---\n>> SPI_execute(\"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\", false, 0);\n> \n>> (Log Output)\n>> ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any \n>> query\n>> CONTEXT: SQL statement \"SET TRANSACTION ISOLATION LEVEL SERIALIZABLE\"\n> \n> Even if you just did SPI_commit? That *should* fail if you just do\n> it right off the bat in a SPI-using procedure, because you're already\n> within the transaction that called the procedure. But I think it\n> will work if you do SPI_commit followed by this SPI_execute.\n> \n> \t\t\tregards, tom lane\n\nI'm sorry. I understood wrongly.\nSPI_execute(SET TRANSACTION ISOLATION LEVEL ~ ) after executing \nSPI_commit succeeded.\n\nThank you. My problem is solved.\n\nRegards,\n-- \nSeino Yuki\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 01 Jul 2023 20:31:39 +0900",
"msg_from": "Seino Yuki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SPI isolation changes"
}
] |
[
{
"msg_contents": "I spotted this comment in walsender.c:\n\n> \t/*-------\n> \t * When reading from a historic timeline, and there is a timeline switch\n> \t * [.. long comment omitted ...]\n> \t * portion of the old segment is copied to the new file. -------\n> \t */\n\nNote the bogus dashes at the end. This was introduced in commit \n0dc8ead463, which moved and reformatted the comment. It's supposed to \nend the pgindent-guard at the beginning of the comment like this:\n\n> \t/*-------\n> \t * When reading from a historic timeline, and there is a timeline switch\n> \t * [.. long comment omitted ...]\n> \t * portion of the old segment is copied to the new file.\n> \t *-------\n> \t */\n\nBut that got me wondering, do we care about those end-guards? pgindent \ndoesn't need them. We already have a bunch of comments that don't have \nthe end-guard, for example:\n\nanalyze.c:\n> \t\t\t\t/*------\n> \t\t\t\t translator: %s is a SQL row locking clause such as FOR UPDATE */\n\ngistproc.c:\n> \t\t/*----\n> \t\t * The goal is to form a left and right interval, so that every entry\n> \t\t * interval is contained by either left or right interval (or both).\n> \t\t *\n> \t\t * For example, with the intervals (0,1), (1,3), (2,3), (2,4):\n> \t\t *\n> \t\t * 0 1 2 3 4\n> \t\t * +-+\n> \t\t *\t +---+\n> \t\t *\t +-+\n> \t\t *\t +---+\n> \t\t *\n> \t\t * The left and right intervals are of the form (0,a) and (b,4).\n> \t\t * We first consider splits where b is the lower bound of an entry.\n> \t\t * We iterate through all entries, and for each b, calculate the\n> \t\t * smallest possible a. Then we consider splits where a is the\n> \t\t * upper bound of an entry, and for each a, calculate the greatest\n> \t\t * possible b.\n> \t\t *\n> \t\t * In the above example, the first loop would consider splits:\n> \t\t * b=0: (0,1)-(0,4)\n> \t\t * b=1: (0,1)-(1,4)\n> \t\t * b=2: (0,3)-(2,4)\n> \t\t *\n> \t\t * And the second loop:\n> \t\t * a=1: (0,1)-(1,4)\n> \t\t * a=3: (0,3)-(2,4)\n> \t\t * a=4: (0,4)-(2,4)\n> \t\t */\n\npredicate.c:\n> \t\t/*----------\n> \t\t * The SLRU is no longer needed. Truncate to head before we set head\n> \t\t * invalid.\n> \t\t *\n> \t\t * XXX: It's possible that the SLRU is not needed again until XID\n> \t\t * wrap-around has happened, so that the segment containing headPage\n> \t\t * that we leave behind will appear to be new again. In that case it\n> \t\t * won't be removed until XID horizon advances enough to make it\n> \t\t * current again.\n> \t\t *\n> \t\t * XXX: This should happen in vac_truncate_clog(), not in checkpoints.\n> \t\t * Consider this scenario, starting from a system with no in-progress\n> \t\t * transactions and VACUUM FREEZE having maximized oldestXact:\n> \t\t * - Start a SERIALIZABLE transaction.\n> \t\t * - Start, finish, and summarize a SERIALIZABLE transaction, creating\n> \t\t * one SLRU page.\n> \t\t * - Consume XIDs to reach xidStopLimit.\n> \t\t * - Finish all transactions. Due to the long-running SERIALIZABLE\n> \t\t * transaction, earlier checkpoints did not touch headPage. The\n> \t\t * next checkpoint will change it, but that checkpoint happens after\n> \t\t * the end of the scenario.\n> \t\t * - VACUUM to advance XID limits.\n> \t\t * - Consume ~2M XIDs, crossing the former xidWrapLimit.\n> \t\t * - Start, finish, and summarize a SERIALIZABLE transaction.\n> \t\t * SerialAdd() declines to create the targetPage, because headPage\n> \t\t * is not regarded as in the past relative to that targetPage. 
The\n> \t\t * transaction instigating the summarize fails in\n> \t\t * SimpleLruReadPage().\n> \t\t */\nindexcmds.c:\n> \t/*-----\n> \t * Now we have all the indexes we want to process in indexIds.\n> \t *\n> \t * The phases now are:\n> \t *\n> \t * 1. create new indexes in the catalog\n> \t * 2. build new indexes\n> \t * 3. let new indexes catch up with tuples inserted in the meantime\n> \t * 4. swap index names\n> \t * 5. mark old indexes as dead\n> \t * 6. drop old indexes\n> \t *\n> \t * We process each phase for all indexes before moving to the next phase,\n> \t * for efficiency.\n> \t */\n\n\nExcept for the translator comments, I think those others forgot about \nthe end-guards by accident. But they look just as nice to me. It's \nprobably not worth the code churn to remove them from existing comments, \nbut how about we stop requiring them in new code, and update the \npgindent README accordingly?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:07:16 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "On /*----- comments"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Except for the translator comments, I think those others forgot about \n> the end-guards by accident. But they look just as nice to me. It's \n> probably not worth the code churn to remove them from existing comments, \n> but how about we stop requiring them in new code, and update the \n> pgindent README accordingly?\n\nSeems reasonable; the trailing dashes eat a line without adding much.\n\nShould we also provide specific guidance about how many leading dashes\nto use for this? I vaguely recall that pgindent might only need one,\nbut I think using somewhere around 5 to 10 looks better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 11:22:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On /*----- comments"
},
{
"msg_contents": "> On 30 Jun 2023, at 17:22, Tom Lane <[email protected]> wrote:\n\n> Seems reasonable; the trailing dashes eat a line without adding much.\n\n+1\n\n> Should we also provide specific guidance about how many leading dashes\n> to use for this? I vaguely recall that pgindent might only need one,\n> but I think using somewhere around 5 to 10 looks better.\n\nThere are ~50 different lenghts used when looking at block comments from line 2\n(to avoid the file header comment) and onwards in files, the ones with 10 or\nmore occurrences are:\n\n 145 /*----------\n 78 /*------\n 76 /*-------------------------------------------------------------------------\n 37 /*----------------------------------------------------------\n 29 /*------------------------\n 23 /*----------------------------------------------------------------\n 22 /*--------------------\n 21 /*----\n 15 /*---------------------------------------------------------------------\n 14 /*--\n 13 /*-------------------------------------------------------\n 13 /*---\n 12 /*----------------------\n\n10 leading dashes is the clear winner so recommending that for new/edited\ncomments seem like a good way to reduce churn.\n\nLooking at line 1 comments for fun shows pretty strong consistency:\n\n1611 /*-------------------------------------------------------------------------\n 22 /*--------------------------------------------------------------------------\n 18 /*------------------------------------------------------------------------\n 13 /*--------------------------------------------------------------------\n 7 /*---------------------------------------------------------------------------\n 4 /*-----------------------------------------------------------------------\n 4 /*----------------------------------------------------------------------\n 1 /*--------------------------\n\nplpy_util.h being the only one that sticks out.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 10:48:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On /*----- comments"
},
{
"msg_contents": "On 03/07/2023 11:48, Daniel Gustafsson wrote:\n>> On 30 Jun 2023, at 17:22, Tom Lane <[email protected]> wrote:\n> \n>> Seems reasonable; the trailing dashes eat a line without adding much.\n> \n> +1\n\nPushed a patch to remove the end-guard from the example in the pgindent \nREADME. And fixed the bogus end-guard in walsender.c.\n\n>> Should we also provide specific guidance about how many leading dashes\n>> to use for this? I vaguely recall that pgindent might only need one,\n>> but I think using somewhere around 5 to 10 looks better.\n> \n> There are ~50 different lenghts used when looking at block comments from line 2\n> (to avoid the file header comment) and onwards in files, the ones with 10 or\n> more occurrences are:\n> \n> 145 /*----------\n> 78 /*------\n> 76 /*-------------------------------------------------------------------------\n> 37 /*----------------------------------------------------------\n> 29 /*------------------------\n> 23 /*----------------------------------------------------------------\n> 22 /*--------------------\n> 21 /*----\n> 15 /*---------------------------------------------------------------------\n> 14 /*--\n> 13 /*-------------------------------------------------------\n> 13 /*---\n> 12 /*----------------------\n> \n> 10 leading dashes is the clear winner so recommending that for new/edited\n> comments seem like a good way to reduce churn.\n\nThe example in the pgindent README also uses 10 dashes.\n\nI'm not sure there is a universal best length. It depends on the comment \nwhat looks best. The very long ones in particular would not look good on \ncomments in a deeply indented block. So I think the status quo is fine.\n\n> Looking at line 1 comments for fun shows pretty strong consistency:\n> \n> 1611 /*-------------------------------------------------------------------------\n> 22 /*--------------------------------------------------------------------------\n> 18 /*------------------------------------------------------------------------\n> 13 /*--------------------------------------------------------------------\n> 7 /*---------------------------------------------------------------------------\n> 4 /*-----------------------------------------------------------------------\n> 4 /*----------------------------------------------------------------------\n> 1 /*--------------------------\n> \n> plpy_util.h being the only one that sticks out.\n\nI don't see any reason for the variance in these. Seems accidental..\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 20:26:55 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On /*----- comments"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 03/07/2023 11:48, Daniel Gustafsson wrote:\n> On 30 Jun 2023, at 17:22, Tom Lane <[email protected]> wrote:\n>>> Seems reasonable; the trailing dashes eat a line without adding much.\n\n>> +1\n\n> Pushed a patch to remove the end-guard from the example in the pgindent \n> README. And fixed the bogus end-guard in walsender.c.\n\nI don't see any actual push?\n\n> I'm not sure there is a universal best length. It depends on the comment \n> what looks best. The very long ones in particular would not look good on \n> comments in a deeply indented block. So I think the status quo is fine.\n\nOK, no strong feeling about that here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jul 2023 14:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On /*----- comments"
},
{
"msg_contents": "On 04/07/2023 21:36, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> Pushed a patch to remove the end-guard from the example in the pgindent\n>> README. And fixed the bogus end-guard in walsender.c.\n> \n> I don't see any actual push?\n\nForgot it after all. Pushed now, thanks.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 10:03:33 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On /*----- comments"
}
] |
[
{
"msg_contents": "I noticed that pgrowlocks will use different names for shared locks\ndepending on whether the locks are intermediated by a multixact or not.\nParticularly, if a single transaction has locked a row, it may return \"For\nKey Share\" or \"For Share\" in the \"modes\" array, while if multiple\ntransactions have locked a row, it may return \"Key Share\" or \"Share\". The\ndocumentation of the pgrowlocks function only lists \"Key Share\" and \"Share\"\nas possible modes. (The four exclusive lock modes use the correct names in\nboth cases)\n\nThe attached patch (against the master branch) fixes this discrepancy, by\nusing \"Key Share\" and \"Share\" in the single transaction case, since that\nmatches the documentation. I also updated the test's expected output so it\npasses again.\n\nThanks,\n--David Cook",
"msg_date": "Fri, 30 Jun 2023 10:45:57 -0500",
"msg_from": "David Cook <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] pgrowlocks: Make mode names consistent with docs"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 10:45:57AM -0500, David Cook wrote:\n> I noticed that pgrowlocks will use different names for shared locks depending\n> on whether the locks are intermediated by a multixact or not. Particularly, if\n> a single transaction has locked a row, it may return \"For Key Share\" or \"For\n> Share\" in the \"modes\" array, while if multiple transactions have locked a row,\n> it may return \"Key Share\" or \"Share\". The documentation of the pgrowlocks\n> function only lists \"Key Share\" and \"Share\" as possible modes. (The four\n> exclusive lock modes use the correct names in both cases)\n> \n> The attached patch (against the master branch) fixes this discrepancy, by using\n> \"Key Share\" and \"Share\" in the single transaction case, since that matches the\n> documentation. I also updated the test's expected output so it passes again.\n\nYou are right something is wrong. However, I looked at your patch and I\nam thinking we need to go the other way and add \"For\" in the upper\nblock, rather than removing it in the lower one. I have two reasons. \nLooking at the code block:\n\n case MultiXactStatusUpdate:\n\tsnprintf(buf, NCHARS, \"Update\");\n\tbreak;\n case MultiXactStatusNoKeyUpdate:\n\tsnprintf(buf, NCHARS, \"No Key Update\");\n\tbreak;\n case MultiXactStatusForUpdate:\n\tsnprintf(buf, NCHARS, \"For Update\");\n\tbreak;\n case MultiXactStatusForNoKeyUpdate:\n\tsnprintf(buf, NCHARS, \"For No Key Update\");\n\tbreak;\n case MultiXactStatusForShare:\n\tsnprintf(buf, NCHARS, \"Share\");\n\tbreak;\n case MultiXactStatusForKeyShare:\n\tsnprintf(buf, NCHARS, \"Key Share\");\n\tbreak;\n\nYou will notice there are \"For\" and non-\"For\" versions of \"Update\" and\n\"No Key Update\". Notice that \"For\" appears in the macro names for the\n\"For\" macro versions of update, but \"For\" does not appear in the \"Share\"\nand \"Key Share\" versions, though the macro has \"For\".\n\nSecond, notice that the \"For\" and non-\"For\" match in the block below\nthat, which suggests it is correct, and the non-\"For\" block is later:\n\n values[Atnum_modes] = palloc(NCHARS);\n if (infomask & HEAP_XMAX_LOCK_ONLY)\n {\n\tif (HEAP_XMAX_IS_SHR_LOCKED(infomask))\n\tsnprintf(values[Atnum_modes], NCHARS, \"{For Share}\");\n\telse if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask))\n\tsnprintf(values[Atnum_modes], NCHARS, \"{For Key Share}\");\n\telse if (HEAP_XMAX_IS_EXCL_LOCKED(infomask))\n\t{\n\t if (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)\n\t\tsnprintf(values[Atnum_modes], NCHARS, \"{For Update}\");\n\t else\n\t\tsnprintf(values[Atnum_modes], NCHARS, \"{For No Key Update}\");\n\t}\n\telse\n\t/* neither keyshare nor exclusive bit it set */\n\tsnprintf(values[Atnum_modes], NCHARS,\n\t\t \"{transient upgrade status}\");\n }\n else\n {\n\tif (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)\n\tsnprintf(values[Atnum_modes], NCHARS, \"{Update}\");\n\telse\n\tsnprintf(values[Atnum_modes], NCHARS, \"{No Key Update}\");\n }\n\nI therefore suggest this attached patch, which should be marked as an\nincompatibility in PG 17.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 7 Sep 2023 12:58:29 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgrowlocks: Make mode names consistent with docs"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 12:58:29PM -0400, Bruce Momjian wrote:\n> You are right something is wrong. However, I looked at your patch and I\n> am thinking we need to go the other way and add \"For\" in the upper\n> block, rather than removing it in the lower one. I have two reasons. \n> Looking at the code block:\n> \n> case MultiXactStatusUpdate:\n> \tsnprintf(buf, NCHARS, \"Update\");\n> \tbreak;\n> case MultiXactStatusNoKeyUpdate:\n> \tsnprintf(buf, NCHARS, \"No Key Update\");\n> \tbreak;\n> case MultiXactStatusForUpdate:\n> \tsnprintf(buf, NCHARS, \"For Update\");\n> \tbreak;\n> case MultiXactStatusForNoKeyUpdate:\n> \tsnprintf(buf, NCHARS, \"For No Key Update\");\n> \tbreak;\n> case MultiXactStatusForShare:\n> \tsnprintf(buf, NCHARS, \"Share\");\n> \tbreak;\n> case MultiXactStatusForKeyShare:\n> \tsnprintf(buf, NCHARS, \"Key Share\");\n> \tbreak;\n> \n> You will notice there are \"For\" and non-\"For\" versions of \"Update\" and\n> \"No Key Update\". Notice that \"For\" appears in the macro names for the\n> \"For\" macro versions of update, but \"For\" does not appear in the \"Share\"\n> and \"Key Share\" versions, though the macro has \"For\".\n> \n> Second, notice that the \"For\" and non-\"For\" match in the block below\n> that, which suggests it is correct, and the non-\"For\" block is later:\n> \n> values[Atnum_modes] = palloc(NCHARS);\n> if (infomask & HEAP_XMAX_LOCK_ONLY)\n> {\n> \tif (HEAP_XMAX_IS_SHR_LOCKED(infomask))\n> \tsnprintf(values[Atnum_modes], NCHARS, \"{For Share}\");\n> \telse if (HEAP_XMAX_IS_KEYSHR_LOCKED(infomask))\n> \tsnprintf(values[Atnum_modes], NCHARS, \"{For Key Share}\");\n> \telse if (HEAP_XMAX_IS_EXCL_LOCKED(infomask))\n> \t{\n> \t if (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)\n> \t\tsnprintf(values[Atnum_modes], NCHARS, \"{For Update}\");\n> \t else\n> \t\tsnprintf(values[Atnum_modes], NCHARS, \"{For No Key Update}\");\n> \t}\n> \telse\n> \t/* neither keyshare nor exclusive bit it set */\n> \tsnprintf(values[Atnum_modes], NCHARS,\n> \t\t \"{transient upgrade status}\");\n> }\n> else\n> {\n> \tif (tuple->t_data->t_infomask2 & HEAP_KEYS_UPDATED)\n> \tsnprintf(values[Atnum_modes], NCHARS, \"{Update}\");\n> \telse\n> \tsnprintf(values[Atnum_modes], NCHARS, \"{No Key Update}\");\n> }\n> \n> I therefore suggest this attached patch, which should be marked as an\n> incompatibility in PG 17.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Sep 2023 17:42:13 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgrowlocks: Make mode names consistent with docs"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed somewhat at PgCon, enclosed is v1 of a patch to provide\nvariable block sizes; basically instead of BLCKSZ being a compile-time\nconstant, a single set of binaries can support all of the block sizes\nPg can support, using the value stored in pg_control as the basis.\n(Possible future plans would be to make this something even more\ndynamic, such as configured per tablespace, but this is out of scope;\nthis just sets up the infrastructure for this.)\n\nWhereas we had traditionally used BLCKSZ to indicate the compile-time selected\nblock size, this commit adjusted things so the cluster block size can be\nselected at initdb time.\n\nIn order to code for this, we introduce a few new defines:\n\n- CLUSTER_BLOCK_SIZE is the blocksize for this cluster itself. This is not\nvalid until BlockSizeInit() has been called in the given backend, which we do as\nearly as possible by parsing the ControlFile and using the blcksz field.\n\n- MIN_BLOCK_SIZE and MAX_BLOCK_SIZE are the limits for the selectable block\nsize. It is required that CLUSTER_BLOCK_SIZE is a power of 2 between these two\nconstants.\n\n- DEFAULT_BLOCK_SIZE is the moral equivalent of BLCKSZ; it is the built-in\ndefault value. This is used in a few places that just needed a buffer of an\narbitrary size, but the dynamic value CLUSTER_BLOCK_SIZE should almost always be\nused instead.\n\n- CLUSTER_RELSEG_SIZE is used instead of RELSEG_SIZE, since internally we are\nstoring the segment size in terms of number of blocks. RELSEG_SIZE is still\nkept, but is used in terms of the number of blocks of DEFAULT_BLOCK_SIZE;\nCLUSTER_RELSEG_SIZE scales appropriately (and is the only thing used internally)\nto keep the same target total segment size regardless of block size.\n\nThis patch uses a precalculated table to store the block size itself, as well as\nadditional derived values that have traditionally been compile-time\nconstants (example: MaxHeapTuplesPerPage). The traditional macro names are kept\nso code that doesn't care about it should not need to change, however the\ndefinition of these has changed (see the CalcXXX() routines in blocksize.h for\ndetails).\n\nA new function, BlockSizeInit() populates the appropriate values based on the\ntarget block size. This should be called as early as possible in any code that\nutilizes block sizes. This patch adds this in the appropriate place on the\nhandful of src/bin/ programs that used BLCKSZ, so this caveat mainly impacts new\ncode.\n\nCode which had previously used BLCKZ should likely be able to get away with\nchanging these instances to CLUSTER_BLOCK_SIZE, unless you're using a structure\nallocated on the stack. In these cases, the compiler will complain about\ndynamic structure. The solution is to utilize an expression with MAX_BLOCK_SIZE\ninstead of BLCKSZ, ensuring enough stack space is allocated for the maximum\nsize. This also does require using CLUSTER_BLOCK_SIZE or an expression based on\nit when actually using this structure, so in practice more stack space may be\nallocated then used in principal; as long as there is plenty of stack this\nshould have no specific impacts on code.\n\nInitial (basic) performance testing shows only minor changes with the pgbench -S\nbenchmark, though this is obviously an area that will need considerable\ntesting/verification across multiple workloads.\n\nThanks,\n\nDavid",
"msg_date": "Fri, 30 Jun 2023 12:35:09 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Initdb-time block size specification"
},
{
"msg_contents": "On 6/30/23 19:35, David Christensen wrote:\n> Hi,\n> \n> As discussed somewhat at PgCon, enclosed is v1 of a patch to provide\n> variable block sizes; basically instead of BLCKSZ being a compile-time\n> constant, a single set of binaries can support all of the block sizes\n> Pg can support, using the value stored in pg_control as the basis.\n> (Possible future plans would be to make this something even more\n> dynamic, such as configured per tablespace, but this is out of scope;\n> this just sets up the infrastructure for this.)\n> \n> Whereas we had traditionally used BLCKSZ to indicate the compile-time selected\n> block size, this commit adjusted things so the cluster block size can be\n> selected at initdb time.\n> \n> In order to code for this, we introduce a few new defines:\n> \n> - CLUSTER_BLOCK_SIZE is the blocksize for this cluster itself. This is not\n> valid until BlockSizeInit() has been called in the given backend, which we do as\n> early as possible by parsing the ControlFile and using the blcksz field.\n> \n> - MIN_BLOCK_SIZE and MAX_BLOCK_SIZE are the limits for the selectable block\n> size. It is required that CLUSTER_BLOCK_SIZE is a power of 2 between these two\n> constants.\n> \n> - DEFAULT_BLOCK_SIZE is the moral equivalent of BLCKSZ; it is the built-in\n> default value. This is used in a few places that just needed a buffer of an\n> arbitrary size, but the dynamic value CLUSTER_BLOCK_SIZE should almost always be\n> used instead.\n> \n> - CLUSTER_RELSEG_SIZE is used instead of RELSEG_SIZE, since internally we are\n> storing the segment size in terms of number of blocks. RELSEG_SIZE is still\n> kept, but is used in terms of the number of blocks of DEFAULT_BLOCK_SIZE;\n> CLUSTER_RELSEG_SIZE scales appropriately (and is the only thing used internally)\n> to keep the same target total segment size regardless of block size.\n> \n\nDo we really want to prefix the option with CLUSTER_? That seems to just\nadd a lot of noise into the patch, and I don't see much value in this\nrename. I'd prefer keeping BLCKSZ and tweak just the couple places that\nneed \"default\" to use BLCKSZ_DEFAULT or something like that.\n\nBut more importantly, I'd say we use CAPITALIZED_NAMES for compile-time\nvalues, so after making this initdb-time parameter we should abandon\nthat (just like fc49e24fa69a did for segment sizes). Perhaps something\nlike cluste_block_size would work ... (yes, that touches all the places\ntoo).\n\n> This patch uses a precalculated table to store the block size itself, as well as\n> additional derived values that have traditionally been compile-time\n> constants (example: MaxHeapTuplesPerPage). The traditional macro names are kept\n> so code that doesn't care about it should not need to change, however the\n> definition of these has changed (see the CalcXXX() routines in blocksize.h for\n> details).\n> \n> A new function, BlockSizeInit() populates the appropriate values based on the\n> target block size. This should be called as early as possible in any code that\n> utilizes block sizes. This patch adds this in the appropriate place on the\n> handful of src/bin/ programs that used BLCKSZ, so this caveat mainly impacts new\n> code.\n> \n> Code which had previously used BLCKZ should likely be able to get away with\n> changing these instances to CLUSTER_BLOCK_SIZE, unless you're using a structure\n> allocated on the stack. In these cases, the compiler will complain about\n> dynamic structure. 
The solution is to utilize an expression with MAX_BLOCK_SIZE\n> instead of BLCKSZ, ensuring enough stack space is allocated for the maximum\n> size. This also does require using CLUSTER_BLOCK_SIZE or an expression based on\n> it when actually using this structure, so in practice more stack space may be\n> allocated then used in principal; as long as there is plenty of stack this\n> should have no specific impacts on code.\n> \n> Initial (basic) performance testing shows only minor changes with the pgbench -S\n> benchmark, though this is obviously an area that will need considerable\n> testing/verification across multiple workloads.\n> \n\nI wonder how to best evaluate/benchmark this. AFAICS what we want to\nmeasure is the extra cost of making the values dynamic (which means the\ncompiler can't just optimize them out). I'd say a \"pgbench -S\" seems\nlike a good test.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 30 Jun 2023 20:14:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 1:14 PM Tomas Vondra\n<[email protected]> wrote:\n\n> Do we really want to prefix the option with CLUSTER_? That seems to just\n> add a lot of noise into the patch, and I don't see much value in this\n> rename. I'd prefer keeping BLCKSZ and tweak just the couple places that\n> need \"default\" to use BLCKSZ_DEFAULT or something like that.\n>\n> But more importantly, I'd say we use CAPITALIZED_NAMES for compile-time\n> values, so after making this initdb-time parameter we should abandon\n> that (just like fc49e24fa69a did for segment sizes). Perhaps something\n> like cluste_block_size would work ... (yes, that touches all the places\n> too).\n\nYes, I can see that being an equivalent change; thanks for the pointer\nthere. Definitely the \"cluster_block_size\" could be an approach,\nthough since it's just currently a #define for GetBlockSize(), maybe\nwe just replace with the equivalent instead. I was mainly trying to\nmake it something that was conceptually similar and easy to reason\nabout without getting bogged down in the details locally, but can see\nthat ALL_CAPS does have a specific meaning. Also eliminating the\nBLCKSZ symbol meant it was easier to catch anything which depended on\nthat value. If we wanted to keep BLCKSZ, I'd be okay with that at\nthis point vs the CLUSTER_BLOCK_SIZE approach, could help to make the\npatch smaller at this point.\n\n> > Initial (basic) performance testing shows only minor changes with the pgbench -S\n> > benchmark, though this is obviously an area that will need considerable\n> > testing/verification across multiple workloads.\n> >\n>\n> I wonder how to best evaluate/benchmark this. AFAICS what we want to\n> measure is the extra cost of making the values dynamic (which means the\n> compiler can't just optimize them out). I'd say a \"pgbench -S\" seems\n> like a good test.\n\nYep, I tested 100 runs apiece with pgbench -S at scale factor 100,\ndefault settings for optimized builds of the same base commit with and\nwithout the patch; saw a reduction of TPS around 1% in that case, but\nI do think we'd want to look at different workloads; I presume the\nlargest impact would be seen when it's all in shared_buffers and no IO\nis required.\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 14:09:55 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 12:35:09 -0500, David Christensen wrote:\n> As discussed somewhat at PgCon, enclosed is v1 of a patch to provide\n> variable block sizes; basically instead of BLCKSZ being a compile-time\n> constant, a single set of binaries can support all of the block sizes\n> Pg can support, using the value stored in pg_control as the basis.\n> (Possible future plans would be to make this something even more\n> dynamic, such as configured per tablespace, but this is out of scope;\n> this just sets up the infrastructure for this.)\n\nI am extremely doubtful this is a good idea. For one it causes a lot of churn,\nbut more importantly it turns currently cheap code paths into more expensive\nones.\n\nChanges like\n\n> -#define BufHdrGetBlock(bufHdr)\t((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * BLCKSZ))\n> +#define BufHdrGetBlock(bufHdr)\t((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * CLUSTER_BLOCK_SIZE))\n\nNote that CLUSTER_BLOCK_SIZE, despite looking like a macro that's constant, is\nactually variable.\n\nI am fairly certain this is going to be causing substantial performance\nregressions. I think we should reject this even if we don't immediately find\nthem, because it's almost guaranteed to cause some.\n\n\nBesides this, I've not really heard any convincing justification for needing\nthis in the first place.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 12:39:42 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 2:39 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-30 12:35:09 -0500, David Christensen wrote:\n> > As discussed somewhat at PgCon, enclosed is v1 of a patch to provide\n> > variable block sizes; basically instead of BLCKSZ being a compile-time\n> > constant, a single set of binaries can support all of the block sizes\n> > Pg can support, using the value stored in pg_control as the basis.\n> > (Possible future plans would be to make this something even more\n> > dynamic, such as configured per tablespace, but this is out of scope;\n> > this just sets up the infrastructure for this.)\n>\n> I am extremely doubtful this is a good idea. For one it causes a lot of churn,\n> but more importantly it turns currently cheap code paths into more expensive\n> ones.\n>\n> Changes like\n>\n> > -#define BufHdrGetBlock(bufHdr) ((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * BLCKSZ))\n> > +#define BufHdrGetBlock(bufHdr) ((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * CLUSTER_BLOCK_SIZE))\n>\n> Note that CLUSTER_BLOCK_SIZE, despite looking like a macro that's constant, is\n> actually variable.\n\nCorrect; that is mainly a notational device which would be easy enough\nto change (and presumably would follow along the lines of the commit\nTomas pointed out above).\n\n> I am fairly certain this is going to be causing substantial performance\n> regressions. I think we should reject this even if we don't immediately find\n> them, because it's almost guaranteed to cause some.\n\nWhat would be considered substantial? Some overhead would be expected,\nbut I think having an actual patch to evaluate lets us see what\npotential there is. Seems like this will likely be optimized as an\noffset stored in a register, so wouldn't expect huge changes here.\n(There may be other approaches I haven't thought of in terms of\ngetting this.)\n\n> Besides this, I've not really heard any convincing justification for needing\n> this in the first place.\n\nDoing this would open up experiments in larger block sizes, so we\nwould be able to have larger indexable tuples, say, or be able to\nstore data types that are larger than currently supported for tuple\nrow limits without dropping to toast (native vector data types come to\nmind as a candidate here). We've had 8k blocks for a long time while\nhardware has improved over 20+ years, and it would be interesting to\nsee how tuning things would open up additional avenues for performance\nwithout requiring packagers to make a single choice on this regardless\nof use-case. (The fact that we allow compiling this at a different\nvalue suggests there is thought to be some utility having this be\nsomething other than the default value.)\n\nI just think it's one of those things that is hard to evaluate without\nactually having something specific, which is why we have this patch\nnow.\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:05:54 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 14:09:55 -0500, David Christensen wrote:\n> On Fri, Jun 30, 2023 at 1:14 PM Tomas Vondra\n> > I wonder how to best evaluate/benchmark this. AFAICS what we want to\n> > measure is the extra cost of making the values dynamic (which means the\n> > compiler can't just optimize them out). I'd say a \"pgbench -S\" seems\n> > like a good test.\n> \n> Yep, I tested 100 runs apiece with pgbench -S at scale factor 100,\n> default settings for optimized builds of the same base commit with and\n> without the patch; saw a reduction of TPS around 1% in that case, but\n> I do think we'd want to look at different workloads; I presume the\n> largest impact would be seen when it's all in shared_buffers and no IO\n> is required.\n\nI think pgbench -S indeed isn't a good workload - the overhead for it is much\nmore in context switches and instantiating executor state etc than code that\nis affected by the change.\n\nAnd indeed. Comparing e.g. TPC-H, I see *massive* regressions. Some queries\nare the same, sobut others regress by up to 70% (although more commonly around\n10-20%).\n\nThat's larger than I thought, which makes me suspect that there's some bug in\nthe new code.\n\nInterestingly, repeating the benchmark with a larger work_mem setting, the\nregressions are still quite present, but smaller. I suspect the planner\nchooses smarter plans which move bottlenecks more towards hashjoin code etc,\nwhich won't be affected by this change.\n\n\nIOW, you seriously need to evaluate analytics queries before this is worth\nlooking at further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 13:27:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 15:05:54 -0500, David Christensen wrote:\n> > I am fairly certain this is going to be causing substantial performance\n> > regressions. I think we should reject this even if we don't immediately find\n> > them, because it's almost guaranteed to cause some.\n> \n> What would be considered substantial? Some overhead would be expected,\n> but I think having an actual patch to evaluate lets us see what\n> potential there is.\n\nAnything beyond 1-2%, although even that imo is a hard sell.\n\n\n> > Besides this, I've not really heard any convincing justification for needing\n> > this in the first place.\n> \n> Doing this would open up experiments in larger block sizes, so we\n> would be able to have larger indexable tuples, say, or be able to\n> store data types that are larger than currently supported for tuple\n> row limits without dropping to toast (native vector data types come to\n> mind as a candidate here).\n\nYou can do experiments today with the compile time option. Which does not\nrequire regressing performance for everyone.\n\n\n> We've had 8k blocks for a long time while hardware has improved over 20+\n> years, and it would be interesting to see how tuning things would open up\n> additional avenues for performance without requiring packagers to make a\n> single choice on this regardless of use-case.\n\nI suspect you're going to see more benefits from going to a *lower* setting\nthan a higher one. Some practical issues aside, plenty of storage hardware\nthese days would allow to get rid of FPIs if you go to 4k blocks (although it\noften requires explicit sysadmin action to reformat the drive into that mode\netc). But obviously that's problematic from the \"postgres limits\" POV.\n\n\nIf we really wanted to do this - but I don't think we do - I'd argue for\nworking on the buildsystem support to build the postgres binary multiple\ntimes, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\nexec's the relevant \"real\" binary based on the pg_control value. I really\ndon't see us ever wanting to make BLCKSZ runtime configurable within one\npostgres binary. There's just too much intrinsic overhead associated with\nthat.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 14:11:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "\n\nOn 6/30/23 22:05, David Christensen wrote:\n> On Fri, Jun 30, 2023 at 2:39 PM Andres Freund <[email protected]> wrote:\n>>\n>> ...\n>>\n>> Besides this, I've not really heard any convincing justification for needing\n>> this in the first place.\n> \n> Doing this would open up experiments in larger block sizes, so we\n> would be able to have larger indexable tuples, say, or be able to\n> store data types that are larger than currently supported for tuple\n> row limits without dropping to toast (native vector data types come to\n> mind as a candidate here). We've had 8k blocks for a long time while\n> hardware has improved over 20+ years, and it would be interesting to\n> see how tuning things would open up additional avenues for performance\n> without requiring packagers to make a single choice on this regardless\n> of use-case. (The fact that we allow compiling this at a different\n> value suggests there is thought to be some utility having this be\n> something other than the default value.)\n> \n> I just think it's one of those things that is hard to evaluate without\n> actually having something specific, which is why we have this patch\n> now.\n> \n\nBut it's possible to evaluate that - you just need to rebuild with a\ndifferent configuration option. Yes, allowing doing that at initdb is\nsimpler and allows testing this on systems where rebuilding is not\nconvenient. And having a binary that can deal with any block size would\nbe nice too.\n\nIn fact, I did exactly that a year ago for a conference, and I spoke\nabout it at the 2022 unconference too. Not sure if there's recording\nfrom pgcon, but there is one from the other conference [1][2].\n\nThe short story is that the possible gains are significant (say +50%)\nfor data sets that don't fit into RAM. But that was with block size set\nat compile time, the question is what's the impact of making it a\nvariable instead of a macro ....\n\n[1] https://www.youtube.com/watch?v=mVKpoQxtCXk\n[2] https://blog.pgaddict.com/pdf/block-sizes-postgresvision-2022.pdf\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 30 Jun 2023 23:12:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "\n\nOn 6/30/23 23:11, Andres Freund wrote:\n> ...\n> \n> If we really wanted to do this - but I don't think we do - I'd argue for\n> working on the buildsystem support to build the postgres binary multiple\n> times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> exec's the relevant \"real\" binary based on the pg_control value. I really\n> don't see us ever wanting to make BLCKSZ runtime configurable within one\n> postgres binary. There's just too much intrinsic overhead associated with\n> that.\n> \n\nI don't quite understand why we shouldn't do this (or at least try to).\nIMO the benefits of using smaller blocks were substantial (especially\nfor 4kB, most likely due matching the internal SSD page size). The other\nbenefits (reducing WAL volume) seem rather interesting too.\n\nSure, there are challenges (e.g. the overhead due to making it dynamic).\nNo doubt about that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 30 Jun 2023 23:27:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 3:29 PM Andres Freund <[email protected]> wrote:\n>> And indeed. Comparing e.g. TPC-H, I see *massive* regressions. Some queries\n> are the same, sobut others regress by up to 70% (although more commonly around\n> 10-20%).\n\nHmm, that is definitely not good.\n\n> That's larger than I thought, which makes me suspect that there's some bug in\n> the new code.\n\nWill do a little profiling here to see if I can figure out the\nregression. Which build optimization settings are you seeing this\nunder?\n\n> Interestingly, repeating the benchmark with a larger work_mem setting, the\n> regressions are still quite present, but smaller. I suspect the planner\n> chooses smarter plans which move bottlenecks more towards hashjoin code etc,\n> which won't be affected by this change.\n\nInteresting.\n\n> IOW, you seriously need to evaluate analytics queries before this is worth\n> looking at further.\n\nMakes sense, thanks for reviewing.\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:28:59 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 4:12 PM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/30/23 22:05, David Christensen wrote:\n> > On Fri, Jun 30, 2023 at 2:39 PM Andres Freund <[email protected]> wrote:\n> >>\n> >> ...\n> >>\n> >> Besides this, I've not really heard any convincing justification for needing\n> >> this in the first place.\n> >\n> > Doing this would open up experiments in larger block sizes, so we\n> > would be able to have larger indexable tuples, say, or be able to\n> > store data types that are larger than currently supported for tuple\n> > row limits without dropping to toast (native vector data types come to\n> > mind as a candidate here). We've had 8k blocks for a long time while\n> > hardware has improved over 20+ years, and it would be interesting to\n> > see how tuning things would open up additional avenues for performance\n> > without requiring packagers to make a single choice on this regardless\n> > of use-case. (The fact that we allow compiling this at a different\n> > value suggests there is thought to be some utility having this be\n> > something other than the default value.)\n> >\n> > I just think it's one of those things that is hard to evaluate without\n> > actually having something specific, which is why we have this patch\n> > now.\n> >\n>\n> But it's possible to evaluate that - you just need to rebuild with a\n> different configuration option. Yes, allowing doing that at initdb is\n> simpler and allows testing this on systems where rebuilding is not\n> convenient. And having a binary that can deal with any block size would\n> be nice too.\n>\n> In fact, I did exactly that a year ago for a conference, and I spoke\n> about it at the 2022 unconference too. Not sure if there's recording\n> from pgcon, but there is one from the other conference [1][2].\n>\n> The short story is that the possible gains are significant (say +50%)\n> for data sets that don't fit into RAM. But that was with block size set\n> at compile time, the question is what's the impact of making it a\n> variable instead of a macro ....\n>\n> [1] https://www.youtube.com/watch?v=mVKpoQxtCXk\n> [2] https://blog.pgaddict.com/pdf/block-sizes-postgresvision-2022.pdf\n\nCool, thanks for the video links; will review.\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:29:40 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 4:27 PM Tomas Vondra\n<[email protected]> wrote:\n> On 6/30/23 23:11, Andres Freund wrote:\n> > ...\n> >\n> > If we really wanted to do this - but I don't think we do - I'd argue for\n> > working on the buildsystem support to build the postgres binary multiple\n> > times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> > exec's the relevant \"real\" binary based on the pg_control value. I really\n> > don't see us ever wanting to make BLCKSZ runtime configurable within one\n> > postgres binary. There's just too much intrinsic overhead associated with\n> > that.\n> >\n>\n> I don't quite understand why we shouldn't do this (or at least try to).\n> IMO the benefits of using smaller blocks were substantial (especially\n> for 4kB, most likely due matching the internal SSD page size). The other\n> benefits (reducing WAL volume) seem rather interesting too.\n\nIf it's dynamic, we could also be able to add detection of the best\nblock size at initdb time, leading to improvements all around, say.\n:-)\n\n> Sure, there are challenges (e.g. the overhead due to making it dynamic).\n> No doubt about that.\n\nI definitely agree that it seems worth looking into. If nothing else,\nbeing able to quantify just what/where the overhead comes from may be\ninstructive for future efforts.\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:33:39 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 4:17 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-30 15:05:54 -0500, David Christensen wrote:\n> > > I am fairly certain this is going to be causing substantial performance\n> > > regressions. I think we should reject this even if we don't immediately find\n> > > them, because it's almost guaranteed to cause some.\n> >\n> > What would be considered substantial? Some overhead would be expected,\n> > but I think having an actual patch to evaluate lets us see what\n> > potential there is.\n>\n> Anything beyond 1-2%, although even that imo is a hard sell.\n\nI'd agree that that threshold seems like a reasonable target, and\nanything much above that would be regressive.\n\n> > > Besides this, I've not really heard any convincing justification for needing\n> > > this in the first place.\n> >\n> > Doing this would open up experiments in larger block sizes, so we\n> > would be able to have larger indexable tuples, say, or be able to\n> > store data types that are larger than currently supported for tuple\n> > row limits without dropping to toast (native vector data types come to\n> > mind as a candidate here).\n>\n> You can do experiments today with the compile time option. Which does not\n> require regressing performance for everyone.\n\nSure, not arguing that this is more performant than the current approach.\n\n> > We've had 8k blocks for a long time while hardware has improved over 20+\n> > years, and it would be interesting to see how tuning things would open up\n> > additional avenues for performance without requiring packagers to make a\n> > single choice on this regardless of use-case.\n>\n> I suspect you're going to see more benefits from going to a *lower* setting\n> than a higher one. Some practical issues aside, plenty of storage hardware\n> these days would allow to get rid of FPIs if you go to 4k blocks (although it\n> often requires explicit sysadmin action to reformat the drive into that mode\n> etc). But obviously that's problematic from the \"postgres limits\" POV.\n>\n>\n> If we really wanted to do this - but I don't think we do - I'd argue for\n> working on the buildsystem support to build the postgres binary multiple\n> times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> exec's the relevant \"real\" binary based on the pg_control value. I really\n> don't see us ever wanting to make BLCKSZ runtime configurable within one\n> postgres binary. There's just too much intrinsic overhead associated with\n> that.\n\nYou may well be right, but I think if we haven't tried to do that and\nmeasure it, it's hard to say exactly. There are of course more parts\nof the system that are about BLCKSZ than the backend, plus you'd need\nto build other extensions to support each option, so there is a lot\nmore that would need to change. (That's neither here nor there, as my\napproach also involves changing all those places, so change isn't\ninherently bad; just saying it's not a trivial solution to merely\niterate over the block size for binaries.)\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:40:20 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "\n\nOn 6/30/23 23:11, Andres Freund wrote:\n> Hi,\n>\n> ...\n> \n> I suspect you're going to see more benefits from going to a *lower* setting\n> than a higher one. Some practical issues aside, plenty of storage hardware\n> these days would allow to get rid of FPIs if you go to 4k blocks (although it\n> often requires explicit sysadmin action to reformat the drive into that mode\n> etc). But obviously that's problematic from the \"postgres limits\" POV.\n> \n\nI wonder what are the conditions/options for disabling FPI. I kinda\nassume it'd apply to new drives with 4k sectors, with properly aligned\npartitions etc. But I haven't seen any particularly clear confirmation\nthat's correct.\n\n> \n> If we really wanted to do this - but I don't think we do - I'd argue for\n> working on the buildsystem support to build the postgres binary multiple\n> times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> exec's the relevant \"real\" binary based on the pg_control value. I really\n> don't see us ever wanting to make BLCKSZ runtime configurable within one\n> postgres binary. There's just too much intrinsic overhead associated with\n> that.\n> \n\nHow would that work for extensions which may be built for a particular\nBLCKSZ value (like pageinspect)?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 30 Jun 2023 23:42:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 11:42:30PM +0200, Tomas Vondra wrote:\n> \n> \n> On 6/30/23 23:11, Andres Freund wrote:\n> > Hi,\n> >\n> > ...\n> > \n> > I suspect you're going to see more benefits from going to a *lower* setting\n> > than a higher one. Some practical issues aside, plenty of storage hardware\n> > these days would allow to get rid of FPIs if you go to 4k blocks (although it\n> > often requires explicit sysadmin action to reformat the drive into that mode\n> > etc). But obviously that's problematic from the \"postgres limits\" POV.\n> > \n> \n> I wonder what are the conditions/options for disabling FPI. I kinda\n> assume it'd apply to new drives with 4k sectors, with properly aligned\n> partitions etc. But I haven't seen any particularly clear confirmation\n> that's correct.\n\nI don't think we have ever had to study this --- we just request the\nwrite to the operating system, and we either get a successful reply or\nwe go into WAL recovery to reread the pre-image. We never really care\nif the write is atomic, e.g., an 8k write can be done in 2 4kB writes 4\n2kB writes --- we don't care --- we only care if they are all done or\nnot.\n\nFor a 4kB write, to say it is not partially written would be to require\nthe operating system to guarantee that the 4kB write is not split into\nsmaller writes which might each be atomic because smaller atomic writes\nwould not help us.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:53:34 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 23:27:45 +0200, Tomas Vondra wrote:\n> On 6/30/23 23:11, Andres Freund wrote:\n> > ...\n> > \n> > If we really wanted to do this - but I don't think we do - I'd argue for\n> > working on the buildsystem support to build the postgres binary multiple\n> > times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> > exec's the relevant \"real\" binary based on the pg_control value. I really\n> > don't see us ever wanting to make BLCKSZ runtime configurable within one\n> > postgres binary. There's just too much intrinsic overhead associated with\n> > that.\n>\n> I don't quite understand why we shouldn't do this (or at least try to).\n>\n> IMO the benefits of using smaller blocks were substantial (especially\n> for 4kB, most likely due matching the internal SSD page size). The other\n> benefits (reducing WAL volume) seem rather interesting too.\n\nMostly because I think there are bigger gains to be had elsewhere.\n\nIME not a whole lot of storage ships by default with externally visible 4k\nsectors, but needs to be manually reformated [1], which looses all data, so it\nhas to be done initially. Then postgres would also need OS specific trickery\nto figure out that indeed the IO stack is entirely 4k (checking sector size is\nnot enough). And you run into the issue that suddenly the #column and\nindex-tuple-size limits are lower, which won't make it easier.\n\n\nI think we should change the default of the WAL blocksize to 4k\nthough. There's practically no downsides, and it drastically reduces\npostgres-side write amplification in many transactional workloads, by only\nwriting out partially filled 4k pages instead of partially filled 8k pages.\n\n\n> Sure, there are challenges (e.g. the overhead due to making it dynamic).\n> No doubt about that.\n\nI don't think the runtime-dynamic overhead is avoidable with reasonable effort\n(leaving aside compiling code multiple times and switching between).\n\nIf we were to start building postgres for multiple compile-time settings, I\nthink there are uses other than switching between BLCKSZ, potentially more\ninteresting. E.g. you can see substantially improved performance by being able\nto use SSE4.2 without CPU dispatch (partially because it allows compiler\nautovectorization, partially because it allows to compiler to use newer\nnon-vectorized math instructions (when targetting AVX IIRC), partially because\nthe dispatch overhead is not insubstantial). Another example: ARMv8\nperformance is substantially better if you target ARMv8.1-A instead of\nARMv8.0, due to having atomic instructions instead of LL/SC (it still baffles\nme that they didn't do this earlier, ll/sc is just inherently inefficient).\n\nGreetings,\n\nAndres Freund\n\n\n[1] to see the for d in /dev/nvme*n1; do echo \"$d:\"; sudo nvme id-ns -H $d|grep '^LBA Format';echo ;done\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:05:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 23:42:30 +0200, Tomas Vondra wrote:\n> I wonder what are the conditions/options for disabling FPI. I kinda\n> assume it'd apply to new drives with 4k sectors, with properly aligned\n> partitions etc. But I haven't seen any particularly clear confirmation\n> that's correct.\n\nYea, it's not trivial. And the downsides are also substantial from a\nreplication / crash recovery performance POV - even with reading blocks ahead\nof WAL replay, it's hard to beat just sequentially reading nearly all the data\nyou're going to need.\n\n\n> On 6/30/23 23:11, Andres Freund wrote:\n> > If we really wanted to do this - but I don't think we do - I'd argue for\n> > working on the buildsystem support to build the postgres binary multiple\n> > times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n> > exec's the relevant \"real\" binary based on the pg_control value. I really\n> > don't see us ever wanting to make BLCKSZ runtime configurable within one\n> > postgres binary. There's just too much intrinsic overhead associated with\n> > that.\n>\n> How would that work for extensions which may be built for a particular\n> BLCKSZ value (like pageinspect)?\n\nI think we'd need to do something similar for extensions, likely loading them\nfrom a path that includes the \"subvariant\" the server currently is running. Or\nalternatively adding a suffix to the filename indicating the\nvariant. Something like pageinspect.x86-64-v4-4kB.so.\n\nThe x86-64-v* stuff is something gcc and clang added a couple years ago, so\nthat not every project has to define different \"baseline\" levels. I think it\nwas done in collaboration with the sytem-v/linux AMD64 ABI specification group\n([1]).\n\nGreetings,\n\nAndres\n\n[1] https://gitlab.com/x86-psABIs/x86-64-ABI/-/jobs/artifacts/master/raw/x86-64-ABI/abi.pdf?job=build\nsection 3.1.1.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:20:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "\n\nOn 6/30/23 23:53, Bruce Momjian wrote:\n> On Fri, Jun 30, 2023 at 11:42:30PM +0200, Tomas Vondra wrote:\n>>\n>>\n>> On 6/30/23 23:11, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> ...\n>>>\n>>> I suspect you're going to see more benefits from going to a *lower* setting\n>>> than a higher one. Some practical issues aside, plenty of storage hardware\n>>> these days would allow to get rid of FPIs if you go to 4k blocks (although it\n>>> often requires explicit sysadmin action to reformat the drive into that mode\n>>> etc). But obviously that's problematic from the \"postgres limits\" POV.\n>>>\n>>\n>> I wonder what are the conditions/options for disabling FPI. I kinda\n>> assume it'd apply to new drives with 4k sectors, with properly aligned\n>> partitions etc. But I haven't seen any particularly clear confirmation\n>> that's correct.\n> \n> I don't think we have ever had to study this --- we just request the\n> write to the operating system, and we either get a successful reply or\n> we go into WAL recovery to reread the pre-image. We never really care\n> if the write is atomic, e.g., an 8k write can be done in 2 4kB writes 4\n> 2kB writes --- we don't care --- we only care if they are all done or\n> not.\n> \n> For a 4kB write, to say it is not partially written would be to require\n> the operating system to guarantee that the 4kB write is not split into\n> smaller writes which might each be atomic because smaller atomic writes\n> would not help us.\n> \n\nRight, that's the dance we do to protect against torn pages. But Andres\nsuggested that if you have modern storage and configure it correctly,\nwriting with 4kB pages would be atomic. So we wouldn't need to do this\nFPI stuff, eliminating pretty significant source of write amplification.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Jul 2023 00:21:03 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Sat, Jul 1, 2023 at 12:21:03AM +0200, Tomas Vondra wrote:\n> On 6/30/23 23:53, Bruce Momjian wrote:\n> > For a 4kB write, to say it is not partially written would be to require\n> > the operating system to guarantee that the 4kB write is not split into\n> > smaller writes which might each be atomic because smaller atomic writes\n> > would not help us.\n> \n> Right, that's the dance we do to protect against torn pages. But Andres\n> suggested that if you have modern storage and configure it correctly,\n> writing with 4kB pages would be atomic. So we wouldn't need to do this\n> FPI stuff, eliminating pretty significant source of write amplification.\n\nI agree the hardware is atomic for 4k writes, but do we know the OS\nalways issues 4k writes?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:37:39 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 17:53:34 -0400, Bruce Momjian wrote:\n> On Fri, Jun 30, 2023 at 11:42:30PM +0200, Tomas Vondra wrote:\n> > On 6/30/23 23:11, Andres Freund wrote:\n> > > I suspect you're going to see more benefits from going to a *lower* setting\n> > > than a higher one. Some practical issues aside, plenty of storage hardware\n> > > these days would allow to get rid of FPIs if you go to 4k blocks (although it\n> > > often requires explicit sysadmin action to reformat the drive into that mode\n> > > etc). But obviously that's problematic from the \"postgres limits\" POV.\n> > >\n> >\n> > I wonder what are the conditions/options for disabling FPI. I kinda\n> > assume it'd apply to new drives with 4k sectors, with properly aligned\n> > partitions etc. But I haven't seen any particularly clear confirmation\n> > that's correct.\n>\n> I don't think we have ever had to study this --- we just request the\n> write to the operating system, and we either get a successful reply or\n> we go into WAL recovery to reread the pre-image. We never really care\n> if the write is atomic, e.g., an 8k write can be done in 2 4kB writes 4\n> 2kB writes --- we don't care --- we only care if they are all done or\n> not.\n\nWell, that works because we have FPI. This sub-discussion is motivated by\ngetting rid of FPIs.\n\n\n> For a 4kB write, to say it is not partially written would be to require\n> the operating system to guarantee that the 4kB write is not split into\n> smaller writes which might each be atomic because smaller atomic writes\n> would not help us.\n\nThat's why were talking about drives with 4k sector size - you *can't* split\nthe writes below that.\n\nThe problem is that, as far as I know,it's not always obvious what block size\nis being used on the actual storage level. It's not even trivial when\noperating on a filesystem directly stored on a single block device ([1]). Once\nthere's things like LVM or disk encryption involved, it gets pretty hairy\n([2]). Once you know all the block devices, it's not too bad, but ...\n\nGreetings,\n\nAndres Freund\n\n[1] On linux I think you need to use stat() to figure out the st_dev for a\nfile, then look in /proc/self/mountinfo for the block device, use the name\nof the file to look in /sys/block/$d/queue/physical_block_size.\n\n[2] The above doesn't work because e.g. a device mapper target might only\nsupport 4k sectors, even though the sectors on the underlying storage device\nare 512b sectors. E.g. my root filesystem is encrypted, and if you follow the\nabove recipe (with the added step of resolving the symlink to know the actual\ndevice name), you would see a 4k sector size. Even though the underlying NVMe\ndisk only supports 512b sectors.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:51:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On 7/1/23 00:05, Andres Freund wrote:\n> Hi,\n> \n> On 2023-06-30 23:27:45 +0200, Tomas Vondra wrote:\n>> On 6/30/23 23:11, Andres Freund wrote:\n>>> ...\n>>>\n>>> If we really wanted to do this - but I don't think we do - I'd argue for\n>>> working on the buildsystem support to build the postgres binary multiple\n>>> times, for 4, 8, 16 kB BLCKSZ and having a wrapper postgres binary that just\n>>> exec's the relevant \"real\" binary based on the pg_control value. I really\n>>> don't see us ever wanting to make BLCKSZ runtime configurable within one\n>>> postgres binary. There's just too much intrinsic overhead associated with\n>>> that.\n>>\n>> I don't quite understand why we shouldn't do this (or at least try to).\n>>\n>> IMO the benefits of using smaller blocks were substantial (especially\n>> for 4kB, most likely due matching the internal SSD page size). The other\n>> benefits (reducing WAL volume) seem rather interesting too.\n> \n> Mostly because I think there are bigger gains to be had elsewhere.\n> \n\nI think that decision is up to whoever chooses to work on it, especially\nif performance is not their primary motivation (IIRC this was discussed\nas part of the TDE session).\n\n> IME not a whole lot of storage ships by default with externally visible 4k\n> sectors, but needs to be manually reformated [1], which looses all data, so it\n> has to be done initially.\n\nI don't see why \"you have to configure stuff\" would be a reason against\nimprovements in this area. I don't know how prevalent storage with 4k\nsectors is now, but AFAIK it's not hard to get and it's likely to get\nyet more common in the future.\n\nFWIW I don't think the benefits of different (lower) page sizes hinge on\n4k sectors - it's just that not having to do FPIs would make it even\nmore interesting.\n\n> Then postgres would also need OS specific trickery\n> to figure out that indeed the IO stack is entirely 4k (checking sector size is\n> not enough).\n\nI haven't suggested we should be doing that automatically (would be\nnice, but I'd be happy with knowing when it's safe to disable FPW using\nthe GUC in config). But knowing when it's safe would make it yet more\ninteresting be able to use a different block page size at initdb.\n\n> And you run into the issue that suddenly the #column and\n> index-tuple-size limits are lower, which won't make it easier.\n> \n\nTrue. This limit is annoying, but no one is proposing to change the\ndefault page size. initdb would just provide a more convenient way to do\nthat, but the user would have to check. (I rather doubt many people\nactually index such large values).\n\n> \n> I think we should change the default of the WAL blocksize to 4k\n> though. There's practically no downsides, and it drastically reduces\n> postgres-side write amplification in many transactional workloads, by only\n> writing out partially filled 4k pages instead of partially filled 8k pages.\n> \n\n+1 (although in my tests the benefits we much smaller than for BLCKSZ)\n\n> \n>> Sure, there are challenges (e.g. the overhead due to making it dynamic).\n>> No doubt about that.\n> \n> I don't think the runtime-dynamic overhead is avoidable with reasonable effort\n> (leaving aside compiling code multiple times and switching between).\n> \n> If we were to start building postgres for multiple compile-time settings, I\n> think there are uses other than switching between BLCKSZ, potentially more\n> interesting. E.g. 
you can see substantially improved performance by being able\n> to use SSE4.2 without CPU dispatch (partially because it allows compiler\n> autovectorization, partially because it allows to compiler to use newer\n> non-vectorized math instructions (when targetting AVX IIRC), partially because\n> the dispatch overhead is not insubstantial). Another example: ARMv8\n> performance is substantially better if you target ARMv8.1-A instead of\n> ARMv8.0, due to having atomic instructions instead of LL/SC (it still baffles\n> me that they didn't do this earlier, ll/sc is just inherently inefficient).\n> \n\nMaybe, although I think it depends on what parts of the code this would\naffect. If it's sufficiently small/isolated, it'd be possible to have\nmultiple paths, specialized to a particular page size (pretty common\ntechnique for GPU/SIMD, I believe).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Jul 2023 00:56:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 03:51:18PM -0700, Andres Freund wrote:\n> > For a 4kB write, to say it is not partially written would be to require\n> > the operating system to guarantee that the 4kB write is not split into\n> > smaller writes which might each be atomic because smaller atomic writes\n> > would not help us.\n> \n> That's why were talking about drives with 4k sector size - you *can't* split\n> the writes below that.\n\nOkay, good point.\n\n> The problem is that, as far as I know,it's not always obvious what block size\n> is being used on the actual storage level. It's not even trivial when\n> operating on a filesystem directly stored on a single block device ([1]). Once\n> there's things like LVM or disk encryption involved, it gets pretty hairy\n> ([2]). Once you know all the block devices, it's not too bad, but ...\n> \n> Greetings,\n> \n> Andres Freund\n> \n> [1] On linux I think you need to use stat() to figure out the st_dev for a\n> file, then look in /proc/self/mountinfo for the block device, use the name\n> of the file to look in /sys/block/$d/queue/physical_block_size.\n\nI just got a new server:\n\n\thttps://momjian.us/main/blogs/blog/2023.html#June_28_2023\n\nso tested this on my new M.2 NVME storage device:\n\n\t$ /sys/block/nvme0n1/queue/physical_block_size\n\t262144\n\nthat's 256k, not 4k.\n\n> [2] The above doesn't work because e.g. a device mapper target might only\n> support 4k sectors, even though the sectors on the underlying storage device\n> are 512b sectors. E.g. my root filesystem is encrypted, and if you follow the\n> above recipe (with the added step of resolving the symlink to know the actual\n> device name), you would see a 4k sector size. Even though the underlying NVMe\n> disk only supports 512b sectors.\n\nGood point.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:58:20 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On 2023-06-30 18:37:39 -0400, Bruce Momjian wrote:\n> On Sat, Jul 1, 2023 at 12:21:03AM +0200, Tomas Vondra wrote:\n> > On 6/30/23 23:53, Bruce Momjian wrote:\n> > > For a 4kB write, to say it is not partially written would be to require\n> > > the operating system to guarantee that the 4kB write is not split into\n> > > smaller writes which might each be atomic because smaller atomic writes\n> > > would not help us.\n> > \n> > Right, that's the dance we do to protect against torn pages. But Andres\n> > suggested that if you have modern storage and configure it correctly,\n> > writing with 4kB pages would be atomic. So we wouldn't need to do this\n> > FPI stuff, eliminating pretty significant source of write amplification.\n> \n> I agree the hardware is atomic for 4k writes, but do we know the OS\n> always issues 4k writes?\n\nWhen using a sector size of 4K you *can't* make smaller writes via normal\npaths. The addressing unit is in sectors. The details obviously differ between\nstorage protocol, but you pretty much always just specify a start sector and a\nnumber of sectors to be operated on.\n\nObviously the kernel could read 4k, modify 512 bytes in-memory, and then write\n4k back, but that shouldn't be a danger here. There might also be debug\ninterfaces to allow reading/writing in different increments, but that'd not be\nsomething happening during normal operation.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:59:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 06:58:20PM -0400, Bruce Momjian wrote:\n> I just got a new server:\n> \n> \thttps://momjian.us/main/blogs/blog/2023.html#June_28_2023\n> \n> so tested this on my new M.2 NVME storage device:\n> \n> \t$ /sys/block/nvme0n1/queue/physical_block_size\n> \t262144\n> \n> that's 256k, not 4k.\n\nI have another approach to this. My storage device has power\nprotection, so even though it has a 256k physical block size, it should\nbe fine with 4k write atomicity.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 19:01:29 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 18:58:20 -0400, Bruce Momjian wrote:\n> > [1] On linux I think you need to use stat() to figure out the st_dev for a\n> > file, then look in /proc/self/mountinfo for the block device, use the name\n> > of the file to look in /sys/block/$d/queue/physical_block_size.\n>\n> I just got a new server:\n>\n> \thttps://momjian.us/main/blogs/blog/2023.html#June_28_2023\n>\n> so tested this on my new M.2 NVME storage device:\n>\n> \t$ /sys/block/nvme0n1/queue/physical_block_size\n> \t262144\n\nAh, I got the relevant filename wrong. I think it's logical_block_size, not\nphysical one (that's the size of addressing). I didn't realize because the\ndevices I looked at have the same...\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:04:57 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On 7/1/23 00:59, Andres Freund wrote:\n> On 2023-06-30 18:37:39 -0400, Bruce Momjian wrote:\n>> On Sat, Jul 1, 2023 at 12:21:03AM +0200, Tomas Vondra wrote:\n>>> On 6/30/23 23:53, Bruce Momjian wrote:\n>>>> For a 4kB write, to say it is not partially written would be to require\n>>>> the operating system to guarantee that the 4kB write is not split into\n>>>> smaller writes which might each be atomic because smaller atomic writes\n>>>> would not help us.\n>>>\n>>> Right, that's the dance we do to protect against torn pages. But Andres\n>>> suggested that if you have modern storage and configure it correctly,\n>>> writing with 4kB pages would be atomic. So we wouldn't need to do this\n>>> FPI stuff, eliminating pretty significant source of write amplification.\n>>\n>> I agree the hardware is atomic for 4k writes, but do we know the OS\n>> always issues 4k writes?\n> \n> When using a sector size of 4K you *can't* make smaller writes via normal\n> paths. The addressing unit is in sectors. The details obviously differ between\n> storage protocol, but you pretty much always just specify a start sector and a\n> number of sectors to be operated on.\n> \n> Obviously the kernel could read 4k, modify 512 bytes in-memory, and then write\n> 4k back, but that shouldn't be a danger here. There might also be debug\n> interfaces to allow reading/writing in different increments, but that'd not be\n> something happening during normal operation.\n\nI think it's important to point out that there's a physical and logical\nsector size. The \"physical\" is what the drive does internally, \"logical\"\ndefines what OS does.\n\nSome drives have 4k physical sectors but only 512B logical sectors.\nAFAIK most \"old\" SATA SSDs do it that way, for compatibility reasons.\n\nNew drives may have 4k physical sectors but typically support both 512B\nand 4k logical sectors - my nvme SSDs do this, for example.\n\nMy understanding is that for drives with 4k physical+logical sectors,\nthe OS would only issue \"full\" 4k writes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Jul 2023 01:13:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 04:04:57PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-06-30 18:58:20 -0400, Bruce Momjian wrote:\n> > > [1] On linux I think you need to use stat() to figure out the st_dev for a\n> > > file, then look in /proc/self/mountinfo for the block device, use the name\n> > > of the file to look in /sys/block/$d/queue/physical_block_size.\n> >\n> > I just got a new server:\n> >\n> > \thttps://momjian.us/main/blogs/blog/2023.html#June_28_2023\n> >\n> > so tested this on my new M.2 NVME storage device:\n> >\n> > \t$ /sys/block/nvme0n1/queue/physical_block_size\n> > \t262144\n> \n> Ah, I got the relevant filename wrong. I think it's logical_block_size, not\n> physical one (that's the size of addressing). I didn't realize because the\n> devices I looked at have the same...\n\nThat one reports 512 _bytes_ for me:\n\n\t$ cat /sys/block/nvme0n1/queue/logical_block_size\n\t512\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 19:16:18 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On 7/1/23 01:16, Bruce Momjian wrote:\n> On Fri, Jun 30, 2023 at 04:04:57PM -0700, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-06-30 18:58:20 -0400, Bruce Momjian wrote:\n>>>> [1] On linux I think you need to use stat() to figure out the st_dev for a\n>>>> file, then look in /proc/self/mountinfo for the block device, use the name\n>>>> of the file to look in /sys/block/$d/queue/physical_block_size.\n>>>\n>>> I just got a new server:\n>>>\n>>> \thttps://momjian.us/main/blogs/blog/2023.html#June_28_2023\n>>>\n>>> so tested this on my new M.2 NVME storage device:\n>>>\n>>> \t$ /sys/block/nvme0n1/queue/physical_block_size\n>>> \t262144\n>>\n>> Ah, I got the relevant filename wrong. I think it's logical_block_size, not\n>> physical one (that's the size of addressing). I didn't realize because the\n>> devices I looked at have the same...\n> \n> That one reports 512 _bytes_ for me:\n> \n> \t$ cat /sys/block/nvme0n1/queue/logical_block_size\n> \t512\n> \n\nWhat does \"smartctl -a /dev/nvme0n1\" say? There should be something like\nthis:\n\nSupported LBA Sizes (NSID 0x1)\nId Fmt Data Metadt Rel_Perf\n 0 - 4096 0 0\n 1 + 512 0 0\n\nwhich says the drive supports 4k and 512B sectors, and is currently\nconfigures to use 512B sectors.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Jul 2023 01:18:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Sat, Jul 1, 2023 at 01:18:34AM +0200, Tomas Vondra wrote:\n> What does \"smartctl -a /dev/nvme0n1\" say? There should be something like\n> this:\n> \n> Supported LBA Sizes (NSID 0x1)\n> Id Fmt Data Metadt Rel_Perf\n> 0 - 4096 0 0\n> 1 + 512 0 0\n> \n> which says the drive supports 4k and 512B sectors, and is currently\n> configures to use 512B sectors.\n\nIt says:\n\n\tSupported LBA Sizes (NSID 0x1)\n\tId Fmt Data Metadt Rel_Perf\n\t 0 + 512 0 2\n\t 1 - 4096 0 0\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 19:26:12 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 16:28:59 -0500, David Christensen wrote:\n> On Fri, Jun 30, 2023 at 3:29 PM Andres Freund <[email protected]> wrote:\n> >> And indeed. Comparing e.g. TPC-H, I see *massive* regressions. Some queries\n> > are the same, sobut others regress by up to 70% (although more commonly around\n> > 10-20%).\n>\n> Hmm, that is definitely not good.\n>\n> > That's larger than I thought, which makes me suspect that there's some bug in\n> > the new code.\n>\n> Will do a little profiling here to see if I can figure out the\n> regression. Which build optimization settings are you seeing this\n> under?\n\ngcc 12 with:\n\nmeson setup \\\n -Doptimization=3 -Ddebug=true \\\n -Dc_args=\"-ggdb -g3 -march=native -mtune=native -fno-plt -fno-semantic-interposition -Wno-array-bounds\" \\\n -Dc_link_args=\"-fuse-ld=mold -Wl,--gdb-index,--Bsymbolic\" \\\n ...\n\nRelevant postgres settings:\n-c huge_pages=on -c shared_buffers=12GB -c max_connections=120\n-c work_mem=32MB\n-c autovacuum=0 # I always do that for comparative benchmarks, too much variance\n-c track_io_timing=on\n\nThe later run where I saw the smaller regression was with work_mem=1GB. I\njust had forgotten to adjust that.\n\n\nI had loaded tpch scale 5 before, which is why I just used that.\n\n\nFWIW, even just \"SELECT count(*) FROM lineitem;\" shows a substantial\nregression.\n\nI disabled parallelism, prewarmed the data and pinned postgres to a single\ncore to reduce noise. The result is the best of three (variance was low in all\ncases).\n\n HEAD patch\nindex only scan 1896.364 2242.288\nseq scan 1586.990 1628.042\n\n\nA profile shows that 20% of the runtime in the IOS case is in\nvisibilitymap_get_status():\n\n+ 20.50% postgres.new postgres.new [.] visibilitymap_get_status\n+ 19.54% postgres.new postgres.new [.] ExecInterpExpr\n+ 14.47% postgres.new postgres.new [.] IndexOnlyNext\n+ 6.47% postgres.new postgres.new [.] index_deform_tuple_internal\n+ 4.67% postgres.new postgres.new [.] ExecScan\n+ 4.12% postgres.new postgres.new [.] btgettuple\n+ 3.97% postgres.new postgres.new [.] ExecAgg\n+ 3.92% postgres.new postgres.new [.] _bt_next\n+ 3.71% postgres.new postgres.new [.] _bt_readpage\n+ 3.43% postgres.new postgres.new [.] fetch_input_tuple\n+ 2.87% postgres.new postgres.new [.] index_getnext_tid\n+ 2.45% postgres.new postgres.new [.] MemoryContextReset\n+ 2.35% postgres.new postgres.new [.] tts_virtual_clear\n+ 1.37% postgres.new postgres.new [.] index_deform_tuple\n+ 1.14% postgres.new postgres.new [.] ExecStoreVirtualTuple\n+ 1.13% postgres.new postgres.new [.] PredicateLockPage\n+ 1.12% postgres.new postgres.new [.] int8inc\n+ 1.04% postgres.new postgres.new [.] ExecIndexOnlyScan\n+ 0.57% postgres.new postgres.new [.] BufferGetBlockNumber\n\nmostly due to\n\n │ BlockNumber mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);\n 2.46 │ lea -0x60(,%rdx,4),%rcx\n │ xor %edx,%edx\n 59.79 │ div %rcx\n\n\nYou can't have have divisions for this kind of thing in the vicinity of a\npeformance critical path. With compile time constants the compiler can turn\nthis into shifts, but that's not possible as-is after the change.\n\nWhile not quite as bad as divisions, the paths with multiplications are also\nnot going to be ok. E.g.\n\t\treturn (Block) (BufferBlocks + ((Size) (buffer - 1)) * CLUSTER_BLOCK_SIZE);\nis going to be noticeable.\n\n\nYou'd have to turn all of this into shifts (and enforce power of 2 sizes, if\nyou aren't yet).\n\n\nI don't think pre-computed tables are a viable answer FWIW. 
Even just going\nthrough a memory indirection is going to be noticable. This stuff is in a\ncrapton of hot code paths.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 16:44:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
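For the multiplication case Andres quotes (BufferBlocks plus buf_id times the block size), a power-of-two block size lets the factor become a shift computed once at startup. A minimal sketch, with invented names (cluster_block_shift, buffer_get_block) rather than the patch's identifiers:

#include <stddef.h>

extern char *BufferBlocks;              /* base of the shared buffer pool */
static unsigned cluster_block_shift;    /* e.g. 13 for 8 kB pages, set at startup */

static inline char *
buffer_get_block(int buffer)
{
    /* replaces: BufferBlocks + (size_t) (buffer - 1) * CLUSTER_BLOCK_SIZE */
    return BufferBlocks + ((size_t) (buffer - 1) << cluster_block_shift);
}

Divisors that are not themselves powers of two, such as the heap-blocks-per-VM-page factor behind HEAPBLK_TO_MAPBLOCK, cannot be handled by a plain shift, which is where the fastdiv/fastmod work later in the thread comes in.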
{
"msg_contents": "On 01.07.23 00:21, Tomas Vondra wrote:\n> Right, that's the dance we do to protect against torn pages. But Andres\n> suggested that if you have modern storage and configure it correctly,\n> writing with 4kB pages would be atomic. So we wouldn't need to do this\n> FPI stuff, eliminating pretty significant source of write amplification.\n\nThis work in progress for the Linux kernel was also mentioned at PGCon: \n<https://lwn.net/Articles/933015/>. Subject the various conditions, the \nkernel would then guarantee atomic writes for blocks larger than the \nhardware's native size.\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 17:17:55 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi, enclosed is v2 of the variable blocksize patch. This series is\nbased atop 9b581c5341.\n\nPreparation phase:\n\n0001 - add utility script for retokenizing all necessary scripts.\nThis is mainly for my own use in generating 0003, which is a simple\nrename/preparation patch to change all symbols from their UPPER_CASE\nto lower_case form, with several exceptions in renames.\n0002 - add script to harness 0001 and apply to the relevant files in the repo\n0003 - capture the effects of 0002 on the repo\n\nThe other patches in this series are as follows:\n\n0004 - the \"main\" variable blocksize patch where the bulk of the code\nchanges take place - see comments here\n0005 - utility functions for fast div/mod operations; basically\nmontgomery multiplication\n0006 - use fastdiv code in the visiblity map, the main place where\nthis change is required\n0007 - (optional) add/use libdivide for division which is license\ncompatible with other headers we bundle\n0008 - (optional) tweaks to libdivide to make compiler/CI happy\n\nI have also replaced multiple instances of division or multiplication\nof BLOCKSZ with bitshift operations based on the number of bits in the\nunderlying blocksize.\n\nThe current approach for this is to replace any affected constant with\nan inline switch statement based on an enum for the blocksize and the\ncompile-time calculation for that version. In practice with -O2 this\ngenerates a simple lookup table inline in the assembly with the costs\nfor calculating paid at compile time.\n\nThe visibility map was the main hot path which was affected by the\nswitch from compile-time sizes with the previous version of this\npatch. With the switch to a modified approach in 0005/0006 this issue\nhas been rectified in our testing.\n\nI have tested a few workloads with this modified patch and have seen\npositive results compared to v1. I look forward to additional\nreview/testing/feedback.\n\nThanks,\n\nDavid",
"msg_date": "Wed, 30 Aug 2023 20:50:53 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
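The "inline switch statement based on an enum" approach described above can be sketched in a self-contained way (all names and the 24-byte header figure are invented for this example, not taken from the patch): every arm returns a compile-time expression, so with -O2 the switch typically collapses to a small lookup table instead of run-time arithmetic.

#include <stdint.h>

typedef enum BlockSizeCode
{
	BLOCK_SIZE_4K,
	BLOCK_SIZE_8K,
	BLOCK_SIZE_16K,
	BLOCK_SIZE_32K
} BlockSizeCode;

/* usable payload bytes per block, assuming a 24-byte page header */
static inline uint32_t
block_payload_bytes(BlockSizeCode code)
{
	switch (code)
	{
		case BLOCK_SIZE_4K:
			return (1U << 12) - 24;
		case BLOCK_SIZE_8K:
			return (1U << 13) - 24;
		case BLOCK_SIZE_16K:
			return (1U << 14) - 24;
		case BLOCK_SIZE_32K:
			return (1U << 15) - 24;
	}
	return 0;					/* not reached for valid codes */
}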
{
"msg_contents": "Enclosed are TPC-H results for 1GB shared_buffers, 64MB work_mem on a\n64GB laptop with SSD storage; everything else is default settings.\n\nTL;DR: unpatched version: 17.30 seconds, patched version: 17.15; there\nare some slight variations in runtime, but seems to be within the\nnoise level at this point.",
"msg_date": "Thu, 31 Aug 2023 08:07:38 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 8:51 AM David Christensen <\[email protected]> wrote:\n\n> 0005 - utility functions for fast div/mod operations; basically\n> montgomery multiplication\n\n+/*\n+ * pg_fastmod - calculates the modulus of a 32-bit number against a\nconstant\n+ * divisor without using the division operator\n+ */\n+static inline uint32 pg_fastmod(uint32 n, uint32 divisor, uint64 fastinv)\n+{\n+#ifdef HAVE_INT128\n+ uint64_t lowbits = fastinv * n;\n+ return ((uint128)lowbits * divisor) >> 64;\n+#else\n+ return n % divisor;\n+#endif\n+}\n\nRequiring 128-bit arithmetic to avoid serious regression is a non-starter\nas written. Software that relies on fast 128-bit multiplication has to do\nbackflips to get that working on multiple platforms. But I'm not sure it's\nnecessary -- if the max block number is UINT32_MAX and max block size is\nUINT16_MAX, can't we just use 64-bit multiplication?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 31, 2023 at 8:51 AM David Christensen <[email protected]> wrote:> 0005 - utility functions for fast div/mod operations; basically> montgomery multiplication+/*+ * pg_fastmod - calculates the modulus of a 32-bit number against a constant+ * divisor without using the division operator+ */+static inline uint32 pg_fastmod(uint32 n, uint32 divisor, uint64 fastinv)+{+#ifdef HAVE_INT128+\tuint64_t lowbits = fastinv * n;+\treturn ((uint128)lowbits * divisor) >> 64;+#else+\treturn n % divisor;+#endif+}Requiring 128-bit arithmetic to avoid serious regression is a non-starter as written. Software that relies on fast 128-bit multiplication has to do backflips to get that working on multiple platforms. But I'm not sure it's necessary -- if the max block number is UINT32_MAX and max block size is UINT16_MAX, can't we just use 64-bit multiplication?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 31 Aug 2023 22:54:02 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
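A sketch of the 64-bit-only variant suggested here, assuming both the dividend and the divisor fit in 16 bits (the direction the follow-up patch takes); the names and signatures are invented for illustration. The precomputed constant is ceil(2^32 / divisor), so the wide products never exceed 64 bits:

#include <stdint.h>

/* precompute once per divisor: ceil(2^32 / divisor); divisor must be > 0 */
static inline uint64_t
fastinv16(uint16_t divisor)
{
	return UINT32_MAX / (uint64_t) divisor + 1;
}

/* n % divisor, using nothing wider than 64-bit multiplication */
static inline uint16_t
fastmod16(uint16_t n, uint16_t divisor, uint64_t fastinv)
{
	uint32_t	lowbits = (uint32_t) (fastinv * n);

	return (uint16_t) (((uint64_t) lowbits * divisor) >> 32);
}

/* n / divisor, with the same precomputed constant */
static inline uint16_t
fastdiv16(uint16_t n, uint64_t fastinv)
{
	return (uint16_t) ((fastinv * n) >> 32);
}

Keeping fastinv as a 64-bit value (rather than truncating it to 32 bits) lets the divisor-equals-one case fall out correctly, since ceil(2^32 / 1) does not fit in a uint32.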
{
"msg_contents": "> + * pg_fastmod - calculates the modulus of a 32-bit number against a constant\n> + * divisor without using the division operator\n> + */\n> +static inline uint32 pg_fastmod(uint32 n, uint32 divisor, uint64 fastinv)\n> +{\n> +#ifdef HAVE_INT128\n> + uint64_t lowbits = fastinv * n;\n> + return ((uint128)lowbits * divisor) >> 64;\n> +#else\n> + return n % divisor;\n> +#endif\n> +}\n>\n> Requiring 128-bit arithmetic to avoid serious regression is a non-starter as written. Software that relies on fast 128-bit multiplication has to do backflips to get that working on multiple platforms. But I'm not sure it's necessary -- if the max block number is UINT32_MAX and max block size is UINT16_MAX, can't we just use 64-bit multiplication?\n\nI was definitely hand-waving additional implementation here for\nnon-native 128 bit support; the modulus algorithm as presented\nrequires 4 times the space as the divisor, so a uint16 implementation\nshould work for all 64-bit machines. Certainly open to other ideas or\nimplementations, this was the one I was able to find initially. If\nthe 16bit approach is all that is needed in practice we can also see\nabout narrowing the domain and not worry about making this a\ngeneral-purpose function.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Thu, 31 Aug 2023 11:01:02 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "> I was definitely hand-waving additional implementation here for\n> non-native 128 bit support; the modulus algorithm as presented\n> requires 4 times the space as the divisor, so a uint16 implementation\n> should work for all 64-bit machines. Certainly open to other ideas or\n> implementations, this was the one I was able to find initially. If\n> the 16bit approach is all that is needed in practice we can also see\n> about narrowing the domain and not worry about making this a\n> general-purpose function.\n\nHere's a patch atop the series which converts to 16-bit uints and\npasses regressions, but I don't consider well-vetted at this point.\n\nDavid",
"msg_date": "Thu, 31 Aug 2023 11:13:18 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 2:32 PM David Christensen\n<[email protected]> wrote:\n> Here's a patch atop the series which converts to 16-bit uints and\n> passes regressions, but I don't consider well-vetted at this point.\n\nFor what it's worth, my gut reaction to this patch series is similar\nto that of Andres: I think it will be a disaster. If the disaster is\nnot evident to us, that's far more likely to mean that we've failed to\ntest the right things than it is to mean that there is no disaster.\n\nI don't see that there is a lot of upside, either. I don't think we\nhave a lot of evidence that changing the block size is really going to\nhelp performance. In fact, my guess is that there are large amounts of\ncode that are heavily optimized, without the authors even realizing\nit, for 8kB blocks, because that's what we've always had. If we had\nmuch larger or smaller blocks, the structure of heap pages or of the\nvarious index AMs used for blocks might no longer be optimal, or might\nbe less optimal than they are for an 8kB block size. If you use really\nlarge blocks, your blocks may need more internal structure than we\nhave today in order to avoid CPU inefficiencies. I suspect there's\nbeen so little testing of non-default block sizes that I wouldn't even\ncount on the code to not be outright buggy.\n\nIf we could find a safe way to get rid of full page writes, I would\ncertainly agree that that was worth considering. I'm not sure that\nanything in this thread adds up to that being a reasonable way to go,\nbut the savings would be massive.\n\nI feel like the proposal here is a bit like deciding to change the\nspeed limit on all American highways from 65 mph or whatever it is to\n130 mph or 32.5 mph and see which way works out best. The whole\ninfrastructure has basically been designed around the current rules.\nThe rate of curvature of the roads is appropriate for the speed that\nyou're currently allowed to drive on them. The vehicles are optimized\nfor long-term operation at about that speed. The people who drive the\nvehicles are accustomed to driving at that speed, and the people who\nmaintain them are accustomed to the problems that happen when you\ndrive them at that speed. Just changing the speed limit doesn't change\nall that other stuff, and changing all that other stuff is a truly\nmassive undertaking. Maybe this example somewhat overstates the\ndifficulties here, but I do think the difficulties are considerable.\nThe fact that we have 8kB block sizes has affected the thinking of\nhundreds of developers over decades in making thousands or tens of\nthousands or hundreds of thousands of decisions about algorithm\nselection and page format and all kinds of stuff. Even if some other\npage size seems to work better in a certain context, it's pretty hard\nto believe that it has much chance of being better overall, even\nwithout the added overhead of run-time configuration.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:57:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "\n\nOn 9/1/23 16:57, Robert Haas wrote:\n> On Thu, Aug 31, 2023 at 2:32 PM David Christensen\n> <[email protected]> wrote:\n>> Here's a patch atop the series which converts to 16-bit uints and\n>> passes regressions, but I don't consider well-vetted at this point.\n> \n> For what it's worth, my gut reaction to this patch series is similar\n> to that of Andres: I think it will be a disaster. If the disaster is\n> not evident to us, that's far more likely to mean that we've failed to\n> test the right things than it is to mean that there is no disaster.\n> \n\nPerhaps. The block size certainly affects a lot of places - both in\nterms of the actual value, and being known (constant) at compile time.\n\n> I don't see that there is a lot of upside, either. I don't think we\n> have a lot of evidence that changing the block size is really going to\n> help performance.\n\nI don't think that's quite true. We have plenty of empirical evidence\nthat smaller block sizes bring significant improvements for certain\nworkloads. And we also have theoretical explanations for why that is.\n\n> In fact, my guess is that there are large amounts of\n> code that are heavily optimized, without the authors even realizing\n> it, for 8kB blocks, because that's what we've always had. If we had\n> much larger or smaller blocks, the structure of heap pages or of the\n> various index AMs used for blocks might no longer be optimal, or might\n> be less optimal than they are for an 8kB block size. If you use really\n> large blocks, your blocks may need more internal structure than we\n> have today in order to avoid CPU inefficiencies. I suspect there's\n> been so little testing of non-default block sizes that I wouldn't even\n> count on the code to not be outright buggy.\n> \n\nSure, and there are even various places where the page size implies hard\nlimits (e.g. index key size for btree indexes).\n\nBut so what? If that matters for your workload, keep using 8kB ...\n\n> If we could find a safe way to get rid of full page writes, I would\n> certainly agree that that was worth considering. I'm not sure that\n> anything in this thread adds up to that being a reasonable way to go,\n> but the savings would be massive.\n> \n\nThat's true, that'd be great. But that's clearly just a next level of\nthe optimization. It doesn't mean that if you can't eliminate FPW for\nwhatever reason it's worthless.\n\n> I feel like the proposal here is a bit like deciding to change the\n> speed limit on all American highways from 65 mph or whatever it is to\n> 130 mph or 32.5 mph and see which way works out best. The whole\n> infrastructure has basically been designed around the current rules.\n> The rate of curvature of the roads is appropriate for the speed that\n> you're currently allowed to drive on them. The vehicles are optimized\n> for long-term operation at about that speed. The people who drive the\n> vehicles are accustomed to driving at that speed, and the people who\n> maintain them are accustomed to the problems that happen when you\n> drive them at that speed. Just changing the speed limit doesn't change\n> all that other stuff, and changing all that other stuff is a truly\n> massive undertaking. 
Maybe this example somewhat overstates the\n> difficulties here, but I do think the difficulties are considerable.\n> The fact that we have 8kB block sizes has affected the thinking of\n> hundreds of developers over decades in making thousands or tens of\n> thousands or hundreds of thousands of decisions about algorithm\n> selection and page format and all kinds of stuff. Even if some other\n> page size seems to work better in a certain context, it's pretty hard\n> to believe that it has much chance of being better overall, even\n> without the added overhead of run-time configuration.\n> \n\nExcept that no one is forcing you to actually go 130mph or 32mph, right?\nYou make it seem like this patch forces people to use some other page\nsize, but that's clearly not what it's doing - it gives you the option\nto use smaller or larger block, if you chose to. Just like increasing\nthe speed limit to 130mph doesn't mean you can't keep going 65mph.\n\nThe thing is - we *already* allow using different block size, except\nthat you have to do custom build. This just makes it easier.\n\nI don't have strong opinions on how the patch actually does that, and\nthere certainly can be negative effects of making it dynamic. And yes,\nwe will have to do more testing with non-default block sizes. But\nfrankly, that's a gap we probably need to address anyway, considering we\nallow changing the block size.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Sep 2023 21:09:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 3:09 PM Tomas Vondra\n<[email protected]> wrote:\n> Except that no one is forcing you to actually go 130mph or 32mph, right?\n> You make it seem like this patch forces people to use some other page\n> size, but that's clearly not what it's doing - it gives you the option\n> to use smaller or larger block, if you chose to. Just like increasing\n> the speed limit to 130mph doesn't mean you can't keep going 65mph.\n>\n> The thing is - we *already* allow using different block size, except\n> that you have to do custom build. This just makes it easier.\n\nRight. Which is worth doing if it doesn't hurt performance and is\nlikely to be useful to a lot of people, and is not worth doing if it\nwill hurt performance and be useful to relatively few people. My bet\nis on the latter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 10:04:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 9:04 AM Robert Haas <[email protected]> wrote:\n>\n> On Sat, Sep 2, 2023 at 3:09 PM Tomas Vondra\n> <[email protected]> wrote:\n> > Except that no one is forcing you to actually go 130mph or 32mph, right?\n> > You make it seem like this patch forces people to use some other page\n> > size, but that's clearly not what it's doing - it gives you the option\n> > to use smaller or larger block, if you chose to. Just like increasing\n> > the speed limit to 130mph doesn't mean you can't keep going 65mph.\n> >\n> > The thing is - we *already* allow using different block size, except\n> > that you have to do custom build. This just makes it easier.\n>\n> Right. Which is worth doing if it doesn't hurt performance and is\n> likely to be useful to a lot of people, and is not worth doing if it\n> will hurt performance and be useful to relatively few people. My bet\n> is on the latter.\n\nAgreed that this doesn't make sense if there are major performance\nregressions, however the goal here is patch evaluation to measure this\nagainst other workloads and see if this is the case; from my localized\ntesting things were within acceptable noise levels with the latest\nversion.\n\nI agree with Tomas' earlier thoughts: we already allow different block\nsizes, and if there are baked-in algorithmic assumptions about block\nsize (which there probably are), then identifying those or places in\nthe code where we need additional work or test coverage will only\nimprove things overall for those non-standard block sizes.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Tue, 5 Sep 2023 09:31:01 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Something I also asked at this years Unconference - Do we currently\nhave Build Farm animals testing with different page sizes ?\n\nI'd say that testing all sizes from 4KB up (so 4, 8, 16, 32) should be\ndone at least before each release if not continuously.\n\n-- Cheers\n\nHannu\n\n\nOn Tue, Sep 5, 2023 at 4:31 PM David Christensen\n<[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 9:04 AM Robert Haas <[email protected]> wrote:\n> >\n> > On Sat, Sep 2, 2023 at 3:09 PM Tomas Vondra\n> > <[email protected]> wrote:\n> > > Except that no one is forcing you to actually go 130mph or 32mph, right?\n> > > You make it seem like this patch forces people to use some other page\n> > > size, but that's clearly not what it's doing - it gives you the option\n> > > to use smaller or larger block, if you chose to. Just like increasing\n> > > the speed limit to 130mph doesn't mean you can't keep going 65mph.\n> > >\n> > > The thing is - we *already* allow using different block size, except\n> > > that you have to do custom build. This just makes it easier.\n> >\n> > Right. Which is worth doing if it doesn't hurt performance and is\n> > likely to be useful to a lot of people, and is not worth doing if it\n> > will hurt performance and be useful to relatively few people. My bet\n> > is on the latter.\n>\n> Agreed that this doesn't make sense if there are major performance\n> regressions, however the goal here is patch evaluation to measure this\n> against other workloads and see if this is the case; from my localized\n> testing things were within acceptable noise levels with the latest\n> version.\n>\n> I agree with Tomas' earlier thoughts: we already allow different block\n> sizes, and if there are baked-in algorithmic assumptions about block\n> size (which there probably are), then identifying those or places in\n> the code where we need additional work or test coverage will only\n> improve things overall for those non-standard block sizes.\n>\n> Best,\n>\n> David\n>\n>\n\n\n",
"msg_date": "Tue, 5 Sep 2023 21:52:18 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-05 21:52:18 +0200, Hannu Krosing wrote:\n> Something I also asked at this years Unconference - Do we currently\n> have Build Farm animals testing with different page sizes ?\n\nYou can check that yourself as easily as anybody else.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:23:29 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 2:52 PM Hannu Krosing <[email protected]> wrote:\n>\n> Something I also asked at this years Unconference - Do we currently\n> have Build Farm animals testing with different page sizes ?\n>\n> I'd say that testing all sizes from 4KB up (so 4, 8, 16, 32) should be\n> done at least before each release if not continuously.\n>\n> -- Cheers\n>\n> Hannu\n\nThe regression tests currently have a lot of breakage when running\nagainst non-standard block sizes, so I would assume the answer here is\nno. I would expect that we will want to add regression test variants\nor otherwise normalize results so they will work with differing block\nsizes, but have not done that yet.\n\n\n",
"msg_date": "Tue, 5 Sep 2023 15:57:06 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Sure, I was just hoping that somebody already knew without needing to\nspecifically check :)\n\nAnd as I see in David's response, the tests are actually broken for other sizes.\n\nI'll see if I can (convince somebody to) set this up .\n\nCheers\nHannu\n\nOn Tue, Sep 5, 2023 at 10:23 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-09-05 21:52:18 +0200, Hannu Krosing wrote:\n> > Something I also asked at this years Unconference - Do we currently\n> > have Build Farm animals testing with different page sizes ?\n>\n> You can check that yourself as easily as anybody else.\n>\n> Greetings,\n>\n> Andres Freund\n\n\n",
"msg_date": "Tue, 5 Sep 2023 23:38:28 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initdb-time block size specification"
},
{
"msg_contents": "Hi,\n\nHere is version 3 of the patch series, rebased on 6d0c39a293. The main\ntweak was to tweak per John N's suggestion and utilize only the 16-bit\nunsigned mod/div for the utility routines. The breakdown of the patch\nseries is the same as the last one, but re-including the descriptions here:\n\nPreparation phase:\n\n0001 - add utility script for retokenizing all necessary scripts.\nThis is mainly for my own use in generating 0003, which is a simple\nrename/preparation patch to change all symbols from their UPPER_CASE\nto lower_case form, with several exceptions in renames.\n0002 - add script to harness 0001 and apply to the relevant files in the\nrepo\n0003 - capture the effects of 0002 on the repo\n\nThe other patches in this series are as follows:\n\n0004 - the \"main\" variable blocksize patch where the bulk of the code\nchanges take place - see comments here\n0005 - utility functions for fast div/mod operations; basically\nmontgomery multiplication\n0006 - use fastdiv code in the visiblity map, the main place where\nthis change is required\n0007 - (optional) add/use libdivide for division which is license\ncompatible with other headers we bundle\n0008 - (optional) tweaks to libdivide to make compiler/CI happy\n\nBest,\n\nDavid",
"msg_date": "Mon, 2 Oct 2023 10:39:28 -0500",
"msg_from": "David Christensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initdb-time block size specification"
}
] |
[
{
"msg_contents": "I think this is the second decennial thread [0] for removing this GUC.\nThis topic came up at PGCon, so I thought I'd start the discussion on the\nlists.\n\nI'm personally not aware of anyone using this parameter. A couple of my\ncolleagues claimed to have used it in the aughts, but they also noted that\nusers were confused by the current implementation, and they seemed\ngenerally in favor of removing it. AFAICT the strongest reason for keeping\nit is that there is presently no viable replacement. Does this opinion\nstill stand? If so, perhaps we can look into adding a viable replacement\nfor v17.\n\nThe attached patch simply removes the GUC.\n\n[0] https://postgr.es/m/CAA-aLv6wnwp6Qr5fZo%2B7K%3DVSr51qFMuLsCeYvTkSF3E5qEPvqA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Jun 2023 13:05:09 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 01:05:09PM -0700, Nathan Bossart wrote:\n> The attached patch simply removes the GUC.\n\nAnd here's a new version of the patch with docs that actually build.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Jun 2023 13:42:11 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 01:05:09PM -0700, Nathan Bossart wrote:\n> I think this is the second decennial thread [0] for removing this GUC.\n> This topic came up at PGCon, so I thought I'd start the discussion on the\n> lists.\n> \n> I'm personally not aware of anyone using this parameter. A couple of my\n> colleagues claimed to have used it in the aughts, but they also noted that\n> users were confused by the current implementation, and they seemed\n> generally in favor of removing it. AFAICT the strongest reason for keeping\n> it is that there is presently no viable replacement. Does this opinion\n> still stand? If so, perhaps we can look into adding a viable replacement\n> for v17.\n\nI am the original author, and it was a hack designed to support\nper-database user names. I am fine with its removal.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:29:04 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I'm personally not aware of anyone using this parameter.\n\nMight be worth asking on pgsql-general whether anyone knows of\nactive use of this feature. If not, I'm good with killing it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:40:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 05:29:04PM -0400, Bruce Momjian wrote:\n> I am the original author, and it was a hack designed to support\n> per-database user names. I am fine with its removal.\n\nThanks, Bruce.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 14:42:50 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 05:40:18PM -0400, Tom Lane wrote:\n> Might be worth asking on pgsql-general whether anyone knows of\n> active use of this feature. If not, I'm good with killing it.\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 14:43:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 11:43 PM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Fri, Jun 30, 2023 at 05:40:18PM -0400, Tom Lane wrote:\n> > Might be worth asking on pgsql-general whether anyone knows of\n> > active use of this feature. If not, I'm good with killing it.\n>\n> Will do.\n\nStrong +1 from here for removing it, assuming you don't find a bunch\nof users on -general who are using it. Having never come across one\nmyself, I think it's unlikely, but I agree it's good to ask.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 1 Jul 2023 00:13:26 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Sat, Jul 01, 2023 at 12:13:26AM +0200, Magnus Hagander wrote:\n> Strong +1 from here for removing it, assuming you don't find a bunch\n> of users on -general who are using it. Having never come across one\n> myself, I think it's unlikely, but I agree it's good to ask.\n\nCool. I'll let that thread [0] sit for a while.\n\n[0] https://postgr.es/m/20230630215608.GD2941194%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 15:27:50 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 01:05:09PM -0700, Nathan Bossart wrote:\n> The attached patch simply removes the GUC.\n\nI am on the side of +1'ing for the removal.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 16:20:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 04:20:39PM +0900, Michael Paquier wrote:\n> I am on the side of +1'ing for the removal.\n\nHere is a rebased version of the patch. So far no one has responded to the\npgsql-general thread [0], and no one here has argued for keeping this\nparameter. I'm planning to bump the pgsql-general thread next week to give\nfolks one more opportunity to object.\n\n[0] https://postgr.es/m/20230630215608.GD2941194%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 5 Jul 2023 14:29:27 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 02:29:27PM -0700, Nathan Bossart wrote:\n> \t},\n> -\t{\n> -\t\t{\"db_user_namespace\", PGC_SIGHUP, CONN_AUTH_AUTH,\n> -\t\t\tgettext_noop(\"Enables per-database user names.\"),\n> -\t\t\tNULL\n> -\t\t},\n> -\t\t&Db_user_namespace,\n> -\t\tfalse,\n> -\t\tNULL, NULL, NULL\n> -\t},\n> \t{\n\nRemoving the GUC from this table is kind of annoying. Cannot this be\nhandled like default_with_oids or ssl_renegotiation_limit to avoid any\nkind of issues with the reload of dump files and the kind?\n--\nMichael",
"msg_date": "Thu, 6 Jul 2023 08:21:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 08:21:18AM +0900, Michael Paquier wrote:\n> Removing the GUC from this table is kind of annoying. Cannot this be\n> handled like default_with_oids or ssl_renegotiation_limit to avoid any\n> kind of issues with the reload of dump files and the kind?\n\nAh, good catch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 5 Jul 2023 20:49:26 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 08:49:26PM -0700, Nathan Bossart wrote:\n> On Thu, Jul 06, 2023 at 08:21:18AM +0900, Michael Paquier wrote:\n>> Removing the GUC from this table is kind of annoying. Cannot this be\n>> handled like default_with_oids or ssl_renegotiation_limit to avoid any\n>> kind of issues with the reload of dump files and the kind?\n> \n> Ah, good catch.\n\nThanks. Reading through the patch, this version should be able to\nhandle the dump reloads.\n--\nMichael",
"msg_date": "Mon, 10 Jul 2023 15:43:07 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 03:43:07PM +0900, Michael Paquier wrote:\n> Thanks. Reading through the patch, this version should be able to\n> handle the dump reloads.\n\nThanks for reviewing. I'm currently planning to commit this sometime next\nweek.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 10:47:10 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 03:43:07PM +0900, Michael Paquier wrote:\n> On Wed, Jul 05, 2023 at 08:49:26PM -0700, Nathan Bossart wrote:\n>> On Thu, Jul 06, 2023 at 08:21:18AM +0900, Michael Paquier wrote:\n>>> Removing the GUC from this table is kind of annoying. Cannot this be\n>>> handled like default_with_oids or ssl_renegotiation_limit to avoid any\n>>> kind of issues with the reload of dump files and the kind?\n>> \n>> Ah, good catch.\n> \n> Thanks. Reading through the patch, this version should be able to\n> handle the dump reloads.\n\nHm. Do we actually need to worry about this? It's a PGC_SIGHUP GUC, so it\ncan only be set at postmaster start or via a configuration file. Any dump\nfiles that are trying to set it or clients that are trying to add it to\nstartup packets are already broken. I guess keeping the GUC around would\navoid breaking any configuration files or startup scripts that happen to be\nsetting it to false, but I don't know if that's really worth worrying\nabout.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:34:28 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 1:34 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Jul 10, 2023 at 03:43:07PM +0900, Michael Paquier wrote:\n> > On Wed, Jul 05, 2023 at 08:49:26PM -0700, Nathan Bossart wrote:\n> >> On Thu, Jul 06, 2023 at 08:21:18AM +0900, Michael Paquier wrote:\n> >>> Removing the GUC from this table is kind of annoying. Cannot this be\n> >>> handled like default_with_oids or ssl_renegotiation_limit to avoid any\n> >>> kind of issues with the reload of dump files and the kind?\n> >>\n> >> Ah, good catch.\n> >\n> > Thanks. Reading through the patch, this version should be able to\n> > handle the dump reloads.\n>\n> Hm. Do we actually need to worry about this? It's a PGC_SIGHUP GUC, so it\n> can only be set at postmaster start or via a configuration file. Any dump\n> files that are trying to set it or clients that are trying to add it to\n> startup packets are already broken. I guess keeping the GUC around would\n> avoid breaking any configuration files or startup scripts that happen to be\n> setting it to false, but I don't know if that's really worth worrying\n> about.\n\nI'd lean towards \"no\". A hard break, when it's a major release, is\nbetter than a \"it stopped having effect but didn't tell you anything\"\nbreak. Especially when it comes to things like startup scripts etc.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 16 Jul 2023 13:24:06 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we remove db_user_namespace?"
},
{
"msg_contents": "On Sun, Jul 16, 2023 at 01:24:06PM +0200, Magnus Hagander wrote:\n> I'd lean towards \"no\". A hard break, when it's a major release, is\n> better than a \"it stopped having effect but didn't tell you anything\"\n> break. Especially when it comes to things like startup scripts etc.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Jul 2023 11:47:12 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we remove db_user_namespace?"
}
] |
[
{
"msg_contents": "Greetings,\n\nAttached please find a tarball (rather than a patch) for a proposed new \ncontrib extension, pg_stat_logmsg.\n\nThe basic idea is to mirror how pg_stat_statements works, except the \nlogged messages keyed by filename, lineno, and elevel are saved with a \naggregate count. The format string is displayed (similar to a query \njumble) for context, along with function name and sqlerrcode.\n\nI threw this together rather quickly over the past couple of days \nbetween meetings, so not claiming that it is committable (and lacks \ndocumentation and regression tests as well), but I would love to get \nfeedback on:\n\n1/ the general concept\n2/ the pg_stat_statement-like implementation\n3/ contrib vs core vs external project\n\nSome samples and data:\n\n`make installcheck` with the extension loaded:\n8<------------------\n# All 215 tests passed.\n\n\nreal 2m24.854s\nuser 0m0.086s\nsys 0m0.283s\n8<------------------\n\n`make installcheck` without the extension loaded:\n8<------------------\n\n# All 215 tests passed.\n\nreal 2m26.765s\nuser 0m0.076s\nsys 0m0.293s\n8<------------------\n\nSample output after running make installcheck a couple times (plus a few \nmanually generated ERRORs):\n\n8<------------------\ntest=# select sum(count) from pg_stat_logmsg where elevel > 20;\n sum\n-------\n 10554\n(1 row)\n\ntest=# \\x\nExpanded display is on.\ntest=# select * from pg_stat_logmsg where elevel > 20 order by count desc;\n-[ RECORD 1 ]-------------------------------\nfilename | aclchk.c\nlineno | 2811\nelevel | 21\nfuncname | aclcheck_error\nsqlerrcode | 42501\nmessage | permission denied for schema %s\ncount | 578\n-[ RECORD 2 ]-------------------------------\nfilename | scan.l\nlineno | 1241\nelevel | 21\nfuncname | scanner_yyerror\nsqlerrcode | 42601\nmessage | %s at or near \"%s\"\ncount | 265\n...\n\ntest=# select * from pg_stat_logmsg where elevel > 20 and sqlerrcode = \n'XX000';\n-[ RECORD 1 ]---------------------------------------\nfilename | tid.c\nlineno | 352\nelevel | 21\nfuncname | currtid_for_view\nsqlerrcode | XX000\nmessage | ctid isn't of type TID\ncount | 2\n-[ RECORD 2 ]---------------------------------------\nfilename | pg_locale.c\nlineno | 2493\nelevel | 21\nfuncname | pg_ucol_open\nsqlerrcode | XX000\nmessage | could not open collator for locale \"%s\": %s\ncount | 2\n...\n\n8<------------------\n\nPart of the thinking is that people with fleets of postgres instances \ncan use this to scan for various errors that they care about. \nAdditionally it would be useful to look for \"should not happen\" errors.\n\nI will register this in the July CF and will appreciate feedback.\n\nThanks!\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Jun 2023 19:57:09 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC: pg_stat_logmsg"
},
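For readers who have not looked inside pg_stat_statements, a hypothetical sketch of the hash key implied by the description above may help; the actual structs in the tarball may differ, and all names here are invented. Entries are keyed by source location and severity, and everything else (format string, function name, SQLSTATE, count) hangs off the entry:

#include <stdint.h>

typedef struct pgslmHashKey
{
	int			elevel;			/* ERROR, FATAL, PANIC, ... */
	int			lineno;			/* line number within the source file */
	char		filename[64];	/* source file that raised the message */
} pgslmHashKey;

typedef struct pgslmEntry
{
	pgslmHashKey key;			/* hash key of entry - MUST BE FIRST */
	int			sqlerrcode;		/* encoded SQLSTATE */
	int64_t		count;			/* number of times this location logged */
	/*
	 * The format string and function name would be stored alongside,
	 * much as pg_stat_statements stores query text.
	 */
} pgslmEntry;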
{
"msg_contents": "Hi\n\nso 1. 7. 2023 v 1:57 odesílatel Joe Conway <[email protected]> napsal:\n\n> Greetings,\n>\n> Attached please find a tarball (rather than a patch) for a proposed new\n> contrib extension, pg_stat_logmsg.\n>\n> The basic idea is to mirror how pg_stat_statements works, except the\n> logged messages keyed by filename, lineno, and elevel are saved with a\n> aggregate count. The format string is displayed (similar to a query\n> jumble) for context, along with function name and sqlerrcode.\n>\n> I threw this together rather quickly over the past couple of days\n> between meetings, so not claiming that it is committable (and lacks\n> documentation and regression tests as well), but I would love to get\n> feedback on:\n>\n> 1/ the general concept\n> 2/ the pg_stat_statement-like implementation\n> 3/ contrib vs core vs external project\n>\n> Some samples and data:\n>\n> `make installcheck` with the extension loaded:\n> 8<------------------\n> # All 215 tests passed.\n>\n>\n> real 2m24.854s\n> user 0m0.086s\n> sys 0m0.283s\n> 8<------------------\n>\n> `make installcheck` without the extension loaded:\n> 8<------------------\n>\n> # All 215 tests passed.\n>\n> real 2m26.765s\n> user 0m0.076s\n> sys 0m0.293s\n> 8<------------------\n>\n> Sample output after running make installcheck a couple times (plus a few\n> manually generated ERRORs):\n>\n> 8<------------------\n> test=# select sum(count) from pg_stat_logmsg where elevel > 20;\n> sum\n> -------\n> 10554\n> (1 row)\n>\n> test=# \\x\n> Expanded display is on.\n> test=# select * from pg_stat_logmsg where elevel > 20 order by count desc;\n> -[ RECORD 1 ]-------------------------------\n> filename | aclchk.c\n> lineno | 2811\n> elevel | 21\n> funcname | aclcheck_error\n> sqlerrcode | 42501\n> message | permission denied for schema %s\n> count | 578\n> -[ RECORD 2 ]-------------------------------\n> filename | scan.l\n> lineno | 1241\n> elevel | 21\n> funcname | scanner_yyerror\n> sqlerrcode | 42601\n> message | %s at or near \"%s\"\n> count | 265\n> ...\n>\n> test=# select * from pg_stat_logmsg where elevel > 20 and sqlerrcode =\n> 'XX000';\n> -[ RECORD 1 ]---------------------------------------\n> filename | tid.c\n> lineno | 352\n> elevel | 21\n> funcname | currtid_for_view\n> sqlerrcode | XX000\n> message | ctid isn't of type TID\n> count | 2\n> -[ RECORD 2 ]---------------------------------------\n> filename | pg_locale.c\n> lineno | 2493\n> elevel | 21\n> funcname | pg_ucol_open\n> sqlerrcode | XX000\n> message | could not open collator for locale \"%s\": %s\n> count | 2\n> ...\n>\n> 8<------------------\n>\n> Part of the thinking is that people with fleets of postgres instances\n> can use this to scan for various errors that they care about.\n> Additionally it would be useful to look for \"should not happen\" errors.\n>\n> I will register this in the July CF and will appreciate feedback.\n>\n\nThis can be a very interesting feature. I like it.\n\nRegards\n\nPavel\n\n\n> Thanks!\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\nHiso 1. 7. 2023 v 1:57 odesílatel Joe Conway <[email protected]> napsal:Greetings,\n\nAttached please find a tarball (rather than a patch) for a proposed new \ncontrib extension, pg_stat_logmsg.\n\nThe basic idea is to mirror how pg_stat_statements works, except the \nlogged messages keyed by filename, lineno, and elevel are saved with a \naggregate count. 
The format string is displayed (similar to a query \njumble) for context, along with function name and sqlerrcode.\n\nI threw this together rather quickly over the past couple of days \nbetween meetings, so not claiming that it is committable (and lacks \ndocumentation and regression tests as well), but I would love to get \nfeedback on:\n\n1/ the general concept\n2/ the pg_stat_statement-like implementation\n3/ contrib vs core vs external project\n\nSome samples and data:\n\n`make installcheck` with the extension loaded:\n8<------------------\n# All 215 tests passed.\n\n\nreal 2m24.854s\nuser 0m0.086s\nsys 0m0.283s\n8<------------------\n\n`make installcheck` without the extension loaded:\n8<------------------\n\n# All 215 tests passed.\n\nreal 2m26.765s\nuser 0m0.076s\nsys 0m0.293s\n8<------------------\n\nSample output after running make installcheck a couple times (plus a few \nmanually generated ERRORs):\n\n8<------------------\ntest=# select sum(count) from pg_stat_logmsg where elevel > 20;\n sum\n-------\n 10554\n(1 row)\n\ntest=# \\x\nExpanded display is on.\ntest=# select * from pg_stat_logmsg where elevel > 20 order by count desc;\n-[ RECORD 1 ]-------------------------------\nfilename | aclchk.c\nlineno | 2811\nelevel | 21\nfuncname | aclcheck_error\nsqlerrcode | 42501\nmessage | permission denied for schema %s\ncount | 578\n-[ RECORD 2 ]-------------------------------\nfilename | scan.l\nlineno | 1241\nelevel | 21\nfuncname | scanner_yyerror\nsqlerrcode | 42601\nmessage | %s at or near \"%s\"\ncount | 265\n...\n\ntest=# select * from pg_stat_logmsg where elevel > 20 and sqlerrcode = \n'XX000';\n-[ RECORD 1 ]---------------------------------------\nfilename | tid.c\nlineno | 352\nelevel | 21\nfuncname | currtid_for_view\nsqlerrcode | XX000\nmessage | ctid isn't of type TID\ncount | 2\n-[ RECORD 2 ]---------------------------------------\nfilename | pg_locale.c\nlineno | 2493\nelevel | 21\nfuncname | pg_ucol_open\nsqlerrcode | XX000\nmessage | could not open collator for locale \"%s\": %s\ncount | 2\n...\n\n8<------------------\n\nPart of the thinking is that people with fleets of postgres instances \ncan use this to scan for various errors that they care about. \nAdditionally it would be useful to look for \"should not happen\" errors.\n\nI will register this in the July CF and will appreciate feedback.This can be a very interesting feature. I like it.RegardsPavel\n\nThanks!\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 1 Jul 2023 05:20:08 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On 6/30/23 23:20, Pavel Stehule wrote:\n> so 1. 7. 2023 v 1:57 odesílatel Joe Conway <[email protected] \n> <mailto:[email protected]>> napsal:\n> Part of the thinking is that people with fleets of postgres instances\n> can use this to scan for various errors that they care about.\n> Additionally it would be useful to look for \"should not happen\" errors.\n> \n> I will register this in the July CF and will appreciate feedback.\n> \n> This can be a very interesting feature. I like it.\n\nThanks!\n\nFWIW, I just modified it to provide the localized text of the elevel \nrather than the internal number. I also localized the message format string:\n\n8<------------------------------\npsql (16beta2)\nType \"help\" for help.\n\ntest=# \\x\nExpanded display is on.\ntest=# select * from pg_stat_logmsg where elevel = 'ERROR' and \nsqlerrcode = 'XX000' and count > 1;\n-[ RECORD 1 ]---------------------------------------------\nfilename | tablecmds.c\nlineno | 10908\nelevel | ERROR\nfuncname | ATExecAlterConstraint\nsqlerrcode | XX000\nmessage | cannot alter constraint \"%s\" on relation \"%s\"\ncount | 2\n-[ RECORD 2 ]---------------------------------------------\nfilename | user.c\nlineno | 2130\nelevel | ERROR\nfuncname | check_role_membership_authorization\nsqlerrcode | XX000\nmessage | role \"%s\" cannot have explicit members\ncount | 2\n\ntest=# set lc_messages ='sv_SE.UTF8';\nSET\ntest=# select * from pg_stat_logmsg where elevel = 'FEL' and sqlerrcode \n= 'XX000' and count > 1;\n-[ RECORD 1 ]---------------------------------------------\nfilename | tablecmds.c\nlineno | 10908\nelevel | FEL\nfuncname | ATExecAlterConstraint\nsqlerrcode | XX000\nmessage | kan inte ändra villkoret \"%s\" i relation \"%s\"\ncount | 2\n-[ RECORD 2 ]---------------------------------------------\nfilename | user.c\nlineno | 2130\nelevel | FEL\nfuncname | check_role_membership_authorization\nsqlerrcode | XX000\nmessage | rollen \"%s\" kan inte ha explicita medlemmar\ncount | 2\n8<------------------------------\n\nNew tarball attached.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 1 Jul 2023 15:52:50 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jul 1, 2023 at 8:57 AM Joe Conway <[email protected]> wrote:\n>\n> Greetings,\n>\n> Attached please find a tarball (rather than a patch) for a proposed new\n> contrib extension, pg_stat_logmsg.\n>\n> The basic idea is to mirror how pg_stat_statements works, except the\n> logged messages keyed by filename, lineno, and elevel are saved with a\n> aggregate count. The format string is displayed (similar to a query\n> jumble) for context, along with function name and sqlerrcode.\n>\n>\n> Part of the thinking is that people with fleets of postgres instances\n> can use this to scan for various errors that they care about.\n> Additionally it would be useful to look for \"should not happen\" errors.\n\nInteresting idea and use cases.\n\nI'm concerned that displaying the format string could not be helpful\nin some cases. For example, when raising an ERROR while reading WAL\nrecords, we typically write the error message stored in\nrecord->errormsg_buf:\n\nin ReadRecord():\n if (errormsg)\n ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),\n (errmsg_internal(\"%s\", errormsg) /* already\ntranslated */ ));\n\nIn this case, the error message stored in pg_stat_logmsg is just '%s'.\nThe filename and line number columns might help identify the actual\nerror but it requires users to read the source code and cannot know\nthe actual error message.\n\nA similar case is where we construct the error message on the fly. For\nexample, in LogRecoveryConflict() the string of the recovery conflict\ndescription comes from get_recovery_conflict_desc():\n\nin LogRecoveryConflict():\n ereport(LOG,\n errmsg(\"recovery still waiting after %ld.%03d ms: %s\",\n msecs, usecs, get_recovery_conflict_desc(reason)),\n nprocs > 0 ? errdetail_log_plural(\"Conflicting process: %s.\",\n \"Conflicting processes: %s.\",\n nprocs, buf.data) : 0);\n\nThe user might want to search the error message by the actual conflict\nreason, but cannot. In this case, I'd like to see the actual error\nmessage (I'd like to normalize the number part, though).\n\nThat being said, using the format string for the error messages like\n\"ERROR: relation \"nonexist_table\" does not exist\" makes sense to me\nsince we can avoid having too many similar entries.\n\nSo I feel that we might need to figure out what part of the log\nmessage should be normalized like pg_stat_statement does for query\nstrings.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Jul 2023 16:36:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 12:37 AM Masahiko Sawada <[email protected]> wrote:\n> On Sat, Jul 1, 2023 at 8:57 AM Joe Conway <[email protected]> wrote:\n> >\n> > The basic idea is to mirror how pg_stat_statements works, except the\n> > logged messages keyed by filename, lineno, and elevel are saved with a\n> > aggregate count. The format string is displayed (similar to a query\n> > jumble) for context, along with function name and sqlerrcode.\n> >\n> >\n> > Part of the thinking is that people with fleets of postgres instances\n> > can use this to scan for various errors that they care about.\n> > Additionally it would be useful to look for \"should not happen\" errors.\n\n> I'm concerned that displaying the format string could not be helpful\n> in some cases. For example, when raising an ERROR while reading WAL\n> records, we typically write the error message stored in\n> record->errormsg_buf:\n>\n> in ReadRecord():\n> if (errormsg)\n> ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),\n> (errmsg_internal(\"%s\", errormsg) /* already\n> translated */ ));\n>\n> In this case, the error message stored in pg_stat_logmsg is just '%s'.\n> The filename and line number columns might help identify the actual\n> error but it requires users to read the source code and cannot know\n> the actual error message.\n\nI believe that the number of such error messages, the ones with very\nlittle descriptive content, is very low in Postgres code. These kinds\nof messages are not prevalent, and must be used when code hits obscure\nconditions, like seen in your example above. These are the kinds of\nerrors that Joe is referring to as \"should not happen\". In these\ncases, even if the error message was descriptive, the user will very\nlikely have to dive deep into code to find out the real cause.\n\nI feel that the unique combination of file name and line number is\nuseful information, even in cases where the format string not very\ndescriptive. So I believe the extension's behaviour in this regard is\nacceptable.\n\nIn cases where the extension's output is not descriptive enough, the\nuser can use the filename:lineno pair to look for fully formed error\nmessages in the actual log files; they may have to make appropriate\nchanges to log_* parameters, though.\n\nIf we still feel strongly about getting the actual message for these\ncases, perhaps we can develop a heuristic to identify such messages,\nand use either full or a prefix of the 'message' field, instead of\n'message_id' field. The heuristic could be: strlen(message_id) is too\nshort, or that message_id is all/overwhelmingly format specifiers\n(e.g. '%s: %d').\n\nThe core benefit of this extension is to make it easy for the user to\ndiscover error messages affecting their workload. The user may be\ninterested in knowing the most common log messages in their server\nlogs, so that they can work on reducing those errors or warnings. Or\nthey may be interested in knowing when their workload hits\nunexpected/serious error messages, even if they're infrequent. 
The\ndata exposed by pg_stat_logmsg view would serve as a starting point,\nbut I'm guessing it would not be sufficient for their investigation.\n\nOn Fri, Jun 30, 2023 at 4:57 PM Joe Conway <[email protected]> wrote:\n> I would love to get\n> feedback on:\n>\n> 1/ the general concept\n> 2/ the pg_stat_statement-like implementation\n> 3/ contrib vs core vs external project\n\n+1 for the idea; a monitoring system trained on this view can bubble\nup problems to users that may otherwise go unnoticed. Piggybacking on,\nor using pg_stat_statements implementation as a model seems fine. I\nbelieve making this available as a contrib module hits the right\nbalance; not all installations may want this, hence in-core, always\ncollecting data, seems undesirable; if it's an external project, it\nwon't be discoverable; I see that as a very high bar for someone to\ntrust it and begin to use it.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 6 Jul 2023 22:38:33 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
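One possible shape for the heuristic suggested above, with invented names and an arbitrary threshold (this is not code from the extension): treat a format string as too thin to stand on its own when it is very short or consists mostly of format specifiers, and only then fall back to storing a prefix of the fully formed message.

#include <stdbool.h>
#include <string.h>

static bool
logmsg_format_is_opaque(const char *fmt)
{
	size_t		len = strlen(fmt);
	size_t		spec_chars = 0;
	const char *p;

	if (len < 8)				/* arbitrary cutoff for very short formats */
		return true;

	for (p = fmt; *p; p++)
	{
		if (p[0] == '%' && p[1] != '\0' && p[1] != '%')
			spec_chars += 2;	/* roughly count "%s", "%d", ... */
	}

	/* mostly format specifiers -> not descriptive by itself */
	return spec_chars * 2 > len;
}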
{
"msg_contents": "On 7/7/23 01:38, Gurjeet Singh wrote:\n> On Thu, Jul 6, 2023 at 12:37 AM Masahiko Sawada <[email protected]> wrote:\n>> On Sat, Jul 1, 2023 at 8:57 AM Joe Conway <[email protected]> wrote:\n>> >\n>> > The basic idea is to mirror how pg_stat_statements works, except the\n>> > logged messages keyed by filename, lineno, and elevel are saved with a\n>> > aggregate count. The format string is displayed (similar to a query\n>> > jumble) for context, along with function name and sqlerrcode.\n>> >\n>> >\n>> > Part of the thinking is that people with fleets of postgres instances\n>> > can use this to scan for various errors that they care about.\n>> > Additionally it would be useful to look for \"should not happen\" errors.\n> \n>> I'm concerned that displaying the format string could not be helpful\n>> in some cases. For example, when raising an ERROR while reading WAL\n>> records, we typically write the error message stored in\n>> record->errormsg_buf:\n>>\n>> in ReadRecord():\n>> if (errormsg)\n>> ereport(emode_for_corrupt_record(emode, xlogreader->EndRecPtr),\n>> (errmsg_internal(\"%s\", errormsg) /* already\n>> translated */ ));\n>>\n>> In this case, the error message stored in pg_stat_logmsg is just '%s'.\n>> The filename and line number columns might help identify the actual\n>> error but it requires users to read the source code and cannot know\n>> the actual error message.\n> \n> I believe that the number of such error messages, the ones with very\n> little descriptive content, is very low in Postgres code. These kinds\n> of messages are not prevalent, and must be used when code hits obscure\n> conditions, like seen in your example above. These are the kinds of\n> errors that Joe is referring to as \"should not happen\". In these\n> cases, even if the error message was descriptive, the user will very\n> likely have to dive deep into code to find out the real cause.\n\nAgreed. As an example, after running `make installcheck`\n\n8<-----------------\nselect sum(count) from pg_stat_logmsg;\n sum\n------\n 6005\n(1 row)\n\nselect message,sum(count)\nfrom pg_stat_logmsg\nwhere length(message) < 30\n and elevel in ('ERROR','FATAL','PANIC')\n and message like '%\\%s%' escape '\\'\ngroup by message\norder by length(message);\n message | sum\n-------------------------------+-----\n %s | 107\n \"%s\" is a view | 9\n \"%s\" is a table | 4\n %s is a procedure | 1\n invalid size: \"%s\" | 13\n %s at or near \"%s\" | 131\n %s at end of input | 3\n...\n8<-----------------\n\nI only see three message formats there that are generic enough to be of \nconcern (the first and last two shown -- FWIW I did not see any more of \nthem as the fmt string gets longer)\n\nSo out of 6005 log events, 241 fit this concern.\n\nPerhaps given the small number of message format strings affected, it \nwould make sense to special case those, but I am not sure it is worth \nthe effort, at least for version 1.\n\n> I feel that the unique combination of file name and line number is\n> useful information, even in cases where the format string not very\n> descriptive. 
So I believe the extension's behaviour in this regard is\n> acceptable.\n> \n> In cases where the extension's output is not descriptive enough, the\n> user can use the filename:lineno pair to look for fully formed error\n> messages in the actual log files; they may have to make appropriate\n> changes to log_* parameters, though.\n\nRight\n\n> If we still feel strongly about getting the actual message for these\n> cases, perhaps we can develop a heuristic to identify such messages,\n> and use either full or a prefix of the 'message' field, instead of\n> 'message_id' field. The heuristic could be: strlen(message_id) is too\n> short, or that message_id is all/overwhelmingly format specifiers\n> (e.g. '%s: %d').\n\nBased on the above analysis (though granted, not all inclusive), it \nseems like just special casing the specific message format strings of \ninterest would work.\n\n> The core benefit of this extension is to make it easy for the user to\n> discover error messages affecting their workload. The user may be\n> interested in knowing the most common log messages in their server\n> logs, so that they can work on reducing those errors or warnings. Or\n> they may be interested in knowing when their workload hits\n> unexpected/serious error messages, even if they're infrequent. The\n> data exposed by pg_stat_logmsg view would serve as a starting point,\n> but I'm guessing it would not be sufficient for their investigation.\n\nYes, exactly.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 14:13:09 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On 7/9/23 14:13, Joe Conway wrote:\n> On 7/7/23 01:38, Gurjeet Singh wrote:\n>>> In this case, the error message stored in pg_stat_logmsg is just '%s'.\n>>> The filename and line number columns might help identify the actual\n>>> error but it requires users to read the source code and cannot know\n>>> the actual error message.\n>> \n>> I believe that the number of such error messages, the ones with very\n>> little descriptive content, is very low in Postgres code. These kinds\n>> of messages are not prevalent, and must be used when code hits obscure\n>> conditions, like seen in your example above. These are the kinds of\n>> errors that Joe is referring to as \"should not happen\". In these\n>> cases, even if the error message was descriptive, the user will very\n>> likely have to dive deep into code to find out the real cause.\n> \n> Agreed. As an example, after running `make installcheck`\n> \n> 8<-----------------\n> select sum(count) from pg_stat_logmsg;\n> sum\n> ------\n> 6005\n> (1 row)\n> \n> select message,sum(count)\n> from pg_stat_logmsg\n> where length(message) < 30\n> and elevel in ('ERROR','FATAL','PANIC')\n> and message like '%\\%s%' escape '\\'\n> group by message\n> order by length(message);\n> message | sum\n> -------------------------------+-----\n> %s | 107\n> \"%s\" is a view | 9\n> \"%s\" is a table | 4\n> %s is a procedure | 1\n> invalid size: \"%s\" | 13\n> %s at or near \"%s\" | 131\n> %s at end of input | 3\n> ...\n> 8<-----------------\n> \n> I only see three message formats there that are generic enough to be of\n> concern (the first and last two shown -- FWIW I did not see any more of\n> them as the fmt string gets longer)\n> \n> So out of 6005 log events, 241 fit this concern.\n> \n> Perhaps given the small number of message format strings affected, it\n> would make sense to special case those, but I am not sure it is worth\n> the effort, at least for version 1.\n\nAttached is an update, this time as a patch against 17devel. It is not \nquite complete, but getting close IMHO.\n\nChanges:\n--------\n1. Now is integrated into contrib as a \"Additional Supplied Extension\"\n\n2. Documentation added\n\n3. I had a verbal conversation with Andres and he reminded me that the \noriginal idea for this was to collect data across fleets of pg hosts so \nwe could look for how often \"should never happen\" errors actually \nhappen. As such, we need to think in terms of aggregating the info from \nmany hosts, potentially with many localized languages for the messages. \nSo I converted the \"message\" column back to an untranslated string, and \nadded a \"translated_message\" column which is localized.\n\nAn alternative approach might be to provide a separate function that \naccepts the message string and returns the translation. Thoughts on that?\n\n4. In the same vein, I added a pgversion column since the filename and \nline number for the same log message could vary across major or even \nminor releases.\n\nNot done:\n---------\n1. The main thing lacking at this point is a regression test.\n\n2. No special casing for message == \"%s\". I am still not convinced it is \nworthwhile to do so.\n\nComments gratefully welcomed.\n\nThanks,\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Sep 2023 15:30:45 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "Hi,\n\nI noticed this patch hasn't moved since September 2023, so I wonder\nwhat's the main blocker / what is needed to move this?\n\nAs for the feature, I've never done a fleet-wide analysis, so if this is\none of the main use cases, I'm not really sure I can judge if this is a\ngood tool for that. It seems like it might be a convenient way to do\nthat, but does that require we add the module to contrib?\n\nAs for the code, I wonder if the instability of line numbers could be a\nproblem - these can change (a little bit) between minor releases, so\nafter an upgrade we'll read the dump file with line numbers from the old\nrelease, and then start adding entries with new line numbers. Do we need\nto handle this in some way?\n\nThis might be partially solved by eviction of entries from the old\nrelease - we apply decay, so after a while their usage will be 0. But\nwhat if there's no pressure for space, we'll not actually evict them.\nAnd it'll be confusing to have a mix of old/new line numbers.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Jul 2024 00:14:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 12:14:36AM +0200, Tomas Vondra wrote:\n> I noticed this patch hasn't moved since September 2023, so I wonder\n> what's the main blocker / what is needed to move this?\n\n+ /* Location of permanent stats file (valid when database is shut down) */\n+ #define PGLM_DUMP_FILE\tPGSTAT_STAT_PERMANENT_DIRECTORY\n\"/pg_stat_logmsg.stat\n\nPerhaps this does not count as a valid reason, but does it really make\nsense to implement things this way, knowing that this could be changed\nto rely on a potential pluggable pgstats? I mean this one I've\nproposed:\nhttps://www.postgresql.org/message-id/Zmqm9j5EO0I4W8dx%40paquier.xyz\n\nOne potential implementation is to stick that to be fixed-numbered,\nbecause there is a maximum cap to the number of entries proposed by\nthe patch, while keeping the whole in memory.\n\n+ logmsg_store(ErrorData *edata, Size *logmsg_offset,\n+ \t\t\t int *logmsg_len, int *gc_count)\n\nThe patch shares a lot of perks with pg_stat_statements that don't\nscale well. I'm wondering if it is a good idea to duplicate these\nproperties in a second, different, module, like the external file can\ncould be written out on a periodic basis depending on the workload.\nI am not saying that the other thread is a magic solution for\neverything (not looked yet at how this would plug with the cap in\nentries that pg_stat_statements wants), just one option on the table.\n\n> As for the code, I wonder if the instability of line numbers could be a\n> problem - these can change (a little bit) between minor releases, so\n> after an upgrade we'll read the dump file with line numbers from the old\n> release, and then start adding entries with new line numbers. Do we need\n> to handle this in some way?\n\nIndeed. Perhaps a PostgreSQL version number assigned to each entry to\nknow from which binary an entry comes from? This would cost a couple\nof extra bytes for each entry still that would be the best information\npossible to match that with a correct code tree. If it comes to that,\neven getting down to a commit SHA1 could be useful to provide the\nlowest level of granularity. Another thing would be to give up on the\nline number, stick to the uniqueness in the stats entries with the\nerrcode and the file name, but that won't help for things like\ntablecmds.c.\n\n> This might be partially solved by eviction of entries from the old\n> release - we apply decay, so after a while their usage will be 0. But\n> what if there's no pressure for space, we'll not actually evict them.\n> And it'll be confusing to have a mix of old/new line numbers.\n\nOnce we know that these stats not going to be relevant anymore as of a\nminor upgrade flow, resetting them could be the move that makes the\nmost sense, leaving the reset to the provider doing the upgrades,\nwhile taking a snapshot of the past data before the reset? I find the\nwhole problem tricky to define, TBH.\n--\nMichael",
"msg_date": "Wed, 17 Jul 2024 08:08:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On 7/16/24 18:14, Tomas Vondra wrote:\n> I noticed this patch hasn't moved since September 2023, so I wonder\n> what's the main blocker / what is needed to move this?\n\nMainly me finding time I'm afraid.\n\n> As for the feature, I've never done a fleet-wide analysis, so if this is\n> one of the main use cases, I'm not really sure I can judge if this is a\n> good tool for that. It seems like it might be a convenient way to do\n> that, but does that require we add the module to contrib?\n\nI had an offlist chat with Andres about this IIRC and he suggested he \nthought it ought to be just built in to the backend as part of the \nstatistics subsystem. Lately though I have been toying with the idea of \nkeeping it as an extension and basing it off Michael Paquier's work for \nPluggable cumulative statistics.\n\n> As for the code, I wonder if the instability of line numbers could be a\n> problem - these can change (a little bit) between minor releases, so\n> after an upgrade we'll read the dump file with line numbers from the old\n> release, and then start adding entries with new line numbers. Do we need\n> to handle this in some way?\n\nHmm, yeah, I had been planning to include postgres version as part of \nthe output, but maybe it would need to be part of the key.\n\n> This might be partially solved by eviction of entries from the old\n> release - we apply decay, so after a while their usage will be 0. But\n> what if there's no pressure for space, we'll not actually evict them.\n> And it'll be confusing to have a mix of old/new line numbers.\n\nAgreed.\n\nI am going to try hard to get back to this sooner rather than later, but \nrealistically that might be in time for the September commitfest rather \nthan during this one.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 17 Jul 2024 07:43:13 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On 7/16/24 19:08, Michael Paquier wrote:\n> On Wed, Jul 17, 2024 at 12:14:36AM +0200, Tomas Vondra wrote:\n>> I noticed this patch hasn't moved since September 2023, so I wonder\n>> what's the main blocker / what is needed to move this?\n> \n> + /* Location of permanent stats file (valid when database is shut down) */\n> + #define PGLM_DUMP_FILE\tPGSTAT_STAT_PERMANENT_DIRECTORY\n> \"/pg_stat_logmsg.stat\n> \n> Perhaps this does not count as a valid reason, but does it really make\n> sense to implement things this way, knowing that this could be changed\n> to rely on a potential pluggable pgstats? I mean this one I've\n> proposed:\n> https://www.postgresql.org/message-id/Zmqm9j5EO0I4W8dx%40paquier.xyz\n\nYep, see my adjacent reply to Tomas.\n\n> One potential implementation is to stick that to be fixed-numbered,\n> because there is a maximum cap to the number of entries proposed by\n> the patch, while keeping the whole in memory.\n> \n> + logmsg_store(ErrorData *edata, Size *logmsg_offset,\n> + \t\t\t int *logmsg_len, int *gc_count)\n> \n> The patch shares a lot of perks with pg_stat_statements that don't\n> scale well. I'm wondering if it is a good idea to duplicate these\n> properties in a second, different, module, like the external file can\n> could be written out on a periodic basis depending on the workload.\n> I am not saying that the other thread is a magic solution for\n> everything (not looked yet at how this would plug with the cap in\n> entries that pg_stat_statements wants), just one option on the table.\n> \n>> As for the code, I wonder if the instability of line numbers could be a\n>> problem - these can change (a little bit) between minor releases, so\n>> after an upgrade we'll read the dump file with line numbers from the old\n>> release, and then start adding entries with new line numbers. Do we need\n>> to handle this in some way?\n> \n> Indeed. Perhaps a PostgreSQL version number assigned to each entry to\n> know from which binary an entry comes from? This would cost a couple\n> of extra bytes for each entry still that would be the best information\n> possible to match that with a correct code tree. If it comes to that,\n> even getting down to a commit SHA1 could be useful to provide the\n> lowest level of granularity. Another thing would be to give up on the\n> line number, stick to the uniqueness in the stats entries with the\n> errcode and the file name, but that won't help for things like\n> tablecmds.c.\n\nI think including version in the key makes most sense. Also do we even \nhave a mechanism to grab the commit sha in running code?\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 17 Jul 2024 07:48:15 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 07:48:15AM -0400, Joe Conway wrote:\n> I think including version in the key makes most sense. Also do we even have\n> a mechanism to grab the commit sha in running code?\n\nNot directly, still that's doable.\n\nThe closest thing I would consider here is to get the output of\nsomething like 'git rev-parse --short HEAD` and attach it to\nPG_VERSION with --with-extra-version. I do that in my local builds\nbecause I always want to know from which commit I am building\nsomething. Then, PG_VERSION could be stored with the entries while\nhashing the stats key with the version string, the error code, the\nsource file name and/or the line number for uniqueness. 32 bytes of\nroom would be most likely enough when it comes to the PG_VERSION data\nstored in the stats entries.\n--\nMichael",
"msg_date": "Thu, 18 Jul 2024 13:26:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 07:43:13AM -0400, Joe Conway wrote:\n> On 7/16/24 18:14, Tomas Vondra wrote:\n>> As for the feature, I've never done a fleet-wide analysis, so if this is\n>> one of the main use cases, I'm not really sure I can judge if this is a\n>> good tool for that. It seems like it might be a convenient way to do\n>> that, but does that require we add the module to contrib?\n> \n> I had an offlist chat with Andres about this IIRC and he suggested he\n> thought it ought to be just built in to the backend as part of the\n> statistics subsystem. Lately though I have been toying with the idea of\n> keeping it as an extension and basing it off Michael Paquier's work for\n> Pluggable cumulative statistics.\n\nThis may live better as a contrib/ module, serving as well as an extra\ntemplate for what can be done with the pluggable stats. Adding that\nin core is of course OK for me if that's the consensus. The APIs for\npluggable stats are really the same as what you would store in core,\nminus the system functions you'd want to add in the catalog .dat\nfiles, of course.\n\nI'd like to get it this part done by the end of this commit fest to\nhave room with pg_stat_statements for this release, but well, we'll\nsee. As far as I can see everybody who commented on the thread seems\nkind of OK with the idea to fix the stats kinds IDs in time, like\ncustom RMGRs. That's just simpler implementation-wise, but I'm also\nlooking for more opinions.\n\n> Hmm, yeah, I had been planning to include postgres version as part of the\n> output, but maybe it would need to be part of the key.\n\nSeems to me that you should do both, then: add PG_VERSION to the\nentries, and hash the keys with it for uniqueness. You could also\nhave a reset function that performs a removal of the stats for\nanything else than the current PG_VERSION, for example.\n--\nMichael",
"msg_date": "Thu, 18 Jul 2024 13:32:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: pg_stat_logmsg"
}
] |
[
{
"msg_contents": "Hi,\n\nI just noticed that the comment for PG_CACHE_LINE_SIZE still says that \"it's\ncurrently used in xlog.c\", which hasn't been true for quite some time.\n\nPFA a naive patch to make the description more generic.",
"msg_date": "Sat, 1 Jul 2023 15:49:36 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Outdated description of PG_CACHE_LINE_SIZE"
},
{
"msg_contents": "On 01/07/2023 10:49, Julien Rouhaud wrote:\n> Hi,\n> \n> I just noticed that the comment for PG_CACHE_LINE_SIZE still says that \"it's\n> currently used in xlog.c\", which hasn't been true for quite some time.\n> \n> PFA a naive patch to make the description more generic.\n\nApplied, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:01:55 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outdated description of PG_CACHE_LINE_SIZE"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 12:01:55PM +0300, Heikki Linnakangas wrote:\n> On 01/07/2023 10:49, Julien Rouhaud wrote:\n> >\n> > I just noticed that the comment for PG_CACHE_LINE_SIZE still says that \"it's\n> > currently used in xlog.c\", which hasn't been true for quite some time.\n> >\n> > PFA a naive patch to make the description more generic.\n>\n> Applied, thanks!\n\nThanks!\n\n\n",
"msg_date": "Mon, 3 Jul 2023 17:24:23 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Outdated description of PG_CACHE_LINE_SIZE"
}
] |
[
{
"msg_contents": "Hi,\n\nI think there's some sort of bug in how dd38ff28ad deals with\ncontrecords. Consider something as simple as\n\n pgbench -i -s 100\n\nand then doing pg_waldump on the WAL segments, I get this for every\nsingle one:\n\n pg_waldump: error: error in WAL record at 0/1FFFF98: missing\n contrecord at 0/1FFFFE0\n\nThis only happens since dd38ff28ad, and revert makes it disappear.\n\nIt's possible we still have some issue with contrecords, but IIUC we\nfixed those. So unless there's some unknown one (and considering this\nseems to happen for *every* WAL segment that's hard to believe), this\nseems more like an issue in the error detection.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Jul 2023 15:40:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "possible bug in handling of contrecords in dd38ff28ad (Fix\n recovery_prefetch with low maintenance_io_concurrency)"
},
{
"msg_contents": "On Sun, Jul 2, 2023 at 1:40 AM Tomas Vondra\n<[email protected]> wrote:\n> I think there's some sort of bug in how dd38ff28ad deals with\n> contrecords. Consider something as simple as\n>\n> pgbench -i -s 100\n>\n> and then doing pg_waldump on the WAL segments, I get this for every\n> single one:\n>\n> pg_waldump: error: error in WAL record at 0/1FFFF98: missing\n> contrecord at 0/1FFFFE0\n>\n> This only happens since dd38ff28ad, and revert makes it disappear.\n>\n> It's possible we still have some issue with contrecords, but IIUC we\n> fixed those. So unless there's some unknown one (and considering this\n> seems to happen for *every* WAL segment that's hard to believe), this\n> seems more like an issue in the error detection.\n\nYeah. That message is due to this part of dd38ff28ad's change:\n\n Also add an explicit error message for missing contrecords. It was a\n bit strange that we didn't report an error already, and became a latent\n bug with prefetching, since the internal state that tracks aborted\n contrecords would not survive retrying, as revealed by\n 026_overwrite_contrecord.pl with this adjustment. Reporting an error\n prevents that.\n\nWe can change 'missing contrecord' back to silent end-of-decoding (as\nit was in 14) with the attached. Here [1] is some analysis of this\nerror that I wrote last year. The reason for my hesitation in pushing\na fix was something like this:\n\n1. For pg_waldump, it's \"you told me to decode only this WAL segment,\nso decoding failed here\", which is useless noise\n2. For walsender, it's \"you told me to shut down, so decoding failed\nhere\", which is useless noise\n3. For crash recovery, \"I ran out of data, so decoding failed here\",\nwhich seems like a report-worthy condition, I think?\n\nWhen I added that new error I was thinking about that third case. We\ngenerally expect to detect the end of WAL replay after a crash with an\nerror (\"invalid record length ...: wanted 24, got 0\" + several other\npossibilities), but in this rare case it would be silent. The effect\non the first two cases is cosmetic, but certainly annoying. Perhaps I\nshould go ahead and back-patch the attached change, and then we can\ndiscuss whether/how we should do a better job of distinguishing \"user\nrequested artificial end of decoding\" from \"unexpectedly ran out of\ndata\" for v17?\n\n[1] https://www.postgresql.org/message-id/CA+hUKG+WKsZpdoryeqM8_rk5uQPCqS2HGY92WiMGFsK2wVkcig@mail.gmail.com",
"msg_date": "Sun, 2 Jul 2023 14:09:07 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: possible bug in handling of contrecords in dd38ff28ad (Fix\n recovery_prefetch with low maintenance_io_concurrency)"
},
{
"msg_contents": "\n\nOn 7/2/23 04:09, Thomas Munro wrote:\n> On Sun, Jul 2, 2023 at 1:40 AM Tomas Vondra\n> <[email protected]> wrote:\n>> I think there's some sort of bug in how dd38ff28ad deals with\n>> contrecords. Consider something as simple as\n>>\n>> pgbench -i -s 100\n>>\n>> and then doing pg_waldump on the WAL segments, I get this for every\n>> single one:\n>>\n>> pg_waldump: error: error in WAL record at 0/1FFFF98: missing\n>> contrecord at 0/1FFFFE0\n>>\n>> This only happens since dd38ff28ad, and revert makes it disappear.\n>>\n>> It's possible we still have some issue with contrecords, but IIUC we\n>> fixed those. So unless there's some unknown one (and considering this\n>> seems to happen for *every* WAL segment that's hard to believe), this\n>> seems more like an issue in the error detection.\n> \n> Yeah. That message is due to this part of dd38ff28ad's change:\n> \n> Also add an explicit error message for missing contrecords. It was a\n> bit strange that we didn't report an error already, and became a latent\n> bug with prefetching, since the internal state that tracks aborted\n> contrecords would not survive retrying, as revealed by\n> 026_overwrite_contrecord.pl with this adjustment. Reporting an error\n> prevents that.\n> \n> We can change 'missing contrecord' back to silent end-of-decoding (as\n> it was in 14) with the attached. Here [1] is some analysis of this\n> error that I wrote last year. The reason for my hesitation in pushing\n> a fix was something like this:\n> \n> 1. For pg_waldump, it's \"you told me to decode only this WAL segment,\n> so decoding failed here\", which is useless noise\n> 2. For walsender, it's \"you told me to shut down, so decoding failed\n> here\", which is useless noise\n> 3. For crash recovery, \"I ran out of data, so decoding failed here\",\n> which seems like a report-worthy condition, I think?\n> \n> When I added that new error I was thinking about that third case. We\n> generally expect to detect the end of WAL replay after a crash with an\n> error (\"invalid record length ...: wanted 24, got 0\" + several other\n> possibilities), but in this rare case it would be silent. The effect\n> on the first two cases is cosmetic, but certainly annoying. Perhaps I\n> should go ahead and back-patch the attached change, and then we can\n> discuss whether/how we should do a better job of distinguishing \"user\n> requested artificial end of decoding\" from \"unexpectedly ran out of\n> data\" for v17?\n> \n\nYeah, I think that'd be sensible. IMHO we have a habit of scaring users\nwith stuff that might be dangerous/bad, but 99% of the time it's\nactually fine and perhaps even expected. It's almost as if we're\nconditioning people to ignore errors.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 2 Jul 2023 20:12:42 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: possible bug in handling of contrecords in dd38ff28ad (Fix\n recovery_prefetch with low maintenance_io_concurrency)"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 6:12 AM Tomas Vondra\n<[email protected]> wrote:\n> On 7/2/23 04:09, Thomas Munro wrote:\n> > When I added that new error I was thinking about that third case. We\n> > generally expect to detect the end of WAL replay after a crash with an\n> > error (\"invalid record length ...: wanted 24, got 0\" + several other\n> > possibilities), but in this rare case it would be silent. The effect\n> > on the first two cases is cosmetic, but certainly annoying. Perhaps I\n> > should go ahead and back-patch the attached change, and then we can\n> > discuss whether/how we should do a better job of distinguishing \"user\n> > requested artificial end of decoding\" from \"unexpectedly ran out of\n> > data\" for v17?\n> >\n>\n> Yeah, I think that'd be sensible. IMHO we have a habit of scaring users\n> with stuff that might be dangerous/bad, but 99% of the time it's\n> actually fine and perhaps even expected. It's almost as if we're\n> conditioning people to ignore errors.\n\nDone.\n\nThere is CF #2490 \"Make message at end-of-recovery less scary\".\nPerhaps we should think about how to classify this type of failure in\nthe context of that proposal.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:32:07 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: possible bug in handling of contrecords in dd38ff28ad (Fix\n recovery_prefetch with low maintenance_io_concurrency)"
}
] |
[
{
"msg_contents": "Hi, there...\n\ndrop table infomask_test;\nCREATE TABLE infomask_test(acc_no integer PRIMARY KEY,amount\nnumeric,misc text);\nINSERT INTO infomask_test VALUES (1, 100.00,default), (2,\n200.00,repeat('abc',700));\n\nBEGIN;\nSELECT acc_no,ctid,xmin,xmax FROM infomask_test WHERE acc_no = 1 FOR KEY SHARE;\nSELECT acc_no,ctid,xmin,xmax FROM infomask_test WHERE acc_no = 2 FOR SHARE;\n\nselect t_ctid, raw_flags, combined_flags,t_xmin,t_xmax\nFROM heap_page_items(get_raw_page('infomask_test', 0))\n ,LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\norder by t_ctid;\n\n t_ctid | raw_flags\n | combined_flags |\nt_xmin | t_xmax\n--------+------------------------------------------------------------------------------------------------------+----------------------+--------+--------\n (0,1) | {HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMAX_KEYSHR_LOCK,HEAP_XMAX_LOCK_ONLY,HEAP_XMIN_COMMITTED}\n | {} | 25655 | 25656\n (0,2) | {HEAP_HASVARWIDTH,HEAP_XMAX_KEYSHR_LOCK,HEAP_XMAX_EXCL_LOCK,HEAP_XMAX_LOCK_ONLY,HEAP_XMIN_COMMITTED}\n| {HEAP_XMAX_SHR_LOCK} | 25655 | 25656\n\nselect acc_no,ctid,xmin,xmax from infomask_test;\n acc_no | ctid | xmin | xmax\n--------+-------+-------+-------\n 1 | (0,1) | 25655 | 25656\n 2 | (0,2) | 25655 | 25656\n(2 rows)\nrollback;\n----------------------------------------------------------------------------------------------------------\n/main/postgres/src/include/access/htup_details.h:\n\n#define HEAP_XMAX_EXCL_LOCK 0x0040 /* xmax is exclusive locker */\n\nwhile manual:\nFOR SHARE: Behaves similarly to FOR NO KEY UPDATE, except that it\nacquires a shared lock rather than exclusive lock on each retrieved\nrow. A shared lock blocks other transactions from performing UPDATE,\nDELETE, SELECT FOR UPDATE or SELECT FOR NO KEY UPDATE on these rows,\nbut it does not prevent them from performing SELECT FOR SHARE or\nSELECT FOR KEY SHARE.\n\nI failed to distinguish/reconcile between exclusive locker (in source\ncode comment) and shared lock (in manual).\n-----------------------------------------------------------------------\naslo in /src/include/access/htup_details.h\n\n#define HEAP_UPDATED 0x2000 /* this is UPDATEd version of row */\n\npersonally I found this comment kind of confusing. Trigger concept old\ntable, the new table is very intuitive.\n\n\n",
"msg_date": "Sun, 2 Jul 2023 19:56:51 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "/src/include/access/htup_details.h some comments kind of\n confusing...."
}
] |
[
{
"msg_contents": "The buildfarm animal fairywren has been failing the tests for \npg_basebackup because it can't create a file with a path longer than 255 \nchars. This has just been tripped because for release 16 it's running \nTAP tests, and the branch name is part of the file path, and \n\"REL_16_STABLE\" is longer than \"HEAD\". I did think of chdir'ing into the \ndirectory to create the file, but experimentation shows that doesn't \nsolve matters. I also adjusted the machine's settings related to long \nfile names, but to no avail, so for now I propose to reduce slightly the \nname of the long file so it still exercises the check for file names \nlonger than 100 but doesn't trip this up on fairywren. But that's a \nbandaid. I don't have a good solution for now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nThe buildfarm animal fairywren has been\n failing the tests for pg_basebackup because it can't create a\n file with a path longer than 255 chars. This has just been\n tripped because for release 16 it's running TAP tests, and the\n branch name is part of the file path, and \"REL_16_STABLE\" is\n longer than \"HEAD\". I did think of chdir'ing into the directory\n to create the file, but experimentation shows that doesn't solve\n matters. I also adjusted the machine's settings related to long\n file names, but to no avail, so for now I propose to reduce\n slightly the name of the long file so it still exercises the\n check for file names longer than 100 but doesn't trip this up on\n fairywren. But that's a bandaid. I don't have a good solution\n for now. \n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 2 Jul 2023 09:15:17 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-02 Su 09:15, Andrew Dunstan wrote:\n>\n> The buildfarm animal fairywren has been failing the tests for \n> pg_basebackup because it can't create a file with a path longer than \n> 255 chars. This has just been tripped because for release 16 it's \n> running TAP tests, and the branch name is part of the file path, and \n> \"REL_16_STABLE\" is longer than \"HEAD\". I did think of chdir'ing into \n> the directory to create the file, but experimentation shows that \n> doesn't solve matters. I also adjusted the machine's settings related \n> to long file names, but to no avail, so for now I propose to reduce \n> slightly the name of the long file so it still exercises the check for \n> file names longer than 100 but doesn't trip this up on fairywren. But \n> that's a bandaid. I don't have a good solution for now.\n>\n\nI've pushed a better solution, which creates the file via a short \nsymlink. Experimentation on fairywren showed this working.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-02 Su 09:15, Andrew Dunstan\n wrote:\n\n\n\nThe buildfarm animal fairywren has been\n failing the tests for pg_basebackup because it can't create a\n file with a path longer than 255 chars. This has just been\n tripped because for release 16 it's running TAP tests, and the\n branch name is part of the file path, and \"REL_16_STABLE\" is\n longer than \"HEAD\". I did think of chdir'ing into the\n directory to create the file, but experimentation shows that\n doesn't solve matters. I also adjusted the machine's settings\n related to long file names, but to no avail, so for now I\n propose to reduce slightly the name of the long file so it\n still exercises the check for file names longer than 100 but\n doesn't trip this up on fairywren. But that's a bandaid. I\n don't have a good solution for now. \n\n\n\n\nI've pushed a better solution, which creates the file via a short\n symlink. Experimentation on fairywren showed this working.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 3 Jul 2023 10:12:49 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "> On 3 Jul 2023, at 16:12, Andrew Dunstan <[email protected]> wrote:\n\n> I've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n\nThe buildfarm seems a tad upset after this?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:16:02 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-03 Mo 10:16, Daniel Gustafsson wrote:\n>> On 3 Jul 2023, at 16:12, Andrew Dunstan<[email protected]> wrote:\n>> I've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n> The buildfarm seems a tad upset after this?\n>\n\nYeah :-(\n\nI think it should be fixing itself now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-03 Mo 10:16, Daniel\n Gustafsson wrote:\n\n\n\nOn 3 Jul 2023, at 16:12, Andrew Dunstan <[email protected]> wrote:\n\n\n\n\n\nI've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n\n\n\nThe buildfarm seems a tad upset after this?\n\n\n\n\n\nYeah :-( \n\nI think it should be fixing itself now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 3 Jul 2023 11:18:25 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "> On 3 Jul 2023, at 17:18, Andrew Dunstan <[email protected]> wrote:\n> On 2023-07-03 Mo 10:16, Daniel Gustafsson wrote:\n>>> On 3 Jul 2023, at 16:12, Andrew Dunstan <[email protected]>\n>>> wrote:\n>>> \n>>> I've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n>>> \n>> The buildfarm seems a tad upset after this?\n> \n> Yeah :-( \n> \n> I think it should be fixing itself now.\n\nYeah, thanks for speedy fix!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 17:19:51 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-03 Mo 11:18, Andrew Dunstan wrote:\n>\n>\n> On 2023-07-03 Mo 10:16, Daniel Gustafsson wrote:\n>>> On 3 Jul 2023, at 16:12, Andrew Dunstan<[email protected]> wrote:\n>>> I've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n>> The buildfarm seems a tad upset after this?\n>>\n>\n> Yeah :-(\n>\n> I think it should be fixing itself now.\n>\n>\n>\n\nBut sadly we're kinda back where we started. fairywren is failing on \nREL_16_STABLE. Before the changes the failure occurred because the test \nscript was unable to create the file with a path > 255. Now that we have \na way to create the file the test for pg_basebackup to reject files with \nnames > 100 fails, I presume because the server can't actually see the \nfile. At this stage I'm thinking the best thing would be to skip the \ntest altogether on windows if the path is longer than 255.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-03 Mo 11:18, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-07-03 Mo 10:16, Daniel\n Gustafsson wrote:\n\n\n\nOn 3 Jul 2023, at 16:12, Andrew Dunstan <[email protected]> wrote:\n\n\n\nI've pushed a better solution, which creates the file via a short symlink. Experimentation on fairywren showed this working.\n\n\nThe buildfarm seems a tad upset after this?\n\n\n\n\n\nYeah :-( \n\nI think it should be fixing itself now.\n\n\n\n\n\n\nBut sadly we're kinda back where we started. fairywren is failing\n on REL_16_STABLE. Before the changes the failure occurred because\n the test script was unable to create the file with a path >\n 255. Now that we have a way to create the file the test for\n pg_basebackup to reject files with names > 100 fails, I presume\n because the server can't actually see the file. At this stage I'm\n thinking the best thing would be to skip the test altogether on\n windows if the path is longer than 255.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 4 Jul 2023 14:19:46 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "> On 4 Jul 2023, at 20:19, Andrew Dunstan <[email protected]> wrote:\n\n> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n\nThat does sound like a fairly large hammer for a nail small enough that we\nshould be able to fix it, but I don't have any other good ideas off the cuff.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 22:54:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n>> On 4 Jul 2023, at 20:19, Andrew Dunstan<[email protected]> wrote:\n>> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n> That does sound like a fairly large hammer for a nail small enough that we\n> should be able to fix it, but I don't have any other good ideas off the cuff.\n\n\nNot sure it's such a big hammer. Here's a patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 5 Jul 2023 08:49:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "> On 5 Jul 2023, at 14:49, Andrew Dunstan <[email protected]> wrote:\n> On 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n>>> On 4 Jul 2023, at 20:19, Andrew Dunstan <[email protected]>\n>>> wrote:\n>>> \n>>> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n>>> \n>> That does sound like a fairly large hammer for a nail small enough that we\n>> should be able to fix it, but I don't have any other good ideas off the cuff.\n> \n> Not sure it's such a big hammer. Here's a patch.\n\nNo objections to the patch, LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:50:57 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-06 Th 09:50, Daniel Gustafsson wrote:\n>> On 5 Jul 2023, at 14:49, Andrew Dunstan<[email protected]> wrote:\n>> On 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n>>>> On 4 Jul 2023, at 20:19, Andrew Dunstan<[email protected]>\n>>>> wrote:\n>>>>\n>>>> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n>>>>\n>>> That does sound like a fairly large hammer for a nail small enough that we\n>>> should be able to fix it, but I don't have any other good ideas off the cuff.\n>> Not sure it's such a big hammer. Here's a patch.\n> No objections to the patch, LGTM.\n\n\nThanks. pushed with a couple of tweaks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-06 Th 09:50, Daniel\n Gustafsson wrote:\n\n\n\nOn 5 Jul 2023, at 14:49, Andrew Dunstan <[email protected]> wrote:\nOn 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n\n\n\nOn 4 Jul 2023, at 20:19, Andrew Dunstan <[email protected]>\n wrote:\n\nBut sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n\n\n\nThat does sound like a fairly large hammer for a nail small enough that we\nshould be able to fix it, but I don't have any other good ideas off the cuff.\n\n\n\nNot sure it's such a big hammer. Here's a patch.\n\n\n\nNo objections to the patch, LGTM.\n\n\n\nThanks. pushed with a couple of tweaks.\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 6 Jul 2023 12:38:03 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-06 Th 12:38, Andrew Dunstan wrote:\n>\n>\n> On 2023-07-06 Th 09:50, Daniel Gustafsson wrote:\n>>> On 5 Jul 2023, at 14:49, Andrew Dunstan<[email protected]> wrote:\n>>> On 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n>>>>> On 4 Jul 2023, at 20:19, Andrew Dunstan<[email protected]>\n>>>>> wrote:\n>>>>>\n>>>>> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n>>>>>\n>>>> That does sound like a fairly large hammer for a nail small enough that we\n>>>> should be able to fix it, but I don't have any other good ideas off the cuff.\n>>> Not sure it's such a big hammer. Here's a patch.\n>> No objections to the patch, LGTM.\n>\n>\n> Thanks. pushed with a couple of tweaks.\n>\n>\n>\n\nUnfortunately, skipping this has now exposed a further problem in this test.\n\n\nHere's the relevant log extracted from \n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-07-07%2022%3A03%3A06>, \nstarting with the skip mentioned above:\n\n\n[23:29:21.661](0.002s) ok 98 # skip File path too long\n### Stopping node \"main\" using mode fast\n# Running: pg_ctl -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -m fast stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"main\"\nJunction created for C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\t_010_pg_basebackup_main_data\\\\pgdata\\\\pg_replslot <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\\\\pg_replslot\n### Starting node \"main\"\n# Running: pg_ctl -w -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -l C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/log/010_pg_basebackup_main.log -o --cluster-name=main start\nwaiting for server to start.... 
done\nserver started\n# Postmaster PID for node \"main\" is 5184\nJunction created for C:\\\\tools\\\\nmsys64\\\\tmp\\\\6zkMt003MF\\\\tempdir <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\n# Taking pg_basebackup tarbackup2 from node \"main\"\n# Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\nWARNING: aborting backup due to backend exiting before pg_backup_stop was called\npg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n\n\nIt's worth pointing out that the path for the replslot junction is almost as long as the original path.\n\nSince this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n\n rename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nto use the much shorter $sys_tempdir created a few lines below.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-06 Th 12:38, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-07-06 Th 09:50, Daniel\n Gustafsson wrote:\n\n\n\nOn 5 Jul 2023, at 14:49, Andrew Dunstan <[email protected]> wrote:\nOn 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n\n\n\nOn 4 Jul 2023, at 20:19, Andrew Dunstan <[email protected]>\n wrote:\n\nBut sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n\n\n\nThat does sound like a fairly large hammer for a nail small enough that we\nshould be able to fix it, but I don't have any other good ideas off the cuff.\n\n\nNot sure it's such a big hammer. Here's a patch.\n\n\nNo objections to the patch, LGTM.\n\n\n\nThanks. pushed with a couple of tweaks.\n\n\n\n\n\n\nUnfortunately, skipping this has now exposed a further problem in\n this test.\n\n\nHere's the relevant log extracted from\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-07-07%2022%3A03%3A06>,\n starting with the skip mentioned above:\n\n\n[23:29:21.661](0.002s) ok 98 # skip File path too long\n### Stopping node \"main\" using mode fast\n# Running: pg_ctl -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -m fast stop\nwaiting for server to shut down.... 
done\nserver stopped\n# No postmaster PID for node \"main\"\nJunction created for C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\t_010_pg_basebackup_main_data\\\\pgdata\\\\pg_replslot <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\\\\pg_replslot\n### Starting node \"main\"\n# Running: pg_ctl -w -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -l C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/log/010_pg_basebackup_main.log -o --cluster-name=main start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"main\" is 5184\nJunction created for C:\\\\tools\\\\nmsys64\\\\tmp\\\\6zkMt003MF\\\\tempdir <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\n# Taking pg_basebackup tarbackup2 from node \"main\"\n# Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\nWARNING: aborting backup due to backend exiting before pg_backup_stop was called\npg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n\n\nIt's worth pointing out that the path for the replslot junction is almost as long as the original path.\n\nSince this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n\nrename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nto use the much shorter $sys_tempdir created a few lines below.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 8 Jul 2023 09:15:21 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-07-08 Sa 09:15, Andrew Dunstan wrote:\n>\n>\n> On 2023-07-06 Th 12:38, Andrew Dunstan wrote:\n>>\n>>\n>> On 2023-07-06 Th 09:50, Daniel Gustafsson wrote:\n>>>> On 5 Jul 2023, at 14:49, Andrew Dunstan<[email protected]> wrote:\n>>>> On 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n>>>>>> On 4 Jul 2023, at 20:19, Andrew Dunstan<[email protected]>\n>>>>>> wrote:\n>>>>>>\n>>>>>> But sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n>>>>>>\n>>>>> That does sound like a fairly large hammer for a nail small enough that we\n>>>>> should be able to fix it, but I don't have any other good ideas off the cuff.\n>>>> Not sure it's such a big hammer. Here's a patch.\n>>> No objections to the patch, LGTM.\n>>\n>>\n>> Thanks. pushed with a couple of tweaks.\n>>\n>>\n>>\n>\n> Unfortunately, skipping this has now exposed a further problem in this \n> test.\n>\n>\n> Here's the relevant log extracted from \n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-07-07%2022%3A03%3A06>, \n> starting with the skip mentioned above:\n>\n>\n> [23:29:21.661](0.002s) ok 98 # skip File path too long\n> ### Stopping node \"main\" using mode fast\n> # Running: pg_ctl -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -m fast stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"main\"\n> Junction created for C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\t_010_pg_basebackup_main_data\\\\pgdata\\\\pg_replslot <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\\\\pg_replslot\n> ### Starting node \"main\"\n> # Running: pg_ctl -w -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -l C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/log/010_pg_basebackup_main.log -o --cluster-name=main start\n> waiting for server to start.... 
done\n> server started\n> # Postmaster PID for node \"main\" is 5184\n> Junction created for C:\\\\tools\\\\nmsys64\\\\tmp\\\\6zkMt003MF\\\\tempdir <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\n> # Taking pg_basebackup tarbackup2 from node \"main\"\n> # Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\n> WARNING: aborting backup due to backend exiting before pg_backup_stop was called\n> pg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n>\n>\n> It's worth pointing out that the path for the replslot junction is almost as long as the original path.\n>\n> Since this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n>\n> rename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n> or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n> dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n> or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n>\n> to use the much shorter $sys_tempdir created a few lines below.\n>\n>\n>\n\nPushed a tested fix along those lines.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-08 Sa 09:15, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-07-06 Th 12:38, Andrew\n Dunstan wrote:\n\n\n\n\n\nOn 2023-07-06 Th 09:50, Daniel\n Gustafsson wrote:\n\n\n\nOn 5 Jul 2023, at 14:49, Andrew Dunstan <[email protected]> wrote:\nOn 2023-07-04 Tu 16:54, Daniel Gustafsson wrote:\n\n\n\nOn 4 Jul 2023, at 20:19, Andrew Dunstan <[email protected]>\n wrote:\n\nBut sadly we're kinda back where we started. fairywren is failing on REL_16_STABLE. Before the changes the failure occurred because the test script was unable to create the file with a path > 255. Now that we have a way to create the file the test for pg_basebackup to reject files with names > 100 fails, I presume because the server can't actually see the file. At this stage I'm thinking the best thing would be to skip the test altogether on windows if the path is longer than 255.\n\n\n\nThat does sound like a fairly large hammer for a nail small enough that we\nshould be able to fix it, but I don't have any other good ideas off the cuff.\n\n\nNot sure it's such a big hammer. Here's a patch.\n\n\nNo objections to the patch, LGTM.\n\n\n\nThanks. pushed with a couple of tweaks.\n\n\n\n\n\n\nUnfortunately, skipping this has now exposed a further problem\n in this test.\n\n\nHere's the relevant log extracted from\n <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-07-07%2022%3A03%3A06>,\n starting with the skip mentioned above:\n\n\n[23:29:21.661](0.002s) ok 98 # skip File path too long\n### Stopping node \"main\" using mode fast\n# Running: pg_ctl -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -m fast stop\nwaiting for server to shut down.... 
done\nserver stopped\n# No postmaster PID for node \"main\"\nJunction created for C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\t_010_pg_basebackup_main_data\\\\pgdata\\\\pg_replslot <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\\\\pg_replslot\n### Starting node \"main\"\n# Running: pg_ctl -w -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/pgdata -l C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/log/010_pg_basebackup_main.log -o --cluster-name=main start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"main\" is 5184\nJunction created for C:\\\\tools\\\\nmsys64\\\\tmp\\\\6zkMt003MF\\\\tempdir <<===>> C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build\\\\testrun\\\\pg_basebackup\\\\010_pg_basebackup\\\\data\\\\tmp_test_pjj2\n# Taking pg_basebackup tarbackup2 from node \"main\"\n# Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\nWARNING: aborting backup due to backend exiting before pg_backup_stop was called\npg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n\n\nIt's worth pointing out that the path for the replslot junction is almost as long as the original path.\n\nSince this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n\nrename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nto use the much shorter $sys_tempdir created a few lines below.\n\n\n\n\n\n\nPushed a tested fix along those lines.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 8 Jul 2023 11:52:11 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "Hello Andrew,\n\n08.07.2023 18:52, Andrew Dunstan wrote:\n>> Since this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n>>\n>> rename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n>> or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n>> dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n>> or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n>>\n>> to use the much shorter $sys_tempdir created a few lines below.\n>>\n> Pushed a tested fix along those lines.\n>\n\nToday I've started up my Windows VM to run some tests and discovered a test\nfailure caused by that fix (e213de8e7):\n >meson test\nOk: 246\nExpected Fail: 0\nFail: 1\nUnexpected Pass: 0\nSkipped: 14\nTimeout: 0\n\n...\\010_pg_basebackup\\log\\regress_log_010_pg_basebackup.txt contains:\n[04:42:45.321](0.291s) Bail out! could not move \nT:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n\nWith a diagnostic print added before rename() in 010_pg_basebackup.pl, I see:\nrename(\"T:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\", \n\"C:\\Users\\User\\AppData\\Local\\Temp\\fGT76tZUWr/pg_replslot\")\nThat is, I have the postgres source tree and the user tempdir placed on\ndifferent disks.\n\nperldoc on rename() says that it usually doesn't work across filesystem\nboundaries, so I think it's not a Windows-specific issue.\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Andrew,\n\n 08.07.2023 18:52, Andrew Dunstan wrote:\n\n\n\n\n\nSince this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n\nrename(\"$pgdata/pg_replslot\",\n \"$tempdir/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nto use the much shorter $sys_tempdir created a few lines\n below.\n\nPushed a tested fix along those lines.\n\n\n Today I've started up my Windows VM to run some tests and discovered\n a test\n failure caused by that fix (e213de8e7):\n >meson test\n Ok: 246\n Expected Fail: 0\n Fail: 1\n Unexpected Pass: 0\n Skipped: 14\n Timeout: 0\n\n ...\\010_pg_basebackup\\log\\regress_log_010_pg_basebackup.txt\n contains:\n [04:42:45.321](0.291s) Bail out! could not move\nT:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n\n With a diagnostic print added before rename() in\n 010_pg_basebackup.pl, I see:\nrename(\"T:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\",\n \"C:\\Users\\User\\AppData\\Local\\Temp\\fGT76tZUWr/pg_replslot\")\n That is, I have the postgres source tree and the user tempdir placed\n on\n different disks.\n\n perldoc on rename() says that it usually doesn't work across\n filesystem\n boundaries, so I think it's not a Windows-specific issue.\n\n Best regards,\n Alexander",
"msg_date": "Sat, 11 Nov 2023 16:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "Hi, Alexander\n\n\nOn 2023-11-11 Sa 08:00, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 08.07.2023 18:52, Andrew Dunstan wrote:\n>>> Since this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n>>>\n>>> rename(\"$pgdata/pg_replslot\", \"$tempdir/pg_replslot\")\n>>> or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n>>> dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n>>> or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n>>>\n>>> to use the much shorter $sys_tempdir created a few lines below.\n>>>\n>> Pushed a tested fix along those lines.\n>>\n>\n> Today I've started up my Windows VM to run some tests and discovered a \n> test\n> failure caused by that fix (e213de8e7):\n> >meson test\n> Ok: 246\n> Expected Fail: 0\n> Fail: 1\n> Unexpected Pass: 0\n> Skipped: 14\n> Timeout: 0\n>\n> ...\\010_pg_basebackup\\log\\regress_log_010_pg_basebackup.txt contains:\n> [04:42:45.321](0.291s) Bail out! could not move \n> T:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n>\n> With a diagnostic print added before rename() in 010_pg_basebackup.pl, \n> I see:\n> rename(\"T:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\", \n> \"C:\\Users\\User\\AppData\\Local\\Temp\\fGT76tZUWr/pg_replslot\")\n> That is, I have the postgres source tree and the user tempdir placed on\n> different disks.\n>\n> perldoc on rename() says that it usually doesn't work across filesystem\n> boundaries, so I think it's not a Windows-specific issue.\n>\n>\n\nHmm, maybe we should be using File::Copy::move() instead of rename(). \nThe docco for that says:\n\n If possible, move() will simply rename the file. Otherwise, it\n copies the file to the new location and deletes the original. If an\n error occurs during this copy-and-delete process, you may be left\n with a (possibly partial) copy of the file under the destination\n name.\n\n\nCan you try it out?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\n\n\nHi, Alexander\n\n\nOn 2023-11-11 Sa 08:00, Alexander\n Lakhin wrote:\n\n\n\nHello Andrew,\n\n 08.07.2023 18:52, Andrew Dunstan wrote:\n\n\n\n\nSince this test is passing on HEAD which has slightly shorter paths, I'm wondering if we should change this:\n\nrename(\"$pgdata/pg_replslot\",\n \"$tempdir/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tempdir/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nto use the much shorter $sys_tempdir created a few lines\n below.\n\nPushed a tested fix along those lines.\n\n\n Today I've started up my Windows VM to run some tests and\n discovered a test\n failure caused by that fix (e213de8e7):\n >meson test\n Ok: 246\n Expected Fail: 0\n Fail: 1\n Unexpected Pass: 0\n Skipped: 14\n Timeout: 0\n\n ...\\010_pg_basebackup\\log\\regress_log_010_pg_basebackup.txt\n contains:\n [04:42:45.321](0.291s) Bail out! 
could not move\nT:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n\n With a diagnostic print added before rename() in\n 010_pg_basebackup.pl, I see:\nrename(\"T:\\postgresql\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\",\n \"C:\\Users\\User\\AppData\\Local\\Temp\\fGT76tZUWr/pg_replslot\")\n That is, I have the postgres source tree and the user tempdir\n placed on\n different disks.\n\n perldoc on rename() says that it usually doesn't work across\n filesystem\n boundaries, so I think it's not a Windows-specific issue.\n\n\n\n\n\nHmm, maybe we should be using File::Copy::move() instead of\n rename(). The docco for that says:\n If possible, move() will simply rename the file. Otherwise, it\n copies the file to the new location and deletes the original. If an\n error occurs during this copy-and-delete process, you may be left\n with a (possibly partial) copy of the file under the destination\n name.\n\n\nCan you try it out?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 11 Nov 2023 10:18:51 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-Jul-08, Andrew Dunstan wrote:\n\n> # Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\n> WARNING: aborting backup due to backend exiting before pg_backup_stop was called\n> pg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n\nWhy not patch pgreadlink to use the method recommended by Microsoft,\nthat DeviceIoControl() is called first with a NULL reparseBuffer to\ndetermine the size needed, then a second time with a buffer of that\nsize?\n\nhttps://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-deviceiocontrol\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 11 Nov 2023 17:31:39 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "11.11.2023 18:18, Andrew Dunstan wrote:\n>\n> Hmm, maybe we should be using File::Copy::move() instead of rename(). The docco for that says:\n>\n> If possible, move() will simply rename the file. Otherwise, it\n> copies the file to the new location and deletes the original. If an\n> error occurs during this copy-and-delete process, you may be left\n> with a (possibly partial) copy of the file under the destination\n> name.\n\nUnfortunately, I've stumbled upon inability of File::Copy::move()\nto move directories across filesystems, exactly as described here:\nhttps://stackoverflow.com/questions/17628039/filecopy-move-directories-accross-drives-in-windows-not-working\n\n(I'm sorry for not looking above rename() where this stated explicitly:\n# On Windows use the short location to avoid path length issues.\n# Elsewhere use $tempdir to avoid file system boundary issues with moving.\nSo this issue affects Windows only.)\n\nBest regards,\nAlexander\n\n\n\n\n\n11.11.2023 18:18, Andrew Dunstan wrote:\n\n\n\nHmm, maybe we should be using File::Copy::move() instead of\n rename(). The docco for that says:\n If possible, move() will simply rename the file. Otherwise, it\n copies the file to the new location and deletes the original. If an\n error occurs during this copy-and-delete process, you may be left\n with a (possibly partial) copy of the file under the destination\n name.\n\n\n Unfortunately, I've stumbled upon inability of File::Copy::move()\n to move directories across filesystems, exactly as described here:\nhttps://stackoverflow.com/questions/17628039/filecopy-move-directories-accross-drives-in-windows-not-working\n\n (I'm sorry for not looking above rename() where this stated\n explicitly:\n # On Windows use the short location to avoid path length issues.\n # Elsewhere use $tempdir to avoid file system boundary issues with\n moving.\n So this issue affects Windows only.)\n\n Best regards,\n Alexander",
"msg_date": "Sat, 11 Nov 2023 20:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-11-11 Sa 12:00, Alexander Lakhin wrote:\n> 11.11.2023 18:18, Andrew Dunstan wrote:\n>>\n>> Hmm, maybe we should be using File::Copy::move() instead of rename(). \n>> The docco for that says:\n>>\n>> If possible, move() will simply rename the file. Otherwise, it\n>> copies the file to the new location and deletes the original. If an\n>> error occurs during this copy-and-delete process, you may be left\n>> with a (possibly partial) copy of the file under the destination\n>> name.\n>\n> Unfortunately, I've stumbled upon inability of File::Copy::move()\n> to move directories across filesystems, exactly as described here:\n> https://stackoverflow.com/questions/17628039/filecopy-move-directories-accross-drives-in-windows-not-working\n>\n> (I'm sorry for not looking above rename() where this stated explicitly:\n> # On Windows use the short location to avoid path length issues.\n> # Elsewhere use $tempdir to avoid file system boundary issues with moving.\n> So this issue affects Windows only.)\n>\n>\n\n*sigh*\n\nA probable workaround is to use a temp directory on the same device the \ntest is building on. Just set it up and set your environment TEMPDIR to \npoint to it, and I think it will be OK (i.e. I havent tested it).\n\nBut that doesn't mean I'm not searching for a better solution. Maybe \nAlvaro's suggestion nearby will help.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-11-11 Sa 12:00, Alexander\n Lakhin wrote:\n\n\n\n11.11.2023 18:18, Andrew Dunstan\n wrote:\n\n\n\nHmm, maybe we should be using File::Copy::move() instead of\n rename(). The docco for that says:\n If possible, move() will simply rename the file. Otherwise, it\n copies the file to the new location and deletes the original. If an\n error occurs during this copy-and-delete process, you may be left\n with a (possibly partial) copy of the file under the destination\n name.\n\n\n Unfortunately, I've stumbled upon inability of File::Copy::move()\n to move directories across filesystems, exactly as described here:\nhttps://stackoverflow.com/questions/17628039/filecopy-move-directories-accross-drives-in-windows-not-working\n\n (I'm sorry for not looking above rename() where this stated\n explicitly:\n # On Windows use the short location to avoid path length issues.\n # Elsewhere use $tempdir to avoid file system boundary issues with\n moving.\n So this issue affects Windows only.)\n\n\n\n\n\n*sigh*\nA probable workaround is to use a temp directory on the same\n device the test is building on. Just set it up and set your\n environment TEMPDIR to point to it, and I think it will be OK\n (i.e. I havent tested it).\n\nBut that doesn't mean I'm not searching for a better solution.\n Maybe Alvaro's suggestion nearby will help.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 12 Nov 2023 09:09:27 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-11-11 Sa 11:31, Alvaro Herrera wrote:\n> On 2023-Jul-08, Andrew Dunstan wrote:\n>\n>> # Running: pg_basebackup -D C:\\\\tools\\\\nmsys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\REL_16_STABLE\\\\pgsql.build/testrun/pg_basebackup/010_pg_basebackup/data/t_010_pg_basebackup_main_data/backup/tarbackup2 -h C:/tools/nmsys64/tmp/63ohSgsh21 -p 54699 --checkpoint fast --no-sync -Ft\n>> WARNING: aborting backup due to backend exiting before pg_backup_stop was called\n>> pg_basebackup: error: could not initiate base backup: ERROR: could not get junction for \"./pg_replslot\": More data is available.\n> Why not patch pgreadlink to use the method recommended by Microsoft,\n> that DeviceIoControl() is called first with a NULL reparseBuffer to\n> determine the size needed, then a second time with a buffer of that\n> size?\n>\n> https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-deviceiocontrol\n\n\nHmm, here's what that page says - I can't see it saying what you're \nsuggesting here - am I missing something?:\n\n\n|[in] nOutBufferSize|\n\nThe size of the output buffer, in bytes.\n\n|[out, optional] lpBytesReturned|\n\nA pointer to a variable that receives the size of the data stored in the \noutput buffer, in bytes.\n\nIf the output buffer is too small to receive any data, the call fails, \nGetLastError \n<https://learn.microsoft.com/en-us/windows/desktop/api/errhandlingapi/nf-errhandlingapi-getlasterror> \nreturns *ERROR_INSUFFICIENT_BUFFER*, and /lpBytesReturned/ is zero.\n\nIf the output buffer is too small to hold all of the data but can hold \nsome entries, some drivers will return as much data as fits. In this \ncase, the call fails, GetLastError \n<https://learn.microsoft.com/en-us/windows/desktop/api/errhandlingapi/nf-errhandlingapi-getlasterror> \nreturns *ERROR_MORE_DATA*, and /lpBytesReturned/ indicates the amount of \ndata received. 
Your application should call *DeviceIoControl* again with \nthe same operation, specifying a new starting point.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 13 Nov 2023 08:27:39 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
{
"msg_contents": "On 2023-Nov-13, Andrew Dunstan wrote:\n\n> > size?\n> > \n> > https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-deviceiocontrol\n> \n> Hmm, here's what that page says - I can't see it saying what you're\n> suggesting here - am I missing something?:\n\nI don't think so. I think I just confused myself. Reading the docs it\nappears that other Windows APIs work as I described, but not this one.\n\nAnyway, after looking at it a bit more, I realized that this code uses\nMAX_PATH as basis for its buffer's length limit -- and apparently on\nWindows that's only 260, much shorter than MAXPGPATH (1024) which our\nown code uses to limit the buffers given to readlink(). So maybe fixing\nthis is just a matter of doing s/MAX_PATH/MAXPGPATH/ in dirmod.c.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 15 Nov 2023 12:34:58 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
},
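A rough illustration of the buffer-sizing idea discussed in the message above (this is not the pgreadlink()/dirmod.c code; the function name read_reparse_data, the 1024-byte starting size, and the minimal error handling are all invented for the sketch): instead of deriving the output buffer from MAX_PATH, the junction's reparse data can be read into a buffer that is retried with a larger size whenever DeviceIoControl() reports ERROR_MORE_DATA or ERROR_INSUFFICIENT_BUFFER. CreateFileA(), DeviceIoControl(), and FSCTL_GET_REPARSE_POINT are the documented Win32 interfaces; everything else here is illustrative.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdlib.h>

    /* Read a junction's raw reparse data; caller frees *out on success. */
    static DWORD
    read_reparse_data(const char *path, BYTE **out, DWORD *outlen)
    {
        DWORD   size = 1024;        /* start around MAXPGPATH, not MAX_PATH */
        BYTE   *buf = NULL;
        HANDLE  h = CreateFileA(path, GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                NULL, OPEN_EXISTING,
                                FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS,
                                NULL);

        if (h == INVALID_HANDLE_VALUE)
            return GetLastError();

        for (;;)
        {
            buf = realloc(buf, size);   /* allocation failure not checked in this sketch */
            if (DeviceIoControl(h, FSCTL_GET_REPARSE_POINT, NULL, 0,
                                buf, size, outlen, NULL))
                break;                  /* the whole reparse buffer fit */
            if (GetLastError() != ERROR_MORE_DATA &&
                GetLastError() != ERROR_INSUFFICIENT_BUFFER)
            {
                DWORD   err = GetLastError();

                free(buf);
                CloseHandle(h);
                return err;
            }
            size *= 2;                  /* grow and retry; ~16 kB is the practical cap */
        }

        CloseHandle(h);
        *out = buf;
        return ERROR_SUCCESS;
    }

Whether the simpler s/MAX_PATH/MAXPGPATH/ change is enough in practice depends on how long the junction targets can get; the retry loop just makes the size assumption explicit.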
{
"msg_contents": "\nOn 2023-11-15 We 06:34, Alvaro Herrera wrote:\n> On 2023-Nov-13, Andrew Dunstan wrote:\n>\n>>> size?\n>>>\n>>> https://learn.microsoft.com/en-us/windows/win32/api/ioapiset/nf-ioapiset-deviceiocontrol\n>> Hmm, here's what that page says - I can't see it saying what you're\n>> suggesting here - am I missing something?:\n> I don't think so. I think I just confused myself. Reading the docs it\n> appears that other Windows APIs work as I described, but not this one.\n>\n> Anyway, after looking at it a bit more, I realized that this code uses\n> MAX_PATH as basis for its buffer's length limit -- and apparently on\n> Windows that's only 260, much shorter than MAXPGPATH (1024) which our\n> own code uses to limit the buffers given to readlink(). So maybe fixing\n> this is just a matter of doing s/MAX_PATH/MAXPGPATH/ in dirmod.c.\n\n\n\nI'll test it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 15 Nov 2023 12:15:32 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup check vs Windows file path limits"
}
] |
[
{
"msg_contents": "Hi,\n\nImproved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\ncompletion of alter default privileges like the below statement:\nALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\nALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\nALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n\n2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\npublic FOR \" like in below statement:\nALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\nON TABLES TO PUBLIC;\n\n3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\nREVOKE \" like in below statement:\nalter default privileges revoke grant option for select ON tables FROM dba1;\n\n4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\ncolumn-name SET\" like in:\nALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n\nAttached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 2 Jul 2023 20:42:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER TABLE"
},
{
"msg_contents": "On Sun, 2 Jul 2023 at 20:42, vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> completion of alter default privileges like the below statement:\n> ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n>\n> 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> public FOR \" like in below statement:\n> ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> ON TABLES TO PUBLIC;\n>\n> 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> REVOKE \" like in below statement:\n> alter default privileges revoke grant option for select ON tables FROM dba1;\n>\n> 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> column-name SET\" like in:\n> ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n>\n> Attached patch has the changes for the same.\n\nAdded a commitfest entry for this:\nhttps://commitfest.postgresql.org/45/4587/\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 25 Sep 2023 22:37:38 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "n Fri, Nov 24, 2023 at 6:33 PM vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> completion of alter default privileges like the below statement:\n> ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n>\n> 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> public FOR \" like in below statement:\n> ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> ON TABLES TO PUBLIC;\n>\n> 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> REVOKE \" like in below statement:\n> alter default privileges revoke grant option for select ON tables FROM dba1;\n>\n> 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> column-name SET\" like in:\n> ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n>\n> Attached patch has the changes for the same.\n\n+ COMPLETE_WITH(\"ROLE\", \"USER\");\n+ /* ALTER DEFAULT PRIVILEGES REVOKE */\n+ else if (Matches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\", \"REVOKE\"))\n+ COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\", \"DELETE\", \"TRUNCATE\",\n+ \"REFERENCES\", \"TRIGGER\", \"CREATE\", \"EXECUTE\", \"USAGE\",\n+ \"MAINTAIN\", \"ALL\", \"GRANT OPTION FOR\");\n\nI could not find \"alter default privileges revoke maintain\", should\nthis be removed?\n\nRegards,\nShubham Khanna\n\n\n",
"msg_date": "Fri, 24 Nov 2023 18:37:37 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Fri, 24 Nov 2023 at 18:37, Shubham Khanna\n<[email protected]> wrote:\n>\n> n Fri, Nov 24, 2023 at 6:33 PM vignesh C <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > completion of alter default privileges like the below statement:\n> > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> >\n> > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > public FOR \" like in below statement:\n> > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > ON TABLES TO PUBLIC;\n> >\n> > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > REVOKE \" like in below statement:\n> > alter default privileges revoke grant option for select ON tables FROM dba1;\n> >\n> > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > column-name SET\" like in:\n> > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> >\n> > Attached patch has the changes for the same.\n>\n> + COMPLETE_WITH(\"ROLE\", \"USER\");\n> + /* ALTER DEFAULT PRIVILEGES REVOKE */\n> + else if (Matches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\", \"REVOKE\"))\n> + COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\", \"DELETE\", \"TRUNCATE\",\n> + \"REFERENCES\", \"TRIGGER\", \"CREATE\", \"EXECUTE\", \"USAGE\",\n> + \"MAINTAIN\", \"ALL\", \"GRANT OPTION FOR\");\n>\n> I could not find \"alter default privileges revoke maintain\", should\n> this be removed?\n\nThis was reverted as part of:\n151c22deee66a3390ca9a1c3675e29de54ae73fc.\nRevert MAINTAIN privilege and pg_maintain predefined role.\n\nThis reverts the following commits: 4dbdb82513, c2122aae63,\n5b1a879943, 9e1e9d6560, ff9618e82a, 60684dd834, 4441fc704d,\nand b5d6382496. A role with the MAINTAIN privilege may be able to\nuse search_path tricks to escalate privileges to the table owner.\nUnfortunately, it is too late in the v16 development cycle to apply\nthe proposed fix, i.e., restricting search_path when running\nmaintenance commands.\n\nThe attached v2 version has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 27 Nov 2023 21:58:00 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 9:58 PM vignesh C <[email protected]> wrote:\n>\n> On Fri, 24 Nov 2023 at 18:37, Shubham Khanna\n> <[email protected]> wrote:\n> >\n> > n Fri, Nov 24, 2023 at 6:33 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > completion of alter default privileges like the below statement:\n> > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> > >\n> > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > public FOR \" like in below statement:\n> > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > ON TABLES TO PUBLIC;\n> > >\n> > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > REVOKE \" like in below statement:\n> > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> > >\n> > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > column-name SET\" like in:\n> > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > >\n> > > Attached patch has the changes for the same.\n> >\n> > + COMPLETE_WITH(\"ROLE\", \"USER\");\n> > + /* ALTER DEFAULT PRIVILEGES REVOKE */\n> > + else if (Matches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\", \"REVOKE\"))\n> > + COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\", \"DELETE\", \"TRUNCATE\",\n> > + \"REFERENCES\", \"TRIGGER\", \"CREATE\", \"EXECUTE\", \"USAGE\",\n> > + \"MAINTAIN\", \"ALL\", \"GRANT OPTION FOR\");\n> >\n> > I could not find \"alter default privileges revoke maintain\", should\n> > this be removed?\n>\n> This was reverted as part of:\n> 151c22deee66a3390ca9a1c3675e29de54ae73fc.\n> Revert MAINTAIN privilege and pg_maintain predefined role.\n>\n> This reverts the following commits: 4dbdb82513, c2122aae63,\n> 5b1a879943, 9e1e9d6560, ff9618e82a, 60684dd834, 4441fc704d,\n> and b5d6382496. A role with the MAINTAIN privilege may be able to\n> use search_path tricks to escalate privileges to the table owner.\n> Unfortunately, it is too late in the v16 development cycle to apply\n> the proposed fix, i.e., restricting search_path when running\n> maintenance commands.\n>\nI have executed the given changes and they are working fine.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 10:44:48 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Mon, 27 Nov 2023 at 21:58, vignesh C <[email protected]> wrote:\n>\n> On Fri, 24 Nov 2023 at 18:37, Shubham Khanna\n> <[email protected]> wrote:\n> >\n> > n Fri, Nov 24, 2023 at 6:33 PM vignesh C <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > completion of alter default privileges like the below statement:\n> > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> > >\n> > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > public FOR \" like in below statement:\n> > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > ON TABLES TO PUBLIC;\n> > >\n> > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > REVOKE \" like in below statement:\n> > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> > >\n> > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > column-name SET\" like in:\n> > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > >\n> > > Attached patch has the changes for the same.\n> >\n> > + COMPLETE_WITH(\"ROLE\", \"USER\");\n> > + /* ALTER DEFAULT PRIVILEGES REVOKE */\n> > + else if (Matches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\", \"REVOKE\"))\n> > + COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\", \"DELETE\", \"TRUNCATE\",\n> > + \"REFERENCES\", \"TRIGGER\", \"CREATE\", \"EXECUTE\", \"USAGE\",\n> > + \"MAINTAIN\", \"ALL\", \"GRANT OPTION FOR\");\n> >\n> > I could not find \"alter default privileges revoke maintain\", should\n> > this be removed?\n>\n> This was reverted as part of:\n> 151c22deee66a3390ca9a1c3675e29de54ae73fc.\n> Revert MAINTAIN privilege and pg_maintain predefined role.\n>\n> This reverts the following commits: 4dbdb82513, c2122aae63,\n> 5b1a879943, 9e1e9d6560, ff9618e82a, 60684dd834, 4441fc704d,\n> and b5d6382496. A role with the MAINTAIN privilege may be able to\n> use search_path tricks to escalate privileges to the table owner.\n> Unfortunately, it is too late in the v16 development cycle to apply\n> the proposed fix, i.e., restricting search_path when running\n> maintenance commands.\n>\n> The attached v2 version has the changes for the same.\n\nThe patch was not applying because of a recent commit in tab\ncompletion, PSA new patch set.\n\nRegards,\nVignesh",
"msg_date": "Fri, 26 Jan 2024 08:16:10 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "Hi,\n\nThank you for the patch!\n\nOn Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> completion of alter default privileges like the below statement:\n> ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n\n+1\n\n>\n> 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> public FOR \" like in below statement:\n> ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> ON TABLES TO PUBLIC;\n\nSince there is no difference FOR USER and FOR ROLE, I'm not sure we\nreally want to support both in tab-completion.\n\n>\n> 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> REVOKE \" like in below statement:\n> alter default privileges revoke grant option for select ON tables FROM dba1;\n\n+1. But the v3 patch doesn't cover the following case:\n\n=# alter default privileges for role masahiko revoke [tab]\nALL CREATE DELETE EXECUTE INSERT MAINTAIN\n REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n\nAnd it doesn't cover MAINTAIN neither:\n\n=# alter default privileges revoke [tab]\nALL DELETE GRANT OPTION FOR REFERENCES\n TRIGGER UPDATE\nCREATE EXECUTE INSERT SELECT\n TRUNCATE USAGE\n\nThe patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\nbut we handle such case in GRANT and REVOKE part:\n\n(around L3958)\n /*\n * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n * privileges (can't grant roles)\n */\n if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n\nAlso, I think we can support WITH GRANT OPTION too. For example,\n\n=# alter default privileges for role masahiko grant all on tables to\npublic [tab]\n\nIt's already supported in the GRANT statement.\n\n>\n> 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> column-name SET\" like in:\n> ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n>\n\n+1. The patch looks good to me, so pushed.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:34:36 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
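For readers without tab-complete.c in front of them: each rule there pairs a pattern over the words typed so far (Matches/HeadMatches/TailMatches) with a COMPLETE_WITH list of candidates, which is why the review above reasons in terms of which pattern a given input falls into. A hand-written sketch of the kind of rules under discussion (illustrative only; the exact conditions and privilege lists in the committed patch differ):

    /* ALTER DEFAULT PRIVILEGES [FOR ROLE ...] [IN SCHEMA ...] REVOKE */
    else if (HeadMatches("ALTER", "DEFAULT", "PRIVILEGES") && TailMatches("REVOKE"))
        COMPLETE_WITH("SELECT", "INSERT", "UPDATE", "DELETE", "TRUNCATE",
                      "REFERENCES", "TRIGGER", "CREATE", "EXECUTE", "USAGE",
                      "ALL", "GRANT OPTION FOR");
    /* ... GRANT <privileges> ON <object type> TO <grantee> */
    else if (HeadMatches("ALTER", "DEFAULT", "PRIVILEGES") &&
             TailMatches("TO", MatchAny))
        COMPLETE_WITH("WITH GRANT OPTION");

The first rule offers GRANT OPTION FOR alongside the grantable privileges after REVOKE; the second offers WITH GRANT OPTION once a grantee has been typed, mirroring what the completion machinery already does for a plain GRANT statement.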
{
"msg_contents": "On Thu, 28 Mar 2024 at 13:05, Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> Thank you for the patch!\n>\n> On Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > completion of alter default privileges like the below statement:\n> > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n>\n> +1\n>\n> >\n> > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > public FOR \" like in below statement:\n> > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > ON TABLES TO PUBLIC;\n>\n> Since there is no difference FOR USER and FOR ROLE, I'm not sure we\n> really want to support both in tab-completion.\n\nI have removed this change\n\n> >\n> > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > REVOKE \" like in below statement:\n> > alter default privileges revoke grant option for select ON tables FROM dba1;\n>\n> +1. But the v3 patch doesn't cover the following case:\n>\n> =# alter default privileges for role masahiko revoke [tab]\n> ALL CREATE DELETE EXECUTE INSERT MAINTAIN\n> REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n\nModified in the updated patch\n\n> And it doesn't cover MAINTAIN neither:\n>\n> =# alter default privileges revoke [tab]\n> ALL DELETE GRANT OPTION FOR REFERENCES\n> TRIGGER UPDATE\n> CREATE EXECUTE INSERT SELECT\n> TRUNCATE USAGE\n\nModified in the updated patch\n\n> The patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\n> but we handle such case in GRANT and REVOKE part:\n>\n> (around L3958)\n> /*\n> * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n> * privileges (can't grant roles)\n> */\n> if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n> COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n> \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n> \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n\nThe current patch handles the fix from here now.\n\n> Also, I think we can support WITH GRANT OPTION too. For example,\n>\n> =# alter default privileges for role masahiko grant all on tables to\n> public [tab]\n\nI have handled this in the updated patch\n\n> It's already supported in the GRANT statement.\n>\n> >\n> > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > column-name SET\" like in:\n> > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> >\n>\n> +1. The patch looks good to me, so pushed.\n\nThanks for committing this.\n\nThe updated patch has the changes for the above comments.\n\nRegards,\nVignesh",
"msg_date": "Mon, 1 Apr 2024 19:11:10 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 10:41 PM vignesh C <[email protected]> wrote:\n>\n> On Thu, 28 Mar 2024 at 13:05, Masahiko Sawada <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Thank you for the patch!\n> >\n> > On Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > completion of alter default privileges like the below statement:\n> > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> >\n> > +1\n> >\n> > >\n> > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > public FOR \" like in below statement:\n> > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > ON TABLES TO PUBLIC;\n> >\n> > Since there is no difference FOR USER and FOR ROLE, I'm not sure we\n> > really want to support both in tab-completion.\n>\n> I have removed this change\n>\n> > >\n> > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > REVOKE \" like in below statement:\n> > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> >\n> > +1. But the v3 patch doesn't cover the following case:\n> >\n> > =# alter default privileges for role masahiko revoke [tab]\n> > ALL CREATE DELETE EXECUTE INSERT MAINTAIN\n> > REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n>\n> Modified in the updated patch\n>\n> > And it doesn't cover MAINTAIN neither:\n> >\n> > =# alter default privileges revoke [tab]\n> > ALL DELETE GRANT OPTION FOR REFERENCES\n> > TRIGGER UPDATE\n> > CREATE EXECUTE INSERT SELECT\n> > TRUNCATE USAGE\n>\n> Modified in the updated patch\n>\n> > The patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\n> > but we handle such case in GRANT and REVOKE part:\n> >\n> > (around L3958)\n> > /*\n> > * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n> > * privileges (can't grant roles)\n> > */\n> > if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n> > COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n> > \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n> > \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n>\n> The current patch handles the fix from here now.\n>\n> > Also, I think we can support WITH GRANT OPTION too. For example,\n> >\n> > =# alter default privileges for role masahiko grant all on tables to\n> > public [tab]\n>\n> I have handled this in the updated patch\n>\n> > It's already supported in the GRANT statement.\n> >\n> > >\n> > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > column-name SET\" like in:\n> > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > >\n> >\n> > +1. The patch looks good to me, so pushed.\n>\n> Thanks for committing this.\n>\n> The updated patch has the changes for the above comments.\n>\n\nThank you for updating the patch.\n\nI think it doesn't work well as \"GRANT OPTION FOR\" is complemented\ntwice. 
For example,\n\n=# alter default privileges for user masahiko revoke [tab]\nALL DELETE GRANT OPTION FOR MAINTAIN\n SELECT TRUNCATE USAGE\nCREATE EXECUTE INSERT REFERENCES\n TRIGGER UPDATE\n=# alter default privileges for user masahiko revoke grant option for [tab]\nALL DELETE GRANT OPTION FOR MAINTAIN\n SELECT TRUNCATE USAGE\nCREATE EXECUTE INSERT REFERENCES\n TRIGGER UPDATE\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Apr 2024 16:38:12 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Tue, 2 Apr 2024 at 13:08, Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Apr 1, 2024 at 10:41 PM vignesh C <[email protected]> wrote:\n> >\n> > On Thu, 28 Mar 2024 at 13:05, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Thank you for the patch!\n> > >\n> > > On Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > > completion of alter default privileges like the below statement:\n> > > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> > >\n> > > +1\n> > >\n> > > >\n> > > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > > public FOR \" like in below statement:\n> > > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > > ON TABLES TO PUBLIC;\n> > >\n> > > Since there is no difference FOR USER and FOR ROLE, I'm not sure we\n> > > really want to support both in tab-completion.\n> >\n> > I have removed this change\n> >\n> > > >\n> > > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > > REVOKE \" like in below statement:\n> > > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> > >\n> > > +1. But the v3 patch doesn't cover the following case:\n> > >\n> > > =# alter default privileges for role masahiko revoke [tab]\n> > > ALL CREATE DELETE EXECUTE INSERT MAINTAIN\n> > > REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n> >\n> > Modified in the updated patch\n> >\n> > > And it doesn't cover MAINTAIN neither:\n> > >\n> > > =# alter default privileges revoke [tab]\n> > > ALL DELETE GRANT OPTION FOR REFERENCES\n> > > TRIGGER UPDATE\n> > > CREATE EXECUTE INSERT SELECT\n> > > TRUNCATE USAGE\n> >\n> > Modified in the updated patch\n> >\n> > > The patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\n> > > but we handle such case in GRANT and REVOKE part:\n> > >\n> > > (around L3958)\n> > > /*\n> > > * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n> > > * privileges (can't grant roles)\n> > > */\n> > > if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n> > > COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n> > > \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n> > > \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n> >\n> > The current patch handles the fix from here now.\n> >\n> > > Also, I think we can support WITH GRANT OPTION too. For example,\n> > >\n> > > =# alter default privileges for role masahiko grant all on tables to\n> > > public [tab]\n> >\n> > I have handled this in the updated patch\n> >\n> > > It's already supported in the GRANT statement.\n> > >\n> > > >\n> > > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > > column-name SET\" like in:\n> > > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > > >\n> > >\n> > > +1. The patch looks good to me, so pushed.\n> >\n> > Thanks for committing this.\n> >\n> > The updated patch has the changes for the above comments.\n> >\n>\n> Thank you for updating the patch.\n>\n> I think it doesn't work well as \"GRANT OPTION FOR\" is complemented\n> twice. 
For example,\n>\n> =# alter default privileges for user masahiko revoke [tab]\n> ALL DELETE GRANT OPTION FOR MAINTAIN\n> SELECT TRUNCATE USAGE\n> CREATE EXECUTE INSERT REFERENCES\n> TRIGGER UPDATE\n> =# alter default privileges for user masahiko revoke grant option for [tab]\n> ALL DELETE GRANT OPTION FOR MAINTAIN\n> SELECT TRUNCATE USAGE\n> CREATE EXECUTE INSERT REFERENCES\n> TRIGGER UPDATE\n\nThanks for finding this issue, the attached v5 version patch has the\nfix for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 4 Apr 2024 21:48:11 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 1:18 AM vignesh C <[email protected]> wrote:\n>\n> On Tue, 2 Apr 2024 at 13:08, Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Apr 1, 2024 at 10:41 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Thu, 28 Mar 2024 at 13:05, Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > Thank you for the patch!\n> > > >\n> > > > On Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > > > completion of alter default privileges like the below statement:\n> > > > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> > > >\n> > > > +1\n> > > >\n> > > > >\n> > > > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > > > public FOR \" like in below statement:\n> > > > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > > > ON TABLES TO PUBLIC;\n> > > >\n> > > > Since there is no difference FOR USER and FOR ROLE, I'm not sure we\n> > > > really want to support both in tab-completion.\n> > >\n> > > I have removed this change\n> > >\n> > > > >\n> > > > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > > > REVOKE \" like in below statement:\n> > > > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> > > >\n> > > > +1. But the v3 patch doesn't cover the following case:\n> > > >\n> > > > =# alter default privileges for role masahiko revoke [tab]\n> > > > ALL CREATE DELETE EXECUTE INSERT MAINTAIN\n> > > > REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n> > >\n> > > Modified in the updated patch\n> > >\n> > > > And it doesn't cover MAINTAIN neither:\n> > > >\n> > > > =# alter default privileges revoke [tab]\n> > > > ALL DELETE GRANT OPTION FOR REFERENCES\n> > > > TRIGGER UPDATE\n> > > > CREATE EXECUTE INSERT SELECT\n> > > > TRUNCATE USAGE\n> > >\n> > > Modified in the updated patch\n> > >\n> > > > The patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\n> > > > but we handle such case in GRANT and REVOKE part:\n> > > >\n> > > > (around L3958)\n> > > > /*\n> > > > * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n> > > > * privileges (can't grant roles)\n> > > > */\n> > > > if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n> > > > COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n> > > > \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n> > > > \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n> > >\n> > > The current patch handles the fix from here now.\n> > >\n> > > > Also, I think we can support WITH GRANT OPTION too. For example,\n> > > >\n> > > > =# alter default privileges for role masahiko grant all on tables to\n> > > > public [tab]\n> > >\n> > > I have handled this in the updated patch\n> > >\n> > > > It's already supported in the GRANT statement.\n> > > >\n> > > > >\n> > > > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > > > column-name SET\" like in:\n> > > > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > > > >\n> > > >\n> > > > +1. 
The patch looks good to me, so pushed.\n> > >\n> > > Thanks for committing this.\n> > >\n> > > The updated patch has the changes for the above comments.\n> > >\n> >\n> > Thank you for updating the patch.\n> >\n> > I think it doesn't work well as \"GRANT OPTION FOR\" is complemented\n> > twice. For example,\n> >\n> > =# alter default privileges for user masahiko revoke [tab]\n> > ALL DELETE GRANT OPTION FOR MAINTAIN\n> > SELECT TRUNCATE USAGE\n> > CREATE EXECUTE INSERT REFERENCES\n> > TRIGGER UPDATE\n> > =# alter default privileges for user masahiko revoke grant option for [tab]\n> > ALL DELETE GRANT OPTION FOR MAINTAIN\n> > SELECT TRUNCATE USAGE\n> > CREATE EXECUTE INSERT REFERENCES\n> > TRIGGER UPDATE\n>\n> Thanks for finding this issue, the attached v5 version patch has the\n> fix for the same.\n\nThank you for updating the patch! I've pushed with minor adjustments.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Apr 2024 13:58:55 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 10:29, Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Apr 5, 2024 at 1:18 AM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 2 Apr 2024 at 13:08, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Mon, Apr 1, 2024 at 10:41 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On Thu, 28 Mar 2024 at 13:05, Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > Thank you for the patch!\n> > > > >\n> > > > > On Mon, Jul 3, 2023 at 12:12 AM vignesh C <[email protected]> wrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > Improved tab completion for \"ALTER DEFAULT PRIVILEGE\" and \"ALTER TABLE\":\n> > > > > > 1) GRANT, REVOKE and FOR USER keyword was not displayed in tab\n> > > > > > completion of alter default privileges like the below statement:\n> > > > > > ALTER DEFAULT PRIVILEGES GRANT INSERT ON tables TO PUBLIC;\n> > > > > > ALTER DEFAULT PRIVILEGES REVOKE INSERT ON tables FROM PUBLIC;\n> > > > > > ALTER DEFAULT PRIVILEGES FOR USER vignesh revoke INSERT ON tables FROM dba1;\n> > > > >\n> > > > > +1\n> > > > >\n> > > > > >\n> > > > > > 2) USER was not displayed for \"ALTER DEFAULT PRIVILEGES IN SCHEMA\n> > > > > > public FOR \" like in below statement:\n> > > > > > ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR USER dba1 GRANT INSERT\n> > > > > > ON TABLES TO PUBLIC;\n> > > > >\n> > > > > Since there is no difference FOR USER and FOR ROLE, I'm not sure we\n> > > > > really want to support both in tab-completion.\n> > > >\n> > > > I have removed this change\n> > > >\n> > > > > >\n> > > > > > 3) \"FOR GRANT OPTION\" was not display for \"ALTER DEFAULT PRIVILEGES\n> > > > > > REVOKE \" like in below statement:\n> > > > > > alter default privileges revoke grant option for select ON tables FROM dba1;\n> > > > >\n> > > > > +1. But the v3 patch doesn't cover the following case:\n> > > > >\n> > > > > =# alter default privileges for role masahiko revoke [tab]\n> > > > > ALL CREATE DELETE EXECUTE INSERT MAINTAIN\n> > > > > REFERENCES SELECT TRIGGER TRUNCATE UPDATE USAGE\n> > > >\n> > > > Modified in the updated patch\n> > > >\n> > > > > And it doesn't cover MAINTAIN neither:\n> > > > >\n> > > > > =# alter default privileges revoke [tab]\n> > > > > ALL DELETE GRANT OPTION FOR REFERENCES\n> > > > > TRIGGER UPDATE\n> > > > > CREATE EXECUTE INSERT SELECT\n> > > > > TRUNCATE USAGE\n> > > >\n> > > > Modified in the updated patch\n> > > >\n> > > > > The patch adds the completions for ALTER DEFAULT PRIVILEGES REVOKE,\n> > > > > but we handle such case in GRANT and REVOKE part:\n> > > > >\n> > > > > (around L3958)\n> > > > > /*\n> > > > > * With ALTER DEFAULT PRIVILEGES, restrict completion to grantable\n> > > > > * privileges (can't grant roles)\n> > > > > */\n> > > > > if (HeadMatches(\"ALTER\", \"DEFAULT\", \"PRIVILEGES\"))\n> > > > > COMPLETE_WITH(\"SELECT\", \"INSERT\", \"UPDATE\",\n> > > > > \"DELETE\", \"TRUNCATE\", \"REFERENCES\", \"TRIGGER\",\n> > > > > \"CREATE\", \"EXECUTE\", \"USAGE\", \"MAINTAIN\", \"ALL\");\n> > > >\n> > > > The current patch handles the fix from here now.\n> > > >\n> > > > > Also, I think we can support WITH GRANT OPTION too. 
For example,\n> > > > >\n> > > > > =# alter default privileges for role masahiko grant all on tables to\n> > > > > public [tab]\n> > > >\n> > > > I have handled this in the updated patch\n> > > >\n> > > > > It's already supported in the GRANT statement.\n> > > > >\n> > > > > >\n> > > > > > 4) \"DATA TYPE\" was missing in \"ALTER TABLE table-name ALTER COLUMN\n> > > > > > column-name SET\" like in:\n> > > > > > ALTER TABLE t1 ALTER COLUMN c1 SET DATA TYPE text;\n> > > > > >\n> > > > >\n> > > > > +1. The patch looks good to me, so pushed.\n> > > >\n> > > > Thanks for committing this.\n> > > >\n> > > > The updated patch has the changes for the above comments.\n> > > >\n> > >\n> > > Thank you for updating the patch.\n> > >\n> > > I think it doesn't work well as \"GRANT OPTION FOR\" is complemented\n> > > twice. For example,\n> > >\n> > > =# alter default privileges for user masahiko revoke [tab]\n> > > ALL DELETE GRANT OPTION FOR MAINTAIN\n> > > SELECT TRUNCATE USAGE\n> > > CREATE EXECUTE INSERT REFERENCES\n> > > TRIGGER UPDATE\n> > > =# alter default privileges for user masahiko revoke grant option for [tab]\n> > > ALL DELETE GRANT OPTION FOR MAINTAIN\n> > > SELECT TRUNCATE USAGE\n> > > CREATE EXECUTE INSERT REFERENCES\n> > > TRIGGER UPDATE\n> >\n> > Thanks for finding this issue, the attached v5 version patch has the\n> > fix for the same.\n>\n> Thank you for updating the patch! I've pushed with minor adjustments.\n\nThanks for pushing this patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 8 Apr 2024 12:10:02 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER DEFAULT PRIVILEGE and ALTER\n TABLE"
}
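The double completion of "GRANT OPTION FOR" reported in this thread comes down to how the rules in src/bin/psql/tab-complete.c are ordered and matched. As a rough illustration of the shape of such rules (a hedged sketch only, not the committed v5 patch; the real match conditions may differ), offering "GRANT OPTION FOR" only while it has not yet been typed looks like:

    /* Sketch only: complete ALTER DEFAULT PRIVILEGES ... REVOKE */
    else if (HeadMatches("ALTER", "DEFAULT", "PRIVILEGES") &&
             TailMatches("REVOKE"))
        COMPLETE_WITH("GRANT OPTION FOR",
                      "SELECT", "INSERT", "UPDATE", "DELETE", "TRUNCATE",
                      "REFERENCES", "TRIGGER", "CREATE", "EXECUTE",
                      "USAGE", "MAINTAIN", "ALL");
    /* ... and once GRANT OPTION FOR is present, list only the privileges */
    else if (HeadMatches("ALTER", "DEFAULT", "PRIVILEGES") &&
             TailMatches("REVOKE", "GRANT", "OPTION", "FOR"))
        COMPLETE_WITH("SELECT", "INSERT", "UPDATE", "DELETE", "TRUNCATE",
                      "REFERENCES", "TRIGGER", "CREATE", "EXECUTE",
                      "USAGE", "MAINTAIN", "ALL");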
] |
[
{
"msg_contents": "Hi,\n\nI was rebasing my meson tree, which has more OSs added to CI, and noticed that\n010_database.pl started failing on openbsd recently-ish, without the CI\nenvironment for that having changed.\n\nThe tests passed on openbsd when my tree was based on 47b7051bc82\n(2023-06-01), but failed after rebasing onto a798660ebe3 (2023-06-29).\n\nExample of a failing run:\nhttps://cirrus-ci.com/task/6391476419035136?logs=test_world#L273\nhttps://api.cirrus-ci.com/v1/artifact/task/6391476419035136/testrun/build/testrun/icu/010_database/log/regress_log_010_database\nhttps://api.cirrus-ci.com/v1/artifact/task/6391476419035136/testrun/build/testrun/icu/010_database/log/010_database_node1.log\n\n[07:25:06.421](0.161s) not ok 6 - ICU-specific locale must be specified with ICU_LOCALE: exit code not 0\n[07:25:06.423](0.002s) # Failed test 'ICU-specific locale must be specified with ICU_LOCALE: exit code not 0'\n# at /home/postgres/postgres/src/test/icu/t/010_database.pl line 78.\n[07:25:06.423](0.000s) # got: '0'\n# expected: anything else\n[07:25:06.424](0.001s) not ok 7 - ICU-specific locale must be specified with ICU_LOCALE: error message\n[07:25:06.424](0.001s) # Failed test 'ICU-specific locale must be specified with ICU_LOCALE: error message'\n# at /home/postgres/postgres/src/test/icu/t/010_database.pl line 80.\n[07:25:06.424](0.000s) # 'psql:<stdin>:2: NOTICE: using standard form \"und-u-ks-level1\" for ICU locale \"@colStrength=primary\"'\n# doesn't match '(?^:ERROR: invalid LC_COLLATE locale name)'\n[07:25:06.425](0.000s) 1..7\n\nThe server log says:\n\n2023-07-02 07:25:05.946 UTC [15605][client backend] [010_database.pl][3/14:0] LOG: statement: CREATE DATABASE dbicu1 LOCALE_PROVIDER icu LOCALE 'C' TEMPLATE template0 ENCODING UTF8\n2023-07-02 07:25:05.947 UTC [15605][client backend] [010_database.pl][3/14:0] WARNING: could not convert locale name \"C\" to language tag: U_ILLEGAL_ARGUMENT_ERROR\n2023-07-02 07:25:05.948 UTC [15605][client backend] [010_database.pl][3/14:0] WARNING: ICU locale \"C\" has unknown language \"c\"\n2023-07-02 07:25:05.948 UTC [15605][client backend] [010_database.pl][3/14:0] HINT: To disable ICU locale validation, set parameter icu_validation_level to DISABLED.\n\n\nExample of a succeeding run:\nhttps://cirrus-ci.com/task/5893925412536320?logs=test_world#L261\n\nI have not yet debugged this further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Jul 2023 09:56:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "010_database.pl fails on openbsd w/ LC_ALL=LANG=C"
},
{
"msg_contents": "On Sun, 2023-07-02 at 09:56 -0700, Andres Freund wrote:\n> # expected: anything else\n> [07:25:06.424](0.001s) not ok 7 - ICU-specific locale must be\n> specified with ICU_LOCALE: error message\n> [07:25:06.424](0.001s) # Failed test 'ICU-specific locale must be\n> specified with ICU_LOCALE: error message'\n> # at /home/postgres/postgres/src/test/icu/t/010_database.pl line\n> 80.\n> [07:25:06.424](0.000s) # 'psql:<stdin>:2: NOTICE: \n> using standard form \"und-u-ks-level1\" for ICU locale\n> \"@colStrength=primary\"'\n> # doesn't match '(?^:ERROR: invalid LC_COLLATE locale name)'\n> [07:25:06.425](0.000s) 1..7\n\n[I apologize for the delay.]\n\nThe test is assuming that locale \"@colStrength=primary\" is valid for\nICU but invalid for libc. It seems that on that platform, setlocale()\nis accepting it?\n\nIf some libc implementations are too permissive, I might need to just\ndisable this test. But if we can find a locale that is consistently\nacceptable in ICU but invalid in libc, then I can keep it... perhaps\n\"und@colStrength=primary\"?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 08:52:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 010_database.pl fails on openbsd w/ LC_ALL=LANG=C"
},
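That assumption is easy to check in isolation. A minimal standalone program (not part of the TAP test) that asks libc the same question the server effectively asks when validating LC_COLLATE:

    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        /*
         * 010_database.pl assumes libc rejects this ICU-only locale string.
         * On the platforms where the test passes, setlocale() returns NULL
         * here; the report above suggests OpenBSD's more permissive
         * setlocale() accepts it instead.
         */
        char       *res = setlocale(LC_COLLATE, "@colStrength=primary");

        printf("setlocale: %s\n", res ? res : "(rejected)");
        return 0;
    }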
{
"msg_contents": "On Sat, Jul 8, 2023 at 3:52 AM Jeff Davis <[email protected]> wrote:\n> The test is assuming that locale \"@colStrength=primary\" is valid for\n> ICU but invalid for libc. It seems that on that platform, setlocale()\n> is accepting it?\n>\n> If some libc implementations are too permissive, I might need to just\n> disable this test. But if we can find a locale that is consistently\n> acceptable in ICU but invalid in libc, then I can keep it... perhaps\n> \"und@colStrength=primary\"?\n\nDoesn't look too hopeful: https://man.openbsd.org/setlocale.3\n\n\n",
"msg_date": "Sat, 8 Jul 2023 07:04:59 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 010_database.pl fails on openbsd w/ LC_ALL=LANG=C"
},
{
"msg_contents": "On Sat, 2023-07-08 at 07:04 +1200, Thomas Munro wrote:\n> Doesn't look too hopeful: https://man.openbsd.org/setlocale.3\n\nHmm. I could try using a bogus encoding, but that may be too clever.\nI'll just remove the test.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 12:17:20 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 010_database.pl fails on openbsd w/ LC_ALL=LANG=C"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen looking at Assert() failures and at PANICs, the number of \"pointless\"\nstack entries at the top seems to have grown over the years. Here's an\nexample of a stacktrace (that I obviously intentionally triggered):\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44\n44\t./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44\n#1 0x00007f31920a815f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78\n#2 0x00007f319205a472 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#3 0x00007f31920444b2 in __GI_abort () at ./stdlib/abort.c:79\n#4 0x000055b3340c5140 in ExceptionalCondition (conditionName=0x55b3338c7ea0 \"\\\"I kid you not\\\" == NULL\",\n fileName=0x55b3338c6958 \"../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c\", lineNumber=4126)\n at ../../../../home/andres/src/postgresql/src/backend/utils/error/assert.c:66\n#5 0x000055b333ef46c4 in PostgresMain (dbname=0x55b336271608 \"postgres\", username=0x55b3361fa888 \"andres\")\n at ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n#6 0x000055b333e1fadd in BackendRun (port=0x55b336267ec0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n#7 0x000055b333e1f369 in BackendStartup (port=0x55b336267ec0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n#8 0x000055b333e1b406 in ServerLoop () at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n#9 0x000055b333e1ad17 in PostmasterMain (argc=73, argv=0x55b3361f83f0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n#10 0x000055b333d052e2 in main (argc=73, argv=0x55b3361f83f0) at ../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n\nThat's due to glibc having a very complicated abort(). Which might be nice as\na backstop, but for the default Assert it's imo just noise.\n\nI'd like to propose that we do a configure test for __builtin_trap() and use\nit, if available, before the abort() in ExceptionalCondition(). 
Perhaps also\nfor PANIC, but it's not as clear to me whether we should.\n\nHere's a backtrace when using __builtin_trap():\n#0 ExceptionalCondition (conditionName=0x55e7e7c90ea0 \"\\\"I kid you not\\\" == NULL\",\n fileName=0x55e7e7c8f958 \"../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c\", lineNumber=4126)\n at ../../../../home/andres/src/postgresql/src/backend/utils/error/assert.c:66\n66\t\t__builtin_trap();\n(gdb) bt\n#0 ExceptionalCondition (conditionName=0x55e7e7c90ea0 \"\\\"I kid you not\\\" == NULL\",\n fileName=0x55e7e7c8f958 \"../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c\", lineNumber=4126)\n at ../../../../home/andres/src/postgresql/src/backend/utils/error/assert.c:66\n#1 0x000055e7e82bd6c4 in PostgresMain (dbname=0x55e7e9ea8608 \"postgres\", username=0x55e7e9e31888 \"andres\")\n at ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n#2 0x000055e7e81e8add in BackendRun (port=0x55e7e9e9eec0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n#3 0x000055e7e81e8369 in BackendStartup (port=0x55e7e9e9eec0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n#4 0x000055e7e81e4406 in ServerLoop () at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n#5 0x000055e7e81e3d17 in PostmasterMain (argc=73, argv=0x55e7e9e2f3f0) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n#6 0x000055e7e80ce2e2 in main (argc=73, argv=0x55e7e9e2f3f0) at ../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n\n\nMaybe I crash things too often, but I like to not have to deal with 4\npointless frames at the top...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Jul 2023 10:41:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replacing abort() with __builtin_trap()?"
},
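Concretely, the proposal amounts to something like the following in ExceptionalCondition() (a hedged sketch, not the actual function body; HAVE__BUILTIN_TRAP is a made-up name for the proposed configure test, not an existing symbol):

    void
    ExceptionalCondition(const char *conditionName,
                         const char *fileName, int lineNumber)
    {
        /* ... existing reporting of the failed assertion elided ... */

    #ifdef HAVE__BUILTIN_TRAP        /* hypothetical configure symbol */
        /*
         * Trap right here so a debugger stops in this frame, instead of
         * several frames down inside glibc's abort() machinery.
         */
        __builtin_trap();
    #endif

        /* Backstop for compilers without the builtin. */
        abort();
    }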
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I'd like to propose that we do a configure test for __builtin_trap() and use\n> it, if available, before the abort() in ExceptionalCondition(). Perhaps also\n> for PANIC, but it's not as clear to me whether we should.\n\nDoes that still result in the same process exit signal being reported to\nthe postmaster? The GCC manual makes it sound like the reported signal\ncould be platform-dependent, which'd be kind of confusing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Jul 2023 13:55:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replacing abort() with __builtin_trap()?"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-02 13:55:53 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I'd like to propose that we do a configure test for __builtin_trap() and use\n> > it, if available, before the abort() in ExceptionalCondition(). Perhaps also\n> > for PANIC, but it's not as clear to me whether we should.\n>\n> Does that still result in the same process exit signal being reported to\n> the postmaster?\n\nIt does not on linux / x86-64.\n\n2023-07-02 10:52:55.103 PDT [1398197][postmaster][:0][] LOG: server process (PID 1398207) was terminated by signal 4: Illegal instruction\nvs today's\n2023-07-02 11:08:22.674 PDT [1401801][postmaster][:0][] LOG: server process (PID 1401809) was terminated by signal 6: Aborted\n\nIt wouldn't be bad for postmaster to be able to distinguish between PANIC and\nAssert(), but I agree that the non-determinism is a bit annoying.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Jul 2023 11:09:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replacing abort() with __builtin_trap()?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-07-02 13:55:53 -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> I'd like to propose that we do a configure test for __builtin_trap() and use\n>>> it, if available, before the abort() in ExceptionalCondition(). Perhaps also\n>>> for PANIC, but it's not as clear to me whether we should.\n\n>> Does that still result in the same process exit signal being reported to\n>> the postmaster?\n\n> It does not on linux / x86-64.\n\n> 2023-07-02 10:52:55.103 PDT [1398197][postmaster][:0][] LOG: server process (PID 1398207) was terminated by signal 4: Illegal instruction\n> vs today's\n> 2023-07-02 11:08:22.674 PDT [1401801][postmaster][:0][] LOG: server process (PID 1401809) was terminated by signal 6: Aborted\n\nHm, I do *not* like \"Illegal instruction\" in place of SIGABRT;\nthat looks too much like we vectored off into never-never land.\nI'd rather live with the admittedly-ugly stack traces.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Jul 2023 17:50:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replacing abort() with __builtin_trap()?"
}
] |
[
{
"msg_contents": "Hi,\n\nI like that we now have a builtin backtrace ability. Unfortunately I think the\nbacktraces are often not very useful, because only externally visible\nfunctions are symbolized.\n\nE.g.:\n\n2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]] LOG: will crash\n2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]] BACKTRACE:\n\tpostgres: dev assert: andres postgres [local] initializing(errbacktrace+0xbb) [0x562a44c97ca9]\n\tpostgres: dev assert: andres postgres [local] initializing(PostgresMain+0xb6) [0x562a44ac56d4]\n\tpostgres: dev assert: andres postgres [local] initializing(+0x806add) [0x562a449f0add]\n\tpostgres: dev assert: andres postgres [local] initializing(+0x806369) [0x562a449f0369]\n\tpostgres: dev assert: andres postgres [local] initializing(+0x802406) [0x562a449ec406]\n\tpostgres: dev assert: andres postgres [local] initializing(PostmasterMain+0x1676) [0x562a449ebd17]\n\tpostgres: dev assert: andres postgres [local] initializing(+0x6ec2e2) [0x562a448d62e2]\n\t/lib/x86_64-linux-gnu/libc.so.6(+0x276ca) [0x7f1e820456ca]\n\t/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7f1e82045785]\n\tpostgres: dev assert: andres postgres [local] initializing(_start+0x21) [0x562a445ede21]\n\nwhich is far from as useful as it could be.\n\n\nA lot of platforms have \"libbacktrace\" available, e.g. as part of gcc. I think\nwe should consider using it, when available, to produce more useful\nbacktraces.\n\nI hacked it up for ereport() to debug something, and the backtraces are\nconsiderably better:\n\n2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] LOG: will crash\n2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] BACKTRACE:\n\t[0x55fcd03e6143] PostgresMain: ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n\t[0x55fcd031154c] BackendRun: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n\t[0x55fcd0310dd8] BackendStartup: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n\t[0x55fcd030ce75] ServerLoop: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n\t[0x55fcd030c786] PostmasterMain: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n\t[0x55fcd01f6d51] main: ../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n\t[0x7fdd914456c9] __libc_start_call_main: ../sysdeps/nptl/libc_start_call_main.h:58\n\t[0x7fdd91445784] __libc_start_main_impl: ../csu/libc-start.c:360\n\t[0x55fccff0e890] [unknown]: [unknown]:0\n\nThe way each frame looks is my fault, not libbacktrace's...\n\nNice things about libbacktrace are that the generation of stack traces is\ndocumented to be async signal safe on most platforms (with a #define to figure\nthat out, and a more minimal safe version always available) and that it\nsupports a wide range of platforms:\n\nhttps://github.com/ianlancetaylor/libbacktrace\n As of October 2020, libbacktrace supports ELF, PE/COFF, Mach-O, and XCOFF\n executables with DWARF debugging information. In other words, it supports\n GNU/Linux, *BSD, macOS, Windows, and AIX. The library is written to make it\n straightforward to add support for other object file and debugging formats.\n\n\nThe state I currently have is very hacky, but if there's interest in\nupstreaming something like this, I could clean it up.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Jul 2023 11:31:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optionally using a better backtrace library?"
},
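For context, the libbacktrace API involved is roughly as follows. This is a minimal standalone sketch that prints frames in the format shown above; it is not the hacked-up ereport() integration, and it ignores the async-signal-safety caveats (fprintf itself is not signal safe):

    #include <backtrace.h>
    #include <stdint.h>
    #include <stdio.h>

    static int
    frame_cb(void *data, uintptr_t pc,
             const char *filename, int lineno, const char *function)
    {
        fprintf(stderr, "\t[0x%lx] %s: %s:%d\n",
                (unsigned long) pc,
                function ? function : "[unknown]",
                filename ? filename : "[unknown]",
                lineno);
        return 0;               /* 0 = keep walking the stack */
    }

    static void
    error_cb(void *data, const char *msg, int errnum)
    {
        fprintf(stderr, "libbacktrace: %s (%d)\n", msg, errnum);
    }

    static void
    print_backtrace(void)
    {
        /* State creation is expensive, so create it once and reuse it. */
        static struct backtrace_state *state = NULL;

        if (state == NULL)
            state = backtrace_create_state(NULL, /* threaded */ 1,
                                           error_cb, NULL);
        if (state != NULL)
            backtrace_full(state, /* skip this frame */ 1,
                           frame_cb, error_cb, NULL);
    }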
{
"msg_contents": "ne 2. 7. 2023 v 20:32 odesílatel Andres Freund <[email protected]> napsal:\n\n> Hi,\n>\n> I like that we now have a builtin backtrace ability. Unfortunately I think\n> the\n> backtraces are often not very useful, because only externally visible\n> functions are symbolized.\n>\n> E.g.:\n>\n> 2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]] LOG:\n> will crash\n> 2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]]\n> BACKTRACE:\n> postgres: dev assert: andres postgres [local]\n> initializing(errbacktrace+0xbb) [0x562a44c97ca9]\n> postgres: dev assert: andres postgres [local]\n> initializing(PostgresMain+0xb6) [0x562a44ac56d4]\n> postgres: dev assert: andres postgres [local]\n> initializing(+0x806add) [0x562a449f0add]\n> postgres: dev assert: andres postgres [local]\n> initializing(+0x806369) [0x562a449f0369]\n> postgres: dev assert: andres postgres [local]\n> initializing(+0x802406) [0x562a449ec406]\n> postgres: dev assert: andres postgres [local]\n> initializing(PostmasterMain+0x1676) [0x562a449ebd17]\n> postgres: dev assert: andres postgres [local]\n> initializing(+0x6ec2e2) [0x562a448d62e2]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x276ca) [0x7f1e820456ca]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)\n> [0x7f1e82045785]\n> postgres: dev assert: andres postgres [local]\n> initializing(_start+0x21) [0x562a445ede21]\n>\n> which is far from as useful as it could be.\n>\n>\n> A lot of platforms have \"libbacktrace\" available, e.g. as part of gcc. I\n> think\n> we should consider using it, when available, to produce more useful\n> backtraces.\n>\n> I hacked it up for ereport() to debug something, and the backtraces are\n> considerably better:\n>\n> 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] LOG:\n> will crash\n> 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]]\n> BACKTRACE:\n> [0x55fcd03e6143] PostgresMain:\n> ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n> [0x55fcd031154c] BackendRun:\n> ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n> [0x55fcd0310dd8] BackendStartup:\n> ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n> [0x55fcd030ce75] ServerLoop:\n> ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n> [0x55fcd030c786] PostmasterMain:\n> ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n> [0x55fcd01f6d51] main:\n> ../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n> [0x7fdd914456c9] __libc_start_call_main:\n> ../sysdeps/nptl/libc_start_call_main.h:58\n> [0x7fdd91445784] __libc_start_main_impl: ../csu/libc-start.c:360\n> [0x55fccff0e890] [unknown]: [unknown]:0\n>\n> The way each frame looks is my fault, not libbacktrace's...\n>\n> Nice things about libbacktrace are that the generation of stack traces is\n> documented to be async signal safe on most platforms (with a #define to\n> figure\n> that out, and a more minimal safe version always available) and that it\n> supports a wide range of platforms:\n>\n> https://github.com/ianlancetaylor/libbacktrace\n> As of October 2020, libbacktrace supports ELF, PE/COFF, Mach-O, and XCOFF\n> executables with DWARF debugging information. In other words, it supports\n> GNU/Linux, *BSD, macOS, Windows, and AIX. 
The library is written to make\n> it\n> straightforward to add support for other object file and debugging\n> formats.\n>\n>\n> The state I currently have is very hacky, but if there's interest in\n> upstreaming something like this, I could clean it up.\n>\n\nLooks nice\n\n+1\n\nPavel\n\n\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\nne 2. 7. 2023 v 20:32 odesílatel Andres Freund <[email protected]> napsal:Hi,\n\nI like that we now have a builtin backtrace ability. Unfortunately I think the\nbacktraces are often not very useful, because only externally visible\nfunctions are symbolized.\n\nE.g.:\n\n2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]] LOG: will crash\n2023-07-02 10:54:01.756 PDT [1398494][client backend][:0][[unknown]] BACKTRACE:\n postgres: dev assert: andres postgres [local] initializing(errbacktrace+0xbb) [0x562a44c97ca9]\n postgres: dev assert: andres postgres [local] initializing(PostgresMain+0xb6) [0x562a44ac56d4]\n postgres: dev assert: andres postgres [local] initializing(+0x806add) [0x562a449f0add]\n postgres: dev assert: andres postgres [local] initializing(+0x806369) [0x562a449f0369]\n postgres: dev assert: andres postgres [local] initializing(+0x802406) [0x562a449ec406]\n postgres: dev assert: andres postgres [local] initializing(PostmasterMain+0x1676) [0x562a449ebd17]\n postgres: dev assert: andres postgres [local] initializing(+0x6ec2e2) [0x562a448d62e2]\n /lib/x86_64-linux-gnu/libc.so.6(+0x276ca) [0x7f1e820456ca]\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85) [0x7f1e82045785]\n postgres: dev assert: andres postgres [local] initializing(_start+0x21) [0x562a445ede21]\n\nwhich is far from as useful as it could be.\n\n\nA lot of platforms have \"libbacktrace\" available, e.g. as part of gcc. I think\nwe should consider using it, when available, to produce more useful\nbacktraces.\n\nI hacked it up for ereport() to debug something, and the backtraces are\nconsiderably better:\n\n2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] LOG: will crash\n2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] BACKTRACE:\n [0x55fcd03e6143] PostgresMain: ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n [0x55fcd031154c] BackendRun: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n [0x55fcd0310dd8] BackendStartup: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n [0x55fcd030ce75] ServerLoop: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n [0x55fcd030c786] PostmasterMain: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n [0x55fcd01f6d51] main: ../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n [0x7fdd914456c9] __libc_start_call_main: ../sysdeps/nptl/libc_start_call_main.h:58\n [0x7fdd91445784] __libc_start_main_impl: ../csu/libc-start.c:360\n [0x55fccff0e890] [unknown]: [unknown]:0\n\nThe way each frame looks is my fault, not libbacktrace's...\n\nNice things about libbacktrace are that the generation of stack traces is\ndocumented to be async signal safe on most platforms (with a #define to figure\nthat out, and a more minimal safe version always available) and that it\nsupports a wide range of platforms:\n\nhttps://github.com/ianlancetaylor/libbacktrace\n As of October 2020, libbacktrace supports ELF, PE/COFF, Mach-O, and XCOFF\n executables with DWARF debugging information. In other words, it supports\n GNU/Linux, *BSD, macOS, Windows, and AIX. 
The library is written to make it\n straightforward to add support for other object file and debugging formats.\n\n\nThe state I currently have is very hacky, but if there's interest in\nupstreaming something like this, I could clean it up.Looks nice+1Pavel\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 2 Jul 2023 20:34:59 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On 7/2/23 14:31, Andres Freund wrote:\n> Nice things about libbacktrace are that the generation of stack traces is\n> documented to be async signal safe on most platforms (with a #define to figure\n> that out, and a more minimal safe version always available) and that it\n> supports a wide range of platforms:\n> \n> https://github.com/ianlancetaylor/libbacktrace\n> As of October 2020, libbacktrace supports ELF, PE/COFF, Mach-O, and XCOFF\n> executables with DWARF debugging information. In other words, it supports\n> GNU/Linux, *BSD, macOS, Windows, and AIX. The library is written to make it\n> straightforward to add support for other object file and debugging formats.\n> \n> \n> The state I currently have is very hacky, but if there's interest in\n> upstreaming something like this, I could clean it up.\n\n+1\nSeems useful!\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sun, 2 Jul 2023 16:34:40 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "At Sun, 2 Jul 2023 11:31:56 -0700, Andres Freund <[email protected]> wrote in \n> The state I currently have is very hacky, but if there's interest in\n> upstreaming something like this, I could clean it up.\n\nI can't help voting +1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 03 Jul 2023 13:46:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "Hello,\n\nOn 2023-Jul-02, Andres Freund wrote:\n\n> I like that we now have a builtin backtrace ability. Unfortunately I think the\n> backtraces are often not very useful, because only externally visible\n> functions are symbolized.\n\nAgreed, these backtraces are pretty close to useless. Not completely,\nbut I haven't found a practical way to use them for actual debugging\nof production problems.\n\n> I hacked it up for ereport() to debug something, and the backtraces are\n> considerably better:\n> \n> 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] LOG: will crash\n> 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] BACKTRACE:\n> \t[0x55fcd03e6143] PostgresMain: ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n> \t[0x55fcd031154c] BackendRun: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n> \t[0x55fcd0310dd8] BackendStartup: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n> \t[0x55fcd030ce75] ServerLoop: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n\nYeah, this looks much more usable.\n\n> Nice things about libbacktrace are that the generation of stack traces is\n> documented to be async signal safe on most platforms (with a #define to figure\n> that out, and a more minimal safe version always available) and that it\n> supports a wide range of platforms:\n\nSadly, it looks like the library is seldom distributed. For example,\nDebian seems to only have a package called android-libbacktrace which I\nimagine is not what we want. On my system I see a static library only\n-- is that enough? That file is part of package libgcc-10-dev, which\ntells me that we can't depend on that for packaging purposes.\n\nI think it's pretty much the same in the RPM side of the world.\n\nSo the only way to get this into customer systems would be to include\nthe library in our packages.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:58:25 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On 7/3/23 11:58, Alvaro Herrera wrote:\n> \n>> Nice things about libbacktrace are that the generation of stack traces is\n>> documented to be async signal safe on most platforms (with a #define to figure\n>> that out, and a more minimal safe version always available) and that it\n>> supports a wide range of platforms:\n> \n> Sadly, it looks like the library is seldom distributed. For example,\n> Debian seems to only have a package called android-libbacktrace which I\n> imagine is not what we want. On my system I see a static library only\n> -- is that enough? That file is part of package libgcc-10-dev, which\n> tells me that we can't depend on that for packaging purposes.\n\nIt would be a pretty big win even if the improved backtrace is only \navailable in dev environments -- this is what pgBackRest currently does.\n\nWe are also considering adding this library to production builds but \nhave not pulled the trigger on that yet since we are a bit worried about \npossible performance impact and have not had time to benchmark.\n\nRegards,\n-David\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:24:36 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On 02.07.23 20:31, Andres Freund wrote:\n> A lot of platforms have \"libbacktrace\" available, e.g. as part of gcc. I think\n> we should consider using it, when available, to produce more useful\n> backtraces.\n> \n> I hacked it up for ereport() to debug something, and the backtraces are\n> considerably better:\n\nMakes sense. When we first added backtrace support, we considered \nlibunwind, which didn't really give better backtraces than the built-in \nstuff, so it wasn't worth dealing with an additional dependency.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:26:24 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-03 11:58:25 +0200, Alvaro Herrera wrote:\n> On 2023-Jul-02, Andres Freund wrote:\n> > I like that we now have a builtin backtrace ability. Unfortunately I think the\n> > backtraces are often not very useful, because only externally visible\n> > functions are symbolized.\n> \n> Agreed, these backtraces are pretty close to useless. Not completely,\n> but I haven't found a practical way to use them for actual debugging\n> of production problems.\n\nYea. And I've grown pretty tired asking people to break out gdb in production\nscenarios :/\n\n\n> > Nice things about libbacktrace are that the generation of stack traces is\n> > documented to be async signal safe on most platforms (with a #define to figure\n> > that out, and a more minimal safe version always available) and that it\n> > supports a wide range of platforms:\n> \n> Sadly, it looks like the library is seldom distributed.\n\nIt's often distributed as part of gcc.\n\n\n> For example, Debian seems to only have a package called android-libbacktrace\n> which I imagine is not what we want.\n\nIndeed not.\n\n\n> On my system I see a static library only -- is that enough? That file is\n> part of package libgcc-10-dev, which tells me that we can't depend on that\n> for packaging purposes.\n\nWe should be able to depend on that gcc-NN depends on libgcc-NN-dev, it\ncontains all the compiler version specific stuff. It's where the intrinsics\nheaders, C runtime initialization, sanitizer libraries all live. clang will\ntypically also depend on libgcc-NN-dev on unixoid systems.\n\nAnd since it's statically linked (and needs to be apparently), you don't need\nlibgcc-NN-dev installed at runtime.\n\n\n> I think it's pretty much the same in the RPM side of the world.\n\nI don't know much about that side of the world...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Jul 2023 10:43:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On Mon Jul 3, 2023 at 12:43 PM CDT, Andres Freund wrote:\n> On 2023-07-03 11:58:25 +0200, Alvaro Herrera wrote:\n> > On 2023-Jul-02, Andres Freund wrote:\n> > > Nice things about libbacktrace are that the generation of stack traces is\n> > > documented to be async signal safe on most platforms (with a #define to figure\n> > > that out, and a more minimal safe version always available) and that it\n> > > supports a wide range of platforms:\n> > \n> > Sadly, it looks like the library is seldom distributed.\n>\n> It's often distributed as part of gcc.\n>\n>\n> > For example, Debian seems to only have a package called android-libbacktrace\n> > which I imagine is not what we want.\n>\n> Indeed not.\n>\n>\n> > On my system I see a static library only -- is that enough? That file is\n> > part of package libgcc-10-dev, which tells me that we can't depend on that\n> > for packaging purposes.\n>\n> We should be able to depend on that gcc-NN depends on libgcc-NN-dev, it\n> contains all the compiler version specific stuff. It's where the intrinsics\n> headers, C runtime initialization, sanitizer libraries all live. clang will\n> typically also depend on libgcc-NN-dev on unixoid systems.\n>\n> And since it's statically linked (and needs to be apparently), you don't need\n> libgcc-NN-dev installed at runtime.\n>\n>\n> > I think it's pretty much the same in the RPM side of the world.\n>\n> I don't know much about that side of the world...\n\nI could not find this packaged in Fedora. I did find it in FreeBSD\nhowever. We could add libbacktrace as a Meson subproject.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 12:54:50 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 11:58:25AM +0200, Alvaro Herrera wrote:\n> On 2023-Jul-02, Andres Freund wrote:\n> > I like that we now have a builtin backtrace ability. Unfortunately I think the\n> > backtraces are often not very useful, because only externally visible\n> > functions are symbolized.\n> \n> Agreed, these backtraces are pretty close to useless. Not completely,\n> but I haven't found a practical way to use them for actual debugging\n> of production problems.\n\nFor what it's worth, I use the attached script to convert the current\nerrbacktrace output to a fully-symbolized backtrace. Nonetheless, ...\n\n> > I hacked it up for ereport() to debug something, and the backtraces are\n> > considerably better:\n> > \n> > 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] LOG: will crash\n> > 2023-07-02 10:52:54.863 PDT [1398207][client backend][:0][[unknown]] BACKTRACE:\n> > \t[0x55fcd03e6143] PostgresMain: ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4126\n> > \t[0x55fcd031154c] BackendRun: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n> > \t[0x55fcd0310dd8] BackendStartup: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n> > \t[0x55fcd030ce75] ServerLoop: ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n> \n> Yeah, this looks much more usable.\n\n... +1 for offering this.",
"msg_date": "Mon, 4 Sep 2023 20:36:05 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
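For comparison, the same kind of symbolization can be approximated by hand with binutils addr2line, feeding it the "+0x..." offsets from the current errbacktrace output (this is an example only, not the attached script, and the binary path is made up):

    # -f prints function names, -i expands inlined frames
    addr2line -f -i -e /usr/lib/postgresql/16/bin/postgres 0x806add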
{
"msg_contents": "On 2023-Sep-04, Noah Misch wrote:\n\n> On Mon, Jul 03, 2023 at 11:58:25AM +0200, Alvaro Herrera wrote:\n\n> > Agreed, these backtraces are pretty close to useless. Not completely,\n> > but I haven't found a practical way to use them for actual debugging\n> > of production problems.\n> \n> For what it's worth, I use the attached script to convert the current\n> errbacktrace output to a fully-symbolized backtrace.\n\nMuch appreciated! I can put this to good use.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:59:40 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 2:59 AM Alvaro Herrera <[email protected]> wrote:\n> Much appreciated! I can put this to good use.\n\nI was just reminded of how our existing backtrace support is lacklustre.\n\nAre you planning on submitting a patch for this?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Nov 2023 15:17:49 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optionally using a better backtrace library?"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 89e46da5e5 allowed us to use indexes for searching on REPLICA\nIDENTITY FULL tables. The documentation explains:\n\nWhen replica identity <quote>full</quote> is specified,\nindexes can be used on the subscriber side for searching the rows. Candidate\nindexes must be btree, non-partial, and have at least one column reference\n(i.e. cannot consist of only expressions).\n\nTo be exact, IIUC the column reference must be on the leftmost column\nof indexes. Does it make sense to mention that? I've attached the\npatch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Jul 2023 11:15:05 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "doc: improve the restriction description of using indexes on REPLICA\n IDENTITY FULL table."
},
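To make the restriction concrete (table and index names below are invented for illustration): with replica identity FULL on the publisher, only the first of these subscriber-side indexes is a candidate for the row search, because the other two have an expression as their leftmost key:

    CREATE TABLE t (col1 int, col2 text);

    -- usable: btree, non-partial, leftmost key is a plain column
    CREATE INDEX idx_ok ON t (col1, lower(col2));

    -- not usable: consists only of expressions
    CREATE INDEX idx_expr_only ON t (lower(col2));

    -- not usable: leftmost key is an expression
    CREATE INDEX idx_expr_first ON t (lower(col2), col1);

(This also assumes col1 exists in the published table, which the later messages in the thread discuss.)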
{
"msg_contents": "On Mon, Jul 3, 2023 at 7:45 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Commit 89e46da5e5 allowed us to use indexes for searching on REPLICA\n> IDENTITY FULL tables. The documentation explains:\n>\n> When replica identity <quote>full</quote> is specified,\n> indexes can be used on the subscriber side for searching the rows. Candidate\n> indexes must be btree, non-partial, and have at least one column reference\n> (i.e. cannot consist of only expressions).\n>\n> To be exact, IIUC the column reference must be on the leftmost column\n> of indexes. Does it make sense to mention that?\n>\n\nYeah, I think it is good to mention that. Accordingly, the comments\natop build_replindex_scan_key(),\nFindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\nshould also be adjusted.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 4 Jul 2023 15:21:35 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Hi. Here are some review comments for this patch.\n\n+1 for the patch idea.\n\n------\n\nI wasn't sure about the code comment adjustments suggested by Amit [1]:\n\"Accordingly, the comments atop build_replindex_scan_key(),\nFindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\nshould also be adjusted.\"\n\nActually, I thought the FindUsableIndexForReplicaIdentityFull()\nfunction comment is *already* describing the limitation about the\nleftmost column (see fragment below), so IIUC the Sawada-san patch is\nonly trying to expose that same information in the PG docs.\n\n[FindUsableIndexForReplicaIdentityFull comment fragment]\n * We also skip indexes if the remote relation does not contain the leftmost\n * column of the index. This is because in most such cases sequential scan is\n * favorable over index scan.\n\n~\n\nOTOH, it may be better if these limitation rule details were not\nscattered in the code. e.g. build_replindex_scan_key() function\ncomment can be simplified:\n\nCURRENT:\n * This is not generic routine, it expects the idxrel to be a btree, non-partial\n * and have at least one column reference (i.e. cannot consist of only\n * expressions).\n\nSUGGESTION:\nThis is not a generic routine. It expects the 'idxrel' to be an index\ndeemed \"usable\" by the function\nFindUsableIndexForReplicaIdentityFull().\n\n------\ndoc/src/sgml/logical-replication.sgml\n\n1.\n the key. When replica identity <literal>FULL</literal> is specified,\n indexes can be used on the subscriber side for searching the rows.\nCandidate\n indexes must be btree, non-partial, and have at least one column reference\n- (i.e. cannot consist of only expressions). These restrictions\n- on the non-unique index properties adhere to some of the restrictions that\n- are enforced for primary keys. If there are no such suitable indexes,\n+ at the leftmost column indexes (i.e. cannot consist of only\nexpressions). These\n+ restrictions on the non-unique index properties adhere to some of\nthe restrictions\n+ that are enforced for primary keys. If there are no such suitable indexes,\n the search on the subscriber side can be very inefficient, therefore\n replica identity <literal>FULL</literal> should only be used as a\n fallback if no other solution is possible. If a replica identity other\n\nIsn't this using the word \"indexes\" with different meanings in the\nsame sentence? e.g. IIUC \"leftmost column indexes\" is referring to the\nordinal number of the index fields. TBH, I am not sure the patch\nwording is even describing the limitation in quite the same way as\nwhat the code is actually doing.\n\nHEAD (code comment):\n * We also skip indexes if the remote relation does not contain the leftmost\n * column of the index. This is because in most such cases sequential scan is\n * favorable over index scan.\n\nHEAD (rendered docs)\nCandidate indexes must be btree, non-partial, and have at least one\ncolumn reference (i.e. cannot consist of only expressions). These\nrestrictions on the non-unique index properties adhere to some of the\nrestrictions that are enforced for primary keys.\n\nPATCHED (rendered docs)\nCandidate indexes must be btree, non-partial, and have at least one\ncolumn reference at the leftmost column indexes (i.e. cannot consist\nof only expressions). 
These restrictions on the non-unique index\nproperties adhere to some of the restrictions that are enforced for\nprimary keys.\n\nMY SUGGESTION:\nCandidate indexes must be btree, non-partial, and have at least one\ncolumn reference (i.e. cannot consist of only expressions).\nFurthermore, the leftmost field of the candidate index must be a\ncolumn of the published table. These restrictions on the non-unique\nindex properties adhere to some of the restrictions that are enforced\nfor primary keys.\n\n------\n[1] Amit suggestions -\nhttps://www.postgresql.org/message-id/CAA4eK1LZ_A-UmC_P%2B_hLi%2BPbwyqak%2BvRKemZ7imzk2puVTpHOA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 5 Jul 2023 13:30:41 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 9:01 AM Peter Smith <[email protected]> wrote:\n>\n> Hi. Here are some review comments for this patch.\n>\n> +1 for the patch idea.\n>\n> ------\n>\n> I wasn't sure about the code comment adjustments suggested by Amit [1]:\n> \"Accordingly, the comments atop build_replindex_scan_key(),\n> FindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\n> should also be adjusted.\"\n>\n> Actually, I thought the FindUsableIndexForReplicaIdentityFull()\n> function comment is *already* describing the limitation about the\n> leftmost column (see fragment below), so IIUC the Sawada-san patch is\n> only trying to expose that same information in the PG docs.\n>\n> [FindUsableIndexForReplicaIdentityFull comment fragment]\n> * We also skip indexes if the remote relation does not contain the leftmost\n> * column of the index. This is because in most such cases sequential scan is\n> * favorable over index scan.\n>\n\nThis implies that the leftmost column of the index must be\nnon-expression but I feel what the patch intends to say in docs is\nmore straightforward and it doesn't match what the proposed docs says.\n\n> ~\n>\n> OTOH, it may be better if these limitation rule details were not\n> scattered in the code. e.g. build_replindex_scan_key() function\n> comment can be simplified:\n>\n> CURRENT:\n> * This is not generic routine, it expects the idxrel to be a btree, non-partial\n> * and have at least one column reference (i.e. cannot consist of only\n> * expressions).\n>\n> SUGGESTION:\n> This is not a generic routine. It expects the 'idxrel' to be an index\n> deemed \"usable\" by the function\n> FindUsableIndexForReplicaIdentityFull().\n>\n\nNote that for PK/ReplicaIdentity, we don't even call\nFindUsableIndexForReplicaIdentityFull() but build_replindex_scan_key()\nwould still be called for such index. So, I am not sure your proposed\nwording is an improvement.\n\n> ------\n> doc/src/sgml/logical-replication.sgml\n>\n> 1.\n> the key. When replica identity <literal>FULL</literal> is specified,\n> indexes can be used on the subscriber side for searching the rows.\n> Candidate\n> indexes must be btree, non-partial, and have at least one column reference\n> - (i.e. cannot consist of only expressions). These restrictions\n> - on the non-unique index properties adhere to some of the restrictions that\n> - are enforced for primary keys. If there are no such suitable indexes,\n> + at the leftmost column indexes (i.e. cannot consist of only\n> expressions). These\n> + restrictions on the non-unique index properties adhere to some of\n> the restrictions\n> + that are enforced for primary keys. If there are no such suitable indexes,\n> the search on the subscriber side can be very inefficient, therefore\n> replica identity <literal>FULL</literal> should only be used as a\n> fallback if no other solution is possible. If a replica identity other\n>\n> Isn't this using the word \"indexes\" with different meanings in the\n> same sentence? e.g. IIUC \"leftmost column indexes\" is referring to the\n> ordinal number of the index fields. TBH, I am not sure the patch\n> wording is even describing the limitation in quite the same way as\n> what the code is actually doing.\n>\n> HEAD (code comment):\n> * We also skip indexes if the remote relation does not contain the leftmost\n> * column of the index. 
This is because in most such cases sequential scan is\n> * favorable over index scan.\n>\n> HEAD (rendered docs)\n> Candidate indexes must be btree, non-partial, and have at least one\n> column reference (i.e. cannot consist of only expressions). These\n> restrictions on the non-unique index properties adhere to some of the\n> restrictions that are enforced for primary keys.\n>\n> PATCHED (rendered docs)\n> Candidate indexes must be btree, non-partial, and have at least one\n> column reference at the leftmost column indexes (i.e. cannot consist\n> of only expressions). These restrictions on the non-unique index\n> properties adhere to some of the restrictions that are enforced for\n> primary keys.\n>\n> MY SUGGESTION:\n> Candidate indexes must be btree, non-partial, and have at least one\n> column reference (i.e. cannot consist of only expressions).\n> Furthermore, the leftmost field of the candidate index must be a\n> column of the published table. These restrictions on the non-unique\n> index properties adhere to some of the restrictions that are enforced\n> for primary keys.\n>\n\nI don't know if this suggestion is what the code is actually doing. In\nfunction RemoteRelContainsLeftMostColumnOnIdx(), we have the following\nchecks:\n==========\nkeycol = indexInfo->ii_IndexAttrNumbers[0];\nif (!AttributeNumberIsValid(keycol))\nreturn false;\n\nif (attrmap->maplen <= AttrNumberGetAttrOffset(keycol))\nreturn false;\n\nreturn attrmap->attnums[AttrNumberGetAttrOffset(keycol)] >= 0;\n==========\n\nThe first of these checks indicates that the leftmost column of the\nindex should be non-expression, second and third indicates what you\nsuggest in your wording. We can also think that what you wrote in a\nway is a superset of \"leftmost index column is a non-expression\" and\n\"leftmost index column should be present in remote rel\" but I feel it\nwould be better to explicit about the first part as it is easy to\nunderstand for users at least in docs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:16:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 2:46 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 9:01 AM Peter Smith <[email protected]> wrote:\n> >\n> > Hi. Here are some review comments for this patch.\n> >\n> > +1 for the patch idea.\n> >\n> > ------\n> >\n> > I wasn't sure about the code comment adjustments suggested by Amit [1]:\n> > \"Accordingly, the comments atop build_replindex_scan_key(),\n> > FindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\n> > should also be adjusted.\"\n\nAs for IsIndexOnlyOnExpression(), what part do you think we need to\nadjust? It says:\n\n/*\n * Returns true if the given index consists only of expressions such as:\n * CREATE INDEX idx ON table(foo(col));\n *\n * Returns false even if there is one column reference:\n * CREATE INDEX idx ON table(foo(col), col_2);\n */\n\nand it seems to me that the function doesn't check if the leftmost\nindex column is a non-expression.\n\n> > doc/src/sgml/logical-replication.sgml\n> >\n> > 1.\n> > the key. When replica identity <literal>FULL</literal> is specified,\n> > indexes can be used on the subscriber side for searching the rows.\n> > Candidate\n> > indexes must be btree, non-partial, and have at least one column reference\n> > - (i.e. cannot consist of only expressions). These restrictions\n> > - on the non-unique index properties adhere to some of the restrictions that\n> > - are enforced for primary keys. If there are no such suitable indexes,\n> > + at the leftmost column indexes (i.e. cannot consist of only\n> > expressions). These\n> > + restrictions on the non-unique index properties adhere to some of\n> > the restrictions\n> > + that are enforced for primary keys. If there are no such suitable indexes,\n> > the search on the subscriber side can be very inefficient, therefore\n> > replica identity <literal>FULL</literal> should only be used as a\n> > fallback if no other solution is possible. If a replica identity other\n> >\n> > Isn't this using the word \"indexes\" with different meanings in the\n> > same sentence? e.g. IIUC \"leftmost column indexes\" is referring to the\n> > ordinal number of the index fields.\n\nThat was my mistake, it should be \" at the leftmost column\".\n\n>\n> I don't know if this suggestion is what the code is actually doing. In\n> function RemoteRelContainsLeftMostColumnOnIdx(), we have the following\n> checks:\n> ==========\n> keycol = indexInfo->ii_IndexAttrNumbers[0];\n> if (!AttributeNumberIsValid(keycol))\n> return false;\n>\n> if (attrmap->maplen <= AttrNumberGetAttrOffset(keycol))\n> return false;\n>\n> return attrmap->attnums[AttrNumberGetAttrOffset(keycol)] >= 0;\n> ==========\n>\n> The first of these checks indicates that the leftmost column of the\n> index should be non-expression, second and third indicates what you\n> suggest in your wording. We can also think that what you wrote in a\n> way is a superset of \"leftmost index column is a non-expression\" and\n> \"leftmost index column should be present in remote rel\" but I feel it\n> would be better to explicit about the first part as it is easy to\n> understand for users at least in docs.\n\n+1\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 5 Jul 2023 15:31:33 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 12:02 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 2:46 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023 at 9:01 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi. Here are some review comments for this patch.\n> > >\n> > > +1 for the patch idea.\n> > >\n> > > ------\n> > >\n> > > I wasn't sure about the code comment adjustments suggested by Amit [1]:\n> > > \"Accordingly, the comments atop build_replindex_scan_key(),\n> > > FindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\n> > > should also be adjusted.\"\n>\n> As for IsIndexOnlyOnExpression(), what part do you think we need to\n> adjust? It says:\n>\n> /*\n> * Returns true if the given index consists only of expressions such as:\n> * CREATE INDEX idx ON table(foo(col));\n> *\n> * Returns false even if there is one column reference:\n> * CREATE INDEX idx ON table(foo(col), col_2);\n> */\n>\n> and it seems to me that the function doesn't check if the leftmost\n> index column is a non-expression.\n>\n\nRight, so, we can leave this as is.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 16:06:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 4:32 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 2:46 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023 at 9:01 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > Hi. Here are some review comments for this patch.\n> > >\n> > > +1 for the patch idea.\n> > >\n> > > ------\n> > >\n> > > I wasn't sure about the code comment adjustments suggested by Amit [1]:\n> > > \"Accordingly, the comments atop build_replindex_scan_key(),\n> > > FindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\n> > > should also be adjusted.\"\n>\n> As for IsIndexOnlyOnExpression(), what part do you think we need to\n> adjust? It says:\n>\n> /*\n> * Returns true if the given index consists only of expressions such as:\n> * CREATE INDEX idx ON table(foo(col));\n> *\n> * Returns false even if there is one column reference:\n> * CREATE INDEX idx ON table(foo(col), col_2);\n> */\n>\n> and it seems to me that the function doesn't check if the leftmost\n> index column is a non-expression.\n>\n\nTBH, this IsIndexOnlyOnExpression() function seemed redundant to me,\notherwise, there can be some indexes that are firstly considered\n\"useable\" but then fail the subsequent leftmost check. It does not\nseem right.\n\n> > > doc/src/sgml/logical-replication.sgml\n> > >\n> > > 1.\n> > > the key. When replica identity <literal>FULL</literal> is specified,\n> > > indexes can be used on the subscriber side for searching the rows.\n> > > Candidate\n> > > indexes must be btree, non-partial, and have at least one column reference\n> > > - (i.e. cannot consist of only expressions). These restrictions\n> > > - on the non-unique index properties adhere to some of the restrictions that\n> > > - are enforced for primary keys. If there are no such suitable indexes,\n> > > + at the leftmost column indexes (i.e. cannot consist of only\n> > > expressions). These\n> > > + restrictions on the non-unique index properties adhere to some of\n> > > the restrictions\n> > > + that are enforced for primary keys. If there are no such suitable indexes,\n> > > the search on the subscriber side can be very inefficient, therefore\n> > > replica identity <literal>FULL</literal> should only be used as a\n> > > fallback if no other solution is possible. If a replica identity other\n> > >\n> > > Isn't this using the word \"indexes\" with different meanings in the\n> > > same sentence? e.g. IIUC \"leftmost column indexes\" is referring to the\n> > > ordinal number of the index fields.\n>\n> That was my mistake, it should be \" at the leftmost column\".\n\nIIUC the subscriber-side table can have more columns than the\npublisher-side table, so just describing in the docs that the leftmost\nINDEX field must be a column may not be quite enough; it also needs to\nsay that column has to exist on the publisher-table, doesn't it?\n\nAlso, after you document this 'leftmost field restriction' that\nalready implies there *must* be a non-expression in the INDEX. So I\nthought we can just omit the \"(i.e. cannot consist of only\nexpressions)\" part.\n\nAnyway, I will wait to see the wording of the updated patch before\ncommenting further.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 6 Jul 2023 08:58:01 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 7:58 AM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 4:32 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023 at 2:46 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 5, 2023 at 9:01 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > > > Hi. Here are some review comments for this patch.\n> > > >\n> > > > +1 for the patch idea.\n> > > >\n> > > > ------\n> > > >\n> > > > I wasn't sure about the code comment adjustments suggested by Amit [1]:\n> > > > \"Accordingly, the comments atop build_replindex_scan_key(),\n> > > > FindUsableIndexForReplicaIdentityFull(), IsIndexOnlyOnExpression()\n> > > > should also be adjusted.\"\n> >\n> > As for IsIndexOnlyOnExpression(), what part do you think we need to\n> > adjust? It says:\n> >\n> > /*\n> > * Returns true if the given index consists only of expressions such as:\n> > * CREATE INDEX idx ON table(foo(col));\n> > *\n> > * Returns false even if there is one column reference:\n> > * CREATE INDEX idx ON table(foo(col), col_2);\n> > */\n> >\n> > and it seems to me that the function doesn't check if the leftmost\n> > index column is a non-expression.\n> >\n>\n> TBH, this IsIndexOnlyOnExpression() function seemed redundant to me,\n> otherwise, there can be some indexes that are firstly considered\n> \"useable\" but then fail the subsequent leftmost check. It does not\n> seem right.\n\nI see your point. IsIndexUsableForReplicaIdentityFull(), the sole user\nof IsIndexOnlyOnExpression(), is also called by\nRelationFindReplTupleByIndex() in an assertion build. I thought this\nis the reason why we have separate IsIndexOnlyOnExpression() ( and\nIsIndexUsableForReplicaIdentityFull()). But this assertion doesn't\ncheck if the leftmost index column exists on the remote relation. What\nare we doing this check for? If it's not necessary, we can remove this\nassertion and merge both IsIndexOnlyOnExpression() and\nIsIndexUsableForReplicaIdentityFull() into\nFindUsableIndexForReplicaIdentityFull().\n\n>\n> > > > doc/src/sgml/logical-replication.sgml\n> > > >\n> > > > 1.\n> > > > the key. When replica identity <literal>FULL</literal> is specified,\n> > > > indexes can be used on the subscriber side for searching the rows.\n> > > > Candidate\n> > > > indexes must be btree, non-partial, and have at least one column reference\n> > > > - (i.e. cannot consist of only expressions). These restrictions\n> > > > - on the non-unique index properties adhere to some of the restrictions that\n> > > > - are enforced for primary keys. If there are no such suitable indexes,\n> > > > + at the leftmost column indexes (i.e. cannot consist of only\n> > > > expressions). These\n> > > > + restrictions on the non-unique index properties adhere to some of\n> > > > the restrictions\n> > > > + that are enforced for primary keys. If there are no such suitable indexes,\n> > > > the search on the subscriber side can be very inefficient, therefore\n> > > > replica identity <literal>FULL</literal> should only be used as a\n> > > > fallback if no other solution is possible. If a replica identity other\n> > > >\n> > > > Isn't this using the word \"indexes\" with different meanings in the\n> > > > same sentence? e.g. 
IIUC \"leftmost column indexes\" is referring to the\n> > > > ordinal number of the index fields.\n> >\n> > That was my mistake, it should be \" at the leftmost column\".\n>\n> IIUC the subscriber-side table can have more columns than the\n> publisher-side table, so just describing in the docs that the leftmost\n> INDEX field must be a column may not be quite enough; it also needs to\n> say that column has to exist on the publisher-table, doesn't it?\n\nRight. I've updated the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 6 Jul 2023 10:41:48 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Hi, Here are my review comments for patch v2.\n\n======\n1. doc/src/sgml/logical-replication.sgml\n\nCandidate indexes must be btree, non-partial, and have at least one\ncolumn reference to the published table column at the leftmost column\n(i.e. cannot consist of only expressions).\n\n~\n\nThere is only one column which can be the leftmost one, so the wording\n\"At least one ... at the leftmost\" seemed a bit strange to me.\nPersonally, I would phrase it something like below:\n\nSUGGESTION #1\nCandidate indexes must be btree, non-partial, and the leftmost index\ncolumn must reference a published table column (i.e. the index cannot\nconsist of only expressions).\n\nSUGGESTION #2 (same as above, but omitting the \"only expressions\"\npart, which I think is implied by the \"leftmost\" rule anyway)\nCandidate indexes must be btree, non-partial, and the leftmost index\ncolumn must reference a published table column.\n\n======\n2. src/backend/replication/logical/relation.c\n\n * Returns the oid of an index that can be used by the apply worker to scan\n * the relation. The index must be btree, non-partial, and have at least\n- * one column reference (i.e. cannot consist of only expressions). These\n- * limitations help to keep the index scan similar to PK/RI index scans.\n+ * one column reference to the remote relation's column at the leftmost column\n+ * (i.e. cannot consist of only expressions). These limitations help\nto keep the\n+ * index scan similar to PK/RI index scans.\n\nThis comment text is similar to the docs change, so refer to the same\nsuggestions as #1 above.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 7 Jul 2023 11:54:57 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 10:55 AM Peter Smith <[email protected]> wrote:\n>\n> Hi, Here are my review comments for patch v2.\n>\n> ======\n> 1. doc/src/sgml/logical-replication.sgml\n>\n> Candidate indexes must be btree, non-partial, and have at least one\n> column reference to the published table column at the leftmost column\n> (i.e. cannot consist of only expressions).\n>\n> ~\n>\n> There is only one column which can be the leftmost one, so the wording\n> \"At least one ... at the leftmost\" seemed a bit strange to me.\n> Personally, I would phrase it something like below:\n>\n> SUGGESTION #1\n> Candidate indexes must be btree, non-partial, and the leftmost index\n> column must reference a published table column (i.e. the index cannot\n> consist of only expressions).\n\nI prefer the first suggestion. I've attached the updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 7 Jul 2023 17:05:40 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n>\n> I prefer the first suggestion. I've attached the updated patch.\n>\n\nThis looks mostly good to me but I think it would be better if we can\nalso add the information that the leftmost index column must be a\nnon-expression. So, how about: \"Candidate indexes must be btree,\nnon-partial, and the leftmost index column must be a non-expression\nand reference to a published table column (i.e. cannot consist of only\nexpressions).\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 8 Jul 2023 09:19:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 1:49 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > I prefer the first suggestion. I've attached the updated patch.\n> >\n>\n> This looks mostly good to me but I think it would be better if we can\n> also add the information that the leftmost index column must be a\n> non-expression. So, how about: \"Candidate indexes must be btree,\n> non-partial, and the leftmost index column must be a non-expression\n> and reference to a published table column (i.e. cannot consist of only\n> expressions).\"?\n\nThat part in parentheses ought to say \"the index ...\" because it is\nreferring to the full INDEX, not to the leftmost column. I think this\nwas missed when Sawada-san took my previous suggestion [1].\n\nIMO it doesn't sound right to say the \"index column must be a\nnon-expression\". It is already a non-expression because it is a\ncolumn. So I think it would be better to refer to this as an INDEX\n\"field\" instead of an INDEX column. Note that \"field\" is the same\nterminology used in the docs for CREATE INDEX [2].\n\nSUGGESTION\nCandidate indexes must be btree, non-partial, and the leftmost index\nfield must be a column that references a published table column (i.e.\nthe index cannot consist of only expressions).\n\n~~~~\n\nWhat happened to the earlier idea of removing/merging the redundant\n(?) function IsIndexOnlyOnExpression()?\n- Something wrong with that?\n- Chose not to do it?\n- Will do it raised in another thread?\n\n------\n[1] my review v2 -\nhttps://www.postgresql.org/message-id/CAHut%2BPsFdTZJ7DG1jyu7BpA_1d4hwEd-Q%2BmQAPWcj1ZLD_X5Dw%40mail.gmail.com\n[2] create index - https://www.postgresql.org/docs/current/sql-createindex.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 10 Jul 2023 12:24:43 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n>\n> On Sat, Jul 8, 2023 at 1:49 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > I prefer the first suggestion. I've attached the updated patch.\n> > >\n> >\n> > This looks mostly good to me but I think it would be better if we can\n> > also add the information that the leftmost index column must be a\n> > non-expression. So, how about: \"Candidate indexes must be btree,\n> > non-partial, and the leftmost index column must be a non-expression\n> > and reference to a published table column (i.e. cannot consist of only\n> > expressions).\"?\n>\n> That part in parentheses ought to say \"the index ...\" because it is\n> referring to the full INDEX, not to the leftmost column. I think this\n> was missed when Sawada-san took my previous suggestion [1].\n>\n> IMO it doesn't sound right to say the \"index column must be a\n> non-expression\". It is already a non-expression because it is a\n> column. So I think it would be better to refer to this as an INDEX\n> \"field\" instead of an INDEX column. Note that \"field\" is the same\n> terminology used in the docs for CREATE INDEX [2].\n>\n\nI thought it would be better to be explicit for this case but I am\nfine if Sawada-San and you prefer some other way to document it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jul 2023 09:51:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 2:21 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 10, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Sat, Jul 8, 2023 at 1:49 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > I prefer the first suggestion. I've attached the updated patch.\n> > > >\n> > >\n> > > This looks mostly good to me but I think it would be better if we can\n> > > also add the information that the leftmost index column must be a\n> > > non-expression. So, how about: \"Candidate indexes must be btree,\n> > > non-partial, and the leftmost index column must be a non-expression\n> > > and reference to a published table column (i.e. cannot consist of only\n> > > expressions).\"?\n> >\n> > That part in parentheses ought to say \"the index ...\" because it is\n> > referring to the full INDEX, not to the leftmost column. I think this\n> > was missed when Sawada-san took my previous suggestion [1].\n> >\n> > IMO it doesn't sound right to say the \"index column must be a\n> > non-expression\". It is already a non-expression because it is a\n> > column. So I think it would be better to refer to this as an INDEX\n> > \"field\" instead of an INDEX column. Note that \"field\" is the same\n> > terminology used in the docs for CREATE INDEX [2].\n> >\n>\n> I thought it would be better to be explicit for this case but I am\n> fine if Sawada-San and you prefer some other way to document it.\n>\n\nI see. How about just moving the parenthesized part to explicitly\nrefer only to the leftmost field?\n\nSUGGESTION\nCandidate indexes must be btree, non-partial, and the leftmost index\nfield must be a column (not an expression) that references a published\ntable column.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:23:36 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 4:54 AM Peter Smith <[email protected]> wrote:\n>\n> On Mon, Jul 10, 2023 at 2:21 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jul 10, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Sat, Jul 8, 2023 at 1:49 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > I prefer the first suggestion. I've attached the updated patch.\n> > > > >\n> > > >\n> > > > This looks mostly good to me but I think it would be better if we can\n> > > > also add the information that the leftmost index column must be a\n> > > > non-expression. So, how about: \"Candidate indexes must be btree,\n> > > > non-partial, and the leftmost index column must be a non-expression\n> > > > and reference to a published table column (i.e. cannot consist of only\n> > > > expressions).\"?\n> > >\n> > > That part in parentheses ought to say \"the index ...\" because it is\n> > > referring to the full INDEX, not to the leftmost column. I think this\n> > > was missed when Sawada-san took my previous suggestion [1].\n> > >\n> > > IMO it doesn't sound right to say the \"index column must be a\n> > > non-expression\". It is already a non-expression because it is a\n> > > column. So I think it would be better to refer to this as an INDEX\n> > > \"field\" instead of an INDEX column. Note that \"field\" is the same\n> > > terminology used in the docs for CREATE INDEX [2].\n> > >\n> >\n> > I thought it would be better to be explicit for this case but I am\n> > fine if Sawada-San and you prefer some other way to document it.\n> >\n>\n> I see. How about just moving the parenthesized part to explicitly\n> refer only to the leftmost field?\n>\n> SUGGESTION\n> Candidate indexes must be btree, non-partial, and the leftmost index\n> field must be a column (not an expression) that references a published\n> table column.\n>\n\nYeah, something like that works for me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:35:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 1:05 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 11, 2023 at 4:54 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Mon, Jul 10, 2023 at 2:21 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 10, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > > > On Sat, Jul 8, 2023 at 1:49 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Fri, Jul 7, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n> > > > > >\n> > > > > > I prefer the first suggestion. I've attached the updated patch.\n> > > > > >\n> > > > >\n> > > > > This looks mostly good to me but I think it would be better if we can\n> > > > > also add the information that the leftmost index column must be a\n> > > > > non-expression. So, how about: \"Candidate indexes must be btree,\n> > > > > non-partial, and the leftmost index column must be a non-expression\n> > > > > and reference to a published table column (i.e. cannot consist of only\n> > > > > expressions).\"?\n> > > >\n> > > > That part in parentheses ought to say \"the index ...\" because it is\n> > > > referring to the full INDEX, not to the leftmost column. I think this\n> > > > was missed when Sawada-san took my previous suggestion [1].\n> > > >\n> > > > IMO it doesn't sound right to say the \"index column must be a\n> > > > non-expression\". It is already a non-expression because it is a\n> > > > column. So I think it would be better to refer to this as an INDEX\n> > > > \"field\" instead of an INDEX column. Note that \"field\" is the same\n> > > > terminology used in the docs for CREATE INDEX [2].\n> > > >\n> > >\n> > > I thought it would be better to be explicit for this case but I am\n> > > fine if Sawada-San and you prefer some other way to document it.\n> > >\n> >\n> > I see. How about just moving the parenthesized part to explicitly\n> > refer only to the leftmost field?\n> >\n> > SUGGESTION\n> > Candidate indexes must be btree, non-partial, and the leftmost index\n> > field must be a column (not an expression) that references a published\n> > table column.\n> >\n>\n> Yeah, something like that works for me.\n\nLooks good to me. I've attached the updated patch. In the comment in\nRemoteRelContainsLeftMostColumnOnIdx(), I used \"remote relation\"\ninstead of \"published table\" as it's more consistent with surrounding\ncomments. Also, I've removed the comment starting with \"We also skip\nindedes...\" as the new comment now covers it. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 11 Jul 2023 14:47:45 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Here are my comments for v4.\n\n======\n\nDocs/Comments:\n\nAll the docs and updated comments LTGM, except I felt one sentence\nmight be written differently to avoid nested parentheses.\n\nBEFORE\n...used for REPLICA IDENTITY FULL table (see\nFindUsableIndexForReplicaIdentityFull() for details).\n\nAFTER\n...used for REPLICA IDENTITY FULL table. See\nFindUsableIndexForReplicaIdentityFull() for details.\n\n====\n\nLogic:\n\nWhat was the decision about the earlier question [1] of\nremoving/merging the function IsIndexOnlyOnExpression()?\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuGhGHp9Uq8-Wk7uBiirAHF5quDY_1Z6WDoUKRZqkn%2Brg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Jul 2023 18:31:30 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n>\n> Here are my comments for v4.\n>\n> ======\n>\n> Docs/Comments:\n>\n> All the docs and updated comments LTGM, except I felt one sentence\n> might be written differently to avoid nested parentheses.\n>\n> BEFORE\n> ...used for REPLICA IDENTITY FULL table (see\n> FindUsableIndexForReplicaIdentityFull() for details).\n>\n> AFTER\n> ...used for REPLICA IDENTITY FULL table. See\n> FindUsableIndexForReplicaIdentityFull() for details.\n>\n> ====\n\nAgreed. I've attached the updated patch. I'll push it barring any objections.\n\n>\n> Logic:\n>\n> What was the decision about the earlier question [1] of\n> removing/merging the function IsIndexOnlyOnExpression()?\n>\n\nI don't think we have concluded any action for it. I agree that\nIsIndexOnlyOnExpression() is redundant. We don't need to check *all*\nindex fields actually. I've attached a draft patch. It removes\nIsIndexOnlyOnExpression() and merges\nRemoteRelContainsLeftMostColumnOnIdx() to\nFindUsableIndexForReplicaIdentityFull. One concern is that we no\nlonger do the assertion check with\nIsIndexUsableForReplicaIdentityFull(). What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 12 Jul 2023 16:00:25 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 12:31 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> >\n>\n> I don't think we have concluded any action for it. I agree that\n> IsIndexOnlyOnExpression() is redundant. We don't need to check *all*\n> index fields actually. I've attached a draft patch. It removes\n> IsIndexOnlyOnExpression() and merges\n> RemoteRelContainsLeftMostColumnOnIdx() to\n> FindUsableIndexForReplicaIdentityFull. One concern is that we no\n> longer do the assertion check with\n> IsIndexUsableForReplicaIdentityFull(). What do you think?\n>\n\nI think this is a valid concern. Can't we move all the checks\n(including the remote attrs check) inside\nIsIndexUsableForReplicaIdentityFull() and then call it from both\nplaces? Won't we have attrmap information available in the callers of\nFindReplTupleInLocalRel() via ApplyExecutionData?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:38:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 7:08 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 12:31 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > >\n> >\n> > I don't think we have concluded any action for it. I agree that\n> > IsIndexOnlyOnExpression() is redundant. We don't need to check *all*\n> > index fields actually. I've attached a draft patch. It removes\n> > IsIndexOnlyOnExpression() and merges\n> > RemoteRelContainsLeftMostColumnOnIdx() to\n> > FindUsableIndexForReplicaIdentityFull. One concern is that we no\n> > longer do the assertion check with\n> > IsIndexUsableForReplicaIdentityFull(). What do you think?\n> >\n>\n> I think this is a valid concern. Can't we move all the checks\n> (including the remote attrs check) inside\n> IsIndexUsableForReplicaIdentityFull() and then call it from both\n> places? Won't we have attrmap information available in the callers of\n> FindReplTupleInLocalRel() via ApplyExecutionData?\n\nYou mean to pass ApplyExecutionData or attrmap down to\nRelationFindReplTupleByIndex()? I think it would be better to call it\nfrom FindReplTupleInLocalRel() instead.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 23:15:01 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Hi Amit, all\n\nAmit Kapila <[email protected]>, 12 Tem 2023 Çar, 13:09 tarihinde\nşunu yazdı:\n\n> On Wed, Jul 12, 2023 at 12:31 PM Masahiko Sawada <[email protected]>\n> wrote:\n> >\n> > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]>\n> wrote:\n> > >\n> >\n> > I don't think we have concluded any action for it. I agree that\n> > IsIndexOnlyOnExpression() is redundant. We don't need to check *all*\n> > index fields actually. I've attached a draft patch. It removes\n> > IsIndexOnlyOnExpression() and merges\n> > RemoteRelContainsLeftMostColumnOnIdx() to\n> > FindUsableIndexForReplicaIdentityFull. One concern is that we no\n> > longer do the assertion check with\n> > IsIndexUsableForReplicaIdentityFull(). What do you think?\n> >\n>\n> I think this is a valid concern. Can't we move all the checks\n> (including the remote attrs check) inside\n> IsIndexUsableForReplicaIdentityFull() and then call it from both\n> places? Won't we have attrmap information available in the callers of\n> FindReplTupleInLocalRel() via ApplyExecutionData?\n>\n>\n>\nI think such an approach is slightly better than the proposed changes on\nremove_redundant_check.patch\n\nI think one reason we ended up with IsIndexUsableForReplicaIdentityFull()\nis that it\nis a nice way for documenting the requirements in the code.\n\nHowever, as you also alluded to in the\nthread, RemoteRelContainsLeftMostColumnOnIdx()\nbreaks this documentation.\n\nI agree that it is nice to have all the logic to be in the same place. I\nthink remove_redundant_check.patch\ndoes that by inlining IsIndexUsableForReplicaIdentityFull\nand RemoteRelContainsLeftMostColumnOnIdx\ninto FindUsableIndexForReplicaIdentityFull().\n\nAs Amit noted, the other way around might be more interesting. We expand\nIsIndexUsableForReplicaIdentityFull() such that it also includes\nRemoteRelContainsLeftMostColumnOnIdx(). With that, readers of\nIsIndexUsableForReplicaIdentityFull() can follow the requirements slightly\neasier.\n\nThough, not sure yet if we can get all the necessary information for the\nAssert\nvia ApplyExecutionData in FindReplTupleInLocalRel. Perhaps yes.\n\nThanks,\nOnder\n\nHi Amit, allAmit Kapila <[email protected]>, 12 Tem 2023 Çar, 13:09 tarihinde şunu yazdı:On Wed, Jul 12, 2023 at 12:31 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> >\n>\n> I don't think we have concluded any action for it. I agree that\n> IsIndexOnlyOnExpression() is redundant. We don't need to check *all*\n> index fields actually. I've attached a draft patch. It removes\n> IsIndexOnlyOnExpression() and merges\n> RemoteRelContainsLeftMostColumnOnIdx() to\n> FindUsableIndexForReplicaIdentityFull. One concern is that we no\n> longer do the assertion check with\n> IsIndexUsableForReplicaIdentityFull(). What do you think?\n>\n\nI think this is a valid concern. Can't we move all the checks\n(including the remote attrs check) inside\nIsIndexUsableForReplicaIdentityFull() and then call it from both\nplaces? Won't we have attrmap information available in the callers of\nFindReplTupleInLocalRel() via ApplyExecutionData?\nI think such an approach is slightly better than the proposed changes onremove_redundant_check.patch I think one reason we ended up with IsIndexUsableForReplicaIdentityFull() is that itis a nice way for documenting the requirements in the code.However, as you also alluded to in the thread, RemoteRelContainsLeftMostColumnOnIdx()breaks this documentation. 
I agree that it is nice to have all the logic to be in the same place. I think remove_redundant_check.patch does that by inlining IsIndexUsableForReplicaIdentityFull and RemoteRelContainsLeftMostColumnOnIdx into FindUsableIndexForReplicaIdentityFull().As Amit noted, the other way around might be more interesting. We expand IsIndexUsableForReplicaIdentityFull() such that it also includes RemoteRelContainsLeftMostColumnOnIdx(). With that, readers of IsIndexUsableForReplicaIdentityFull() can follow the requirements slightly easier.Though, not sure yet if we can get all the necessary information for the Assertvia ApplyExecutionData in FindReplTupleInLocalRel. Perhaps yes.Thanks,Onder",
"msg_date": "Wed, 12 Jul 2023 17:15:17 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:01 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> >\n> > Here are my comments for v4.\n> >\n> > ======\n> >\n> > Docs/Comments:\n> >\n\n> > ====\n>\n> Agreed. I've attached the updated patch. I'll push it barring any objections.\n>\n> >\n\nI checked v5-0001 and noticed the following:\n\n======\ndoc/src/sgml/logical-replication.sgml\n\nBEFORE\n... and the leftmost index field must be a column (not an expression)\nthat reference a published table column.\n\nSUGGESTION (\"references the\", instead of \"reference a\")\n... and the leftmost index field must be a column (not an expression)\nthat references the published table column.\n\n(maybe that last word \"column\" is also unnecessary?)\n\n======\nsrc/backend/replication/logical/relation.c\n\nBEFORE\nThe index must be btree, non-partial, and the leftmost field must be a\ncolumn (not an expression) that reference the remote relation.\n\nSUGGESTION (\"references\", instead of \"reference\")\nThe index must be btree, non-partial, and the leftmost field must be a\ncolumn (not an expression) that references the remote relation.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Jul 2023 09:03:27 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 8:03 AM Peter Smith <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 5:01 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > >\n> > > Here are my comments for v4.\n> > >\n> > > ======\n> > >\n> > > Docs/Comments:\n> > >\n>\n> > > ====\n> >\n> > Agreed. I've attached the updated patch. I'll push it barring any objections.\n> >\n> > >\n>\n> I checked v5-0001 and noticed the following:\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> BEFORE\n> ... and the leftmost index field must be a column (not an expression)\n> that reference a published table column.\n>\n> SUGGESTION (\"references the\", instead of \"reference a\")\n> ... and the leftmost index field must be a column (not an expression)\n> that references the published table column.\n\nThanks, will fix.\n\n>\n> (maybe that last word \"column\" is also unnecessary?)\n\nBut an index column doesn't reference the published table, but the\npublished table's column, no?\n\n>\n> ======\n> src/backend/replication/logical/relation.c\n>\n> BEFORE\n> The index must be btree, non-partial, and the leftmost field must be a\n> column (not an expression) that reference the remote relation.\n>\n> SUGGESTION (\"references\", instead of \"reference\")\n> The index must be btree, non-partial, and the leftmost field must be a\n> column (not an expression) that references the remote relation.\n>\n\nWill fix.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 10:27:41 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 11:28 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jul 13, 2023 at 8:03 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Wed, Jul 12, 2023 at 5:01 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > > >\n...\n> >\n> > I checked v5-0001 and noticed the following:\n> >\n> > ======\n> > doc/src/sgml/logical-replication.sgml\n> >\n> > BEFORE\n> > ... and the leftmost index field must be a column (not an expression)\n> > that reference a published table column.\n> >\n> > SUGGESTION (\"references the\", instead of \"reference a\")\n> > ... and the leftmost index field must be a column (not an expression)\n> > that references the published table column.\n>\n> Thanks, will fix.\n>\n> >\n> > (maybe that last word \"column\" is also unnecessary?)\n>\n> But an index column doesn't reference the published table, but the\n> published table's column, no?\n>\n\nYeah, but there is some inconsistency with the other code comment that\njust says \"... that references the remote relation.\", so I thought one\nof them needs to change. If not this one, then the other one.\n\n> >\n> > ======\n> > src/backend/replication/logical/relation.c\n> >\n> > BEFORE\n> > The index must be btree, non-partial, and the leftmost field must be a\n> > column (not an expression) that reference the remote relation.\n> >\n> > SUGGESTION (\"references\", instead of \"reference\")\n> > The index must be btree, non-partial, and the leftmost field must be a\n> > column (not an expression) that references the remote relation.\n> >\n>\n> Will fix.\n>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Jul 2023 12:12:27 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 11:12 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Jul 13, 2023 at 11:28 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Jul 13, 2023 at 8:03 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Wed, Jul 12, 2023 at 5:01 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > > > >\n> ...\n> > >\n> > > I checked v5-0001 and noticed the following:\n> > >\n> > > ======\n> > > doc/src/sgml/logical-replication.sgml\n> > >\n> > > BEFORE\n> > > ... and the leftmost index field must be a column (not an expression)\n> > > that reference a published table column.\n> > >\n> > > SUGGESTION (\"references the\", instead of \"reference a\")\n> > > ... and the leftmost index field must be a column (not an expression)\n> > > that references the published table column.\n> >\n> > Thanks, will fix.\n> >\n> > >\n> > > (maybe that last word \"column\" is also unnecessary?)\n> >\n> > But an index column doesn't reference the published table, but the\n> > published table's column, no?\n> >\n>\n> Yeah, but there is some inconsistency with the other code comment that\n> just says \"... that references the remote relation.\", so I thought one\n> of them needs to change. If not this one, then the other one.\n\nRight. So let's add \"column\" in both places. Attached the updated patch\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 13 Jul 2023 11:21:34 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 12:22 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jul 13, 2023 at 11:12 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, Jul 13, 2023 at 11:28 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Thu, Jul 13, 2023 at 8:03 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > > > On Wed, Jul 12, 2023 at 5:01 PM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > > > > >\n> > ...\n> > > >\n> > > > I checked v5-0001 and noticed the following:\n> > > >\n> > > > ======\n> > > > doc/src/sgml/logical-replication.sgml\n> > > >\n> > > > BEFORE\n> > > > ... and the leftmost index field must be a column (not an expression)\n> > > > that reference a published table column.\n> > > >\n> > > > SUGGESTION (\"references the\", instead of \"reference a\")\n> > > > ... and the leftmost index field must be a column (not an expression)\n> > > > that references the published table column.\n> > >\n> > > Thanks, will fix.\n> > >\n> > > >\n> > > > (maybe that last word \"column\" is also unnecessary?)\n> > >\n> > > But an index column doesn't reference the published table, but the\n> > > published table's column, no?\n> > >\n> >\n> > Yeah, but there is some inconsistency with the other code comment that\n> > just says \"... that references the remote relation.\", so I thought one\n> > of them needs to change. If not this one, then the other one.\n>\n> Right. So let's add \"column\" in both places. Attached the updated patch\n>\n\nv6-0001 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Jul 2023 13:09:05 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 11:15 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 7:08 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Jul 12, 2023 at 12:31 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 11, 2023 at 5:31 PM Peter Smith <[email protected]> wrote:\n> > > >\n> > >\n> > > I don't think we have concluded any action for it. I agree that\n> > > IsIndexOnlyOnExpression() is redundant. We don't need to check *all*\n> > > index fields actually. I've attached a draft patch. It removes\n> > > IsIndexOnlyOnExpression() and merges\n> > > RemoteRelContainsLeftMostColumnOnIdx() to\n> > > FindUsableIndexForReplicaIdentityFull. One concern is that we no\n> > > longer do the assertion check with\n> > > IsIndexUsableForReplicaIdentityFull(). What do you think?\n> > >\n> >\n> > I think this is a valid concern. Can't we move all the checks\n> > (including the remote attrs check) inside\n> > IsIndexUsableForReplicaIdentityFull() and then call it from both\n> > places? Won't we have attrmap information available in the callers of\n> > FindReplTupleInLocalRel() via ApplyExecutionData?\n>\n> You mean to pass ApplyExecutionData or attrmap down to\n> RelationFindReplTupleByIndex()? I think it would be better to call it\n> from FindReplTupleInLocalRel() instead.\n\nI've attached a draft patch for this idea.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 13 Jul 2023 14:25:15 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 10:55 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 11:15 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > > I think this is a valid concern. Can't we move all the checks\n> > > (including the remote attrs check) inside\n> > > IsIndexUsableForReplicaIdentityFull() and then call it from both\n> > > places? Won't we have attrmap information available in the callers of\n> > > FindReplTupleInLocalRel() via ApplyExecutionData?\n> >\n> > You mean to pass ApplyExecutionData or attrmap down to\n> > RelationFindReplTupleByIndex()? I think it would be better to call it\n> > from FindReplTupleInLocalRel() instead.\n>\n> I've attached a draft patch for this idea.\n>\n\nLooks reasonable to me. However, I am not very sure if we need to\nchange the prototype of RelationFindReplTupleByIndex(). Few other\nminor comments:\n\n1.\n- * has been implemented as a tri-state with values DISABLED, PENDING, and\n+n * has been implemented as a tri-state with values DISABLED, PENDING, and\n * ENABLED.\n *\nThe above seems like a spurious change.\n\n2.\n+ /* And must reference the remote relation column */\n+ if (attrmap->maplen <= AttrNumberGetAttrOffset(keycol) ||\n+ attrmap->attnums[AttrNumberGetAttrOffset(keycol)] < 0)\n+ return false;\n+\n\nI think we should specify the reason for this. I see that in the\ncommit fd48a86c62 [1], the reason for this is removed. Shouldn't we\nretain that in some form?\n\n[1] -\n- * We also skip indexes if the remote relation does not contain the leftmost\n- * column of the index. This is because in most such cases sequential scan is\n- * favorable over index scan.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Jul 2023 10:41:03 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Hi,\n\n\n> I've attached a draft patch for this idea.\n\n\nI think it needs a rebase after edca3424342da323499a1998d18a888283e52ac7.\n\nAlso, as discussed in [1], I think one improvement we had was to\nkeep IsIndexUsableForReplicaIdentityFull() in a way that it is easier to\nread & better documented in the code. So, it would be nice to stick to that.\n\nOverall, the proposed changes make sense to me as well. Once the patch is\nready, I'm happy to review & test in detail.\n\nThanks,\nOnder\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAA4eK1Jcyrxt_84wt2%3DQnOcwwJEC2et%2BtCLjAuTXzE6N3FXqQw%40mail.gmail.com\n\nHi,\nI've attached a draft patch for this idea.I think it needs a rebase after edca3424342da323499a1998d18a888283e52ac7.Also, as discussed in [1], I think one improvement we had was to keep IsIndexUsableForReplicaIdentityFull() in a way that it is easier to read & better documented in the code. So, it would be nice to stick to that.Overall, the proposed changes make sense to me as well. Once the patch is ready, I'm happy to review & test in detail.Thanks,Onder[1] https://www.postgresql.org/message-id/CAA4eK1Jcyrxt_84wt2%3DQnOcwwJEC2et%2BtCLjAuTXzE6N3FXqQw%40mail.gmail.com",
"msg_date": "Mon, 17 Jul 2023 10:24:27 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 2:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jul 13, 2023 at 10:55 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jul 12, 2023 at 11:15 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > > I think this is a valid concern. Can't we move all the checks\n> > > > (including the remote attrs check) inside\n> > > > IsIndexUsableForReplicaIdentityFull() and then call it from both\n> > > > places? Won't we have attrmap information available in the callers of\n> > > > FindReplTupleInLocalRel() via ApplyExecutionData?\n> > >\n> > > You mean to pass ApplyExecutionData or attrmap down to\n> > > RelationFindReplTupleByIndex()? I think it would be better to call it\n> > > from FindReplTupleInLocalRel() instead.\n> >\n> > I've attached a draft patch for this idea.\n> >\n>\n> Looks reasonable to me. However, I am not very sure if we need to\n> change the prototype of RelationFindReplTupleByIndex(). Few other\n> minor comments:\n\nAgreed. I reverted the change.\n\n>\n> 1.\n> - * has been implemented as a tri-state with values DISABLED, PENDING, and\n> +n * has been implemented as a tri-state with values DISABLED, PENDING, and\n> * ENABLED.\n> *\n> The above seems like a spurious change.\n\nFixed.\n\n>\n> 2.\n> + /* And must reference the remote relation column */\n> + if (attrmap->maplen <= AttrNumberGetAttrOffset(keycol) ||\n> + attrmap->attnums[AttrNumberGetAttrOffset(keycol)] < 0)\n> + return false;\n> +\n>\n> I think we should specify the reason for this. I see that in the\n> commit fd48a86c62 [1], the reason for this is removed. Shouldn't we\n> retain that in some form?\n\nAgreed.\n\nI've updated the patch. Regarding one change in the patch:\n\n * Returns the oid of an index that can be used by the apply worker to scan\n- * the relation. The index must be btree or hash, non-partial, and the leftmost\n- * field must be a column (not an expression) that references the remote\n- * relation column. These limitations help to keep the index scan similar\n- * to PK/RI index scans.\n+ * the relation.\n\nI moved the second sentence to IsIndexUsableForReplicaIdentityFull()\nbecause this function is now responsible for checking if the given\nindex is usable for REPLICA IDENTITY FULL tables. I think it would be\nbetter to mention all conditions for such usable indexes in one place.\nCurrently, the conditions are explained in\nFindUsableIndexForReplicaIdentityFull() but the checks are performed\nand the details are explained in\nIsIndexUsableForReplicaIdentityFull().\n\nBTW, IsIndexOnlyExpression() is not necessary but the current code\nstill works fine. So do we need to backpatch it to PG16? I'm thinking\nwe can apply it to only HEAD.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 18 Jul 2023 15:39:32 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 12:10 PM Masahiko Sawada <[email protected]> wrote:\n>\n> BTW, IsIndexOnlyExpression() is not necessary but the current code\n> still works fine. So do we need to backpatch it to PG16? I'm thinking\n> we can apply it to only HEAD.\n>\n\nEither way is fine but I think if we backpatch it then the code\nremains consistent and the backpatching would be easier.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Jul 2023 15:04:35 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "Hi Masahiko, Amit, all\n\nI've updated the patch.\n>\n\nI think the flow is much nicer now compared to the HEAD. I really don't\nhave any\ncomments regarding the accuracy of the code changes, all looks good to me.\nOverall, I cannot see any behavioral changes as you already alluded to.\n\nMaybe few minor notes regarding the comments:\n\n /*\n> + * And must reference the remote relation column. This is because if it\n> + * doesn't, the sequential scan is favorable over index scan in most\n> + * cases..\n> + */\n\n\nI think the reader might have lost the context (or say in the future due to\nanother refactor etc). So maybe start with:\n\n/* And the leftmost index field must refer to the ...\n\n\nAlso, now in IsIndexUsableForReplicaIdentityFull() some of the conditions\nhave comments\nsome don't. Should we comment on the missing ones as well, maybe such as:\n\n/* partial indexes are not support *\n> if (indexInfo->ii_Predicate != NIL)\n>\nand,\n\n> /* all indexes must have at least one attribute */\n> Assert(indexInfo->ii_NumIndexAttrs >= 1);\n\n\n\n\n>\n>>\n>> BTW, IsIndexOnlyExpression() is not necessary but the current code\n>> still works fine. So do we need to backpatch it to PG16? I'm thinking\n>> we can apply it to only HEAD.\n>\n> Either way is fine but I think if we backpatch it then the code\n> remains consistent and the backpatching would be easier.\n>\n\nYeah, I also have a slight preference for backporting. It could make it\neasier to maintain the code\nin the future in case of another backport(s). With the cost of making it\nslightly harder for you now :)\n\nThanks,\nOnder\n\nHi Masahiko, Amit, all\nI've updated the patch.I think the flow is much nicer now compared to the HEAD. I really don't have anycomments regarding the accuracy of the code changes, all looks good to me.Overall, I cannot see any behavioral changes as you already alluded to.Maybe few minor notes regarding the comments: /*+\t * And must reference the remote relation column. This is because if it+\t * doesn't, the sequential scan is favorable over index scan in most+\t * cases..+\t */I think the reader might have lost the context (or say in the future due to another refactor etc). So maybe start with: /* And the leftmost index field must refer to the ... Also, now in IsIndexUsableForReplicaIdentityFull() some of the conditions have comments some don't. Should we comment on the missing ones as well, maybe such as:/* partial indexes are not support *if (indexInfo->ii_Predicate != NIL) and, /* all indexes must have at least one attribute */Assert(indexInfo->ii_NumIndexAttrs >= 1); \n\nBTW, IsIndexOnlyExpression() is not necessary but the current code\nstill works fine. So do we need to backpatch it to PG16? I'm thinking\nwe can apply it to only HEAD. \nEither way is fine but I think if we backpatch it then the coderemains consistent and the backpatching would be easier.Yeah, I also have a slight preference for backporting. It could make it easier to maintain the code in the future in case of another backport(s). With the cost of making it slightly harder for you now :) Thanks,Onder",
"msg_date": "Wed, 19 Jul 2023 11:09:41 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 5:09 PM Önder Kalacı <[email protected]> wrote:\n>\n> Hi Masahiko, Amit, all\n>\n>> I've updated the patch.\n>\n>\n> I think the flow is much nicer now compared to the HEAD. I really don't have any\n> comments regarding the accuracy of the code changes, all looks good to me.\n> Overall, I cannot see any behavioral changes as you already alluded to.\n\nThank you for reviewing the patch.\n\n>\n> Maybe few minor notes regarding the comments:\n>\n>> /*\n>> + * And must reference the remote relation column. This is because if it\n>> + * doesn't, the sequential scan is favorable over index scan in most\n>> + * cases..\n>> + */\n>\n>\n> I think the reader might have lost the context (or say in the future due to\n> another refactor etc). So maybe start with:\n>\n>> /* And the leftmost index field must refer to the ...\n\nFixed.\n\n>\n>\n> Also, now in IsIndexUsableForReplicaIdentityFull() some of the conditions have comments\n> some don't. Should we comment on the missing ones as well, maybe such as:\n>\n>> /* partial indexes are not support *\n>> if (indexInfo->ii_Predicate != NIL)\n>\n> and,\n>>\n>> /* all indexes must have at least one attribute */\n>> Assert(indexInfo->ii_NumIndexAttrs >= 1);\n\nAgreed. But I don't think the latter comment is necessary as it's obvious.\n\n>\n>\n>\n>>\n>>>\n>>>\n>>> BTW, IsIndexOnlyExpression() is not necessary but the current code\n>>> still works fine. So do we need to backpatch it to PG16? I'm thinking\n>>> we can apply it to only HEAD.\n>>\n>> Either way is fine but I think if we backpatch it then the code\n>> remains consistent and the backpatching would be easier.\n>\n>\n> Yeah, I also have a slight preference for backporting. It could make it easier to maintain the code\n> in the future in case of another backport(s). With the cost of making it slightly harder for you now :)\n\nAgreed.\n\nI've attached the updated patch. I'll push it early next week, barring\nany objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Jul 2023 10:25:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 6:55 AM Masahiko Sawada <[email protected]> wrote:\n>\n> I've attached the updated patch. I'll push it early next week, barring\n> any objections.\n>\n\nYou have moved most of the comments related to the restriction of\nwhich index can be picked atop IsIndexUsableForReplicaIdentityFull().\nNow, the comments related to limitation atop\nFindUsableIndexForReplicaIdentityFull() look slightly odd as it refers\nto limitations but those limitation were not stated. The comments I am\nreferring to are: \"Note that the limitations of index scans for\nreplica identity full only .... might not be a good idea in some\ncases\". Shall we move these as well atop\nIsIndexUsableForReplicaIdentityFull()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jul 2023 16:02:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 7:32 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 21, 2023 at 6:55 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > I've attached the updated patch. I'll push it early next week, barring\n> > any objections.\n> >\n>\n> You have moved most of the comments related to the restriction of\n> which index can be picked atop IsIndexUsableForReplicaIdentityFull().\n> Now, the comments related to limitation atop\n> FindUsableIndexForReplicaIdentityFull() look slightly odd as it refers\n> to limitations but those limitation were not stated. The comments I am\n> referring to are: \"Note that the limitations of index scans for\n> replica identity full only .... might not be a good idea in some\n> cases\". Shall we move these as well atop\n> IsIndexUsableForReplicaIdentityFull()?\n\nGood point.\n\nLooking at neighbor comments, the following comment looks slightly odd to me:\n\n * XXX: See IsIndexUsableForReplicaIdentityFull() to know the challenges in\n * supporting indexes other than btree and hash. For partial indexes, the\n * required changes are likely to be larger. If none of the tuples satisfy\n * the expression for the index scan, we fall-back to sequential execution,\n * which might not be a good idea in some cases.\n\nAre the first and second sentences related actually?\n\nI think we can move it as well to\nIsIndexUsableForReplicaIdentityFull() with some adjustments. I've\nattached the updated patch that incorporated your comment and included\nmy idea to update the comment.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Jul 2023 10:09:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 6:39 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Sat, Jul 22, 2023 at 7:32 PM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > You have moved most of the comments related to the restriction of\n> > which index can be picked atop IsIndexUsableForReplicaIdentityFull().\n> > Now, the comments related to limitation atop\n> > FindUsableIndexForReplicaIdentityFull() look slightly odd as it refers\n> > to limitations but those limitation were not stated. The comments I am\n> > referring to are: \"Note that the limitations of index scans for\n> > replica identity full only .... might not be a good idea in some\n> > cases\". Shall we move these as well atop\n> > IsIndexUsableForReplicaIdentityFull()?\n>\n> Good point.\n>\n> Looking at neighbor comments, the following comment looks slightly odd to me:\n>\n> * XXX: See IsIndexUsableForReplicaIdentityFull() to know the challenges in\n> * supporting indexes other than btree and hash. For partial indexes, the\n> * required changes are likely to be larger. If none of the tuples satisfy\n> * the expression for the index scan, we fall-back to sequential execution,\n> * which might not be a good idea in some cases.\n>\n> Are the first and second sentences related actually?\n>\n\nNot really.\n\n> I think we can move it as well to\n> IsIndexUsableForReplicaIdentityFull() with some adjustments. I've\n> attached the updated patch that incorporated your comment and included\n> my idea to update the comment.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jul 2023 08:34:55 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 12:05 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 24, 2023 at 6:39 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Sat, Jul 22, 2023 at 7:32 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > You have moved most of the comments related to the restriction of\n> > > which index can be picked atop IsIndexUsableForReplicaIdentityFull().\n> > > Now, the comments related to limitation atop\n> > > FindUsableIndexForReplicaIdentityFull() look slightly odd as it refers\n> > > to limitations but those limitation were not stated. The comments I am\n> > > referring to are: \"Note that the limitations of index scans for\n> > > replica identity full only .... might not be a good idea in some\n> > > cases\". Shall we move these as well atop\n> > > IsIndexUsableForReplicaIdentityFull()?\n> >\n> > Good point.\n> >\n> > Looking at neighbor comments, the following comment looks slightly odd to me:\n> >\n> > * XXX: See IsIndexUsableForReplicaIdentityFull() to know the challenges in\n> > * supporting indexes other than btree and hash. For partial indexes, the\n> > * required changes are likely to be larger. If none of the tuples satisfy\n> > * the expression for the index scan, we fall-back to sequential execution,\n> > * which might not be a good idea in some cases.\n> >\n> > Are the first and second sentences related actually?\n> >\n>\n> Not really.\n>\n> > I think we can move it as well to\n> > IsIndexUsableForReplicaIdentityFull() with some adjustments. I've\n> > attached the updated patch that incorporated your comment and included\n> > my idea to update the comment.\n> >\n>\n> LGTM.\n\nPushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 16:44:34 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc: improve the restriction description of using indexes on\n REPLICA IDENTITY FULL table."
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile testing PG16, I observed that in PG16 there is a big performance\ndegradation in concurrent COPY into a single relation with 2 - 16\nclients in my environment. I've attached a test script that measures\nthe execution time of COPYing 5GB data in total to the single relation\nwhile changing the number of concurrent insertions, in PG16 and PG15.\nHere are the results on my environment (EC2 instance, RHEL 8.6, 128\nvCPUs, 512GB RAM):\n\n* PG15 (4b15868b69)\nPG15: nclients = 1, execution time = 14.181\nPG15: nclients = 2, execution time = 9.319\nPG15: nclients = 4, execution time = 5.872\nPG15: nclients = 8, execution time = 3.773\nPG15: nclients = 16, execution time = 3.202\nPG15: nclients = 32, execution time = 3.023\nPG15: nclients = 64, execution time = 3.829\nPG15: nclients = 128, execution time = 4.111\nPG15: nclients = 256, execution time = 4.158\n\n* PG16 (c24e9ef330)\nPG16: nclients = 1, execution time = 17.112\nPG16: nclients = 2, execution time = 14.084\nPG16: nclients = 4, execution time = 27.997\nPG16: nclients = 8, execution time = 10.554\nPG16: nclients = 16, execution time = 7.074\nPG16: nclients = 32, execution time = 4.607\nPG16: nclients = 64, execution time = 2.093\nPG16: nclients = 128, execution time = 2.141\nPG16: nclients = 256, execution time = 2.202\n\nPG16 has better scalability (more than 64 clients) but it took much\nmore time than PG15, especially at 1 - 16 clients.\n\nThe relevant commit is 00d1e02be2 \"hio: Use ExtendBufferedRelBy() to\nextend tables more efficiently\". With commit 1cbbee0338 (the previous\ncommit of 00d1e02be2), I got a better numbers, it didn't have a better\nscalability, though:\n\nPG16: nclients = 1, execution time = 17.444\nPG16: nclients = 2, execution time = 10.690\nPG16: nclients = 4, execution time = 7.010\nPG16: nclients = 8, execution time = 4.282\nPG16: nclients = 16, execution time = 3.373\nPG16: nclients = 32, execution time = 3.205\nPG16: nclients = 64, execution time = 3.705\nPG16: nclients = 128, execution time = 4.196\nPG16: nclients = 256, execution time = 4.201\n\nWhile investigating the cause, I found an interesting fact that in\nmdzeroextend if I use only either FileFallocate() or FileZero, we can\nget better numbers. For example, If I always use FileZero with the\nfollowing change:\n\n@@ -574,7 +574,7 @@ mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n * that decision should be made though? For now just use a cutoff of\n * 8, anything between 4 and 8 worked OK in some local testing.\n */\n- if (numblocks > 8)\n+ if (false)\n {\n int ret;\n\nI got:\n\nPG16: nclients = 1, execution time = 16.898\nPG16: nclients = 2, execution time = 8.740\nPG16: nclients = 4, execution time = 4.656\nPG16: nclients = 8, execution time = 2.733\nPG16: nclients = 16, execution time = 2.021\nPG16: nclients = 32, execution time = 1.693\nPG16: nclients = 64, execution time = 1.742\nPG16: nclients = 128, execution time = 2.180\nPG16: nclients = 256, execution time = 2.296\n\nAfter further investigation, the performance degradation comes from\ncalling posix_fallocate() (called via FileFallocate()) and pwritev()\n(called via FileZero) alternatively depending on how many blocks we\nextend by. And it happens only on the xfs filesystem. Does anyone\nobserve a similar performance issue with the attached benchmark\nscript?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Jul 2023 11:55:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation on concurrent COPY into a single relation in\n PG16."
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada <[email protected]> wrote:\n>\n> After further investigation, the performance degradation comes from\n> calling posix_fallocate() (called via FileFallocate()) and pwritev()\n> (called via FileZero) alternatively depending on how many blocks we\n> extend by. And it happens only on the xfs filesystem.\n\nFYI, the attached simple C program proves the fact that calling\nalternatively posix_fallocate() and pwrite() causes slow performance\non posix_fallocate():\n\n$ gcc -o test test.c\n$ time ./test test.1 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m1.305s\nuser 0m0.050s\nsys 0m1.255s\n\n$ time ./test test.2 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 1m29.222s\nuser 0m0.139s\nsys 0m3.139s\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Jul 2023 11:59:38 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
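Note on the reproducer above: the test.c attachment referenced in this message is not included in the archive. A minimal sketch of what such a reproducer could look like, inferred only from the reported output (200000 extensions split between fallocate and filewrite depending on the second argument) and from later remarks in the thread that it extends in 8 kB chunks, is given below. The block size, mode numbering, and overall structure are assumptions, not the original source.

/* Hypothetical reconstruction of the reproducer described above; not the original test.c. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NBLOCKS 200000
#define BLCKSZ  8192

int
main(int argc, char **argv)
{
	char	buf[BLCKSZ];
	int		mode;
	int		fd;
	long	nfallocate = 0;
	long	nfilewrite = 0;

	if (argc != 3)
	{
		fprintf(stderr, "usage: %s <file> <mode>\n", argv[0]);
		return 1;
	}

	/* mode 0 = write only, 1 = fallocate only, 2 = alternate (assumed) */
	mode = atoi(argv[2]);
	fd = open(argv[1], O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return 1;

	memset(buf, 0, sizeof(buf));

	for (long blkno = 0; blkno < NBLOCKS; blkno++)
	{
		off_t	offset = (off_t) blkno * BLCKSZ;

		if (mode == 1 || (mode == 2 && blkno % 2 == 0))
		{
			posix_fallocate(fd, offset, BLCKSZ);
			nfallocate++;
		}
		else
		{
			pwrite(fd, buf, BLCKSZ, offset);
			nfilewrite++;
		}
	}

	printf("total\t%ld\nfallocate\t%ld\nfilewrite\t%ld\n",
		   nfallocate + nfilewrite, nfallocate, nfilewrite);
	close(fd);
	return 0;
}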
{
"msg_contents": "On 03/07/2023 05:59, Masahiko Sawada wrote:\n> On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada <[email protected]> wrote:\n>>\n>> After further investigation, the performance degradation comes from\n>> calling posix_fallocate() (called via FileFallocate()) and pwritev()\n>> (called via FileZero) alternatively depending on how many blocks we\n>> extend by. And it happens only on the xfs filesystem.\n> \n> FYI, the attached simple C program proves the fact that calling\n> alternatively posix_fallocate() and pwrite() causes slow performance\n> on posix_fallocate():\n> \n> $ gcc -o test test.c\n> $ time ./test test.1 1\n> total 200000\n> fallocate 200000\n> filewrite 0\n> \n> real 0m1.305s\n> user 0m0.050s\n> sys 0m1.255s\n> \n> $ time ./test test.2 2\n> total 200000\n> fallocate 100000\n> filewrite 100000\n> \n> real 1m29.222s\n> user 0m0.139s\n> sys 0m3.139s\n\nThis must be highly dependent on the underlying OS and filesystem. I'm \nnot seeing that effect on my laptop:\n\n/data$ time /tmp/test test.0 0\ntotal\t200000\nfallocate\t0\nfilewrite\t200000\n\nreal\t0m1.856s\nuser\t0m0.140s\nsys\t0m1.688s\n/data$ time /tmp/test test.1 1\ntotal\t200000\nfallocate\t200000\nfilewrite\t0\n\nreal\t0m1.335s\nuser\t0m0.156s\nsys\t0m1.179s\n/data$ time /tmp/test test.2 2\ntotal\t200000\nfallocate\t100000\nfilewrite\t100000\n\nreal\t0m2.159s\nuser\t0m0.165s\nsys\t0m1.880s\n\n/data$ uname -a\nLinux heikkilaptop 6.0.0-6-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.0.12-1 \n(2022-12-09) x86_64 GNU/Linux\n\n/data is an nvme drive with ext4 filesystem.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 10:36:49 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 4:36 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 03/07/2023 05:59, Masahiko Sawada wrote:\n> > On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada <[email protected]> wrote:\n> >>\n> >> After further investigation, the performance degradation comes from\n> >> calling posix_fallocate() (called via FileFallocate()) and pwritev()\n> >> (called via FileZero) alternatively depending on how many blocks we\n> >> extend by. And it happens only on the xfs filesystem.\n> >\n> > FYI, the attached simple C program proves the fact that calling\n> > alternatively posix_fallocate() and pwrite() causes slow performance\n> > on posix_fallocate():\n> >\n> > $ gcc -o test test.c\n> > $ time ./test test.1 1\n> > total 200000\n> > fallocate 200000\n> > filewrite 0\n> >\n> > real 0m1.305s\n> > user 0m0.050s\n> > sys 0m1.255s\n> >\n> > $ time ./test test.2 2\n> > total 200000\n> > fallocate 100000\n> > filewrite 100000\n> >\n> > real 1m29.222s\n> > user 0m0.139s\n> > sys 0m3.139s\n>\n> This must be highly dependent on the underlying OS and filesystem.\n\nRight. The above were the result where I created the file on the xfs\nfilesystem. The kernel version and the xfs filesystem version are:\n\n% uname -rms\nLinux 4.18.0-372.9.1.el8.x86_64 x86_64\n\n% sudo xfs_db -r /dev/nvme4n1p2\nxfs_db> version\nversionnum [0xb4b5+0x18a] =\nV5,NLINK,DIRV2,ATTR,ALIGN,LOGV2,EXTFLG,MOREBITS,ATTR2,LAZYSBCOUNT,PROJID32BIT,CRC,FTYPE,FINOBT,SPARSE_INODES,REFLINK\n\nAs far as I tested, it happens only on the xfs filesystem (at least\nthe above version) and doesn't happen on ext4 and ext3 filesystems.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:54:16 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi Masahiko,\n\nOut of curiosity I've tried and it is reproducible as you have stated : XFS\n@ 4.18.0-425.10.1.el8_7.x86_64:\n\n[root@rockyora ~]# time ./test test.1 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m5.868s\nuser 0m0.035s\nsys 0m5.716s\n[root@rockyora ~]# time ./test test.2 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 0m25.858s\nuser 0m0.108s\nsys 0m3.596s\n[root@rockyora ~]# time ./test test.3 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 0m25.927s\nuser 0m0.091s\nsys 0m3.621s\n[root@rockyora ~]# time ./test test.4 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m3.044s\nuser 0m0.043s\nsys 0m2.934s\n\nAccording to iostat and blktrace -d /dev/sda -o - | blkparse -i - output ,\nthe XFS issues sync writes while ext4 does not, xfs looks like constant\nloop of sync writes (D) by kworker/2:1H-kblockd:\n[..]\n 8,0 2 34172 24.115364875 312 D WS 44624928 + 16\n[kworker/2:1H]\n 8,0 2 34173 24.115482679 0 C WS 44624928 + 16 [0]\n 8,0 2 34174 24.115548251 6501 A WS 42525760 + 16 <- (253,0)\n34225216\n 8,0 2 34175 24.115548660 6501 A WS 44624960 + 16 <- (8,2)\n42525760\n 8,0 2 34176 24.115549111 6501 Q WS 44624960 + 16 [test]\n 8,0 2 34177 24.115551351 6501 G WS 44624960 + 16 [test]\n 8,0 2 34178 24.115552111 6501 I WS 44624960 + 16 [test]\n 8,0 2 34179 24.115559713 312 D WS 44624960 + 16\n[kworker/2:1H]\n 8,0 2 34180 24.115677217 0 C WS 44624960 + 16 [0]\n 8,0 2 34181 24.115743150 6501 A WS 42525792 + 16 <- (253,0)\n34225248\n 8,0 2 34182 24.115743502 6501 A WS 44624992 + 16 <- (8,2)\n42525792\n 8,0 2 34183 24.115743949 6501 Q WS 44624992 + 16 [test]\n 8,0 2 34184 24.115746175 6501 G WS 44624992 + 16 [test]\n 8,0 2 34185 24.115746918 6501 I WS 44624992 + 16 [test]\n 8,0 2 34186 24.115754492 312 D WS 44624992 + 16\n[kworker/2:1H]\n\nSo it looks like you are onto something.\n\nRegards,\n-J.\n\nHi Masahiko,Out of curiosity I've tried and it is reproducible as you have stated : XFS @ 4.18.0-425.10.1.el8_7.x86_64:[root@rockyora ~]# time ./test test.1 1total 200000fallocate 200000filewrite 0real 0m5.868suser 0m0.035ssys 0m5.716s[root@rockyora ~]# time ./test test.2 2total 200000fallocate 100000filewrite 100000real 0m25.858suser 0m0.108ssys 0m3.596s[root@rockyora ~]# time ./test test.3 2total 200000fallocate 100000filewrite 100000real 0m25.927suser 0m0.091ssys 0m3.621s[root@rockyora ~]# time ./test test.4 1total 200000fallocate 200000filewrite 0real 0m3.044suser 0m0.043ssys 0m2.934sAccording to iostat and blktrace -d /dev/sda -o - | blkparse -i - output , the XFS issues sync writes while ext4 does not, xfs looks like constant loop of sync writes (D) by kworker/2:1H-kblockd:[..] 
8,0 2 34172 24.115364875 312 D WS 44624928 + 16 [kworker/2:1H] 8,0 2 34173 24.115482679 0 C WS 44624928 + 16 [0] 8,0 2 34174 24.115548251 6501 A WS 42525760 + 16 <- (253,0) 34225216 8,0 2 34175 24.115548660 6501 A WS 44624960 + 16 <- (8,2) 42525760 8,0 2 34176 24.115549111 6501 Q WS 44624960 + 16 [test] 8,0 2 34177 24.115551351 6501 G WS 44624960 + 16 [test] 8,0 2 34178 24.115552111 6501 I WS 44624960 + 16 [test] 8,0 2 34179 24.115559713 312 D WS 44624960 + 16 [kworker/2:1H] 8,0 2 34180 24.115677217 0 C WS 44624960 + 16 [0] 8,0 2 34181 24.115743150 6501 A WS 42525792 + 16 <- (253,0) 34225248 8,0 2 34182 24.115743502 6501 A WS 44624992 + 16 <- (8,2) 42525792 8,0 2 34183 24.115743949 6501 Q WS 44624992 + 16 [test] 8,0 2 34184 24.115746175 6501 G WS 44624992 + 16 [test] 8,0 2 34185 24.115746918 6501 I WS 44624992 + 16 [test] 8,0 2 34186 24.115754492 312 D WS 44624992 + 16 [kworker/2:1H]So it looks like you are onto something. Regards,-J.",
"msg_date": "Mon, 3 Jul 2023 11:53:56 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hello,\n\nOn 2023-Jul-03, Masahiko Sawada wrote:\n\n> While testing PG16, I observed that in PG16 there is a big performance\n> degradation in concurrent COPY into a single relation with 2 - 16\n> clients in my environment. I've attached a test script that measures\n> the execution time of COPYing 5GB data in total to the single relation\n> while changing the number of concurrent insertions, in PG16 and PG15.\n\nThis item came up in the RMT meeting. Andres, I think this item belongs\nto you, because of commit 00d1e02be2.\n\nThe regression seems serious enough at low client counts:\n\n> * PG15 (4b15868b69)\n> PG15: nclients = 1, execution time = 14.181\n> PG15: nclients = 2, execution time = 9.319\n> PG15: nclients = 4, execution time = 5.872\n> PG15: nclients = 8, execution time = 3.773\n> PG15: nclients = 16, execution time = 3.202\n\n> * PG16 (c24e9ef330)\n> PG16: nclients = 1, execution time = 17.112\n> PG16: nclients = 2, execution time = 14.084\n> PG16: nclients = 4, execution time = 27.997\n> PG16: nclients = 8, execution time = 10.554\n> PG16: nclients = 16, execution time = 7.074\n\nSo the fact that the speed has clearly gone up at larger client counts\nis not an excuse for not getting it fixed, XFS-specificity\nnotwithstanding.\n\n> The relevant commit is 00d1e02be2 \"hio: Use ExtendBufferedRelBy() to\n> extend tables more efficiently\". With commit 1cbbee0338 (the previous\n> commit of 00d1e02be2), I got a better numbers, it didn't have a better\n> scalability, though:\n> \n> PG16: nclients = 1, execution time = 17.444\n> PG16: nclients = 2, execution time = 10.690\n> PG16: nclients = 4, execution time = 7.010\n> PG16: nclients = 8, execution time = 4.282\n> PG16: nclients = 16, execution time = 3.373\n\nWell, these numbers are better, but they still look worse than PG15.\nI suppose there are other commits that share blame.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:25:41 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-10 15:25:41 +0200, Alvaro Herrera wrote:\n> On 2023-Jul-03, Masahiko Sawada wrote:\n> \n> > While testing PG16, I observed that in PG16 there is a big performance\n> > degradation in concurrent COPY into a single relation with 2 - 16\n> > clients in my environment. I've attached a test script that measures\n> > the execution time of COPYing 5GB data in total to the single relation\n> > while changing the number of concurrent insertions, in PG16 and PG15.\n> \n> This item came up in the RMT meeting. Andres, I think this item belongs\n> to you, because of commit 00d1e02be2.\n\nI'll take a look - I wasn't even aware of this thread until now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 08:28:25 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:\n> While testing PG16, I observed that in PG16 there is a big performance\n> degradation in concurrent COPY into a single relation with 2 - 16\n> clients in my environment. I've attached a test script that measures\n> the execution time of COPYing 5GB data in total to the single relation\n> while changing the number of concurrent insertions, in PG16 and PG15.\n> Here are the results on my environment (EC2 instance, RHEL 8.6, 128\n> vCPUs, 512GB RAM):\n\nGah, RHEL with its frankenkernels, the bane of my existance.\n\nFWIW, I had extensively tested this with XFS, just with a newer kernel. Have\nyou tested this on RHEL9 as well by any chance?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 08:34:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-03 11:59:38 +0900, Masahiko Sawada wrote:\n> On Mon, Jul 3, 2023 at 11:55 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > After further investigation, the performance degradation comes from\n> > calling posix_fallocate() (called via FileFallocate()) and pwritev()\n> > (called via FileZero) alternatively depending on how many blocks we\n> > extend by. And it happens only on the xfs filesystem.\n>\n> FYI, the attached simple C program proves the fact that calling\n> alternatively posix_fallocate() and pwrite() causes slow performance\n> on posix_fallocate():\n>\n> $ gcc -o test test.c\n> $ time ./test test.1 1\n> total 200000\n> fallocate 200000\n> filewrite 0\n>\n> real 0m1.305s\n> user 0m0.050s\n> sys 0m1.255s\n>\n> $ time ./test test.2 2\n> total 200000\n> fallocate 100000\n> filewrite 100000\n>\n> real 1m29.222s\n> user 0m0.139s\n> sys 0m3.139s\n\nOn an xfs filesystem, with a very recent kernel:\n\ntime /tmp/msw_test /srv/dev/fio/msw 0\ntotal\t200000\nfallocate\t0\nfilewrite\t200000\n\nreal\t0m0.456s\nuser\t0m0.017s\nsys\t0m0.439s\n\n\ntime /tmp/msw_test /srv/dev/fio/msw 1\ntotal\t200000\nfallocate\t200000\nfilewrite\t0\n\nreal\t0m0.141s\nuser\t0m0.010s\nsys\t0m0.131s\n\n\ntime /tmp/msw_test /srv/dev/fio/msw 2\ntotal\t200000\nfallocate\t100000\nfilewrite\t100000\n\nreal\t0m0.297s\nuser\t0m0.017s\nsys\t0m0.280s\n\n\nSo I don't think I can reproduce your problem on that system...\n\nI also tried adding a fdatasync() into the loop, but that just made things\nuniformly slow.\n\n\nI guess I'll try to dig up whether this is a problem in older upstream\nkernels, or whether it's been introduced in RHEL.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 08:44:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:\n> Out of curiosity I've tried and it is reproducible as you have stated : XFS\n> @ 4.18.0-425.10.1.el8_7.x86_64:\n>...\n> According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output ,\n> the XFS issues sync writes while ext4 does not, xfs looks like constant\n> loop of sync writes (D) by kworker/2:1H-kblockd:\n\nThat clearly won't go well. It's not reproducible on newer systems,\nunfortunately :(. Or well, fortunately maybe.\n\n\nI wonder if a trick to avoid this could be to memorialize the fact that we\nbulk extended before and extend by that much going forward? That'd avoid the\nswapping back and forth.\n\n\nOne thing that confuses me is that Sawada-san observed a regression at a\nsingle client - yet from what I can tell, there's actually not a single\nfallocate() being generated in that case, because the table is so narrow that\nwe never end up extending by a sufficient number of blocks in\nRelationAddBlocks() to reach that path. Yet there's a regression at 1 client.\n\nI don't yet have a RHEL8 system at hand, could either of you send the result\nof a\n strace -c -p $pid_of_backend_doing_copy\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jul 2023 09:24:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 12:34 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:\n> > While testing PG16, I observed that in PG16 there is a big performance\n> > degradation in concurrent COPY into a single relation with 2 - 16\n> > clients in my environment. I've attached a test script that measures\n> > the execution time of COPYing 5GB data in total to the single relation\n> > while changing the number of concurrent insertions, in PG16 and PG15.\n> > Here are the results on my environment (EC2 instance, RHEL 8.6, 128\n> > vCPUs, 512GB RAM):\n>\n> Gah, RHEL with its frankenkernels, the bane of my existance.\n>\n> FWIW, I had extensively tested this with XFS, just with a newer kernel. Have\n> you tested this on RHEL9 as well by any chance?\n\nI've tested this on RHEL9 (m5.24xlarge; 96vCPUs, 384GB RAM), and it\nseems to be reproducible on RHEL9 too.\n\n$ uname -rms\nLinux 5.14.0-284.11.1.el9_2.x86_64 x86_64\n$ cat /etc/redhat-release\nRed Hat Enterprise Linux release 9.2 (Plow)\n\nPG15: nclients = 1, execution time = 22.193\nPG15: nclients = 2, execution time = 12.430\nPG15: nclients = 4, execution time = 8.677\nPG15: nclients = 8, execution time = 6.144\nPG15: nclients = 16, execution time = 5.938\nPG15: nclients = 32, execution time = 5.775\nPG15: nclients = 64, execution time = 5.928\nPG15: nclients = 128, execution time = 6.346\nPG15: nclients = 256, execution time = 6.641\n\nPG16: nclients = 1, execution time = 24.601\nPG16: nclients = 2, execution time = 27.845\nPG16: nclients = 4, execution time = 40.575\nPG16: nclients = 8, execution time = 24.379\nPG16: nclients = 16, execution time = 15.835\nPG16: nclients = 32, execution time = 8.370\nPG16: nclients = 64, execution time = 4.042\nPG16: nclients = 128, execution time = 2.956\nPG16: nclients = 256, execution time = 2.591\n\nTests with test.c program:\n\n$ rm -f test.data; time ./test test.data 0\ntotal 200000\nfallocate 0\nfilewrite 200000\n\nreal 0m0.709s\nuser 0m0.057s\nsys 0m0.649s\n\n$ rm -f test.data; time ./test test.data 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m0.853s\nuser 0m0.058s\nsys 0m0.791s\n\n$ rm -f test.data; time ./test test.data 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 2m10.002s\nuser 0m0.366s\nsys 0m6.649s\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 11:02:51 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 1:24 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:\n> > Out of curiosity I've tried and it is reproducible as you have stated : XFS\n> > @ 4.18.0-425.10.1.el8_7.x86_64:\n> >...\n> > According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output ,\n> > the XFS issues sync writes while ext4 does not, xfs looks like constant\n> > loop of sync writes (D) by kworker/2:1H-kblockd:\n>\n> That clearly won't go well. It's not reproducible on newer systems,\n> unfortunately :(. Or well, fortunately maybe.\n>\n>\n> I wonder if a trick to avoid this could be to memorialize the fact that we\n> bulk extended before and extend by that much going forward? That'd avoid the\n> swapping back and forth.\n>\n>\n> One thing that confuses me is that Sawada-san observed a regression at a\n> single client - yet from what I can tell, there's actually not a single\n> fallocate() being generated in that case, because the table is so narrow that\n> we never end up extending by a sufficient number of blocks in\n> RelationAddBlocks() to reach that path. Yet there's a regression at 1 client.\n>\n> I don't yet have a RHEL8 system at hand, could either of you send the result\n> of a\n> strace -c -p $pid_of_backend_doing_copy\n>\n\nHere are the results:\n\n* PG16: nclients = 1, execution time = 23.758\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 53.18 0.308675 0 358898 pwrite64\n 33.92 0.196900 2 81202 pwritev\n 7.78 0.045148 0 81378 lseek\n 5.06 0.029371 2 11141 read\n 0.04 0.000250 2 91 openat\n 0.02 0.000094 1 89 close\n 0.00 0.000000 0 1 munmap\n 0.00 0.000000 0 84 brk\n 0.00 0.000000 0 1 sendto\n 0.00 0.000000 0 2 1 recvfrom\n 0.00 0.000000 0 2 kill\n 0.00 0.000000 0 1 futex\n 0.00 0.000000 0 1 epoll_wait\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.580438 1 532891 1 total\n\n* PG16: nclients = 2, execution time = 18.045\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 54.19 0.218721 1 187803 pwrite64\n 29.24 0.118002 2 40242 pwritev\n 6.23 0.025128 0 41239 lseek\n 5.03 0.020285 2 9133 read\n 2.04 0.008229 9 855 fallocate\n 1.28 0.005151 0 5598 1000 futex\n 1.12 0.004516 1 3965 kill\n 0.78 0.003141 1 3058 1 epoll_wait\n 0.06 0.000224 2 100 openat\n 0.03 0.000136 1 98 close\n 0.01 0.000050 0 84 brk\n 0.00 0.000013 0 22 setitimer\n 0.00 0.000006 0 15 1 rt_sigreturn\n 0.00 0.000002 2 1 munmap\n 0.00 0.000002 2 1 sendto\n 0.00 0.000002 0 3 2 recvfrom\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.403608 1 292217 1004 total\n\n* PG16: nclients = 4, execution time = 18.807\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 64.61 0.240171 2 93868 pwrite64\n 15.11 0.056173 4 11337 pwritev\n 7.29 0.027100 7 3465 fallocate\n 4.05 0.015048 2 5179 read\n 3.55 0.013188 0 14941 lseek\n 2.65 0.009858 1 8675 2527 futex\n 1.76 0.006536 1 4190 kill\n 0.88 0.003268 1 2199 epoll_wait\n 0.06 0.000213 2 101 openat\n 0.03 0.000130 1 99 close\n 0.01 0.000031 1 18 rt_sigreturn\n 0.01 0.000027 1 17 setitimer\n 0.00 0.000000 0 1 munmap\n 0.00 0.000000 0 84 brk\n 0.00 0.000000 0 1 sendto\n 0.00 0.000000 0 1 recvfrom\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.371743 2 144176 2527 total\n\n* PG16: nclients = 8, execution time = 
11.982\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 73.16 0.180095 3 47895 pwrite64\n 8.61 0.021194 5 4199 pwritev\n 5.93 0.014598 6 2199 fallocate\n 3.42 0.008425 1 6723 2206 futex\n 3.18 0.007824 2 3068 read\n 2.44 0.005995 0 6510 lseek\n 1.82 0.004475 1 2665 kill\n 1.27 0.003118 1 1758 2 epoll_wait\n 0.10 0.000239 2 95 openat\n 0.06 0.000141 1 93 close\n 0.01 0.000034 2 16 setitimer\n 0.01 0.000020 2 10 2 rt_sigreturn\n 0.00 0.000000 0 1 munmap\n 0.00 0.000000 0 84 brk\n 0.00 0.000000 0 1 sendto\n 0.00 0.000000 0 2 1 recvfrom\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.246158 3 75319 2211 total\n\n* PG16: nclients = 16, execution time = 7.507\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 79.45 0.078310 5 14870 pwrite64\n 5.52 0.005440 5 973 pwritev\n 4.51 0.004443 6 640 fallocate\n 3.69 0.003640 1 2884 1065 futex\n 2.23 0.002200 2 866 read\n 1.80 0.001775 1 1685 lseek\n 1.44 0.001421 1 782 kill\n 1.08 0.001064 2 477 1 epoll_wait\n 0.13 0.000129 2 57 openat\n 0.08 0.000078 1 56 close\n 0.06 0.000055 0 84 brk\n 0.00 0.000003 3 1 munmap\n 0.00 0.000003 3 1 sendto\n 0.00 0.000003 1 2 1 recvfrom\n 0.00 0.000002 0 5 setitimer\n 0.00 0.000001 0 3 1 rt_sigreturn\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.098567 4 23386 1068 total\n\n* PG16: nclients = 32, execution time = 4.644\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 88.90 0.147208 12 11571 pwrite64\n 3.11 0.005151 1 2643 943 futex\n 2.61 0.004314 4 1039 pwritev\n 1.74 0.002879 8 327 fallocate\n 1.21 0.001998 3 624 read\n 0.90 0.001498 1 1439 lseek\n 0.66 0.001090 3 358 1 epoll_wait\n 0.63 0.001049 2 426 kill\n 0.12 0.000206 3 65 openat\n 0.07 0.000118 1 64 close\n 0.03 0.000045 0 84 brk\n 0.01 0.000011 11 1 munmap\n 0.00 0.000008 8 1 sendto\n 0.00 0.000007 3 2 1 recvfrom\n 0.00 0.000002 0 3 1 rt_sigreturn\n 0.00 0.000001 0 3 setitimer\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.165585 8 18650 946 total\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 11:32:59 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 6:24 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:\n> > Out of curiosity I've tried and it is reproducible as you have stated : XFS\n> > @ 4.18.0-425.10.1.el8_7.x86_64:\n> >...\n> > According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output ,\n> > the XFS issues sync writes while ext4 does not, xfs looks like constant\n> > loop of sync writes (D) by kworker/2:1H-kblockd:\n>\n> That clearly won't go well. It's not reproducible on newer systems,\n> unfortunately :(. Or well, fortunately maybe.\n>\n>\n> I wonder if a trick to avoid this could be to memorialize the fact that we\n> bulk extended before and extend by that much going forward? That'd avoid the\n> swapping back and forth.\n\nI haven't seen this thread [1] \"Question on slow fallocate\", from XFS\nmailing list being mentioned here (it was started by Masahiko), but I\ndo feel it contains very important hints even challenging the whole\nidea of zeroing out files (or posix_fallocate()). Please especially\nsee Dave's reply. He also argues that posix_fallocate() !=\nfallocate(). What's interesting is that it's by design and newer\nkernel versions should not prevent such behaviour, see my testing\nresult below.\n\nAll I can add is that this those kernel versions (4.18.0) seem to very\npopular across customers (RHEL, Rocky) right now and that I've tested\non most recent available one (4.18.0-477.15.1.el8_8.x86_64) using\nMasahiko test.c and still got 6-7x slower time when using XFS on that\nkernel. After installing kernel-ml (6.4.2) the test.c result seems to\nbe the same (note it it occurs only when 1st allocating space, but of\ncourse it doesnt if the same file is rewritten/\"reallocated\"):\n\n[root@rockyora ~]# uname -r\n6.4.2-1.el8.elrepo.x86_64\n[root@rockyora ~]# time ./test test.0 0\ntotal 200000\nfallocate 0\nfilewrite 200000\n\nreal 0m0.405s\nuser 0m0.006s\nsys 0m0.391s\n[root@rockyora ~]# time ./test test.0 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m0.137s\nuser 0m0.005s\nsys 0m0.132s\n[root@rockyora ~]# time ./test test.1 1\ntotal 200000\nfallocate 200000\nfilewrite 0\n\nreal 0m0.968s\nuser 0m0.020s\nsys 0m0.928s\n[root@rockyora ~]# time ./test test.2 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 0m6.059s\nuser 0m0.000s\nsys 0m0.788s\n[root@rockyora ~]# time ./test test.2 2\ntotal 200000\nfallocate 100000\nfilewrite 100000\n\nreal 0m0.598s\nuser 0m0.003s\nsys 0m0.225s\n[root@rockyora ~]#\n\niostat -x reports during first \"time ./test test.2 2\" (as you can see\nw_awiat is not that high but it accumulates):\nDevice r/s w/s rMB/s wMB/s rrqm/s wrqm/s\n%rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util\nsda 0.00 15394.00 0.00 122.02 0.00 13.00\n0.00 0.08 0.00 0.05 0.75 0.00 8.12 0.06 100.00\ndm-0 0.00 15407.00 0.00 122.02 0.00 0.00\n0.00 0.00 0.00 0.06 0.98 0.00 8.11 0.06 100.00\n\nSo maybe that's just a hint that you should try on slower storage\ninstead? (I think that on NVMe this issue would be hardly noticeable\ndue to low IO latency, not like here)\n\n-J.\n\n[1] - https://www.spinics.net/lists/linux-xfs/msg73035.html\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:09:43 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-11 09:09:43 +0200, Jakub Wartak wrote:\n> On Mon, Jul 10, 2023 at 6:24 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-07-03 11:53:56 +0200, Jakub Wartak wrote:\n> > > Out of curiosity I've tried and it is reproducible as you have stated : XFS\n> > > @ 4.18.0-425.10.1.el8_7.x86_64:\n> > >...\n> > > According to iostat and blktrace -d /dev/sda -o - | blkparse -i - output ,\n> > > the XFS issues sync writes while ext4 does not, xfs looks like constant\n> > > loop of sync writes (D) by kworker/2:1H-kblockd:\n> >\n> > That clearly won't go well. It's not reproducible on newer systems,\n> > unfortunately :(. Or well, fortunately maybe.\n> >\n> >\n> > I wonder if a trick to avoid this could be to memorialize the fact that we\n> > bulk extended before and extend by that much going forward? That'd avoid the\n> > swapping back and forth.\n>\n> I haven't seen this thread [1] \"Question on slow fallocate\", from XFS\n> mailing list being mentioned here (it was started by Masahiko), but I\n> do feel it contains very important hints even challenging the whole\n> idea of zeroing out files (or posix_fallocate()). Please especially\n> see Dave's reply.\n\nI think that's just due to the reproducer being a bit too minimal and the\nactual problem being addressed not being explained.\n\n\n> He also argues that posix_fallocate() != fallocate(). What's interesting is\n> that it's by design and newer kernel versions should not prevent such\n> behaviour, see my testing result below.\n\nI think the problem there was that I was not targetting a different file\nbetween the different runs, somehow assuming the test program would be taking\ncare of that.\n\nI don't think the test program actually tests things in a particularly useful\nway - it does fallocate()s in 8k chunks - which postgres never does.\n\n\n\n> All I can add is that this those kernel versions (4.18.0) seem to very\n> popular across customers (RHEL, Rocky) right now and that I've tested\n> on most recent available one (4.18.0-477.15.1.el8_8.x86_64) using\n> Masahiko test.c and still got 6-7x slower time when using XFS on that\n> kernel. After installing kernel-ml (6.4.2) the test.c result seems to\n> be the same (note it it occurs only when 1st allocating space, but of\n> course it doesnt if the same file is rewritten/\"reallocated\"):\n\ntest.c really doesn't reproduce postgres behaviour in any meaningful way,\nusing it as a benchmark is a bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jul 2023 08:47:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:\n> While testing PG16, I observed that in PG16 there is a big performance\n> degradation in concurrent COPY into a single relation with 2 - 16\n> clients in my environment. I've attached a test script that measures\n> the execution time of COPYing 5GB data in total to the single relation\n> while changing the number of concurrent insertions, in PG16 and PG15.\n> Here are the results on my environment (EC2 instance, RHEL 8.6, 128\n> vCPUs, 512GB RAM):\n>\n> * PG15 (4b15868b69)\n> PG15: nclients = 1, execution time = 14.181\n>\n> * PG16 (c24e9ef330)\n> PG16: nclients = 1, execution time = 17.112\n\n> The relevant commit is 00d1e02be2 \"hio: Use ExtendBufferedRelBy() to\n> extend tables more efficiently\". With commit 1cbbee0338 (the previous\n> commit of 00d1e02be2), I got a better numbers, it didn't have a better\n> scalability, though:\n>\n> PG16: nclients = 1, execution time = 17.444\n\nI think the single client case is indicative of an independent regression, or\nrather regressions - it can't have anything to do with the fallocate() issue\nand reproduces before that too in your numbers.\n\n1) COPY got slower, due to:\n9f8377f7a27 Add a DEFAULT option to COPY FROM\n\nThis added a new palloc()/free() to every call to NextCopyFrom(). It's not at\nall clear to me why this needs to happen in NextCopyFrom(), particularly\nbecause it's already stored in CopyFromState?\n\n\n2) pg_strtoint32_safe() got substantially slower, mainly due\n to\nfaff8f8e47f Allow underscores in integer and numeric constants.\n6fcda9aba83 Non-decimal integer literals\n\npinned to one cpu, turbo mode disabled, I get the following best-of-three times for\n copy test from '/tmp/tmp_4.data'\n(too impatient to use the larger file every time)\n\n15:\n6281.107 ms\n\nHEAD:\n7000.469 ms\n\nbacking out 9f8377f7a27:\n6433.516 ms\n\nalso backing out faff8f8e47f, 6fcda9aba83:\n6235.453 ms\n\n\nI suspect 1) can relatively easily be fixed properly. But 2) seems much\nharder. The changes increased the number of branches substantially, that's\ngonna cost in something as (previously) tight as pg_strtoint32().\n\n\n\nFor higher concurrency numbers, I now was able to reproduce the regression, to\na smaller degree. Much smaller after fixing the above. The reason we run into\nthe issue here is basically that the rows in the test are very narrow and reach\n\n#define MAX_BUFFERED_TUPLES\t\t1000\n\nat a small number of pages, so we go back and forth between extending with\nfallocate() and not.\n\nI'm *not* saying that that is the solution, but after changing that to 5000,\nthe numbers look a lot better (with the other regressions \"worked around\"):\n\n(this is again with turboboost disabled, to get more reproducible numbers)\n\nclients\t\t\t\t1 2 4 8 16 32\n\n15,buffered=1000 25725 13211 9232 5639 4862 4700\n15,buffered=5000 26107 14550 8644 6050 4943 4766\nHEAD+fixes,buffered=1000 25875 14505 8200 4900 3565 3433\nHEAD+fixes,buffered=5000 25830 12975 6527 3594 2739 2642\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CAD21AoAEwHTLYhuQ6PaBRPXKWN-CgW9iw%2B4hm%3D2EOFXbJQ3tOg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 11:51:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 3:52 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:\n> > While testing PG16, I observed that in PG16 there is a big performance\n> > degradation in concurrent COPY into a single relation with 2 - 16\n> > clients in my environment. I've attached a test script that measures\n> > the execution time of COPYing 5GB data in total to the single relation\n> > while changing the number of concurrent insertions, in PG16 and PG15.\n> > Here are the results on my environment (EC2 instance, RHEL 8.6, 128\n> > vCPUs, 512GB RAM):\n> >\n> > * PG15 (4b15868b69)\n> > PG15: nclients = 1, execution time = 14.181\n> >\n> > * PG16 (c24e9ef330)\n> > PG16: nclients = 1, execution time = 17.112\n>\n> > The relevant commit is 00d1e02be2 \"hio: Use ExtendBufferedRelBy() to\n> > extend tables more efficiently\". With commit 1cbbee0338 (the previous\n> > commit of 00d1e02be2), I got a better numbers, it didn't have a better\n> > scalability, though:\n> >\n> > PG16: nclients = 1, execution time = 17.444\n>\n> I think the single client case is indicative of an independent regression, or\n> rather regressions - it can't have anything to do with the fallocate() issue\n> and reproduces before that too in your numbers.\n\nRight.\n\n>\n> 1) COPY got slower, due to:\n> 9f8377f7a27 Add a DEFAULT option to COPY FROM\n>\n> This added a new palloc()/free() to every call to NextCopyFrom(). It's not at\n> all clear to me why this needs to happen in NextCopyFrom(), particularly\n> because it's already stored in CopyFromState?\n\nYeah, it seems to me that we can palloc the bool array once and use it\nfor the entire COPY FROM. With the attached small patch, the\nperformance becomes much better:\n\n15:\n14.70500 sec\n\n16:\n17.42900 sec\n\n16 w/ patch:\n14.85600 sec\n\n>\n>\n> 2) pg_strtoint32_safe() got substantially slower, mainly due\n> to\n> faff8f8e47f Allow underscores in integer and numeric constants.\n> 6fcda9aba83 Non-decimal integer literals\n\nAgreed.\n\n>\n> pinned to one cpu, turbo mode disabled, I get the following best-of-three times for\n> copy test from '/tmp/tmp_4.data'\n> (too impatient to use the larger file every time)\n>\n> 15:\n> 6281.107 ms\n>\n> HEAD:\n> 7000.469 ms\n>\n> backing out 9f8377f7a27:\n> 6433.516 ms\n>\n> also backing out faff8f8e47f, 6fcda9aba83:\n> 6235.453 ms\n>\n>\n> I suspect 1) can relatively easily be fixed properly. But 2) seems much\n> harder. The changes increased the number of branches substantially, that's\n> gonna cost in something as (previously) tight as pg_strtoint32().\n\nI'll look at how to fix 2).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 12 Jul 2023 17:40:20 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
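Note on the fix described above: fix_COPY_DEFAULT.patch is an attachment to the message and is not reproduced in this archive. The idea it relies on — allocating the per-attribute defaults array once for the whole COPY instead of palloc()/pfree() on every NextCopyFrom() call — is the usual hoist-the-allocation-out-of-the-hot-loop pattern. A standalone illustration of that pattern follows; the struct and function names are placeholders, not the actual PostgreSQL code or the patch.

/* Standalone sketch of the allocate-once idea; not the actual patch. */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct RowState
{
	int			natts;
	bool	   *defaults;		/* allocated once, reused for every row */
} RowState;

static void
begin_copy(RowState *state, int natts)
{
	state->natts = natts;
	state->defaults = calloc(natts, sizeof(bool));
}

static void
process_row(RowState *state)
{
	/* reset the array instead of re-allocating it on every call */
	memset(state->defaults, 0, state->natts * sizeof(bool));

	/* ... per-row work that may set state->defaults[i] = true ... */
}

static void
end_copy(RowState *state)
{
	free(state->defaults);
}

int
main(void)
{
	RowState	state;

	begin_copy(&state, 16);
	for (long row = 0; row < 1000000; row++)
		process_row(&state);
	end_copy(&state);
	return 0;
}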
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:40 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 3:52 AM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-07-03 11:55:13 +0900, Masahiko Sawada wrote:\n> > > While testing PG16, I observed that in PG16 there is a big performance\n> > > degradation in concurrent COPY into a single relation with 2 - 16\n> > > clients in my environment. I've attached a test script that measures\n> > > the execution time of COPYing 5GB data in total to the single relation\n> > > while changing the number of concurrent insertions, in PG16 and PG15.\n> > > Here are the results on my environment (EC2 instance, RHEL 8.6, 128\n> > > vCPUs, 512GB RAM):\n> > >\n> > > * PG15 (4b15868b69)\n> > > PG15: nclients = 1, execution time = 14.181\n> > >\n> > > * PG16 (c24e9ef330)\n> > > PG16: nclients = 1, execution time = 17.112\n> >\n> > > The relevant commit is 00d1e02be2 \"hio: Use ExtendBufferedRelBy() to\n> > > extend tables more efficiently\". With commit 1cbbee0338 (the previous\n> > > commit of 00d1e02be2), I got a better numbers, it didn't have a better\n> > > scalability, though:\n> > >\n> > > PG16: nclients = 1, execution time = 17.444\n> >\n> > I think the single client case is indicative of an independent regression, or\n> > rather regressions - it can't have anything to do with the fallocate() issue\n> > and reproduces before that too in your numbers.\n>\n> Right.\n>\n> >\n> > 1) COPY got slower, due to:\n> > 9f8377f7a27 Add a DEFAULT option to COPY FROM\n> >\n> > This added a new palloc()/free() to every call to NextCopyFrom(). It's not at\n> > all clear to me why this needs to happen in NextCopyFrom(), particularly\n> > because it's already stored in CopyFromState?\n>\n> Yeah, it seems to me that we can palloc the bool array once and use it\n> for the entire COPY FROM. With the attached small patch, the\n> performance becomes much better:\n>\n> 15:\n> 14.70500 sec\n>\n> 16:\n> 17.42900 sec\n>\n> 16 w/ patch:\n> 14.85600 sec\n>\n> >\n> >\n> > 2) pg_strtoint32_safe() got substantially slower, mainly due\n> > to\n> > faff8f8e47f Allow underscores in integer and numeric constants.\n> > 6fcda9aba83 Non-decimal integer literals\n>\n> Agreed.\n>\n> >\n> > pinned to one cpu, turbo mode disabled, I get the following best-of-three times for\n> > copy test from '/tmp/tmp_4.data'\n> > (too impatient to use the larger file every time)\n> >\n> > 15:\n> > 6281.107 ms\n> >\n> > HEAD:\n> > 7000.469 ms\n> >\n> > backing out 9f8377f7a27:\n> > 6433.516 ms\n> >\n> > also backing out faff8f8e47f, 6fcda9aba83:\n> > 6235.453 ms\n> >\n> >\n> > I suspect 1) can relatively easily be fixed properly. But 2) seems much\n> > harder. The changes increased the number of branches substantially, that's\n> > gonna cost in something as (previously) tight as pg_strtoint32().\n>\n> I'll look at how to fix 2).\n\nI have made some progress on dealing with performance regression on\nsingle client COPY. I've attached a patch to fix 2). With the patch I\nshared[1] to deal with 1), single client COPY performance seems to be\nnow as good as (or slightly better than) PG15 . Here are the results\n(averages of 5 times) of loading 50M rows via COPY:\n\n15:\n7.609 sec\n\n16:\n8.637 sec\n\n16 w/ two patches:\n7.179 sec\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBb9Sbddh%2BnQk1BxUFaRHaa%2BfL8fCzO-Lvxqb6ZcpAHqw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Jul 2023 17:24:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 09:24, Masahiko Sawada <[email protected]> wrote:\n>\n> > > 2) pg_strtoint32_safe() got substantially slower, mainly due\n> > > to\n> > > faff8f8e47f Allow underscores in integer and numeric constants.\n> > > 6fcda9aba83 Non-decimal integer literals\n> >\n> > Agreed.\n> >\n> I have made some progress on dealing with performance regression on\n> single client COPY. I've attached a patch to fix 2). With the patch I\n> shared[1] to deal with 1), single client COPY performance seems to be\n> now as good as (or slightly better than) PG15 . Here are the results\n> (averages of 5 times) of loading 50M rows via COPY:\n>\n\nHmm, I'm somewhat sceptical about this second patch. It's not obvious\nwhy adding such tests would speed it up, and indeed, testing on my\nmachine with 50M rows, I see a noticeable speed-up from patch 1, and a\nslow-down from patch 2:\n\n\nPG15\n====\n\n7390.461 ms\n7497.655 ms\n7485.850 ms\n7406.336 ms\n\nHEAD\n====\n\n8388.707 ms\n8283.484 ms\n8391.638 ms\n8363.306 ms\n\nHEAD + P1\n=========\n\n7255.128 ms\n7185.319 ms\n7197.822 ms\n7191.176 ms\n\nHEAD + P2\n=========\n\n8687.164 ms\n8654.907 ms\n8641.493 ms\n8668.865 ms\n\nHEAD + P1 + P2\n==============\n\n7780.126 ms\n7786.427 ms\n7775.047 ms\n7785.938 ms\n\n\nSo for me at least, just applying patch 1 gives the best results, and\nmakes it slightly faster than PG15 (possibly due to 6b423ec677).\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 19 Jul 2023 12:13:48 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 23:14, Dean Rasheed <[email protected]> wrote:\n> Hmm, I'm somewhat sceptical about this second patch. It's not obvious\n> why adding such tests would speed it up, and indeed, testing on my\n> machine with 50M rows, I see a noticeable speed-up from patch 1, and a\n> slow-down from patch 2:\n\nI noticed that 6fcda9aba added quite a lot of conditions that need to\nbe checked before we process a normal decimal integer string. I think\nwe could likely do better and code it to assume that most strings will\nbe decimal and put that case first rather than last.\n\nIn the attached, I've changed that for the 32-bit version only. A\nmore complete patch would need to do the 16 and 64-bit versions too.\n\n-- setup\ncreate table a (a int);\ninsert into a select x from generate_series(1,10000000)x;\ncopy a to '~/a.dump';\n\n-- test\ntruncate a; copy a from '/tmp/a.dump';\n\nmaster @ ab29a7a9c\nTime: 2152.633 ms (00:02.153)\nTime: 2121.091 ms (00:02.121)\nTime: 2100.995 ms (00:02.101)\nTime: 2101.724 ms (00:02.102)\nTime: 2103.949 ms (00:02.104)\n\nmaster + pg_strtoint32_base_10_first.patch\nTime: 2061.464 ms (00:02.061)\nTime: 2035.513 ms (00:02.036)\nTime: 2028.356 ms (00:02.028)\nTime: 2043.218 ms (00:02.043)\nTime: 2037.035 ms (00:02.037) (~3.6% faster)\n\nWithout that, we need to check if the first digit is '0' a total of 3\ntimes and also check if the 2nd digit is any of 'x', 'X', 'o', 'O',\n'b' or 'B'. It seems to be coded with the assumption that hex strings\nare the most likely. I think decimals are the most likely by far.\n\nDavid",
"msg_date": "Thu, 20 Jul 2023 11:56:25 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Thu, 20 Jul 2023 at 00:56, David Rowley <[email protected]> wrote:\n>\n> I noticed that 6fcda9aba added quite a lot of conditions that need to\n> be checked before we process a normal decimal integer string. I think\n> we could likely do better and code it to assume that most strings will\n> be decimal and put that case first rather than last.\n\nThat sounds sensible, but ...\n\n> In the attached, I've changed that for the 32-bit version only. A\n> more complete patch would need to do the 16 and 64-bit versions too.\n>\n> Without that, we need to check if the first digit is '0' a total of 3\n> times and also check if the 2nd digit is any of 'x', 'X', 'o', 'O',\n> 'b' or 'B'.\n\nThat's not what I see. For me, the compiler (gcc 11, using either -O2\nor -O3) is smart enough to spot that the first part of each of the 3\nchecks is the same, and it only tests whether the first digit is '0'\nonce, before immediately jumping to the decimal parsing code. I didn't\ntest other compilers though, so I can believe that different compilers\nmight not be so smart.\n\nOTOH, this test in your patch:\n\n+ /* process decimal digits */\n+ if (isdigit((unsigned char) ptr[0]) &&\n+ (isdigit((unsigned char) ptr[1]) || ptr[1] == '_' || ptr[1]\n== '\\0' || isspace(ptr[1])))\n\nis doing more work than it needs to, and actually makes things\nnoticeably worse for me. It needs to do a minimum of 2 isdigit()\nchecks before it will parse the input as a decimal, whereas before\n(for me at least) it just did one simple comparison of ptr[0] against\n'0'.\n\nI agree with the principal though. In the attached updated patch, I\nreplaced that test with a simpler one:\n\n+ /*\n+ * Process the number's digits. We optimize for decimal input (expected to\n+ * be the most common case) first. Anything that doesn't start with a base\n+ * prefix indicator must be decimal.\n+ */\n+\n+ /* process decimal digits */\n+ if (likely(ptr[0] != '0' || !isalpha((unsigned char) ptr[1])))\n\nSo hopefully any compiler should only need to do the one comparison\nagainst '0' for any non-zero decimal input.\n\nDoing that didn't give any measurable performance improvement for me,\nbut it did at least not make it noticeably worse, and seems more\nlikely to generate better code with not-so-smart compilers. I'd be\ninterested to know how that performs for you (and if your compiler\nreally does generate 3 \"first digit is '0'\" tests for the unpatched\ncode).\n\nRegards,\nDean\n\n---\n\nHere were my test results (where P1 is the \"fix_COPY_DEFAULT.patch\"),\nand I tested COPY loading 50M rows:\n\nHEAD + P1\n=========\n\n7137.966 ms\n7193.190 ms\n7094.491 ms\n7123.520 ms\n\nHEAD + P1 + pg_strtoint32_base_10_first.patch\n=============================================\n\n7561.658 ms\n7548.282 ms\n7551.360 ms\n7560.671 ms\n\nHEAD + P1 + pg_strtoint32_base_10_first.v2.patch\n================================================\n\n7238.775 ms\n7184.937 ms\n7123.257 ms\n7159.618 ms",
"msg_date": "Thu, 20 Jul 2023 09:36:48 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Thu, 20 Jul 2023 at 20:37, Dean Rasheed <[email protected]> wrote:\n>\n> On Thu, 20 Jul 2023 at 00:56, David Rowley <[email protected]> wrote:\n> I agree with the principal though. In the attached updated patch, I\n> replaced that test with a simpler one:\n>\n> + /*\n> + * Process the number's digits. We optimize for decimal input (expected to\n> + * be the most common case) first. Anything that doesn't start with a base\n> + * prefix indicator must be decimal.\n> + */\n> +\n> + /* process decimal digits */\n> + if (likely(ptr[0] != '0' || !isalpha((unsigned char) ptr[1])))\n>\n> So hopefully any compiler should only need to do the one comparison\n> against '0' for any non-zero decimal input.\n>\n> Doing that didn't give any measurable performance improvement for me,\n> but it did at least not make it noticeably worse, and seems more\n> likely to generate better code with not-so-smart compilers. I'd be\n> interested to know how that performs for you (and if your compiler\n> really does generate 3 \"first digit is '0'\" tests for the unpatched\n> code).\n\nThat seems better. I benchmarked it on two machines:\n\ngcc12.2/linux/amd3990x\ncreate table a (a int);\ninsert into a select x from generate_series(1,10000000)x;\ncopy a to '/tmp/a.dump';\n\nmaster @ ab29a7a9c\npostgres=# truncate a; copy a from '/tmp/a.dump';\nTime: 2226.912 ms (00:02.227)\nTime: 2214.168 ms (00:02.214)\nTime: 2206.481 ms (00:02.206)\nTime: 2219.640 ms (00:02.220)\nTime: 2205.004 ms (00:02.205)\n\nmaster + pg_strtoint32_base_10_first.v2.patch\npostgres=# truncate a; copy a from '/tmp/a.dump';\nTime: 2067.108 ms (00:02.067)\nTime: 2070.401 ms (00:02.070)\nTime: 2073.423 ms (00:02.073)\nTime: 2065.407 ms (00:02.065)\nTime: 2066.536 ms (00:02.067) (~7% faster)\n\napple m2 pro/clang\n\nmaster @ 9089287a\npostgres=# truncate a; copy a from '/tmp/a.dump';\nTime: 1286.369 ms (00:01.286)\nTime: 1300.534 ms (00:01.301)\nTime: 1295.661 ms (00:01.296)\nTime: 1296.404 ms (00:01.296)\nTime: 1268.361 ms (00:01.268)\nTime: 1259.321 ms (00:01.259)\n\nmaster + pg_strtoint32_base_10_first.v2.patch\npostgres=# truncate a; copy a from '/tmp/a.dump';\nTime: 1253.519 ms (00:01.254)\nTime: 1235.256 ms (00:01.235)\nTime: 1269.501 ms (00:01.270)\nTime: 1267.801 ms (00:01.268)\nTime: 1275.758 ms (00:01.276)\nTime: 1261.478 ms (00:01.261) (a bit noisy but avg of ~1.8% faster)\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Jul 2023 10:35:05 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nHm, in some cases your patch is better, but in others both the old code\n(8692f6644e7) and HEAD beat yours on my machine. TBH, not entirely sure why.\n\nprep:\nCOPY (SELECT generate_series(1, 2000000) a, (random() * 100000 - 50000)::int b, 3243423 c) TO '/tmp/lotsaints.copy';\nDROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);\n\nbenchmark:\npsql -qX -c 'truncate lotsaints' && pgbench -n -P1 -f <( echo \"COPY lotsaints FROM '/tmp/lotsaints.copy';\") -t 15\n\nI disabled turbo mode, pinned the server to a single core of my Xeon Gold 5215:\n\nHEAD: 812.690\n\nyour patch: 821.354\n\nstrtoint from 8692f6644e7: 824.543\n\nstrtoint from 6b423ec677d^: 806.678\n\n(when I say strtoint from, I did not replace the goto labels, so that part is\nunchanged and unrelated)\n\n\nIOW, for me the code from 15 is the fastest by a good bit... There's an imul,\nsure, but the fact that it sets a flag makes it faster than having to add more\ntests and branches.\n\n\nLooking at a profile reminded me of how silly it is that we are wasting a good\nchunk of the time in these isdigit() checks, even though we already rely on on\nthe ascii values via (via *ptr++ - '0'). I think that's done in the headers\nfor some platforms, but not others (glibc). And we've even done already for\noctal and binary!\n\nOpen coding isdigit() gives us:\n\n\nHEAD: 797.434\n\nyour patch: 803.570\n\nstrtoint from 8692f6644e7: 778.943\n\nstrtoint from 6b423ec677d^: 777.741\n\n\nIt's somewhat odd that HEAD and your patch switch position here...\n\n\n> -\telse if (ptr[0] == '0' && (ptr[1] == 'o' || ptr[1] == 'O'))\n> +\t/* process hex digits */\n> +\telse if (ptr[1] == 'x' || ptr[1] == 'X')\n> \t{\n>\n> \t\tfirstdigit = ptr += 2;\n\nI find this unnecessarily hard to read. I realize it's been added in\n6fcda9aba83, but I don't see a reason to use such a construct here.\n\n\nI find it somewhat grating how much duplication there now is in this\ncode due to being repeated for all the bases...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jul 2023 22:34:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "On Tue, 25 Jul 2023 at 17:34, Andres Freund <[email protected]> wrote:\n> prep:\n> COPY (SELECT generate_series(1, 2000000) a, (random() * 100000 - 50000)::int b, 3243423 c) TO '/tmp/lotsaints.copy';\n> DROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);\n>\n> benchmark:\n> psql -qX -c 'truncate lotsaints' && pgbench -n -P1 -f <( echo \"COPY lotsaints FROM '/tmp/lotsaints.copy';\") -t 15\n>\n> I disabled turbo mode, pinned the server to a single core of my Xeon Gold 5215:\n>\n> HEAD: 812.690\n>\n> your patch: 821.354\n>\n> strtoint from 8692f6644e7: 824.543\n>\n> strtoint from 6b423ec677d^: 806.678\n\nI'm surprised to see the imul version is faster. It's certainly not\nwhat we found when working on 6b423ec67.\n\nI've been fooling around a bit to try to learn what might be going on.\nI wrote 2 patches:\n\n1) pg_strtoint_imul.patch: Effectively reverts 6b423ec67 and puts the\ncode how it likely would have looked had I not done that work at all.\n(I've assumed that the hex, octal, binary parsing would have been\nadded using the overflow multiplication functions just as the base 10\nparsing).\n\n2) pg_strtoint_optimize.patch: I've made another pass over the\nfunctions with the current overflow checks and optimized the digit\nchecking code so that it can be done in a single check like: if (digit\n< 10). This can be done by assigning the result of *ptr - '0' to an\nunsigned char. Anything less than '0' will wrap around and anything\nabove '9' will remain so. I've also removed a few inefficiencies with\nthe isspace checking. We didn't need to do \"while (*ptr &&\nisspace(*ptr)). It's fine just to do while (isspace(*ptr)) since '\\0'\nisn't a space char. I also got rid of the isxdigit call. The\nhexlookup array contains -1 for non-hex chars. We may as well check\nthe digit value is >= 0.\n\nHere are the results I get using your test as quoted above:\n\nmaster @ 62e9af4c63f + fix_COPY_DEFAULT.patch\n\nlatency average = 568.102 ms\n\nmaster @ 62e9af4c63f + fix_COPY_DEFAULT.patch + pg_strtoint_optimize.patch\n\nlatency average = 531.238 ms\n\nmaster @ 62e9af4c63f + fix_COPY_DEFAULT.patch + pg_strtoint_imul.patch\n\nlatency average = 559.333 ms (surprisingly also faster on my machine)\n\nThe optimized version of the pg_strtoint functions wins over the imul\npatch. Could you test to see if this is also the case in your Xeon\nmachine?\n\n> (when I say strtoint from, I did not replace the goto labels, so that part is\n> unchanged and unrelated)\n\nI didn't quite follow this.\n\nI've not really studied the fix_COPY_DEFAULT.patch patch. Is there a\nreason to delay committing that? It would be good to eliminate that\nas a variable for the current performance regression.\n\nDavid",
"msg_date": "Tue, 25 Jul 2023 23:37:08 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
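A minimal, self-contained sketch of the single-comparison digit test described in the message above; the function and variable names are illustrative only (this is not the patch itself), and overflow handling is deliberately left out so the idea stays visible:

#include <stddef.h>
#include <stdint.h>

/*
 * Consume the leading decimal digits of "s" into *val and return how many
 * characters were consumed.  '0'..'9' map to 0..9; every other character,
 * including '\0', wraps around to a value >= 10 once the subtraction is
 * viewed as unsigned, so one comparison replaces both range checks that an
 * isdigit() call would need.
 */
static size_t
scan_decimal(const char *s, int64_t *val)
{
	const char *ptr = s;
	int64_t		tmp = 0;

	for (;;)
	{
		unsigned char digit = (unsigned char) (*ptr - '0');

		if (digit >= 10)
			break;
		tmp = tmp * 10 + digit;		/* overflow checks omitted in this sketch */
		ptr++;
	}
	*val = tmp;
	return (size_t) (ptr - s);
}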
{
"msg_contents": "Hi,\n\nOn 2023-07-25 23:37:08 +1200, David Rowley wrote:\n> On Tue, 25 Jul 2023 at 17:34, Andres Freund <[email protected]> wrote:\n> I've not really studied the fix_COPY_DEFAULT.patch patch. Is there a\n> reason to delay committing that? It would be good to eliminate that\n> as a variable for the current performance regression.\n\nYea, I don't think there's a reason to hold off on that. Sawada-san, do you\nwant to commit it? Or Andrew?\n\n\n> > prep:\n> > COPY (SELECT generate_series(1, 2000000) a, (random() * 100000 - 50000)::int b, 3243423 c) TO '/tmp/lotsaints.copy';\n> > DROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);\n> >\n> > benchmark:\n> > psql -qX -c 'truncate lotsaints' && pgbench -n -P1 -f <( echo \"COPY lotsaints FROM '/tmp/lotsaints.copy';\") -t 15\n> >\n> > I disabled turbo mode, pinned the server to a single core of my Xeon Gold 5215:\n> >\n> > HEAD: 812.690\n> >\n> > your patch: 821.354\n> >\n> > strtoint from 8692f6644e7: 824.543\n> >\n> > strtoint from 6b423ec677d^: 806.678\n> \n> I'm surprised to see the imul version is faster. It's certainly not\n> what we found when working on 6b423ec67.\n\nWhat CPUs did you test it on? I'd not be surprised if this were heavily\ndependent on the microarchitecture.\n\n\n> I've been fooling around a bit to try to learn what might be going on.\n> I wrote 2 patches:\n> \n> 1) pg_strtoint_imul.patch: Effectively reverts 6b423ec67 and puts the\n> code how it likely would have looked had I not done that work at all.\n> (I've assumed that the hex, octal, binary parsing would have been\n> added using the overflow multiplication functions just as the base 10\n> parsing).\n\n\n> 2) pg_strtoint_optimize.patch: I've made another pass over the\n> functions with the current overflow checks and optimized the digit\n> checking code so that it can be done in a single check like: if (digit\n> < 10). This can be done by assigning the result of *ptr - '0' to an\n> unsigned char. Anything less than '0' will wrap around and anything\n> above '9' will remain so. I've also removed a few inefficiencies with\n> the isspace checking. We didn't need to do \"while (*ptr &&\n> isspace(*ptr)). It's fine just to do while (isspace(*ptr)) since '\\0'\n> isn't a space char. I also got rid of the isxdigit call. The\n> hexlookup array contains -1 for non-hex chars. We may as well check\n> the digit value is >= 0.\n> \n> Here are the results I get using your test as quoted above:\n> \n> master @ 62e9af4c63f + fix_COPY_DEFAULT.patch\n> \n> latency average = 568.102 ms\n> \n> master @ 62e9af4c63f + fix_COPY_DEFAULT.patch + pg_strtoint_optimize.patch\n> \n> latency average = 531.238 ms\n> \n> master @ 62e9af4c63f + fix_COPY_DEFAULT.patch + pg_strtoint_imul.patch\n> \n> latency average = 559.333 ms (surprisingly also faster on my machine)\n> \n> The optimized version of the pg_strtoint functions wins over the imul\n> patch. 
Could you test to see if this is also the case in your Xeon\n> machine?\n\n(these are the numbers with turbo disabled, if I enable it they're all in the\n6xx ms range, but the variance is higher)\n\n\nfix_COPY_DEFAULT.patch\n774.344\n\nfix_COPY_DEFAULT.patch + pg_strtoint32_base_10_first.v2.patch\n777.673\n\nfix_COPY_DEFAULT.patch + pg_strtoint_optimize.patch\n777.545\n\nfix_COPY_DEFAULT.patch + pg_strtoint_imul.patch\n795.298\n\nfix_COPY_DEFAULT.patch + pg_strtoint_imul.patch + likely(isdigit())\n773.477\n\nfix_COPY_DEFAULT.patch + pg_strtoint32_base_10_first.v2.patch + pg_strtoint_imul.patch\n774.443\n\nfix_COPY_DEFAULT.patch + pg_strtoint32_base_10_first.v2.patch + pg_strtoint_imul.patch + likely(isdigit())\n774.513\n\nfix_COPY_DEFAULT.patch + pg_strtoint32_base_10_first.v2.patch + pg_strtoint_imul.patch + likely(isdigit()) + unlikely(*ptr == '_')\n763.879\n\n\nOne idea I had was to add a fastpath that won't parse all strings, but will\nparse the strings that we would generate, and fall back to the more general\nvariant if it fails. See the attached, rough, prototype:\n\nfix_COPY_DEFAULT.patch + fastpath.patch:\n746.971\n\nfix_COPY_DEFAULT.patch + fastpath.patch + isdigit.patch:\n715.570\n\nNow, the precise contents of this fastpath are not yet clear (wrt imul or\nnot), but I think the idea has promise.\n\n\n\n> > (when I say strtoint from, I did not replace the goto labels, so that part is\n> > unchanged and unrelated)\n> \n> I didn't quite follow this.\n\nI meant that I didn't undo ereport()->ereturn().\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 25 Jul 2023 08:50:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-25 08:50:19 -0700, Andres Freund wrote:\n> One idea I had was to add a fastpath that won't parse all strings, but will\n> parse the strings that we would generate, and fall back to the more general\n> variant if it fails. See the attached, rough, prototype:\n> \n> fix_COPY_DEFAULT.patch + fastpath.patch:\n> 746.971\n> \n> fix_COPY_DEFAULT.patch + fastpath.patch + isdigit.patch:\n> 715.570\n> \n> Now, the precise contents of this fastpath are not yet clear (wrt imul or\n> not), but I think the idea has promise.\n\nBtw, I strongly suspect that fastpath wants to be branchless SSE when it grows\nup.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Jul 2023 09:06:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "> On 2023-07-25 23:37:08 +1200, David Rowley wrote:\n> > On Tue, 25 Jul 2023 at 17:34, Andres Freund <[email protected]> wrote:\n> > > HEAD: 812.690\n> > >\n> > > your patch: 821.354\n> > >\n> > > strtoint from 8692f6644e7: 824.543\n> > >\n> > > strtoint from 6b423ec677d^: 806.678\n> >\n> > I'm surprised to see the imul version is faster. It's certainly not\n> > what we found when working on 6b423ec67.\n>\n> What CPUs did you test it on? I'd not be surprised if this were heavily\n> dependent on the microarchitecture.\n\nThis was on AMD 3990x.\n\n> One idea I had was to add a fastpath that won't parse all strings, but will\n> parse the strings that we would generate, and fall back to the more general\n> variant if it fails. See the attached, rough, prototype:\n\nThere were a couple of problems with fastpath.patch. You need to\nreset the position of ptr at the start of the slow path and also you\nwere using tmp in the if (neg) part instead of tmp_s in the fast path\nsection.\n\nI fixed that up and made two versions of the patch, one using the\noverflow functions (pg_strtoint_fastpath1.patch) and one testing if\nthe number is going to overflow (same as current master)\n(pg_strtoint_fastpath2.patch)\n\nAMD 3990x:\n\nmaster + fix_COPY_DEFAULT.patch:\nlatency average = 525.226 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath1.patch:\nlatency average = 488.171 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath2.patch:\nlatency average = 481.827 ms\n\n\nApple M2 Pro:\n\nmaster + fix_COPY_DEFAULT.patch:\nlatency average = 348.433 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath1.patch:\nlatency average = 336.778 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath2.patch:\nlatency average = 335.992 ms\n\nZen 4 7945HX CPU:\n\nmaster + fix_COPY_DEFAULT.patch:\nlatency average = 296.881 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath1.patch:\nlatency average = 287.052 ms\n\nmaster + fix_COPY_DEFAULT.patch + pg_strtoint_fastpath2.patch:\nlatency average = 280.742 ms\n\nThe M2 chip does not seem to be clearly faster with the fastpath2\nmethod of overflow checking, but both AMD CPUs seem pretty set on\nfastpath2 being faster.\n\nIt would be really good if someone with another a newish intel CPU\ncould test this too.\n\nDavid",
"msg_date": "Thu, 27 Jul 2023 12:17:20 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
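For readers comparing the two patches above, the difference in overflow handling is roughly the following. This is a hedged sketch, not the patch code: it uses the GCC/Clang checked-arithmetic builtins directly rather than PostgreSQL's pg_mul_s32_overflow()/pg_add_s32_overflow() wrappers, and it ignores that the real functions accumulate the value negated so that INT32_MIN stays representable.

#include <stdbool.h>
#include <stdint.h>

/* fastpath1 style: do the multiply/add and let checked arithmetic report overflow */
static bool
accumulate_checked(int32_t *acc, int digit)		/* digit assumed to be 0..9 */
{
	return !__builtin_mul_overflow(*acc, 10, acc) &&
		!__builtin_add_overflow(*acc, digit, acc);
}

/* fastpath2 style: test whether the next step could overflow before doing it */
static bool
accumulate_tested(int32_t *acc, int digit)		/* digit assumed to be 0..9 */
{
	if (*acc > INT32_MAX / 10 ||
		(*acc == INT32_MAX / 10 && digit > INT32_MAX % 10))
		return false;			/* would overflow */
	*acc = *acc * 10 + digit;
	return true;
}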
{
"msg_contents": "On Wed, 26 Jul 2023 at 03:50, Andres Freund <[email protected]> wrote:\n> On 2023-07-25 23:37:08 +1200, David Rowley wrote:\n> > On Tue, 25 Jul 2023 at 17:34, Andres Freund <[email protected]> wrote:\n> > I've not really studied the fix_COPY_DEFAULT.patch patch. Is there a\n> > reason to delay committing that? It would be good to eliminate that\n> > as a variable for the current performance regression.\n>\n> Yea, I don't think there's a reason to hold off on that. Sawada-san, do you\n> want to commit it? Or Andrew?\n\nJust to keep this moving and to make it easier for people to test the\npg_strtoint patches, I've pushed the fix_COPY_DEFAULT.patch patch.\nThe only thing I changed was to move the line that was allocating the\narray to a location more aligned with the order that the fields are\ndefined in the struct.\n\nDavid\n\n\n",
"msg_date": "Thu, 27 Jul 2023 14:51:20 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Thu, 27 Jul 2023 at 14:51, David Rowley <[email protected]> wrote:\n> Just to keep this moving and to make it easier for people to test the\n> pg_strtoint patches, I've pushed the fix_COPY_DEFAULT.patch patch.\n> The only thing I changed was to move the line that was allocating the\n> array to a location more aligned with the order that the fields are\n> defined in the struct.\n\nI just did another round of benchmarking to see where we're at after\nfox_COPY_DEFAULT.patch has been committed.\n\nBelow I've benchmarked REL_15_STABLE up to REL_16_STABLE with some\nhand-selected key commits, many of which have been mentioned on this\nthread already because they seem to affect the performance of COPY.\n\nTo summarise, REL_15_STABLE can run this benchmark in 526.014 ms on my\nAMD 3990x machine. Today's REL_16_STABLE takes 530.344 ms. We're\ntalking about another patch to speed up the pg_strtoint functions\nwhich gets this down to 483.790 ms. Do we need to do this for v16 or\ncan we just leave this as it is already? The slowdown does not seem\nto be much above what we'd ordinarily classify as noise using this\ntest on my machine.\n\nBenchmark setup:\n\nCOPY (SELECT generate_series(1, 2000000) a, (random() * 100000 -\n50000)::int b, 3243423 c) TO '/tmp/lotsaints.copy';\nDROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);\n\nBenchmark:\npsql -qX -c 'truncate lotsaints' && pgbench -n -P1 -f <( echo \"COPY\nlotsaints FROM '/tmp/lotsaints.copy';\") -t 15\n\n2864eb977 REL_15_STABLE\nlatency average = 526.014 ms\n\n8.84% postgres [.] pg_strtoint32\n\n29452de73 Sat Dec 3 10:50:39 2022 -0500 The commit before \"Improve\nperformance of pg_strtointNN functions\"\nlatency average = 508.453 ms\n\n10.21% postgres [.] pg_strtoint32\n\n6b423ec67 Sun Dec 4 16:18:18 2022 +1300 Improve performance of\npg_strtointNN functions\nlatency average = 492.943 ms\n\n7.73% postgres [.] pg_strtoint32\n\n1939d2628 Fri Dec 9 10:08:44 2022 -0500 The commit before \"Convert a\nfew datatype input functions to use \"soft\" error reporting.\"\nlatency average = 485.016 ms\n\n8.43% postgres [.] pg_strtoint32\n\nccff2d20e Fri Dec 9 10:14:53 2022 -0500 Convert a few datatype input\nfunctions to use \"soft\" error reporting.\nlatency average = 501.325 ms\n\n6.90% postgres [.] pg_strtoint32_safe\n\n60684dd83 Tue Dec 13 17:33:28 2022 -0800 The commit before\n\"Non-decimal integer literals\"\nlatency average = 500.889 ms\n\n8.27% postgres [.] pg_strtoint32_safe\n\n6fcda9aba Wed Dec 14 05:40:38 2022 +0100 Non-decimal integer literals\nlatency average = 521.904 ms\n\n9.02% postgres [.] pg_strtoint32_safe\n\n1b6f632a3 Sat Feb 4 07:56:09 2023 +0100 The commit before \"Allow\nunderscores in integer and numeric constants.\"\nlatency average = 523.195 ms\n\n9.21% postgres [.] pg_strtoint32_safe\n\nfaff8f8e4 Sat Feb 4 09:48:51 2023 +0000 Allow underscores in integer\nand numeric constants.\nlatency average = 493.064 ms\n\n10.25% postgres [.] pg_strtoint32_safe\n\n9f8377f7a Mon Mar 13 10:01:56 2023 -0400 Add a DEFAULT option to COPY FROM\nlatency average = 597.617 ms\n\n 12.91% postgres [.] CopyReadLine\n 10.62% postgres [.] CopyReadAttributesText\n 10.51% postgres [.] pg_strtoint32_safe\n 7.97% postgres [.] NextCopyFrom\n\nREL_16_STABLE @ c1308ce2d Thu Jul 27 14:48:44 2023 +1200 Fix\nperformance problem with new COPY DEFAULT code\nlatency average = 530.344 ms\n\n 13.51% postgres [.] CopyReadLine\n 9.62% postgres [.] pg_strtoint32_safe\n 8.97% postgres [.] CopyReadAttributesText\n 8.43% postgres [.] 
NextCopyFrom\n\nREL_16_STABLE + pg_strtoint_fastpath1.patch\nlatency average = 493.136 ms\n\n 13.79% postgres [.] CopyReadLine\n 11.82% postgres [.] CopyReadAttributesText\n 7.07% postgres [.] NextCopyFrom\n 6.81% postgres [.] pg_strtoint32_safe\n\nREL_16_STABLE + pg_strtoint_fastpath2.patch\nlatency average = 483.790 ms\n\n 13.87% postgres [.] CopyReadLine\n 10.40% postgres [.] CopyReadAttributesText\n 8.22% postgres [.] NextCopyFrom\n 5.52% postgres [.] pg_strtoint32_safe\n\nDavid\n\n\n",
"msg_date": "Thu, 27 Jul 2023 20:53:16 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Thu, Jul 27, 2023 at 7:17 AM David Rowley <[email protected]> wrote:\n>\n> It would be really good if someone with another a newish intel CPU\n> could test this too.\n\nI ran the lotsaints test from last email on an i7-10750H (~3 years old) and\ngot these results (gcc 13.1 , turbo off):\n\nREL_15_STABLE:\nlatency average = 956.453 ms\nlatency stddev = 4.854 ms\n\nREL_16_STABLE @ 695f5deb7902 (28-JUL-2023):\nlatency average = 999.354 ms\nlatency stddev = 3.611 ms\n\nmaster @ 39055cb4cc (31-JUL-2023):\nlatency average = 995.310 ms\nlatency stddev = 5.176 ms\n\nmaster + revert c1308ce2d (the replace-palloc0 fix)\nlatency average = 1080.909 ms\nlatency stddev = 8.707 ms\n\nmaster + pg_strtoint_fastpath1.patch\nlatency average = 938.146 ms\nlatency stddev = 9.354 ms\n\nmaster + pg_strtoint_fastpath2.patch\nlatency average = 902.808 ms\nlatency stddev = 3.957 ms\n\nFor me, PG16 seems to regress from PG15, and the second patch seems faster\nthan the first.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jul 27, 2023 at 7:17 AM David Rowley <[email protected]> wrote:>> It would be really good if someone with another a newish intel CPU> could test this too.I ran the lotsaints test from last email on an i7-10750H (~3 years old) and got these results (gcc 13.1 , turbo off):REL_15_STABLE:latency average = 956.453 mslatency stddev = 4.854 msREL_16_STABLE @ 695f5deb7902 (28-JUL-2023):latency average = 999.354 mslatency stddev = 3.611 msmaster @ 39055cb4cc (31-JUL-2023):latency average = 995.310 mslatency stddev = 5.176 msmaster + revert c1308ce2d (the replace-palloc0 fix)latency average = 1080.909 mslatency stddev = 8.707 msmaster + pg_strtoint_fastpath1.patchlatency average = 938.146 mslatency stddev = 9.354 msmaster + pg_strtoint_fastpath2.patchlatency average = 902.808 mslatency stddev = 3.957 msFor me, PG16 seems to regress from PG15, and the second patch seems faster than the first. --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 31 Jul 2023 16:39:04 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-27 20:53:16 +1200, David Rowley wrote:\n> To summarise, REL_15_STABLE can run this benchmark in 526.014 ms on my\n> AMD 3990x machine. Today's REL_16_STABLE takes 530.344 ms. We're\n> talking about another patch to speed up the pg_strtoint functions\n> which gets this down to 483.790 ms. Do we need to do this for v16 or\n> can we just leave this as it is already? The slowdown does not seem\n> to be much above what we'd ordinarily classify as noise using this\n> test on my machine.\n\nI think we need to do something for 16 - it appears on recent-ish AMD the\nregression is quite a bit smaller than on intel. You see something < 1%, I\nsee more like 4%. I think there's also other cases where the slowdown is more\nsubstantial.\n\nBesides intel vs amd, it also looks like the gcc version might make a\ndifference. The code generated by 13 is noticeably slower than 12 for me...\n\n> Benchmark setup:\n> \n> COPY (SELECT generate_series(1, 2000000) a, (random() * 100000 -\n> 50000)::int b, 3243423 c) TO '/tmp/lotsaints.copy';\n> DROP TABLE lotsaints; CREATE UNLOGGED TABLE lotsaints(a int, b int, c int);\n\nThere's a lot of larger numbers in the file, which likely reduces the impact\nsome. And there's the overhead of actually inserting the rows into the table,\nmaking the difference appear smaller than it is.\n\nIf I avoid the actual insert into the table and use more columns, I see an about\n10% regression here.\n\nCOPY (SELECT generate_series(1, 1000) a, 10 b, 20 c, 30 d, 40 e, 50 f FROM generate_series(1, 10000)) TO '/tmp/lotsaints_wide.copy';\n\npsql -c 'DROP TABLE IF EXISTS lotsaints_wide; CREATE UNLOGGED TABLE lotsaints_wide(a int, b int, c int, d int, e int, f int);' && \\\n pgbench -n -P1 -f <( echo \"COPY lotsaints_wide FROM '/tmp/lotsaints_wide.copy' WHERE false\") -t 5\n\n15: 2992.605\nHEAD: 3325.201\nfastpath1.patch 2932.606\nfastpath2.patch 2783.915\n\n\nInterestingly fastpath1 is slower now, even though it wasn't with earlier\npatches (which still is repeatable). I do not have the foggiest as to why.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Jul 2023 18:25:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "On Mon, 31 Jul 2023 at 21:39, John Naylor <[email protected]> wrote:\n> master + pg_strtoint_fastpath1.patch\n> latency average = 938.146 ms\n> latency stddev = 9.354 ms\n>\n> master + pg_strtoint_fastpath2.patch\n> latency average = 902.808 ms\n> latency stddev = 3.957 ms\n\nThanks for checking those two on your machine. I'm glad to see\nfastpath2 faster on yours too.\n\nDavid\n\n\n",
"msg_date": "Wed, 2 Aug 2023 00:21:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Tue, 1 Aug 2023 at 13:25, Andres Freund <[email protected]> wrote:\n> There's a lot of larger numbers in the file, which likely reduces the impact\n> some. And there's the overhead of actually inserting the rows into the table,\n> making the difference appear smaller than it is.\n\nIt might be worth special casing the first digit as we can save doing\nthe multiplication by 10 and the overflow checks on the first digit.\nThat should make it slightly faster to parse smaller numbers.\n\n> COPY (SELECT generate_series(1, 1000) a, 10 b, 20 c, 30 d, 40 e, 50 f FROM generate_series(1, 10000)) TO '/tmp/lotsaints_wide.copy';\n>\n> psql -c 'DROP TABLE IF EXISTS lotsaints_wide; CREATE UNLOGGED TABLE lotsaints_wide(a int, b int, c int, d int, e int, f int);' && \\\n> pgbench -n -P1 -f <( echo \"COPY lotsaints_wide FROM '/tmp/lotsaints_wide.copy' WHERE false\") -t 5\n>\n> 15: 2992.605\n> HEAD: 3325.201\n> fastpath1.patch 2932.606\n> fastpath2.patch 2783.915\n>\n> Interestingly fastpath1 is slower now, even though it wasn't with earlier\n> patches (which still is repeatable). I do not have the foggiest as to why.\n\nI'm glad to see that.\n\nI've adjusted the patch to add the fast path for the 16 and 64-bit\nversions of the function. I also added the special case for\nprocessing the first digit, which looks like:\n\n/* process the first digit */\ndigit = (*ptr - '0');\n\nif (likely(digit < 10))\n{\n ptr++;\n tmp = digit;\n}\n\n/* process remaining digits */\nfor (;;)\n\nI tried adding the \"at least 1 digit check\" by adding an else { goto\nslow; } in the above code, but it seems to generate slower code than\njust checking if (unlikely(ptr == s)) { goto slow; } after the loop.\n\nI also noticed that I wasn't getting the same performance after\nadjusting the 16 and 64 bit versions. I assume that's down to code\nalignment, but unsure of that. I ended up adjusting all the \"while\n(*ptr)\" loops into \"for (;;)\" loops\nsince the NUL char check is handled by the \"else break;\". I also\nremoved the needless NUL char check in the isspace loops. It can't be\nisspace and '\\0'. I also replaced the isdigit() function call and\nreplaced it for manually checking the digit range. I see my compiler\n(gcc12.2) effectively generates the same code as the unsigned char\nfast path version checking if (digit < 10). Once I did that, I got the\nperformance back again.\n\nWith your new test with the small-sized ints, I get:\n\nREL_15_STABLE:\nlatency average = 1696.390 ms\n\nmaster @ d3a38318a\nlatency average = 1928.803 ms\n\nmaster + fastpath1.patch:\nlatency average = 1634.736 ms\n\nmaster + fastpath2.patch:\nlatency average = 1628.704 ms\n\nmaster + fastpath3.patch\nlatency average = 1590.417 ms\n\nI see no particular reason not to go ahead with the attached patch and\nget this thread closed off. Any objections?\n\nDavid",
"msg_date": "Wed, 2 Aug 2023 00:55:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Tue, 1 Aug 2023 at 13:55, David Rowley <[email protected]> wrote:\n>\n> I tried adding the \"at least 1 digit check\" by adding an else { goto\n> slow; } in the above code, but it seems to generate slower code than\n> just checking if (unlikely(ptr == s)) { goto slow; } after the loop.\n>\n\nThat check isn't quite right, because \"ptr\" will not equal \"s\" if\nthere is a sign character, so it won't detect an input with no digits\nin that case.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 1 Aug 2023 14:25:51 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
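Dean's point can be pictured with a minimal sketch (illustrative names, not the committed code): the "no digits at all" test has to compare against the position just after the optional sign, not against the start of the string.

#include <stdbool.h>
#include <stdint.h>

static bool
parse_int_sketch(const char *s, int64_t *out)
{
	const char *ptr = s;
	const char *firstdigit;
	bool		neg = false;
	int64_t		tmp = 0;

	if (*ptr == '-')
	{
		neg = true;
		ptr++;
	}
	else if (*ptr == '+')
		ptr++;

	/* digits must start here; comparing against "s" would let "-" slip through */
	firstdigit = ptr;

	for (;;)
	{
		unsigned char digit = (unsigned char) (*ptr - '0');

		if (digit >= 10)
			break;
		tmp = tmp * 10 + digit;		/* overflow checks omitted in this sketch */
		ptr++;
	}

	if (ptr == firstdigit)
		return false;			/* no digits at all, e.g. "" or "-" */

	*out = neg ? -tmp : tmp;
	return true;
}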
{
"msg_contents": "On Wed, 2 Aug 2023 at 01:26, Dean Rasheed <[email protected]> wrote:\n>\n> On Tue, 1 Aug 2023 at 13:55, David Rowley <[email protected]> wrote:\n> >\n> > I tried adding the \"at least 1 digit check\" by adding an else { goto\n> > slow; } in the above code, but it seems to generate slower code than\n> > just checking if (unlikely(ptr == s)) { goto slow; } after the loop.\n> >\n>\n> That check isn't quite right, because \"ptr\" will not equal \"s\" if\n> there is a sign character, so it won't detect an input with no digits\n> in that case.\n\nAh, yeah. Thanks.\n\nHere's a patch with an else condition when the first digit check fails.\n\nmaster + fastpath4.patch:\nlatency average = 1579.576 ms\nlatency average = 1572.716 ms\nlatency average = 1563.398 ms\n\n(appears slightly faster than fastpath3.patch)\n\nDavid",
"msg_date": "Wed, 2 Aug 2023 02:00:47 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Tue, 1 Aug 2023 at 15:01, David Rowley <[email protected]> wrote:\n>\n> Here's a patch with an else condition when the first digit check fails.\n>\n> master + fastpath4.patch:\n> latency average = 1579.576 ms\n> latency average = 1572.716 ms\n> latency average = 1563.398 ms\n>\n> (appears slightly faster than fastpath3.patch)\n>\n\nRunning the new test on slightly older Intel hardware (i9-9900K, gcc\n11.3), I get the following:\n\nREL_15_STABLE\nlatency average = 1687.695 ms\nlatency stddev = 3.068 ms\n\nREL_16_STABLE\nlatency average = 1931.756 ms\nlatency stddev = 2.065 ms\n\nREL_16_STABLE + pg_strtoint_fastpath1.patch\nlatency average = 1635.731 ms\nlatency stddev = 3.074 ms\n\nREL_16_STABLE + pg_strtoint_fastpath2.patch\nlatency average = 1687.303 ms\nlatency stddev = 4.243 ms\n\nREL_16_STABLE + pg_strtoint_fastpath3.patch\nlatency average = 1610.307 ms\nlatency stddev = 2.193 ms\n\nREL_16_STABLE + pg_strtoint_fastpath4.patch\nlatency average = 1577.604 ms\nlatency stddev = 4.060 ms\n\nHEAD\nlatency average = 1868.737 ms\nlatency stddev = 6.114 ms\n\nHEAD + pg_strtoint_fastpath1.patch\nlatency average = 1683.215 ms\nlatency stddev = 2.322 ms\n\nHEAD + pg_strtoint_fastpath2.patch\nlatency average = 1650.014 ms\nlatency stddev = 3.920 ms\n\nHEAD + pg_strtoint_fastpath3.patch\nlatency average = 1670.337 ms\nlatency stddev = 5.074 ms\n\nHEAD + pg_strtoint_fastpath4.patch\nlatency average = 1653.568 ms\nlatency stddev = 8.224 ms\n\nI don't know why HEAD and v16 aren't consistent, but it's seems to be\nquite reproducible, even though the numutils source is the same in\nboth branches, and using gdb to dump the disassembly for\npg_strtoint32_safe() shows that it's also the same.\n\nAnyway, insofar as these results can be trusted, fastpath4.patch looks good.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 1 Aug 2023 20:38:12 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Wed, 2 Aug 2023 at 07:38, Dean Rasheed <[email protected]> wrote:\n> Running the new test on slightly older Intel hardware (i9-9900K, gcc\n> 11.3), I get the following:\n\nThanks for running those tests. I've now pushed the fastpath4.patch\nafter making a few adjustments to the header comments to mention the\nnew stuff that was added in v16.\n\n> I don't know why HEAD and v16 aren't consistent, but it's seems to be\n> quite reproducible, even though the numutils source is the same in\n> both branches, and using gdb to dump the disassembly for\n> pg_strtoint32_safe() shows that it's also the same.\n\nI also see it's inconsistent, but the other way around. Here are some\nfresh tests with master and REL_16_STABLE with the committed code and\nthe version directly prior to the commit:\n\nmaster @ 3845577cb\nlatency average = 1575.879 ms\n\n 6.79% postgres [.] pg_strtoint32_safe\n\nmaster~1\nlatency average = 1968.004 ms\n\n 14.28% postgres [.] pg_strtoint32_safe\n\nREL_16_STABLE\nlatency average = 1735.163 ms\n\n 6.04% postgres [.] pg_strtoint32_safe\n\nREL_16_STABLE~1\nlatency average = 2188.186 ms\n\n 13.83% postgres [.] pg_strtoint32_safe\n\nDavid\n\n\n",
"msg_date": "Wed, 2 Aug 2023 12:25:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Wed, 2 Aug 2023 at 12:25, David Rowley <[email protected]> wrote:\n> master @ 3845577cb\n> latency average = 1575.879 ms\n>\n> 6.79% postgres [.] pg_strtoint32_safe\n>\n> master~1\n> latency average = 1968.004 ms\n>\n> 14.28% postgres [.] pg_strtoint32_safe\n>\n> REL_16_STABLE\n> latency average = 1735.163 ms\n>\n> 6.04% postgres [.] pg_strtoint32_safe\n>\n> REL_16_STABLE~1\n> latency average = 2188.186 ms\n>\n> 13.83% postgres [.] pg_strtoint32_safe\n\nAnd just to complete that set, here's the REL_15_STABLE performance\nusing the same test:\n\nlatency average = 1829.108 ms\n\n15.46% postgres [.] pg_strtoint32\n\nSo, it looks like this item can be closed off. I'll hold off from\ndoing that for a few days just in case anyone else wants to give\nfeedback or test themselves.\n\nDavid\n\n\n",
"msg_date": "Wed, 2 Aug 2023 13:35:49 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Wed, 2 Aug 2023 at 13:35, David Rowley <[email protected]> wrote:\n> So, it looks like this item can be closed off. I'll hold off from\n> doing that for a few days just in case anyone else wants to give\n> feedback or test themselves.\n\nAlright, closed.\n\nDavid\n\n\n",
"msg_date": "Mon, 7 Aug 2023 18:16:19 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 3:16 PM David Rowley <[email protected]> wrote:\n>\n> On Wed, 2 Aug 2023 at 13:35, David Rowley <[email protected]> wrote:\n> > So, it looks like this item can be closed off. I'll hold off from\n> > doing that for a few days just in case anyone else wants to give\n> > feedback or test themselves.\n>\n> Alright, closed.\n\nIIUC the problem with multiple concurrent COPY is not resolved yet.\nI've run the same benchmark that I used for the first report:\n\n* PG15 (cb2ae5741f)\n nclients = 1, execution time = 15.213\n nclients = 2, execution time = 9.470\n nclients = 4, execution time = 6.508\n nclients = 8, execution time = 4.526\n nclients = 16, execution time = 3.788\n nclients = 32, execution time = 3.837\n nclients = 64, execution time = 4.286\n nclients = 128, execution time = 4.388\n nclients = 256, execution time = 4.333\n\n* PG16 (67a007dc0c)\n nclients = 1, execution time = 14.494\n nclients = 2, execution time = 12.962\n nclients = 4, execution time = 17.757\n nclients = 8, execution time = 10.865\n nclients = 16, execution time = 7.371\n nclients = 32, execution time = 4.929\n nclients = 64, execution time = 2.212\n nclients = 128, execution time = 2.020\n nclients = 256, execution time = 2.196\n\nThe result of nclients = 1 became better thanks to recent fixes, but\nthere still seems to be the performance regression at nclient = 2~16\n(on RHEL 8 and 9). Andres reported[1] that after changing\nMAX_BUFFERED_TUPLES to 5000 the numbers became a lot better but it\nwould not be the solution, as he mentioned.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20230711185159.v2j5vnyrtodnwhgz%40awork3.anarazel.de\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 23:05:39 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-08-07 23:05:39 +0900, Masahiko Sawada wrote:\n> On Mon, Aug 7, 2023 at 3:16 PM David Rowley <[email protected]> wrote:\n> >\n> > On Wed, 2 Aug 2023 at 13:35, David Rowley <[email protected]> wrote:\n> > > So, it looks like this item can be closed off. I'll hold off from\n> > > doing that for a few days just in case anyone else wants to give\n> > > feedback or test themselves.\n> >\n> > Alright, closed.\n>\n> IIUC the problem with multiple concurrent COPY is not resolved yet.\n\nYea - it was just hard to analyze until the other regressions were fixed.\n\n\n> The result of nclients = 1 became better thanks to recent fixes, but\n> there still seems to be the performance regression at nclient = 2~16\n> (on RHEL 8 and 9). Andres reported[1] that after changing\n> MAX_BUFFERED_TUPLES to 5000 the numbers became a lot better but it\n> would not be the solution, as he mentioned.\n\nI think there could be a quite simple fix: Track by how much we've extended\nthe relation previously in the same bistate. If we already extended by many\nblocks, it's very likey that we'll do so further.\n\nA simple prototype patch attached. The results for me are promising. I copied\na smaller file [1], to have more accurate throughput results at shorter runs\n(15s).\n\nHEAD before:\nclients\t tps\n1\t 41\n2\t 76\n4\t 136\n8\t 248\n16\t 360\n32\t 375\n64\t 317\n\n\nHEAD after:\nclients\t tps\n1\t 43\n2\t 80\n4\t 155\n8\t 280\n16\t 369\n32\t 405\n64\t 344\n\nAny chance you could your benchmark? I don't see as much of a regression vs 16\nas you...\n\nGreetings,\n\nAndres Freund\n\n[1] COPY (SELECT generate_series(1, 100000)) TO '/tmp/data.copy';",
"msg_date": "Mon, 7 Aug 2023 11:10:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
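The heuristic described here can be pictured with a hypothetical sketch; the names below are invented for illustration and make no claim to match the attached prototype or hio.c.

/* Hypothetical per-COPY bulk-insert state */
typedef struct DemoBulkState
{
	unsigned	blocks_added;	/* blocks added by earlier extensions with this state */
} DemoBulkState;

/* Decide how many blocks to extend the relation by this time. */
static unsigned
demo_choose_extend_by(DemoBulkState *state, unsigned needed, unsigned max_pinnable)
{
	unsigned	extend_by = needed;

	/*
	 * If this bulk-insert state has already extended the relation, it is
	 * likely to keep doing so, so keep extending in similarly large chunks
	 * instead of ramping up from scratch every time the current page fills.
	 */
	extend_by += state->blocks_added;

	/* still bounded by how many extra buffers this backend may pin */
	if (extend_by > max_pinnable)
		extend_by = max_pinnable;

	state->blocks_added += extend_by;
	return extend_by;
}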
{
"msg_contents": "On Tue, Aug 8, 2023 at 3:10 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-08-07 23:05:39 +0900, Masahiko Sawada wrote:\n> > On Mon, Aug 7, 2023 at 3:16 PM David Rowley <[email protected]> wrote:\n> > >\n> > > On Wed, 2 Aug 2023 at 13:35, David Rowley <[email protected]> wrote:\n> > > > So, it looks like this item can be closed off. I'll hold off from\n> > > > doing that for a few days just in case anyone else wants to give\n> > > > feedback or test themselves.\n> > >\n> > > Alright, closed.\n> >\n> > IIUC the problem with multiple concurrent COPY is not resolved yet.\n>\n> Yea - it was just hard to analyze until the other regressions were fixed.\n>\n>\n> > The result of nclients = 1 became better thanks to recent fixes, but\n> > there still seems to be the performance regression at nclient = 2~16\n> > (on RHEL 8 and 9). Andres reported[1] that after changing\n> > MAX_BUFFERED_TUPLES to 5000 the numbers became a lot better but it\n> > would not be the solution, as he mentioned.\n>\n> I think there could be a quite simple fix: Track by how much we've extended\n> the relation previously in the same bistate. If we already extended by many\n> blocks, it's very likey that we'll do so further.\n>\n> A simple prototype patch attached. The results for me are promising. I copied\n> a smaller file [1], to have more accurate throughput results at shorter runs\n> (15s).\n\nThank you for the patch!\n\n>\n> HEAD before:\n> clients tps\n> 1 41\n> 2 76\n> 4 136\n> 8 248\n> 16 360\n> 32 375\n> 64 317\n>\n>\n> HEAD after:\n> clients tps\n> 1 43\n> 2 80\n> 4 155\n> 8 280\n> 16 369\n> 32 405\n> 64 344\n>\n> Any chance you could your benchmark? I don't see as much of a regression vs 16\n> as you...\n\nSure. The results are promising for me too:\n\n nclients = 1, execution time = 13.743\n nclients = 2, execution time = 7.552\n nclients = 4, execution time = 4.758\n nclients = 8, execution time = 3.035\n nclients = 16, execution time = 2.172\n nclients = 32, execution time = 1.959\nnclients = 64, execution time = 1.819\nnclients = 128, execution time = 1.583\nnclients = 256, execution time = 1.631\n\nHere are results of the same benchmark test you used:\n\nw/o patch:\nclients tps\n1 66.702\n2 59.696\n4 73.783\n8 168.115\n16 400.134\n32 574.098\n64 565.373\n128 526.303\n256 591.751\n\nw/ patch:\nclients tps\n1 67.735\n2 122.534\n4 240.707\n8 398.944\n16 541.097\n32 643.083\n64 614.775\n128 616.007\n256 577.885\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 8 Aug 2023 12:45:05 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-08-08 12:45:05 +0900, Masahiko Sawada wrote:\n> > I think there could be a quite simple fix: Track by how much we've extended\n> > the relation previously in the same bistate. If we already extended by many\n> > blocks, it's very likey that we'll do so further.\n> >\n> > A simple prototype patch attached. The results for me are promising. I copied\n> > a smaller file [1], to have more accurate throughput results at shorter runs\n> > (15s).\n> \n> Thank you for the patch!\n\nAttached is a mildly updated version of that patch. No functional changes,\njust polished comments and added a commit message.\n\n\n> > Any chance you could your benchmark? I don't see as much of a regression vs 16\n> > as you...\n> \n> Sure. The results are promising for me too:\n>\n> nclients = 1, execution time = 13.743\n> nclients = 2, execution time = 7.552\n> nclients = 4, execution time = 4.758\n> nclients = 8, execution time = 3.035\n> nclients = 16, execution time = 2.172\n> nclients = 32, execution time = 1.959\n> nclients = 64, execution time = 1.819\n> nclients = 128, execution time = 1.583\n> nclients = 256, execution time = 1.631\n\nNice. We are consistently better than both 15 and \"post integer parsing 16\".\n\n\nI'm really a bit baffled at myself for not using this approach from the get\ngo.\n\nThis also would make it much more beneficial to use a BulkInsertState in\nnodeModifyTable.c, even without converting to table_multi_insert().\n\n\nI was tempted to optimize RelationAddBlocks() a bit, by not calling\nRelationExtensionLockWaiterCount() if we are already extending by\nMAX_BUFFERS_TO_EXTEND_BY. Before this patch, it was pretty much impossible to\nreach that case, because of the MAX_BUFFERED_* limits in copyfrom.c, but now\nit's more common. But that probably ought to be done only HEAD, not 16.\n\nA related oddity: RelationExtensionLockWaiterCount()->LockWaiterCount() uses\nan exclusive lock on the lock partition - which seems not at all necessary?\n\n\nUnless somebody sees a reason not to, I'm planning to push this?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 12 Aug 2023 13:05:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hello,\n\nOn 2023-Aug-12, Andres Freund wrote:\n\n> On 2023-08-08 12:45:05 +0900, Masahiko Sawada wrote:\n\n> > > Any chance you could your benchmark? I don't see as much of a regression vs 16\n> > > as you...\n> > \n> > Sure. The results are promising for me too:\n> >\n> > nclients = 1, execution time = 13.743\n> > nclients = 2, execution time = 7.552\n> > nclients = 4, execution time = 4.758\n> > nclients = 8, execution time = 3.035\n> > nclients = 16, execution time = 2.172\n> > nclients = 32, execution time = 1.959\n> > nclients = 64, execution time = 1.819\n> > nclients = 128, execution time = 1.583\n> > nclients = 256, execution time = 1.631\n> \n> Nice. We are consistently better than both 15 and \"post integer parsing 16\".\n\nSince the wins from this patch were replicated and it has been pushed, I\nunderstand that this open item can be marked as closed, so I've done\nthat.\n\nThanks,\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n",
"msg_date": "Wed, 16 Aug 2023 13:15:46 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "On 2023-08-16 13:15:46 +0200, Alvaro Herrera wrote:\n> Since the wins from this patch were replicated and it has been pushed, I\n> understand that this open item can be marked as closed, so I've done\n> that.\n\nThanks!\n\n\n",
"msg_date": "Wed, 16 Aug 2023 15:54:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-08-16 13:15:46 +0200, Alvaro Herrera wrote:\n>> Since the wins from this patch were replicated and it has been pushed, I\n>> understand that this open item can be marked as closed, so I've done\n>> that.\n\n> Thanks!\n\nIt turns out that this patch is what's making buildfarm member\nchipmunk fail in contrib/pg_visibility [1]. That's easily reproduced\nby running the test with shared_buffers = 10MB. I didn't dig further\nthan the \"git bisect\" result:\n\n$ git bisect bad\n82a4edabd272f70d044faec8cf7fd1eab92d9991 is the first bad commit\ncommit 82a4edabd272f70d044faec8cf7fd1eab92d9991\nAuthor: Andres Freund <[email protected]>\nDate: Mon Aug 14 09:54:03 2023 -0700\n\n hio: Take number of prior relation extensions into account\n\nbut I imagine the problem is that the patch's more aggressive\nrelation-extension heuristic is causing the table to have more\npages than the test case expects. Is it really a good idea\nfor such a heuristic to depend on shared_buffers? If you don't\nwant to change the heuristic then we'll have to find a way to\ntweak this test to avoid it.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-09-06%2014%3A14%3A51\n\n\n",
"msg_date": "Wed, 06 Sep 2023 18:01:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-09-06 18:01:53 -0400, Tom Lane wrote:\n> It turns out that this patch is what's making buildfarm member\n> chipmunk fail in contrib/pg_visibility [1]. That's easily reproduced\n> by running the test with shared_buffers = 10MB. I didn't dig further\n> than the \"git bisect\" result:\n\nAt first I was a bit confounded by not being able to reproduce this. My test\nenvironment had max_connections=110 for some other reason - and the problem\ndoesn't reproduce with that setting...\n\n\n> $ git bisect bad\n> 82a4edabd272f70d044faec8cf7fd1eab92d9991 is the first bad commit\n> commit 82a4edabd272f70d044faec8cf7fd1eab92d9991\n> Author: Andres Freund <[email protected]>\n> Date: Mon Aug 14 09:54:03 2023 -0700\n>\n> hio: Take number of prior relation extensions into account\n>\n> but I imagine the problem is that the patch's more aggressive\n> relation-extension heuristic is causing the table to have more\n> pages than the test case expects. Is it really a good idea\n> for such a heuristic to depend on shared_buffers?\n\nThe heuristic doesn't directly depend on shared buffers. However, the amount\nwe extend by is limited by needing to pin shared buffers covering all the\nnewly extended buffers.\n\nThat's what ends up limiting things here - shared_buffers = 10MB and\nmax_connections = 10 doesn't allow for a lot of buffers to be pinned\nconcurrently in each backend. Although perhaps LimitAdditionalPins() is a bit\ntoo conservative, due to not checking the private refcount array and just\nassuming REFCOUNT_ARRAY_ENTRIES.\n\n\n> If you don't want to change the heuristic then we'll have to find a way to\n> tweak this test to avoid it.\n\nWe could tweak LimitAdditionalPins() by checking PrivateRefCountArray instead\nof assuming the worst-case REFCOUNT_ARRAY_ENTRIES.\n\nHowever, it seems that the logic in the test is pretty fragile independent of\nthis issue? Different alignment, block size or an optimization of the page\nlayout could also break the test?\n\nUnfortunately a query that doesn't falsely alert in this case is a bit ugly,\ndue to needing to deal with the corner case of an empty page at the end:\n\nselect *\nfrom pg_visibility_map('copyfreeze')\nwhere\n (not all_visible or not all_frozen)\n -- deal with trailing empty pages due to potentially bulk-extending too aggressively\n and exists(SELECT * FROM copyfreeze WHERE ctid >= ('('||blkno||', 0)')::tid)\n;\n\nBefore 82a4edabd27 this situation was rare - you'd have needed contended\nextensions. But after it has become more common. I worry that that might cause\nother issues :(. OTOH, I think we'll need to extend way more aggressively at\nsome point...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Sep 2023 17:00:11 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-09-06 18:01:53 -0400, Tom Lane wrote:\n>> It turns out that this patch is what's making buildfarm member\n>> chipmunk fail in contrib/pg_visibility [1]. That's easily reproduced\n>> by running the test with shared_buffers = 10MB. I didn't dig further\n>> than the \"git bisect\" result:\n\n> At first I was a bit confounded by not being able to reproduce this. My test\n> environment had max_connections=110 for some other reason - and the problem\n> doesn't reproduce with that setting...\n\nI just did a git bisect run to discover when the failure documented\nin bug #18130 [1] started. And the answer is commit 82a4edabd.\nNow, it's pretty obvious that that commit didn't in itself cause\nproblems like this:\n\npostgres=# \\copy test from 'bug18130.csv' csv\nERROR: could not read block 5 in file \"base/5/17005\": read only 0 of 8192 bytes\nCONTEXT: COPY test, line 472: \"0,185647715,222655,489637,2,2023-07-31,9100.0000000,302110385,2023-07-30 14:16:36.750981+00,14026347...\"\n\nIMO there must be some very nasty bug lurking in the new\nmultiple-block extension logic, that happens to be exposed by this\ntest case with 82a4edabd's adjustments to the when-to-extend choices\nbut wasn't before that.\n\nTo save other people the trouble of extracting the in-line data\nin the bug submission, I've attached the test files I was using.\nThe DDL is simplified slightly from what was submitted. I'm not\nentirely sure why a no-op trigger is needed to provoke the bug...\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/18130-7a86a7356a75209d%40postgresql.org",
"msg_date": "Mon, 25 Sep 2023 15:42:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-09-25 15:42:26 -0400, Tom Lane wrote:\n> I just did a git bisect run to discover when the failure documented\n> in bug #18130 [1] started. And the answer is commit 82a4edabd.\n> Now, it's pretty obvious that that commit didn't in itself cause\n> problems like this:\n> \n> postgres=# \\copy test from 'bug18130.csv' csv\n> ERROR: could not read block 5 in file \"base/5/17005\": read only 0 of 8192 bytes\n> CONTEXT: COPY test, line 472: \"0,185647715,222655,489637,2,2023-07-31,9100.0000000,302110385,2023-07-30 14:16:36.750981+00,14026347...\"\n\nUgh.\n\n\n> IMO there must be some very nasty bug lurking in the new\n> multiple-block extension logic, that happens to be exposed by this\n> test case with 82a4edabd's adjustments to the when-to-extend choices\n> but wasn't before that.\n\n> To save other people the trouble of extracting the in-line data\n> in the bug submission, I've attached the test files I was using.\n\nThanks, looking at this now.\n\n\n> The DDL is simplified slightly from what was submitted. I'm not\n> entirely sure why a no-op trigger is needed to provoke the bug...\n\nA trigger might prevent using the multi-insert API, which would lead to\ndifferent execution paths...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 25 Sep 2023 12:48:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-09-25 12:48:30 -0700, Andres Freund wrote:\n> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:\n> > I just did a git bisect run to discover when the failure documented\n> > in bug #18130 [1] started. And the answer is commit 82a4edabd.\n> > Now, it's pretty obvious that that commit didn't in itself cause\n> > problems like this:\n> > \n> > postgres=# \\copy test from 'bug18130.csv' csv\n> > ERROR: could not read block 5 in file \"base/5/17005\": read only 0 of 8192 bytes\n> > CONTEXT: COPY test, line 472: \"0,185647715,222655,489637,2,2023-07-31,9100.0000000,302110385,2023-07-30 14:16:36.750981+00,14026347...\"\n> \n> Ugh.\n> \n> \n> > IMO there must be some very nasty bug lurking in the new\n> > multiple-block extension logic, that happens to be exposed by this\n> > test case with 82a4edabd's adjustments to the when-to-extend choices\n> > but wasn't before that.\n> \n> > To save other people the trouble of extracting the in-line data\n> > in the bug submission, I've attached the test files I was using.\n> \n> Thanks, looking at this now.\n\n(had to switch locations in between)\n\nUh, huh. The problem is that COPY uses a single BulkInsertState for multiple\npartitions. Which to me seems to run counter to the following comment:\n *\tThe caller can also provide a BulkInsertState object to optimize many\n *\tinsertions into the same relation. This keeps a pin on the current\n *\tinsertion target page (to save pin/unpin cycles) and also passes a\n *\tBULKWRITE buffer selection strategy object to the buffer manager.\n *\tPassing NULL for bistate selects the default behavior.\n\nThe reason this doesn't cause straight up corruption due to reusing a pin from\nanother relation is that b1ecb9b3fcfb added ReleaseBulkInsertStatePin() and a\ncall to it. But I didn't make ReleaseBulkInsertStatePin() reset the bulk\ninsertion state, which is what leads to the errors from the bug report.\n\nResetting the relevant BulkInsertState fields fixes the problem. But I'm not\nsure that's the right fix. ISTM that independent of whether we fix this via\nReleaseBulkInsertStatePin() resetting the fields or via not reusing\nBulkInsertState, we should add assertions defending against future issues like\nthis (e.g. by adding a relation field to BulkInsertState in cassert builds,\nand asserting that the relation is the same as in prior calls unless\nReleaseBulkInsertStatePin() has been called).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:37:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
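The minimal fix being weighed here would look roughly like the sketch below. Resetting state in ReleaseBulkInsertStatePin() is what the message describes, but the member names used for the cached free-block range are assumptions for illustration rather than a verified copy of hio.h, and the function presupposes the backend's usual headers.

void
ReleaseBulkInsertStatePin(BulkInsertState bistate)
{
	if (bistate->current_buf != InvalidBuffer)
		ReleaseBuffer(bistate->current_buf);
	bistate->current_buf = InvalidBuffer;

	/*
	 * Also forget the range of empty pages remembered from earlier bulk
	 * extensions, so it cannot be applied to a different relation after
	 * COPY switches to another partition.
	 */
	bistate->next_free = InvalidBlockNumber;	/* assumed member name */
	bistate->last_free = InvalidBlockNumber;	/* assumed member name */
}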
{
"msg_contents": "Andres Freund <[email protected]> writes:\n>> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:\n>>> I just did a git bisect run to discover when the failure documented\n>>> in bug #18130 [1] started. And the answer is commit 82a4edabd.\n\n> Uh, huh. The problem is that COPY uses a single BulkInsertState for multiple\n> partitions. Which to me seems to run counter to the following comment:\n> *\tThe caller can also provide a BulkInsertState object to optimize many\n> *\tinsertions into the same relation. This keeps a pin on the current\n> *\tinsertion target page (to save pin/unpin cycles) and also passes a\n> *\tBULKWRITE buffer selection strategy object to the buffer manager.\n> *\tPassing NULL for bistate selects the default behavior.\n\n> The reason this doesn't cause straight up corruption due to reusing a pin from\n> another relation is that b1ecb9b3fcfb added ReleaseBulkInsertStatePin() and a\n> call to it. But I didn't make ReleaseBulkInsertStatePin() reset the bulk\n> insertion state, which is what leads to the errors from the bug report.\n\n> Resetting the relevant BulkInsertState fields fixes the problem. But I'm not\n> sure that's the right fix. ISTM that independent of whether we fix this via\n> ReleaseBulkInsertStatePin() resetting the fields or via not reusing\n> BulkInsertState, we should add assertions defending against future issues like\n> this (e.g. by adding a relation field to BulkInsertState in cassert builds,\n> and asserting that the relation is the same as in prior calls unless\n> ReleaseBulkInsertStatePin() has been called).\n\nPing? We really ought to have a fix for this committed in time for\n16.1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Oct 2023 11:44:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single relation\n in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-12 11:44:09 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> >> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:\n> >>> I just did a git bisect run to discover when the failure documented\n> >>> in bug #18130 [1] started. And the answer is commit 82a4edabd.\n> \n> > Uh, huh. The problem is that COPY uses a single BulkInsertState for multiple\n> > partitions. Which to me seems to run counter to the following comment:\n> > *\tThe caller can also provide a BulkInsertState object to optimize many\n> > *\tinsertions into the same relation. This keeps a pin on the current\n> > *\tinsertion target page (to save pin/unpin cycles) and also passes a\n> > *\tBULKWRITE buffer selection strategy object to the buffer manager.\n> > *\tPassing NULL for bistate selects the default behavior.\n> \n> > The reason this doesn't cause straight up corruption due to reusing a pin from\n> > another relation is that b1ecb9b3fcfb added ReleaseBulkInsertStatePin() and a\n> > call to it. But I didn't make ReleaseBulkInsertStatePin() reset the bulk\n> > insertion state, which is what leads to the errors from the bug report.\n> \n> > Resetting the relevant BulkInsertState fields fixes the problem. But I'm not\n> > sure that's the right fix. ISTM that independent of whether we fix this via\n> > ReleaseBulkInsertStatePin() resetting the fields or via not reusing\n> > BulkInsertState, we should add assertions defending against future issues like\n> > this (e.g. by adding a relation field to BulkInsertState in cassert builds,\n> > and asserting that the relation is the same as in prior calls unless\n> > ReleaseBulkInsertStatePin() has been called).\n> \n> Ping? We really ought to have a fix for this committed in time for\n> 16.1.\n\nI kind of had hoped somebody would comment on the approach. Given that nobody\nhas, I'll push the minimal fix of resetting the state in\nReleaseBulkInsertStatePin(), even though I think architecturally that's not\ngreat.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Oct 2023 09:24:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-12 09:24:19 -0700, Andres Freund wrote:\n> On 2023-10-12 11:44:09 -0400, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > >> On 2023-09-25 15:42:26 -0400, Tom Lane wrote:\n> > >>> I just did a git bisect run to discover when the failure documented\n> > >>> in bug #18130 [1] started. And the answer is commit 82a4edabd.\n> > \n> > > Uh, huh. The problem is that COPY uses a single BulkInsertState for multiple\n> > > partitions. Which to me seems to run counter to the following comment:\n> > > *\tThe caller can also provide a BulkInsertState object to optimize many\n> > > *\tinsertions into the same relation. This keeps a pin on the current\n> > > *\tinsertion target page (to save pin/unpin cycles) and also passes a\n> > > *\tBULKWRITE buffer selection strategy object to the buffer manager.\n> > > *\tPassing NULL for bistate selects the default behavior.\n> > \n> > > The reason this doesn't cause straight up corruption due to reusing a pin from\n> > > another relation is that b1ecb9b3fcfb added ReleaseBulkInsertStatePin() and a\n> > > call to it. But I didn't make ReleaseBulkInsertStatePin() reset the bulk\n> > > insertion state, which is what leads to the errors from the bug report.\n> > \n> > > Resetting the relevant BulkInsertState fields fixes the problem. But I'm not\n> > > sure that's the right fix. ISTM that independent of whether we fix this via\n> > > ReleaseBulkInsertStatePin() resetting the fields or via not reusing\n> > > BulkInsertState, we should add assertions defending against future issues like\n> > > this (e.g. by adding a relation field to BulkInsertState in cassert builds,\n> > > and asserting that the relation is the same as in prior calls unless\n> > > ReleaseBulkInsertStatePin() has been called).\n> > \n> > Ping? We really ought to have a fix for this committed in time for\n> > 16.1.\n> \n> I kind of had hoped somebody would comment on the approach. Given that nobody\n> has, I'll push the minimal fix of resetting the state in\n> ReleaseBulkInsertStatePin(), even though I think architecturally that's not\n> great.\n\nI spent some time working on a test that shows the problem more cheaply than\nthe case upthread. I think it'd be desirable to have a test that's likely to\ncatch an issue like this fairly quickly. We've had other problems in this\nrealm before - there's only a single test that fails if I remove the\nReleaseBulkInsertStatePin() call, and I don't think that's guaranteed at all.\n\nI'm a bit on the fence on how large to make the relation. For me the bug\ntriggers when filling both relations up to the 3rd block, but how many rows\nthat takes is somewhat dependent on space utilization on the page and stuff.\n\nRight now the test uses data/desc.data and ends up with 328kB and 312kB in two\npartitions. Alternatively I could make the test create a new file to load with\ncopy that has fewer rows than data/desc.data - I didn't see another data file\nthat works conveniently and has fewer rows. The copy is reasonably fast, even\nunder something as expensive as rr (~60ms). So I'm inclined to just go with\nthat?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Oct 2023 10:39:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-13 10:39:10 -0700, Andres Freund wrote:\n> On 2023-10-12 09:24:19 -0700, Andres Freund wrote:\n> > I kind of had hoped somebody would comment on the approach. Given that nobody\n> > has, I'll push the minimal fix of resetting the state in\n> > ReleaseBulkInsertStatePin(), even though I think architecturally that's not\n> > great.\n> \n> I spent some time working on a test that shows the problem more cheaply than\n> the case upthread. I think it'd be desirable to have a test that's likely to\n> catch an issue like this fairly quickly. We've had other problems in this\n> realm before - there's only a single test that fails if I remove the\n> ReleaseBulkInsertStatePin() call, and I don't think that's guaranteed at all.\n> \n> I'm a bit on the fence on how large to make the relation. For me the bug\n> triggers when filling both relations up to the 3rd block, but how many rows\n> that takes is somewhat dependent on space utilization on the page and stuff.\n> \n> Right now the test uses data/desc.data and ends up with 328kB and 312kB in two\n> partitions. Alternatively I could make the test create a new file to load with\n> copy that has fewer rows than data/desc.data - I didn't see another data file\n> that works conveniently and has fewer rows. The copy is reasonably fast, even\n> under something as expensive as rr (~60ms). So I'm inclined to just go with\n> that?\n\nPatch with fix and test attached (0001).\n\nGiven how easy a missing ReleaseBulkInsertStatePin() can cause corruption (not\ndue to this bug, but the issue fixed in b1ecb9b3fcf), I think we should\nconsider adding an assertion along the lines of 0002 to HEAD. Perhaps adding a\nnew bufmgr.c function to avoid having to get the fields in the buffer tag we\ndon't care about. Or perhaps we should just promote the check to an elog, we\nalready call BufferGetBlockNumber(), using BufferGetTag() instead doesn't cost\nmuch more.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 13 Oct 2023 11:30:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
},
{
"msg_contents": "Hi,\n\nOn 2023-10-13 11:30:35 -0700, Andres Freund wrote:\n> On 2023-10-13 10:39:10 -0700, Andres Freund wrote:\n> > On 2023-10-12 09:24:19 -0700, Andres Freund wrote:\n> > > I kind of had hoped somebody would comment on the approach. Given that nobody\n> > > has, I'll push the minimal fix of resetting the state in\n> > > ReleaseBulkInsertStatePin(), even though I think architecturally that's not\n> > > great.\n> > \n> > I spent some time working on a test that shows the problem more cheaply than\n> > the case upthread. I think it'd be desirable to have a test that's likely to\n> > catch an issue like this fairly quickly. We've had other problems in this\n> > realm before - there's only a single test that fails if I remove the\n> > ReleaseBulkInsertStatePin() call, and I don't think that's guaranteed at all.\n> > \n> > I'm a bit on the fence on how large to make the relation. For me the bug\n> > triggers when filling both relations up to the 3rd block, but how many rows\n> > that takes is somewhat dependent on space utilization on the page and stuff.\n> > \n> > Right now the test uses data/desc.data and ends up with 328kB and 312kB in two\n> > partitions. Alternatively I could make the test create a new file to load with\n> > copy that has fewer rows than data/desc.data - I didn't see another data file\n> > that works conveniently and has fewer rows. The copy is reasonably fast, even\n> > under something as expensive as rr (~60ms). So I'm inclined to just go with\n> > that?\n> \n> Patch with fix and test attached (0001).\n\nPushed that.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Fri, 13 Oct 2023 19:33:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation on concurrent COPY into a single\n relation in PG16."
}
] |
[
{
"msg_contents": "Hi hackers,\nStartup process will record not existed pages into hash table invalid_page_tab during replaying WAL.\nAnd it would call XLogCheckInvalidPages after reaching consistent recovery state. Finally, it will\nPANIC or WARNING based on parameter ignore_invalid_pages if where's any invalid pages.\nNow I'm wondering why doesn't call XLogCheckInvalidPages during primary crash recovery?\nWhen primary node crash recovery, the mini recovery point is InvalidXLogRecPtr, so it skips\nconsistent recovery state stage. Startup process get no chance to call XLogCheckInvalidPages\nbefore exit. \nIn my opinion, invalid pages found in hash table invalid_page_tab means there's something\ninconsistent between WAL and data. But why primary node can ignore it? Can anyone help\nto answer?\n--\nBest regards,\nrogers.ww\n\nHi hackers,Startup process will record not existed pages into hash table invalid_page_tab during replaying WAL.And it would call XLogCheckInvalidPages after reaching consistent recovery state. Finally, it willPANIC or WARNING based on parameter ignore_invalid_pages if where's any invalid pages.Now I'm wondering why doesn't call XLogCheckInvalidPages during primary crash recovery?When primary node crash recovery, the mini recovery point is InvalidXLogRecPtr, so it skipsconsistent recovery state stage. Startup process get no chance to call XLogCheckInvalidPagesbefore exit. In my opinion, invalid pages found in hash table invalid_page_tab means there's somethinginconsistent between WAL and data. But why primary node can ignore it? Can anyone helpto answer?--Best regards,rogers.ww",
"msg_date": "Mon, 03 Jul 2023 18:11:51 +0800",
"msg_from": "\"=?UTF-8?B?546L5LyfKOWtpuW8iCk=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?d2h5IGRvZXNuJ3QgY2FsbCBYTG9nQ2hlY2tJbnZhbGlkUGFnZXMgZHVyaW5nIHByaW1hcnkg?=\n =?UTF-8?B?Y3Jhc2ggcmVjb3Zlcnk/?="
}
] |
[
{
"msg_contents": "Hi All,\nlogicalrep_message_type() is used to convert logical message type code\ninto name while prepared error context or details. Thus when this\nfunction is called probably an ERROR is already raised. If\nlogicalrep_message_type() gets an unknown message type, it will throw\nan error, which will suppress the error for which we are building\ncontext or details. That's not useful. I think instead\nlogicalrep_message_type() should return \"unknown\" when it encounters\nan unknown message type and let the original error message be thrown\nas is.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:00:29 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 3, 2023, at 7:30 AM, Ashutosh Bapat wrote:\n> logicalrep_message_type() is used to convert logical message type code\n> into name while prepared error context or details. Thus when this\n> function is called probably an ERROR is already raised. If\n> logicalrep_message_type() gets an unknown message type, it will throw\n> an error, which will suppress the error for which we are building\n> context or details. That's not useful. I think instead\n> logicalrep_message_type() should return \"unknown\" when it encounters\n> an unknown message type and let the original error message be thrown\n> as is.\n\nHmm. Good catch. The current behavior is:\n\nERROR: invalid logical replication message type \"X\" \nLOG: background worker \"logical replication worker\" (PID 71800) exited with exit code 1\n\n... that hides the details. After providing a default message type:\n\nERROR: invalid logical replication message type \"X\"\nCONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"???\" in transaction 796, finished at 0/16266F8\n\nMasahiko, since abc0910e2e0 is your patch maybe you want to take a look at it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 03 Jul 2023 10:01:31 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "Thanks Euler for the patch.\n\nOn Mon, Jul 3, 2023 at 6:32 PM Euler Taveira <[email protected]> wrote:\n>\n> Masahiko, since abc0910e2e0 is your patch maybe you want to take a look at it.\n>\n\nA couple of comments.\n\n-char *\n+const char *\n\nNice improvement.\n\n logicalrep_message_type(LogicalRepMsgType action)\n {\n switch (action)\n@@ -1256,9 +1256,7 @@ logicalrep_message_type(LogicalRepMsgType action)\n return \"STREAM ABORT\";\n case LOGICAL_REP_MSG_STREAM_PREPARE:\n return \"STREAM PREPARE\";\n+ default:\n+ return \"???\";\n }\n-\n- elog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\n-\n- return NULL; /* keep compiler quiet */\n\nThe switch is on action which is an enum. So without default it will\nprovide a compilation warning for missing enums. Adding \"default\" case\ndefeats that purpose. I think we should just return \"???\" from outside\nswitch block.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 3 Jul 2023 18:52:24 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 6:52 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Thanks Euler for the patch.\n>\n> On Mon, Jul 3, 2023 at 6:32 PM Euler Taveira <[email protected]> wrote:\n> >\n> > Masahiko, since abc0910e2e0 is your patch maybe you want to take a look at it.\n> >\n>\n> A couple of comments.\n>\n> -char *\n> +const char *\n>\n> Nice improvement.\n>\n> logicalrep_message_type(LogicalRepMsgType action)\n> {\n> switch (action)\n> @@ -1256,9 +1256,7 @@ logicalrep_message_type(LogicalRepMsgType action)\n> return \"STREAM ABORT\";\n> case LOGICAL_REP_MSG_STREAM_PREPARE:\n> return \"STREAM PREPARE\";\n> + default:\n> + return \"???\";\n> }\n> -\n> - elog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\n> -\n> - return NULL; /* keep compiler quiet */\n>\n> The switch is on action which is an enum. So without default it will\n> provide a compilation warning for missing enums. Adding \"default\" case\n> defeats that purpose. I think we should just return \"???\" from outside\n> switch block.\n>\n\nPFA patch.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 3 Jul 2023 19:27:50 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 3, 2023, at 10:57 AM, Ashutosh Bapat wrote:\n> On Mon, Jul 3, 2023 at 6:52 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > The switch is on action which is an enum. So without default it will\n> > provide a compilation warning for missing enums. Adding \"default\" case\n> > defeats that purpose. I think we should just return \"???\" from outside\n> > switch block.\n> >\n\nYeah. I don't think any gcc -Wswitch options have effect if a default is used\nso your suggestion is good for wrong/missing message types in the future.\n\n> PFA patch.\n\nWFM.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Jul 3, 2023, at 10:57 AM, Ashutosh Bapat wrote:On Mon, Jul 3, 2023 at 6:52 PM Ashutosh Bapat<[email protected]> wrote:>> The switch is on action which is an enum. So without default it will> provide a compilation warning for missing enums. Adding \"default\" case> defeats that purpose. I think we should just return \"???\" from outside> switch block.>Yeah. I don't think any gcc -Wswitch options have effect if a default is usedso your suggestion is good for wrong/missing message types in the future.PFA patch.WFM.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 03 Jul 2023 11:16:00 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 6:32 PM Euler Taveira <[email protected]> wrote:\n>\n> On Mon, Jul 3, 2023, at 7:30 AM, Ashutosh Bapat wrote:\n>\n> logicalrep_message_type() is used to convert logical message type code\n> into name while prepared error context or details. Thus when this\n> function is called probably an ERROR is already raised. If\n> logicalrep_message_type() gets an unknown message type, it will throw\n> an error, which will suppress the error for which we are building\n> context or details. That's not useful. I think instead\n> logicalrep_message_type() should return \"unknown\" when it encounters\n> an unknown message type and let the original error message be thrown\n> as is.\n>\n>\n> Hmm. Good catch. The current behavior is:\n>\n> ERROR: invalid logical replication message type \"X\"\n> LOG: background worker \"logical replication worker\" (PID 71800) exited with exit code 1\n>\n> ... that hides the details. After providing a default message type:\n>\n> ERROR: invalid logical replication message type \"X\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"???\" in transaction 796, finished at 0/16266F8\n>\n\nI think after returning \"???\" from logicalrep_message_type(), the\nabove is possible when we get the error: \"invalid logical replication\nmessage type \"X\"\" from apply_dispatch(), right? If so, then what about\nthe case when we forgot to handle some message in\nlogicalrep_message_type() but handled it in apply_dispatch()? Isn't it\nbetter to return the 'action' from the function\nlogicalrep_message_type() for unknown type? That way the information\ncould be a bit better and we can easily catch the code bug as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 10:34:46 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On 2023-Jul-05, Amit Kapila wrote:\n\n> I think after returning \"???\" from logicalrep_message_type(), the\n> above is possible when we get the error: \"invalid logical replication\n> message type \"X\"\" from apply_dispatch(), right? If so, then what about\n> the case when we forgot to handle some message in\n> logicalrep_message_type() but handled it in apply_dispatch()? Isn't it\n> better to return the 'action' from the function\n> logicalrep_message_type() for unknown type? That way the information\n> could be a bit better and we can easily catch the code bug as well.\n\nAre you suggesting that logicalrep_message_type should include the\nnumerical value of 'action' in the ??? message? Something like this:\n\n ERROR: invalid logical replication message type \"X\"\n CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (123)\" in transaction 796, finished at 0/16266F8\n\nI don't see why not -- seems easy enough, and might help somebody.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:56:39 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 4:26 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Jul-05, Amit Kapila wrote:\n>\n> > I think after returning \"???\" from logicalrep_message_type(), the\n> > above is possible when we get the error: \"invalid logical replication\n> > message type \"X\"\" from apply_dispatch(), right? If so, then what about\n> > the case when we forgot to handle some message in\n> > logicalrep_message_type() but handled it in apply_dispatch()? Isn't it\n> > better to return the 'action' from the function\n> > logicalrep_message_type() for unknown type? That way the information\n> > could be a bit better and we can easily catch the code bug as well.\n>\n> Are you suggesting that logicalrep_message_type should include the\n> numerical value of 'action' in the ??? message? Something like this:\n>\n> ERROR: invalid logical replication message type \"X\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (123)\" in transaction 796, finished at 0/16266F8\n>\n\nYes, something like that would be better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 18:17:54 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 5, 2023, at 7:56 AM, Alvaro Herrera wrote:\n> On 2023-Jul-05, Amit Kapila wrote:\n> \n> > I think after returning \"???\" from logicalrep_message_type(), the\n> > above is possible when we get the error: \"invalid logical replication\n> > message type \"X\"\" from apply_dispatch(), right? If so, then what about\n> > the case when we forgot to handle some message in\n> > logicalrep_message_type() but handled it in apply_dispatch()? Isn't it\n> > better to return the 'action' from the function\n> > logicalrep_message_type() for unknown type? That way the information\n> > could be a bit better and we can easily catch the code bug as well.\n> \n> Are you suggesting that logicalrep_message_type should include the\n> numerical value of 'action' in the ??? message? Something like this:\n> \n> ERROR: invalid logical replication message type \"X\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (123)\" in transaction 796, finished at 0/16266F8\n\nIsn't this numerical value already exposed in the error message (X = 88)?\nIn this example, it is:\n\nERROR: invalid logical replication message type \"X\"\nCONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n\nIMO it could be confusing if we provide two representations of the same data (X\nand 88). Since we already provide \"X\" I don't think we need \"88\". Another\noption, is to remove \"X\" from apply_dispatch() and add \"??? (88)\" to \nlogicalrep_message_type().\n\nOpinions?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Jul 5, 2023, at 7:56 AM, Alvaro Herrera wrote:On 2023-Jul-05, Amit Kapila wrote:> I think after returning \"???\" from logicalrep_message_type(), the> above is possible when we get the error: \"invalid logical replication> message type \"X\"\" from apply_dispatch(), right? If so, then what about> the case when we forgot to handle some message in> logicalrep_message_type() but handled it in apply_dispatch()? Isn't it> better to return the 'action' from the function> logicalrep_message_type() for unknown type? That way the information> could be a bit better and we can easily catch the code bug as well.Are you suggesting that logicalrep_message_type should include thenumerical value of 'action' in the ??? message? Something like this:ERROR: invalid logical replication message type \"X\"CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (123)\" in transaction 796, finished at 0/16266F8Isn't this numerical value already exposed in the error message (X = 88)?In this example, it is:ERROR: invalid logical replication message type \"X\"CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698IMO it could be confusing if we provide two representations of the same data (Xand 88). Since we already provide \"X\" I don't think we need \"88\". Anotheroption, is to remove \"X\" from apply_dispatch() and add \"??? (88)\" to logicalrep_message_type().Opinions?--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 05 Jul 2023 10:37:05 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 7:07 PM Euler Taveira <[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023, at 7:56 AM, Alvaro Herrera wrote:\n>\n> On 2023-Jul-05, Amit Kapila wrote:\n>\n> > I think after returning \"???\" from logicalrep_message_type(), the\n> > above is possible when we get the error: \"invalid logical replication\n> > message type \"X\"\" from apply_dispatch(), right? If so, then what about\n> > the case when we forgot to handle some message in\n> > logicalrep_message_type() but handled it in apply_dispatch()?\n\napply_dispatch() has a default case in switch() so it can\ntheoretically forget to handle some message type. I think we should\navoid default case in that function to catch missing message type in\nthat function. But if logicalrep_message_type() doesn't use default\ncase, it won't forget to handle a known message type. So what you are\nsuggesting is not possible.\n\nIt might happen that the upstream may send an unknown message type\nthat both apply_dispatch() and logicalrep_message_type() can not\nhandle.\n\n> ERROR: invalid logical replication message type \"X\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n>\n> IMO it could be confusing if we provide two representations of the same data (X\n> and 88). Since we already provide \"X\" I don't think we need \"88\". Another\n> option, is to remove \"X\" from apply_dispatch() and add \"??? (88)\" to\n> logicalrep_message_type().\n\nI think we don't need message type to be mentioned in the context for\nan error about invalid message type. I think what needs to be done is\nto set\napply_error_callback_arg.command = 0 before calling ereport in the\ndefault case of apply_dispatch().\napply_error_callback() will just return without providing a context.\nIf we need a context and have all the other necessary fields, we can\nimprove apply_error_callback() to provide context when\napply_error_callback_arg.command = 0 .\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 5 Jul 2023 19:54:36 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On 2023-Jul-05, Euler Taveira wrote:\n\n> Isn't this numerical value already exposed in the error message (X = 88)?\n> In this example, it is:\n> \n> ERROR: invalid logical replication message type \"X\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n> \n> IMO it could be confusing if we provide two representations of the same data (X\n> and 88). Since we already provide \"X\" I don't think we need \"88\". Another\n> option, is to remove \"X\" from apply_dispatch() and add \"??? (88)\" to \n> logicalrep_message_type().\n> \n> Opinions?\n\nThe CONTEXT message could theoretically appear in other error throws,\nnot just in \"invalid logical replication message\". So the duplicity is\nnot really a problem.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n",
"msg_date": "Wed, 5 Jul 2023 18:11:55 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On 2023-Jul-05, Alvaro Herrera wrote:\n\n> On 2023-Jul-05, Euler Taveira wrote:\n> \n> > Isn't this numerical value already exposed in the error message (X = 88)?\n> > In this example, it is:\n> > \n> > ERROR: invalid logical replication message type \"X\"\n> > CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n> > \n> > IMO it could be confusing if we provide two representations of the same data (X\n> > and 88). Since we already provide \"X\" I don't think we need \"88\".\n> \n> The CONTEXT message could theoretically appear in other error throws,\n> not just in \"invalid logical replication message\". So the duplicity is\n> not really a problem.\n\nAh, but you're thinking that if the message type is invalid, then it\nwill have been rejected in the \"invalid logical replication message\"\nstage, so no invalid message type will be reported. I guess there's a\npoint to that argument as well.\n\nHowever, I don't see that having the numerical ASCII value there causes\nany harm, even if the char value is already exposed in the other\nmessage. (I'm sure you'll agree that this is quite a minor issue.)\n\nI doubt that each of these two prints of the enum value showing\ndifferent formats is confusing. Yes, the enum is defined in terms of\nchar literals, but if an actually invalid message shows up with an\nuint32 value outside the printable ASCII range, the printout might\nbe ugly or useless.\n\n> > Another option, is to remove \"X\" from apply_dispatch() and add \"???\n> > (88)\" to logicalrep_message_type().\n\nNow *this* would be an actively bad idea IMO.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 5 Jul 2023 19:15:44 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 10:45 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Jul-05, Alvaro Herrera wrote:\n>\n> > On 2023-Jul-05, Euler Taveira wrote:\n> >\n> > > Isn't this numerical value already exposed in the error message (X = 88)?\n> > > In this example, it is:\n> > >\n> > > ERROR: invalid logical replication message type \"X\"\n> > > CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n> > >\n> > > IMO it could be confusing if we provide two representations of the same data (X\n> > > and 88). Since we already provide \"X\" I don't think we need \"88\".\n> >\n> > The CONTEXT message could theoretically appear in other error throws,\n> > not just in \"invalid logical replication message\". So the duplicity is\n> > not really a problem.\n>\n> Ah, but you're thinking that if the message type is invalid, then it\n> will have been rejected in the \"invalid logical replication message\"\n> stage, so no invalid message type will be reported.\n>\n\nYeah, but it would still be displayed both in context and the actual message.\n\n> I guess there's a\n> point to that argument as well.\n>\n\nOne point to note is that the user may also get confused if the actual\nERROR says message type as 'X' and the context says '???'. I feel in\nthis case duplicate information is better than different information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 14:58:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 7:54 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jul 5, 2023 at 7:07 PM Euler Taveira <[email protected]> wrote:\n> >\n> > On Wed, Jul 5, 2023, at 7:56 AM, Alvaro Herrera wrote:\n> >\n> > On 2023-Jul-05, Amit Kapila wrote:\n> >\n> > > I think after returning \"???\" from logicalrep_message_type(), the\n> > > above is possible when we get the error: \"invalid logical replication\n> > > message type \"X\"\" from apply_dispatch(), right? If so, then what about\n> > > the case when we forgot to handle some message in\n> > > logicalrep_message_type() but handled it in apply_dispatch()?\n>\n> apply_dispatch() has a default case in switch() so it can\n> theoretically forget to handle some message type. I think we should\n> avoid default case in that function to catch missing message type in\n> that function. But if logicalrep_message_type() doesn't use default\n> case, it won't forget to handle a known message type. So what you are\n> suggesting is not possible.\n>\n\nRight, but still I feel it would be better to return actual action.\n\n> It might happen that the upstream may send an unknown message type\n> that both apply_dispatch() and logicalrep_message_type() can not\n> handle.\n>\n> > ERROR: invalid logical replication message type \"X\"\n> > CONTEXT: processing remote data for replication origin \"pg_16638\" during message type \"??? (88)\" in transaction 796, finished at 0/1626698\n> >\n> > IMO it could be confusing if we provide two representations of the same data (X\n> > and 88). Since we already provide \"X\" I don't think we need \"88\". Another\n> > option, is to remove \"X\" from apply_dispatch() and add \"??? (88)\" to\n> > logicalrep_message_type().\n>\n> I think we don't need message type to be mentioned in the context for\n> an error about invalid message type. I think what needs to be done is\n> to set\n> apply_error_callback_arg.command = 0 before calling ereport in the\n> default case of apply_dispatch().\n> apply_error_callback() will just return without providing a context.\n>\n\nHmm, this looks a bit hacky to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:00:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 6:28 PM Amit Kapila <[email protected]> wrote:\n>\n> One point to note is that the user may also get confused if the actual\n> ERROR says message type as 'X' and the context says '???'. I feel in\n> this case duplicate information is better than different information.\n\nI agree. I think it would be better to show the same string like:\n\nERROR: invalid logical replication message type \"??? (88)\"\nCONTEXT: processing remote data for replication origin \"pg_16638\"\nduring message type \"??? (88)\" in transaction 796, finished at\n0/1626698\n\nSince the numerical value is important only in invalid message type\ncases, how about using a format like \"??? (88)\" in unknown message\ntype cases, in both error and context messages?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 11 Jul 2023 17:05:50 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On 2023-Jul-11, Masahiko Sawada wrote:\n\n> Since the numerical value is important only in invalid message type\n> cases, how about using a format like \"??? (88)\" in unknown message\n> type cases, in both error and context messages?\n\n+1\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)\n\n\n",
"msg_date": "Tue, 11 Jul 2023 12:21:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 1:36 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jul 6, 2023 at 6:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > One point to note is that the user may also get confused if the actual\n> > ERROR says message type as 'X' and the context says '???'. I feel in\n> > this case duplicate information is better than different information.\n>\n> I agree. I think it would be better to show the same string like:\n>\n> ERROR: invalid logical replication message type \"??? (88)\"\n> CONTEXT: processing remote data for replication origin \"pg_16638\"\n> during message type \"??? (88)\" in transaction 796, finished at\n> 0/1626698\n>\n> Since the numerical value is important only in invalid message type\n> cases, how about using a format like \"??? (88)\" in unknown message\n> type cases, in both error and context messages?\n>\n\nDo you have something like attached in mind?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 15 Jul 2023 12:57:39 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Sat, Jul 15, 2023, at 4:27 AM, Amit Kapila wrote:\n> Do you have something like attached in mind?\n\nWFM. I would change the comment that says\n\nThis function is called to provide context in the error ...\n\nto\n\nThis message provides context in the error ...\n\nbecause this comment is not at the beginning of the function but *before* the\nmessage.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sat, Jul 15, 2023, at 4:27 AM, Amit Kapila wrote:Do you have something like attached in mind?WFM. I would change the comment that saysThis function is called to provide context in the error ...toThis message provides context in the error ...because this comment is not at the beginning of the function but *before* themessage.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Sat, 15 Jul 2023 10:45:40 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 7:16 PM Euler Taveira <[email protected]> wrote:\n>\n> On Sat, Jul 15, 2023, at 4:27 AM, Amit Kapila wrote:\n>\n> Do you have something like attached in mind?\n>\n>\n> WFM. I would change the comment that says\n>\n> This function is called to provide context in the error ...\n>\n> to\n>\n> This message provides context in the error ...\n>\n> because this comment is not at the beginning of the function but *before* the\n> message.\n>\n\nSounds reasonable to me. I'll make this modification before pushing\nunless there are more comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 07:47:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 12:57 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Since the numerical value is important only in invalid message type\n> > cases, how about using a format like \"??? (88)\" in unknown message\n> > type cases, in both error and context messages?\n> >\n>\n> Do you have something like attached in mind?\n\nPrologue of psprintf() says\n\n* Errors are not returned to the caller, but are reported via elog(ERROR)\n* in the backend, or printf-to-stderr-and-exit() in frontend builds.\n* One should therefore think twice about using this in libpq.\n\nIf an error occurs in psprintf(), it will throw an error which will\noverride the original error. I think we should avoid any stuff that\nthrows further errors.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 17 Jul 2023 19:19:42 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On 2023-Jul-17, Ashutosh Bapat wrote:\n\n> Prologue of psprintf() says\n> \n> * Errors are not returned to the caller, but are reported via elog(ERROR)\n> * in the backend, or printf-to-stderr-and-exit() in frontend builds.\n> * One should therefore think twice about using this in libpq.\n> \n> If an error occurs in psprintf(), it will throw an error which will\n> override the original error. I think we should avoid any stuff that\n> throws further errors.\n\nOoh, yeah, this is an excellent point. I agree it would be better to\navoid psprintf() here and anything that adds more failure modes. Let's\njust do the thing in the original patch you submitted, to ensure all\nthese strings are compile-time constants; that's likely the most robust.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Mon, 17 Jul 2023 16:24:52 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Jul-17, Ashutosh Bapat wrote:\n>\n> > Prologue of psprintf() says\n> >\n> > * Errors are not returned to the caller, but are reported via elog(ERROR)\n> > * in the backend, or printf-to-stderr-and-exit() in frontend builds.\n> > * One should therefore think twice about using this in libpq.\n> >\n> > If an error occurs in psprintf(), it will throw an error which will\n> > override the original error. I think we should avoid any stuff that\n> > throws further errors.\n>\n> Ooh, yeah, this is an excellent point. I agree it would be better to\n> avoid psprintf() here and anything that adds more failure modes.\n>\n\nI have tried to check whether we have such usage in any other error\ncallbacks. Though I haven't scrutinized each and every error callback,\nI found a few of them where an error can be raised. For example,\n\nrm_redo_error_callback()->initStringInfo()\nCopyFromErrorCallback()->limit_printout_length()\nshared_buffer_write_error_callback()->relpathperm()->relpathbackend()->GetRelationPath()->psprintf()\n\n> Let's\n> just do the thing in the original patch you submitted, to ensure all\n> these strings are compile-time constants; that's likely the most robust.\n>\n\nSo will we be okay with something like the below:\n\nERROR: invalid logical replication message type \"??? (88)\"\nCONTEXT: processing remote data for replication origin \"pg_16638\"\nduring message type \"???\" in transaction 796, finished at\n0/1626698\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Jul 2023 08:45:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 12:15 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Jul 17, 2023 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n> >\n> > On 2023-Jul-17, Ashutosh Bapat wrote:\n> >\n> > > Prologue of psprintf() says\n> > >\n> > > * Errors are not returned to the caller, but are reported via elog(ERROR)\n> > > * in the backend, or printf-to-stderr-and-exit() in frontend builds.\n> > > * One should therefore think twice about using this in libpq.\n> > >\n> > > If an error occurs in psprintf(), it will throw an error which will\n> > > override the original error. I think we should avoid any stuff that\n> > > throws further errors.\n> >\n> > Ooh, yeah, this is an excellent point. I agree it would be better to\n> > avoid psprintf() here and anything that adds more failure modes.\n> >\n>\n> I have tried to check whether we have such usage in any other error\n> callbacks. Though I haven't scrutinized each and every error callback,\n> I found a few of them where an error can be raised. For example,\n>\n> rm_redo_error_callback()->initStringInfo()\n> CopyFromErrorCallback()->limit_printout_length()\n> shared_buffer_write_error_callback()->relpathperm()->relpathbackend()->GetRelationPath()->psprintf()\n>\n> > Let's\n> > just do the thing in the original patch you submitted, to ensure all\n> > these strings are compile-time constants; that's likely the most robust.\n> >\n\nOr can we use snprintf() writing \"??? (%d)\" to a fixed length char[8 +\n11] allocated on the stack instead?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Jul 2023 13:57:19 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 10:27 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Jul 18, 2023 at 12:15 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Jul 17, 2023 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> >\n> > I have tried to check whether we have such usage in any other error\n> > callbacks. Though I haven't scrutinized each and every error callback,\n> > I found a few of them where an error can be raised. For example,\n> >\n> > rm_redo_error_callback()->initStringInfo()\n> > CopyFromErrorCallback()->limit_printout_length()\n> > shared_buffer_write_error_callback()->relpathperm()->relpathbackend()->GetRelationPath()->psprintf()\n> >\n> > > Let's\n> > > just do the thing in the original patch you submitted, to ensure all\n> > > these strings are compile-time constants; that's likely the most robust.\n> > >\n>\n> Or can we use snprintf() writing \"??? (%d)\" to a fixed length char[8 +\n> 11] allocated on the stack instead?\n>\n\nIn the above size calculation, shouldn't it be 7 + 11 where 7 is for\n(3 (???) + 1 for space + 2 for () + 1 for terminating null char) and\n11 is for %d? BTW, this avoids dynamic allocation of the err string in\nlogicalrep_message_type() but we can't return a locally allocated\nstring, so do you think we should change the prototype of the function\nto get this as an argument and then use it both for valid and invalid\ncases? I think if there is some simpler way to achieve this then fine,\notherwise, let's return a constant string like \"???\" from\nlogicalrep_message_type() for the invalid action.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 Jul 2023 09:01:34 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 9:01 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jul 18, 2023 at 10:27 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Jul 18, 2023 at 12:15 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 17, 2023 at 7:54 PM Alvaro Herrera <[email protected]> wrote:\n> > > >\n> > >\n> > > I have tried to check whether we have such usage in any other error\n> > > callbacks. Though I haven't scrutinized each and every error callback,\n> > > I found a few of them where an error can be raised. For example,\n> > >\n> > > rm_redo_error_callback()->initStringInfo()\n> > > CopyFromErrorCallback()->limit_printout_length()\n> > > shared_buffer_write_error_callback()->relpathperm()->relpathbackend()->GetRelationPath()->psprintf()\n> > >\n> > > > Let's\n> > > > just do the thing in the original patch you submitted, to ensure all\n> > > > these strings are compile-time constants; that's likely the most robust.\n> > > >\n> >\n> > Or can we use snprintf() writing \"??? (%d)\" to a fixed length char[8 +\n> > 11] allocated on the stack instead?\n> >\n>\n> In the above size calculation, shouldn't it be 7 + 11 where 7 is for\n> (3 (???) + 1 for space + 2 for () + 1 for terminating null char) and\n> 11 is for %d? BTW, this avoids dynamic allocation of the err string in\n> logicalrep_message_type() but we can't return a locally allocated\n> string, so do you think we should change the prototype of the function\n> to get this as an argument and then use it both for valid and invalid\n> cases?\n\nThere are other places in the code which do something similar by using\nstatically allocated buffers like static char xya[SIZE]. We could do\nthat here. The caller may decide whether to pstrdup this buffer\nfurther or just use it one time e.g. as an elog or printf argument.\n\nAs I said before, we should not even print message type in the error\ncontext because it's unknown. Repeating that twice is useless. That\nwill need some changes to apply_error_callback() though.\nBut I am fine with \"???\" as well.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:08:38 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 10:08 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Jul 19, 2023 at 9:01 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jul 18, 2023 at 10:27 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > Or can we use snprintf() writing \"??? (%d)\" to a fixed length char[8 +\n> > > 11] allocated on the stack instead?\n> > >\n> >\n> > In the above size calculation, shouldn't it be 7 + 11 where 7 is for\n> > (3 (???) + 1 for space + 2 for () + 1 for terminating null char) and\n> > 11 is for %d? BTW, this avoids dynamic allocation of the err string in\n> > logicalrep_message_type() but we can't return a locally allocated\n> > string, so do you think we should change the prototype of the function\n> > to get this as an argument and then use it both for valid and invalid\n> > cases?\n>\n> There are other places in the code which do something similar by using\n> statically allocated buffers like static char xya[SIZE]. We could do\n> that here. The caller may decide whether to pstrdup this buffer\n> further or just use it one time e.g. as an elog or printf argument.\n>\n\nOkay, changed it accordingly. Currently, we call it only from\nerrcontext, so it looks reasonable to me to use static here.\n\n> As I said before, we should not even print message type in the error\n> context because it's unknown. Repeating that twice is useless. That\n> will need some changes to apply_error_callback() though.\n> But I am fine with \"???\" as well.\n>\n\nI think in the end it won't make a big difference. So, I would like to\ngo with Sawada-San's suggestion to keep the message type consistent in\nactual error and error context unless that requires bigger changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Jul 2023 09:10:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 9:10 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Jul 19, 2023 at 10:08 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Wed, Jul 19, 2023 at 9:01 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Jul 18, 2023 at 10:27 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > Or can we use snprintf() writing \"??? (%d)\" to a fixed length char[8 +\n> > > > 11] allocated on the stack instead?\n> > > >\n> > >\n> > > In the above size calculation, shouldn't it be 7 + 11 where 7 is for\n> > > (3 (???) + 1 for space + 2 for () + 1 for terminating null char) and\n> > > 11 is for %d? BTW, this avoids dynamic allocation of the err string in\n> > > logicalrep_message_type() but we can't return a locally allocated\n> > > string, so do you think we should change the prototype of the function\n> > > to get this as an argument and then use it both for valid and invalid\n> > > cases?\n> >\n> > There are other places in the code which do something similar by using\n> > statically allocated buffers like static char xya[SIZE]. We could do\n> > that here. The caller may decide whether to pstrdup this buffer\n> > further or just use it one time e.g. as an elog or printf argument.\n> >\n>\n> Okay, changed it accordingly.\n>\n\noops, forgot to attach the patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 20 Jul 2023 09:11:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 9:11 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Okay, changed it accordingly.\n> >\n>\n> oops, forgot to attach the patch.\n>\n\nWFM\n\n@@ -3367,7 +3367,7 @@ apply_dispatch(StringInfo s)\n default:\n ereport(ERROR,\n (errcode(ERRCODE_PROTOCOL_VIOLATION),\n- errmsg(\"invalid logical replication message type\n\\\"%c\\\"\", action)));\n+ errmsg(\"invalid logical replication message type\n\\\"??? (%d)\\\"\", action)));\n\nI think we should report character here since that's what is visible\nin the code and also the message types are communicated as characters\nnot integers. Makes debugging easier. Context may report integer as an\nadditional data point.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 20 Jul 2023 22:09:31 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 1:39 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Thu, Jul 20, 2023 at 9:11 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Okay, changed it accordingly.\n> > >\n> >\n> > oops, forgot to attach the patch.\n> >\n>\n> WFM\n>\n> @@ -3367,7 +3367,7 @@ apply_dispatch(StringInfo s)\n> default:\n> ereport(ERROR,\n> (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> - errmsg(\"invalid logical replication message type\n> \\\"%c\\\"\", action)));\n> + errmsg(\"invalid logical replication message type\n> \\\"??? (%d)\\\"\", action)));\n>\n> I think we should report character here since that's what is visible\n> in the code and also the message types are communicated as characters\n> not integers. Makes debugging easier. Context may report integer as an\n> additional data point.\n\nI think it could confuse us if an invalid message is not a printable character.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Jul 2023 09:57:30 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 6:28 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jul 21, 2023 at 1:39 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Thu, Jul 20, 2023 at 9:11 AM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > Okay, changed it accordingly.\n> > > >\n> > >\n> > > oops, forgot to attach the patch.\n> > >\n> >\n> > WFM\n> >\n> > @@ -3367,7 +3367,7 @@ apply_dispatch(StringInfo s)\n> > default:\n> > ereport(ERROR,\n> > (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > - errmsg(\"invalid logical replication message type\n> > \\\"%c\\\"\", action)));\n> > + errmsg(\"invalid logical replication message type\n> > \\\"??? (%d)\\\"\", action)));\n> >\n> > I think we should report character here since that's what is visible\n> > in the code and also the message types are communicated as characters\n> > not integers. Makes debugging easier. Context may report integer as an\n> > additional data point.\n>\n> I think it could confuse us if an invalid message is not a printable character.\n>\n\nRight. I'll push and backpatch this till 15 by Tuesday unless you guys\nthink otherwise.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jul 2023 10:17:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 10:18 AM Amit Kapila <[email protected]> wrote:\n\n> >\n> > I think it could confuse us if an invalid message is not a printable character.\n> >\n>\n\nThat's a good point.\n\n> Right. I'll push and backpatch this till 15 by Tuesday unless you guys\n> think otherwise.\n\nWFM.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 24 Jul 2023 18:14:28 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: logicalrep_message_type throws an error"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 6:14 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Sat, Jul 22, 2023 at 10:18 AM Amit Kapila <[email protected]> wrote:\n>\n> > Right. I'll push and backpatch this till 15 by Tuesday unless you guys\n> > think otherwise.\n>\n> WFM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jul 2023 16:17:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: logicalrep_message_type throws an error"
}
] |
[
{
"msg_contents": "Normally it doesn't really matter which dbname is used in the connection\nstring that pg_basebackup and other physical replication CLI tools use.\nThe reason being, that physical replication does not work at the\ndatabase level, but instead at the server level. So you will always get\nthe data for all databases.\n\nHowever, when there's a proxy, such as PgBouncer, in between the client\nand the server, then it might very well matter. Because this proxy might\nwant to route the connection to a different server depending on the\ndbname parameter in the startup packet.\n\nThis patch changes the creation of the connection string key value\npairs, so that the following command will actually include\ndbname=postgres in the startup packet to the server:\n\npg_basebackup --dbname 'dbname=postgres port=6432' -D dump\n\nThis also applies to other physical replication CLI tools like\npg_receivewal.\n\nTo address the security issue described in CVE-2016-5424 it\nnow only passes expand_dbname=true when the tool did not\nreceive a connection_string argument.\n\nI tested that the change worked on this PgBouncer PR of mine:\nhttps://github.com/pgbouncer/pgbouncer/pull/876",
"msg_date": "Mon, 3 Jul 2023 14:22:40 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 13:23, Jelte Fennema <[email protected]> wrote:\n>\n> Normally it doesn't really matter which dbname is used in the connection\n> string that pg_basebackup and other physical replication CLI tools use.\n> The reason being, that physical replication does not work at the\n> database level, but instead at the server level. So you will always get\n> the data for all databases.\n>\n> However, when there's a proxy, such as PgBouncer, in between the client\n> and the server, then it might very well matter. Because this proxy might\n> want to route the connection to a different server depending on the\n> dbname parameter in the startup packet.\n>\n> This patch changes the creation of the connection string key value\n> pairs, so that the following command will actually include\n> dbname=postgres in the startup packet to the server:\n>\n> pg_basebackup --dbname 'dbname=postgres port=6432' -D dump\n\nI guess my immediate question is, should backups be taken through\nPgBouncer? It seems beyond PgBouncer's remit.\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 13:43:35 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Wed, Jul 5, 2023, at 9:43 AM, Thom Brown wrote:\n> I guess my immediate question is, should backups be taken through\n> PgBouncer? It seems beyond PgBouncer's remit.\n\nOne of the PgBouncer's missions is to be a transparent proxy.\n\nSometimes you cannot reach out the database directly due to a security policy.\nI've heard this backup question a few times. IMO if dbname doesn't matter for\nreaching the server directly, I don't see a problem relaxing this restriction\nto support this use case. We just need to document that dbname will be ignored\nif specified. Other connection poolers might also benefit from it.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Jul 5, 2023, at 9:43 AM, Thom Brown wrote:I guess my immediate question is, should backups be taken throughPgBouncer? It seems beyond PgBouncer's remit.One of the PgBouncer's missions is to be a transparent proxy.Sometimes you cannot reach out the database directly due to a security policy.I've heard this backup question a few times. IMO if dbname doesn't matter forreaching the server directly, I don't see a problem relaxing this restrictionto support this use case. We just need to document that dbname will be ignoredif specified. Other connection poolers might also benefit from it.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 05 Jul 2023 11:01:08 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 16:01, Euler Taveira <[email protected]> wrote:\n> One of the PgBouncer's missions is to be a transparent proxy.\n>\n> Sometimes you cannot reach out the database directly due to a security policy.\n\nIndeed the transparent proxy use case is where replication through\npgbouncer makes sense. There's quite some reasons to set up PgBouncer\nlike such a proxy apart from security policies. Some others that come\nto mind are:\n- load balancer layer of pgbouncers\n- transparent failovers\n- transparent database moves\n\nAnd in all of those cases its nice for a user to use a single\nconnection string/hostname. Instead of having to think: Oh yeah, for\nbackups, I need to use this other one.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 17:50:01 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 16:50, Jelte Fennema <[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 16:01, Euler Taveira <[email protected]> wrote:\n> > One of the PgBouncer's missions is to be a transparent proxy.\n> >\n> > Sometimes you cannot reach out the database directly due to a security policy.\n>\n> Indeed the transparent proxy use case is where replication through\n> pgbouncer makes sense. There's quite some reasons to set up PgBouncer\n> like such a proxy apart from security policies. Some others that come\n> to mind are:\n> - load balancer layer of pgbouncers\n> - transparent failovers\n> - transparent database moves\n>\n> And in all of those cases its nice for a user to use a single\n> connection string/hostname. Instead of having to think: Oh yeah, for\n> backups, I need to use this other one.\n\nOkay, understood. In that case, please remember to write changes to\nthe pg_basebackup docs page explaining how the dbname value is ignored\nunder normal usage.\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 19:08:49 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 20:09, Thom Brown <[email protected]> wrote:\n> Okay, understood. In that case, please remember to write changes to\n> the pg_basebackup docs page explaining how the dbname value is ignored\n\n\nI updated the wording in the docs for pg_basebackup and pg_receivewal.\nThey now clarify that Postgres itself doesn't care if there's a\ndatabase name in the connection string, but that a proxy might.",
"msg_date": "Wed, 5 Jul 2023 21:39:40 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello,\r\n\r\nI've reviewed your patch and it applies and builds without error. When testing this patch I was slightly confused as to what its purpose was, after testing it I now understand. Initially, I thought this was a change to add database-level replication. I would suggest some clarifications to the documentation such as changing:\r\n\r\n\"supplying a specific database name in the connection string won't cause PostgreSQL to behave any differently.\"\r\n\r\nto \r\n\r\n\"supplying a specific database name in the connection string won't cause pg_basebackup to behave any differently.\"\r\n\r\nI believe this better illustrates that we are referring to the actual pg_basebackup utility and how this parameter is only used for proxies and bears no impact on what pg_basebackup is actually doing. It also would remove any confusion about database replication I had prior.\r\n\r\nThere is also a small typo in the same documentation:\r\n\r\n\"However, if you are connecting to PostgreSQL through a proxy, then it's possible that this proxy does use the supplied databasename to make certain decisions, such as to which cluster to route the connection.\"\r\n\r\n\"databasename\" is just missing a space.\r\n\r\nOther than that, everything looks good.\r\n\r\nRegards,\r\n\r\nTristen",
"msg_date": "Mon, 28 Aug 2023 21:49:41 +0000",
"msg_from": "Tristen Raab <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 23:50, Tristen Raab <[email protected]> wrote:\n> I've reviewed your patch and it applies and builds without error. When testing this patch I was slightly confused as to what its purpose was, after testing it I now understand. Initially, I thought this was a change to add database-level replication. I would suggest some clarifications to the documentation such as changing:\n\nThanks for the review. I've updated the documentation to make it\nclearer (using some of your suggestions but also some others)",
"msg_date": "Tue, 29 Aug 2023 15:55:05 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "Hi Jelte\n\nOn 29.08.23 15:55, Jelte Fennema wrote:\n> Thanks for the review. I've updated the documentation to make it\n> clearer (using some of your suggestions but also some others)\n\nThis patch applies and builds cleanly, and the documentation is very clear.\n\nI tested it using the 'replication-support' branch from your github fork:\n\n/pg_basebackup --dbname \"port=6432 user=postgres dbname=foo\" -D /tmp/dump1/\n\npgbouncer log:\n\n/2023-08-30 00:50:52.866 CEST [811770] LOG C-0x555fbd65bf40: \n(nodb)/postgres@unix(811776):6432 login attempt: db=foo user=postgres \ntls=no replication=yes/\n\nHowever, pgbouncer closes with a segmentation fault, so I couldn't test \nthe result of pg_basebackup itself - but I guess it isn't the issue here.\n\nOther than that, everything looks good to me.\n\nJim\n\n\n\n\n\n\nHi Jelte\n\nOn 29.08.23\n 15:55, Jelte Fennema wrote:\n\nThanks for the review. I've updated the\n documentation to make it\n \nclearer (using some of your suggestions but also some others)\n\n\nThis patch applies and builds cleanly, and\n the documentation is very clear.\n\nI tested it using the\n 'replication-support' branch from your github fork:\n\npg_basebackup --dbname \"port=6432\n user=postgres dbname=foo\" -D /tmp/dump1\n\npgbouncer log:\n2023-08-30 00:50:52.866 CEST [811770]\n LOG C-0x555fbd65bf40: (nodb)/postgres@unix(811776):6432 login\n attempt: db=foo user=postgres tls=no replication=yes\n\nHowever, pgbouncer closes with a\n segmentation fault, so I couldn't test the result of\n pg_basebackup itself - but I guess it isn't the issue here.\n\nOther than that, everything looks good to\n me.\n\nJim",
"msg_date": "Wed, 30 Aug 2023 01:01:39 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On Wed, 30 Aug 2023 at 01:01, Jim Jones <[email protected]> wrote:\n> However, pgbouncer closes with a segmentation fault, so I couldn't test the result of pg_basebackup itself - but I guess it isn't the issue here.\n\nOops it indeed seemed like I made an unintended change when handling\ndatabase names that did not exist in pgbouncer.conf when you used\nauth_type=hba. I pushed a fix for that now to the replication-support\nbranch. Feel free to try again. But as you said it's unrelated to the\npostgres patch.\n\n\n",
"msg_date": "Wed, 30 Aug 2023 14:11:07 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "On 30.08.23 14:11, Jelte Fennema wrote:\n> Oops it indeed seemed like I made an unintended change when handling\n> database names that did not exist in pgbouncer.conf when you used\n> auth_type=hba. I pushed a fix for that now to the replication-support\n> branch. Feel free to try again. But as you said it's unrelated to the\n> postgres patch.\n\nNice! Now it seems to work as expected :)\n\n$ /usr/local/postgres-dev/bin/pg_basebackup --dbname \"host=127.0.0.1 \nport=6432 user=jim dbname=foo\" -X fetch -v -l testlabel -D /tmp/dump\npg_basebackup: initiating base backup, waiting for checkpoint to complete\npg_basebackup: checkpoint completed\npg_basebackup: write-ahead log start point: 0/12000028 on timeline 1\npg_basebackup: write-ahead log end point: 0/12000100\npg_basebackup: syncing data to disk ...\npg_basebackup: renaming backup_manifest.tmp to backup_manifest\npg_basebackup: base backup completed\n\npgbouncer log:\n\n2023-08-30 21:04:30.225 CEST [785989] LOG C-0x55fbee0f50b0: \nfoo/[email protected]:49764 login attempt: db=foo user=jim tls=no \nreplication=yes\n2023-08-30 21:04:30.225 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 new connection to server (from 127.0.0.1:34400)\n2023-08-30 21:04:30.226 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 closing because: replication client was closed \n(age=0s)\n2023-08-30 21:04:30.226 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 new connection to server (from 127.0.0.1:34408)\n2023-08-30 21:04:30.227 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 closing because: replication client was closed \n(age=0s)\n2023-08-30 21:04:30.227 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 new connection to server (from 127.0.0.1:34410)\n2023-08-30 21:04:30.228 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 closing because: replication client was closed \n(age=0s)\n2023-08-30 21:04:30.228 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 new connection to server (from 127.0.0.1:34418)\n2023-08-30 21:04:30.309 CEST [785989] LOG S-0x55fbee0fc560: \nfoo/[email protected]:5432 closing because: replication client was closed \n(age=0s)\n2023-08-30 21:04:30.309 CEST [785989] LOG C-0x55fbee0f50b0: \nfoo/[email protected]:49764 closing because: client close request (age=0s)\n\nJim\n\n\n\n",
"msg_date": "Wed, 30 Aug 2023 21:08:33 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "Attached is a new version with some slightly updated wording in the docs",
"msg_date": "Thu, 31 Aug 2023 11:01:07 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "> On 31 Aug 2023, at 11:01, Jelte Fennema <[email protected]> wrote:\n\n> Attached is a new version with some slightly updated wording in the docs\n\nI had a look at this Ready for Committer entry in the CF and it seems to strike\na balance between useful in certain cases and non-intrusive in others.\n\nUnless something sticks out in a second pass over it I will go ahead and apply\nit.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 18 Sep 2023 14:11:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
},
{
"msg_contents": "> On 18 Sep 2023, at 14:11, Daniel Gustafsson <[email protected]> wrote:\n\n> Unless something sticks out in a second pass over it I will go ahead and apply\n> it.\n\nAnd applied, closing the CF entry.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 14:53:11 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow specifying a dbname in pg_basebackup connection string"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed eelpout failed with this in stats.sql, in the pg_stat_io tests\nadded a couple months ago [1]:\n\n@@ -1415,7 +1415,7 @@\n :io_sum_vac_strategy_after_reuses >\n:io_sum_vac_strategy_before_reuses;\n ?column? | ?column?\n ----------+----------\n- t | t\n+ t | f\n (1 row)\n\nThe failure seems completely unrelated to the new commit, so this seems\nlike some randomness / timing issue. The failing bit does this:\n\n----------------------------\nVACUUM (PARALLEL 0, BUFFER_USAGE_LIMIT 128) test_io_vac_strategy;\nSELECT pg_stat_force_next_flush();\nSELECT sum(reuses) AS reuses, sum(reads) AS reads\n FROM pg_stat_io WHERE context = 'vacuum' \\gset io_sum_vac_strategy_after_\nSELECT :io_sum_vac_strategy_after_reads > :io_sum_vac_strategy_before_reads,\n :io_sum_vac_strategy_after_reuses >\n:io_sum_vac_strategy_before_reuses;\n----------------------------\n\nSo I'm wondering if pg_stat_force_next_flush() is enough - AFAICS this\nonly sets some flag for the *next* pgstat_report_stat() call, but how do\nwe know that happens before the query execution?\n\nShouldn't there be something like pg_stat_flush() that actually does the\nflushing, instead of just setting the flag?\n\n\nregards\n\n\n[1]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=eelpout&dt=2023-07-03%2011%3A09%3A13\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jul 2023 15:45:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is a pg_stat_force_next_flush() call sufficient for regression tests?"
},
{
"msg_contents": "At Mon, 3 Jul 2023 15:45:52 +0200, Tomas Vondra <[email protected]> wrote in \n> So I'm wondering if pg_stat_force_next_flush() is enough - AFAICS this\n> only sets some flag for the *next* pgstat_report_stat() call, but how do\n> we know that happens before the query execution?\n> \n> Shouldn't there be something like pg_stat_flush() that actually does the\n> flushing, instead of just setting the flag?\n\nThe reason for the function is that pg_stat_flush() is supposed not to\nbe called within a transaction. AFAICS pg_stat_force_next_flush()\ntakes effect after a successfull transaction end and before the next\ncommand execution.\n\nTo verify this, I put in an assertion to check that the flag gets\nconsumed before reading of pg_stat_io (a.diff), then ran pgbench with\nthe attached custom script. As expected, it didn't fire at all during\nseveral trials. When I wrapped all lines in t.sql within a\nbegin-commit block, the assertion fired off immediately as a matter of\ncourse.\n\nIs there any chance concurrent backends or some other things can\nactually hinder the backend from reusing buffers?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 04 Jul 2023 11:29:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is a pg_stat_force_next_flush() call sufficient for regression\n tests?"
},
{
"msg_contents": "\n\nOn 7/4/23 04:29, Kyotaro Horiguchi wrote:\n> At Mon, 3 Jul 2023 15:45:52 +0200, Tomas Vondra <[email protected]> wrote in \n>> So I'm wondering if pg_stat_force_next_flush() is enough - AFAICS this\n>> only sets some flag for the *next* pgstat_report_stat() call, but how do\n>> we know that happens before the query execution?\n>>\n>> Shouldn't there be something like pg_stat_flush() that actually does the\n>> flushing, instead of just setting the flag?\n> \n> The reason for the function is that pg_stat_flush() is supposed not to\n> be called within a transaction. AFAICS pg_stat_force_next_flush()\n> takes effect after a successfull transaction end and before the next\n> command execution.\n> \n\nSure, if we're supposed to report the stats only at the end of a\ntransaction, that makes sense. But then why didn't that happen here?\n\n> To verify this, I put in an assertion to check that the flag gets\n> consumed before reading of pg_stat_io (a.diff), then ran pgbench with\n> the attached custom script. As expected, it didn't fire at all during\n> several trials. When I wrapped all lines in t.sql within a\n> begin-commit block, the assertion fired off immediately as a matter of\n> course.\n> \n\nIf I understand correctly, this just verifies that\n\n1) if everything goes well, we report the stats at the end of the\ntransaction (otherwise the case without BEGIN/COMMIT would fail)\n\n2) we don't report stats when in a transaction (with the BEGIN/COMMIT)\n\nBut the eelpout failure clearly suggests this may misbehave.\n\n> Is there any chance concurrent backends or some other things can\n> actually hinder the backend from reusing buffers?\n> \n\nNo idea. I'm not very familiar with the reworked pgstat system, but\neither the pgstat_report_stat() was not called for some reason, or it\ndecided there's nothing to report (i.e. have_iostats==false). Not sure\nwhy would that happen.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jul 2023 20:04:40 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is a pg_stat_force_next_flush() call sufficient for regression\n tests?"
}
] |
[
{
"msg_contents": "Greetings,\n\nFor ISO and German dates the order DMY is completely ignored on output but\nused for input.\n\ntest=# set datestyle to 'ISO,DMY';\nSET\nselect '7-8-2023'::date\ntest-# ;\n date\n------------\n 2023-08-07\n(1 row)\n\ntest=# set datestyle to 'ISO,MDY';\nSET\ntest=# select '7-8-2023'::date\n;\n date\n------------\n 2023-07-08\n(1 row)\n\nNote regardless of how the ordering is specified it is always output as\nYMD\n\nDave Cramer\n\nGreetings,For ISO and German dates the order DMY is completely ignored on output but used for input.test=# set datestyle to 'ISO,DMY';SETselect '7-8-2023'::datetest-# ; date------------ 2023-08-07(1 row)test=# set datestyle to 'ISO,MDY';SETtest=# select '7-8-2023'::date; date------------ 2023-07-08(1 row)Note regardless of how the ordering is specified it is always output as YMDDave Cramer",
"msg_date": "Mon, 3 Jul 2023 14:06:18 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is DATESTYLE, ordering ignored for output but used for input ?"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 20:06, Dave Cramer <[email protected]> wrote:\n>\n> Greetings,\n>\n> For ISO and German dates the order DMY is completely ignored on output but used for input.\n>\n> test=# set datestyle to 'ISO,DMY';\n> SET\n> select '7-8-2023'::date\n> test-# ;\n> date\n> ------------\n> 2023-08-07\n> (1 row)\n>\n> test=# set datestyle to 'ISO,MDY';\n> SET\n> test=# select '7-8-2023'::date\n> ;\n> date\n> ------------\n> 2023-07-08\n> (1 row)\n>\n> Note regardless of how the ordering is specified it is always output as\n> YMD\n\nWouldn't that be because ISO only has one correct ordering of the day\nand month fields? I fail to see why we'd output non-ISO-formatted date\nstrings when ISO format is requested. I believe the reason is the same\nfor German dates - Germany's official (or most common?) date\nformatting has a single ordering of these fields, which is also the\nordering that we supply.\n\nThe code comments also seem to hint to this:\n\n> case USE_ISO_DATES:\n> case USE_XSD_DATES:\n> /* compatible with ISO date formats */\n\n> case USE_GERMAN_DATES:\n> /* German-style date format */\n\nThis has been this way since the code for ISO was originally committed\nin July of '97 with 8507ddb9 and the GERMAN formatting which was added\nin December of '97 as D.M/Y with 352b3687 (and later that month was\nupdated to D.M.Y with ca23837a).\nSadly, the -hackers archives don't seem to have any mails from that\ntime period, so I couldn't find much info on the precise rationale\naround this behavior.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\nPS. That was some interesting digging into the history of the date\nformatting module.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 23:13:32 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is DATESTYLE,\n ordering ignored for output but used for input ?"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 17:13, Matthias van de Meent <\[email protected]> wrote:\n\n> On Mon, 3 Jul 2023 at 20:06, Dave Cramer <[email protected]> wrote:\n> >\n> > Greetings,\n> >\n> > For ISO and German dates the order DMY is completely ignored on output\n> but used for input.\n> >\n> > test=# set datestyle to 'ISO,DMY';\n> > SET\n> > select '7-8-2023'::date\n> > test-# ;\n> > date\n> > ------------\n> > 2023-08-07\n> > (1 row)\n> >\n> > test=# set datestyle to 'ISO,MDY';\n> > SET\n> > test=# select '7-8-2023'::date\n> > ;\n> > date\n> > ------------\n> > 2023-07-08\n> > (1 row)\n> >\n> > Note regardless of how the ordering is specified it is always output as\n> > YMD\n>\n> Wouldn't that be because ISO only has one correct ordering of the day\n> and month fields? I fail to see why we'd output non-ISO-formatted date\n> strings when ISO format is requested. I believe the reason is the same\n> for German dates - Germany's official (or most common?) date\n> formatting has a single ordering of these fields, which is also the\n> ordering that we supply.\n>\n\nseems rather un-intuitive that it works for some datestyles and not for\nothers\n\n\n>\n> The code comments also seem to hint to this:\n>\n> > case USE_ISO_DATES:\n> > case USE_XSD_DATES:\n> > /* compatible with ISO date formats */\n>\n> > case USE_GERMAN_DATES:\n> > /* German-style date format */\n>\n> This has been this way since the code for ISO was originally committed\n> in July of '97 with 8507ddb9 and the GERMAN formatting which was added\n> in December of '97 as D.M/Y with 352b3687 (and later that month was\n> updated to D.M.Y with ca23837a).\n> Sadly, the -hackers archives don't seem to have any mails from that\n> time period, so I couldn't find much info on the precise rationale\n> around this behavior.\n>\n\nYeah, I couldn't find much either.\n\n\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech/)\n>\n> PS. That was some interesting digging into the history of the date\n> formatting module.\n>\n\nAlways interesting digging into the history of the project.\n\nDave\n\nOn Mon, 3 Jul 2023 at 17:13, Matthias van de Meent <[email protected]> wrote:On Mon, 3 Jul 2023 at 20:06, Dave Cramer <[email protected]> wrote:\n>\n> Greetings,\n>\n> For ISO and German dates the order DMY is completely ignored on output but used for input.\n>\n> test=# set datestyle to 'ISO,DMY';\n> SET\n> select '7-8-2023'::date\n> test-# ;\n> date\n> ------------\n> 2023-08-07\n> (1 row)\n>\n> test=# set datestyle to 'ISO,MDY';\n> SET\n> test=# select '7-8-2023'::date\n> ;\n> date\n> ------------\n> 2023-07-08\n> (1 row)\n>\n> Note regardless of how the ordering is specified it is always output as\n> YMD\n\nWouldn't that be because ISO only has one correct ordering of the day\nand month fields? I fail to see why we'd output non-ISO-formatted date\nstrings when ISO format is requested. I believe the reason is the same\nfor German dates - Germany's official (or most common?) 
date\nformatting has a single ordering of these fields, which is also the\nordering that we supply.seems rather un-intuitive that it works for some datestyles and not for others \n\nThe code comments also seem to hint to this:\n\n> case USE_ISO_DATES:\n> case USE_XSD_DATES:\n> /* compatible with ISO date formats */\n\n> case USE_GERMAN_DATES:\n> /* German-style date format */\n\nThis has been this way since the code for ISO was originally committed\nin July of '97 with 8507ddb9 and the GERMAN formatting which was added\nin December of '97 as D.M/Y with 352b3687 (and later that month was\nupdated to D.M.Y with ca23837a).\nSadly, the -hackers archives don't seem to have any mails from that\ntime period, so I couldn't find much info on the precise rationale\naround this behavior.Yeah, I couldn't find much either. \n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\nPS. That was some interesting digging into the history of the date\nformatting module.Always interesting digging into the history of the project.Dave",
"msg_date": "Tue, 4 Jul 2023 11:05:45 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is DATESTYLE,\n ordering ignored for output but used for input ?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've run into an issue of a name clash with system libraries. Specifically,\nthe `Size` type seems to be just an alias for `size_t` and, at least on\nmacOS, it clashes with the core SDK, as it is also defined by MacTypes.h,\nwhich is used by some of the libraries one may want to use from within a\nPostgres extension.\n\nWhile in my case, I believe I have a workaround, I couldn't find any\nrationale as to why we might want to have this alias and not use size_t.\nAny insight on this would be appreciated.\n\nWould there be any sense in changing it all to size_t or renaming it to\nsomething else?\n\nI understand that they will break some extensions, so if we don't want them\nto have to go through with the renaming, can we enable backward\ncompatibility with a macro?\n\nIf there's a willingness to try this out, I am happy to prepare a patch.\n\n-- \nY.\n\nHi,I've run into an issue of a name clash with system libraries. Specifically, the `Size` type seems to be just an alias for `size_t` and, at least on macOS, it clashes with the core SDK, as it is also defined by MacTypes.h, which is used by some of the libraries one may want to use from within a Postgres extension.While in my case, I believe I have a workaround, I couldn't find any rationale as to why we might want to have this alias and not use size_t. Any insight on this would be appreciated.Would there be any sense in changing it all to size_t or renaming it to something else?I understand that they will break some extensions, so if we don't want them to have to go through with the renaming, can we enable backward compatibility with a macro?If there's a willingness to try this out, I am happy to prepare a patch.-- Y.",
"msg_date": "Mon, 3 Jul 2023 11:32:37 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "> On 3 Jul 2023, at 20:32, Yurii Rashkovskii <[email protected]> wrote:\n\n> I couldn't find any rationale as to why we might want to have this alias and not use size_t. Any insight on this would be appreciated.\n\nThis used to be a typedef for unsigned int a very long time ago.\n\n> Would there be any sense in changing it all to size_t or renaming it to something else?\n> \n> I understand that they will break some extensions, so if we don't want them to have to go through with the renaming, can we enable backward compatibility with a macro?\n> \n> If there's a willingness to try this out, I am happy to prepare a patch.\n\nThis has been discussed a number of times in the past, and the conclusion from\nlast time IIRC was to use size_t for new code and only change the existing\ninstances when touched for other reasons to avoid churn.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:46:33 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 6:46 AM Daniel Gustafsson <[email protected]> wrote:\n> > On 3 Jul 2023, at 20:32, Yurii Rashkovskii <[email protected]> wrote:\n> > If there's a willingness to try this out, I am happy to prepare a patch.\n>\n> This has been discussed a number of times in the past, and the conclusion from\n> last time IIRC was to use size_t for new code and only change the existing\n> instances when touched for other reasons to avoid churn.\n\nOne such earlier discussion:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D1eA0vsgA7-2oigKzqg10YeXoPWiS-fCuQRDLwwmgMXag%40mail.gmail.com\n\nI personally wouldn't mind if we just flipped to standard types\neverywhere, but I guess it wouldn't help with your problem with\nextensions on macOS as you probably also want to target released\nbranches, not just master/17+. But renaming in the back branches\ndoesn't sound like something we'd do...\n\n\n",
"msg_date": "Tue, 4 Jul 2023 07:02:35 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "Hi Thomas,\n\nOn Mon, Jul 3, 2023 at 12:03 PM Thomas Munro <[email protected]> wrote:\n\n> On Tue, Jul 4, 2023 at 6:46 AM Daniel Gustafsson <[email protected]> wrote:\n> > > On 3 Jul 2023, at 20:32, Yurii Rashkovskii <[email protected]> wrote:\n> > > If there's a willingness to try this out, I am happy to prepare a\n> patch.\n> >\n> > This has been discussed a number of times in the past, and the\n> conclusion from\n> > last time IIRC was to use size_t for new code and only change the\n> existing\n> > instances when touched for other reasons to avoid churn.\n>\n> One such earlier discussion:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAEepm%3D1eA0vsgA7-2oigKzqg10YeXoPWiS-fCuQRDLwwmgMXag%40mail.gmail.com\n>\n> I personally wouldn't mind if we just flipped to standard types\n> everywhere, but I guess it wouldn't help with your problem with\n> extensions on macOS as you probably also want to target released\n> branches, not just master/17+. But renaming in the back branches\n> doesn't sound like something we'd do...\n>\n\nOf course, it would have been great to have it backported in the ideal\nworld, but it isn't realistic, as you say.\n\nThat being said, going ahead with the global renaming of Size to size_t\nwill mostly eliminate this clash in, say, five years when old versions will\nbe gone. At least it'll be fixed then. Otherwise, it'll never be fixed at\nall. To me, having the problem gone in the future beats having the problem\nforever.\n\n\n-- \nY.\n\nHi Thomas,On Mon, Jul 3, 2023 at 12:03 PM Thomas Munro <[email protected]> wrote:On Tue, Jul 4, 2023 at 6:46 AM Daniel Gustafsson <[email protected]> wrote:\n> > On 3 Jul 2023, at 20:32, Yurii Rashkovskii <[email protected]> wrote:\n> > If there's a willingness to try this out, I am happy to prepare a patch.\n>\n> This has been discussed a number of times in the past, and the conclusion from\n> last time IIRC was to use size_t for new code and only change the existing\n> instances when touched for other reasons to avoid churn.\n\nOne such earlier discussion:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D1eA0vsgA7-2oigKzqg10YeXoPWiS-fCuQRDLwwmgMXag%40mail.gmail.com\n\nI personally wouldn't mind if we just flipped to standard types\neverywhere, but I guess it wouldn't help with your problem with\nextensions on macOS as you probably also want to target released\nbranches, not just master/17+. But renaming in the back branches\ndoesn't sound like something we'd do...\nOf course, it would have been great to have it backported in the ideal world, but it isn't realistic, as you say. That being said, going ahead with the global renaming of Size to size_t will mostly eliminate this clash in, say, five years when old versions will be gone. At least it'll be fixed then. Otherwise, it'll never be fixed at all. To me, having the problem gone in the future beats having the problem forever.-- Y.",
"msg_date": "Mon, 3 Jul 2023 12:14:00 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "> On 3 Jul 2023, at 21:14, Yurii Rashkovskii <[email protected]> wrote:\n\n> That being said, going ahead with the global renaming of Size to size_t will mostly eliminate this clash in, say, five years when old versions will be gone. At least it'll be fixed then. Otherwise, it'll never be fixed at all. To me, having the problem gone in the future beats having the problem forever.\n\nI would also like all Size instances gone, but the cost during backpatching\nwill likely be very high. There are ~1300 or so of them in the code, and\nthat's a lot of potential conflicts during the coming 5 years of backpatches.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 21:20:49 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "Daniel,\n\nOn Mon, Jul 3, 2023 at 12:20 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 3 Jul 2023, at 21:14, Yurii Rashkovskii <[email protected]> wrote:\n>\n> > That being said, going ahead with the global renaming of Size to size_t\n> will mostly eliminate this clash in, say, five years when old versions will\n> be gone. At least it'll be fixed then. Otherwise, it'll never be fixed at\n> all. To me, having the problem gone in the future beats having the problem\n> forever.\n>\n> I would also like all Size instances gone, but the cost during backpatching\n> will likely be very high. There are ~1300 or so of them in the code, and\n> that's a lot of potential conflicts during the coming 5 years of\n> backpatches.\n>\n>\nI understand. How about a workaround for extension builders? Something like\n\n```\n/* Use this if you run into Size type redefinition */\n#ifdef DONT_TYPEDEF_SIZE\n#define Size size_t\n#else\ntypedef size_t Size;\n#endif\n```\nThis way, extension developers can specify DONT_TYPEDEF_SIZE. However, this\nwould have to be backported, but to minimal/no effect if I am not missing\nanything.\n\nNot beautiful, but better than freezing the status quo forever?\n\n-- \nY.\n\nDaniel,On Mon, Jul 3, 2023 at 12:20 PM Daniel Gustafsson <[email protected]> wrote:> On 3 Jul 2023, at 21:14, Yurii Rashkovskii <[email protected]> wrote:\n\n> That being said, going ahead with the global renaming of Size to size_t will mostly eliminate this clash in, say, five years when old versions will be gone. At least it'll be fixed then. Otherwise, it'll never be fixed at all. To me, having the problem gone in the future beats having the problem forever.\n\nI would also like all Size instances gone, but the cost during backpatching\nwill likely be very high. There are ~1300 or so of them in the code, and\nthat's a lot of potential conflicts during the coming 5 years of backpatches.I understand. How about a workaround for extension builders? Something like```/* Use this if you run into Size type redefinition */#ifdef DONT_TYPEDEF_SIZE#define Size size_t#elsetypedef size_t Size;#endif```This way, extension developers can specify DONT_TYPEDEF_SIZE. However, this would have to be backported, but to minimal/no effect if I am not missing anything.Not beautiful, but better than freezing the status quo forever?-- Y.",
"msg_date": "Mon, 3 Jul 2023 12:37:55 -0700",
"msg_from": "Yurii Rashkovskii <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 3 Jul 2023, at 20:32, Yurii Rashkovskii <[email protected]> wrote:\n>> I couldn't find any rationale as to why we might want to have this alias and not use size_t. Any insight on this would be appreciated.\n\n> This used to be a typedef for unsigned int a very long time ago.\n\nI'm fairly sure that Size dates from before we could expect the system\nheaders to provide size_t everywhere.\n\n> This has been discussed a number of times in the past, and the conclusion from\n> last time IIRC was to use size_t for new code and only change the existing\n> instances when touched for other reasons to avoid churn.\n\nYeah. The code-churn costs of s/Size/size_t/g outweigh the possible\ngain, at least from our admittedly project-centric point of view.\nBut I don't have a whole lot of sympathy for arguments about \"this\nother code I'd like to also use has its own definition for Size\",\nbecause you could potentially make that complaint about just about\nevery typedef we've got. If you have conflicts like that, you have\nto resolve them by methods like #define hacks or factoring your code\nso it doesn't need to include Postgres headers in the same files that\ninclude $other-project-headers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jul 2023 18:20:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Size vs size_t or, um, PgSize?"
}
] |
[
{
"msg_contents": "At PgCon 2023 in Ottawa we had an Unconference session on Table Access\nMethods [1]\n\nOne thing that was briefly mentioned (but is missing from the notes)\nis need to have a sample API client in contrib/ , both for having a\n2nd user for API to make it more likely that non-heap AMs are doable\nand also to serve as an easy starting point for someone interested in\ndeveloping a new AM.\n\nThere are a few candidates which could be lightweight enough for this\n\n* in-memory temp tables, especially if you specify max table size at\ncreation and/or limit data types which can be used.\n\n* \"overlay tables\" - tables which \"overlay\" another - possibly\nread-only - table and store only changed rows and tombstones for\ndeletions. (this likely would make more sense as a FDW itself as Table\nAM currently knows nothing about Primary Keys and these are likely\nneeded for overlays)\n\n* Table AM as a (pl/)Python Class - this is inspired by the amazing\nMulticorn [2] FDW-in-Python tool which made it ridiculously easy to\nexpose anything (mailbox, twitter feed, git commit history,\nyou-name-it) as a Foreign Table\n\n\nCreating any of these seems to be a project of size suitable for a\nstudent course project or maybe Google Summer of Code [3].\n\n\nIncluded Mark Dilger directly to this mail as he mentioned he has a\nPerl script that makes a functional copy of heap AM that can be\ncompiled as installed as custom AM.\n\n@mark - maybe you can create 3 boilerplate Table AMs for the above\nnamed `mem_am`, `overlay_am` and `py3_am` and we could put them\nsomewhere for interested parties to play with ?\n\n[1] https://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference#Table_AMs\n[2] https://multicorn.org/ - unfortunately unmaintained since 2016,\nbut there are some forks supporting later PostgreSQL versions\n[3] https://wiki.postgresql.org/wiki/GSoC - Google Summer of Code\n\n---\nBest Regards\nHannu\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:33:32 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Including a sample Table Access Method with core code"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 08:33:32PM +0200, Hannu Krosing wrote:\n> One thing that was briefly mentioned (but is missing from the notes)\n> is need to have a sample API client in contrib/ , both for having a\n> 2nd user for API to make it more likely that non-heap AMs are doable\n> and also to serve as an easy starting point for someone interested in\n> developing a new AM.\n\nThat sounds like a fair thing to have, though templates may live\nbetter under src/test/modules.\n\n> There are a few candidates which could be lightweight enough for this\n> \n> * in-memory temp tables, especially if you specify max table size at\n> creation and/or limit data types which can be used.\n>\n> * \"overlay tables\" - tables which \"overlay\" another - possibly\n> read-only - table and store only changed rows and tombstones for\n> deletions. (this likely would make more sense as a FDW itself as Table\n> AM currently knows nothing about Primary Keys and these are likely\n> needed for overlays)\n> \n> * Table AM as a (pl/)Python Class - this is inspired by the amazing\n> Multicorn [2] FDW-in-Python tool which made it ridiculously easy to\n> expose anything (mailbox, twitter feed, git commit history,\n> you-name-it) as a Foreign Table\n\nI cannot say how simple that is without seeing the code, but limiting\nthe use of an AM to be linked to a single session sounds like a\nconcept simple enough, limiting its relpersistence on the way. One\nthing that may be also interesting is something that does not go\nthrough the Postgres buffer pool.\n\n> Included Mark Dilger directly to this mail as he mentioned he has a\n> Perl script that makes a functional copy of heap AM that can be\n> compiled as installed as custom AM.\n\nSimilar discussion has happened in 640c198 related to the creation of \ndummy_index_am, where the argument is that such a module needs to\nprovide value in testing some of the core internals. dummy_index_am\ndid so for reloptions on indexes because there was not much coverage\nfor that part of the system.\n\n> @mark - maybe you can create 3 boilerplate Table AMs for the above\n> named `mem_am`, `overlay_am` and `py3_am` and we could put them\n> somewhere for interested parties to play with ?\n\nNot sure if that's worth counting, but I also have a table AM template\nstored in my plugin repo:\nhttps://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n\nIt does as much as its name states, being able to eat all the data fed\nto it.\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 15:00:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Including a sample Table Access Method with core code"
},
{
"msg_contents": ">\n> > Included Mark Dilger directly to this mail as he mentioned he has a\n> > Perl script that makes a functional copy of heap AM that can be\n> > compiled as installed as custom AM.\n>\n> Similar discussion has happened in 640c198 related to the creation of\n> dummy_index_am, where the argument is that such a module needs to\n> provide value in testing some of the core internals. dummy_index_am\n> did so for reloptions on indexes because there was not much coverage\n> for that part of the system.\n>\n> > @mark - maybe you can create 3 boilerplate Table AMs for the above\n> > named `mem_am`, `overlay_am` and `py3_am` and we could put them\n> > somewhere for interested parties to play with ?\n>\n> Not sure if that's worth counting, but I also have a table AM template\n> stored in my plugin repo:\n> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n>\n\nAnd based on your `blackhole_am` I've sent a patch [1] to add a\n`dummy_table_am` for testing purposes.\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/CAFcNs+pcU2ib=jvjNZNboD+M2tHO+vD77C_YZJ2rsGR0Tp35mg@mail.gmail.com\n\n--\nFabrízio de Royes Mello\n\n >> > Included Mark Dilger directly to this mail as he mentioned he has a> > Perl script that makes a functional copy of heap AM that can be> > compiled as installed as custom AM.>> Similar discussion has happened in 640c198 related to the creation of> dummy_index_am, where the argument is that such a module needs to> provide value in testing some of the core internals. dummy_index_am> did so for reloptions on indexes because there was not much coverage> for that part of the system.>> > @mark - maybe you can create 3 boilerplate Table AMs for the above> > named `mem_am`, `overlay_am` and `py3_am` and we could put them> > somewhere for interested parties to play with ?>> Not sure if that's worth counting, but I also have a table AM template> stored in my plugin repo:> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am>And based on your `blackhole_am` I've sent a patch [1] to add a `dummy_table_am` for testing purposes.Regards,[1] https://www.postgresql.org/message-id/CAFcNs+pcU2ib=jvjNZNboD+M2tHO+vD77C_YZJ2rsGR0Tp35mg@mail.gmail.com--Fabrízio de Royes Mello",
"msg_date": "Wed, 5 Jul 2023 17:22:32 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Including a sample Table Access Method with core code"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nMy colleague, Ashwin, pointed out to me that brininsert's per-tuple init\nof the revmap access struct can have non-trivial overhead.\n\nTurns out he is right. We are saving 24 bytes of memory per-call for\nthe access struct, and a bit on buffer/locking overhead, with the\nattached patch.\n\nThe implementation ties the revmap cleanup as a MemoryContext callback\nto the IndexInfo struct's MemoryContext, as there is no teardown\nfunction provided by the index AM for end-of-insert-command.\n\nTest setup (local Ubuntu workstation):\n\n# Drop caches and restart between each run:\nsudo sh -c \"sync; echo 3 > /proc/sys/vm/drop_caches;\"\npg_ctl -D /usr/local/pgsql/data/ -l /tmp/logfile restart\n\n\\timing\nDROP TABLE heap;\nCREATE TABLE heap(i int);\nCREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\nINSERT INTO heap SELECT 1 FROM generate_series(1, 200000000);\n\nResults:\nWe see an improvement for 100M tuples and an even bigger improvement for\n200M tuples.\n\nMaster (29cf61ade3f245aa40f427a1d6345287ef77e622):\n\ntest=# INSERT INTO heap SELECT 1 FROM generate_series(1, 100000000);\nINSERT 0 100000000\nTime: 222762.159 ms (03:42.762)\n\n-- 3 runs\ntest=# INSERT INTO heap SELECT 1 FROM generate_series(1, 200000000);\nINSERT 0 200000000\nTime: 471168.181 ms (07:51.168)\nTime: 457071.883 ms (07:37.072)\nTimeL 486969.205 ms (08:06.969)\n\nBranch:\n\ntest2=# INSERT INTO heap SELECT 1 FROM generate_series(1, 100000000);\nINSERT 0 100000000\nTime: 200046.519 ms (03:20.047)\n\n-- 3 runs\ntest2=# INSERT INTO heap SELECT 1 FROM generate_series(1, 200000000);\nINSERT 0 200000000\nTime: 369041.832 ms (06:09.042)\nTime: 365483.382 ms (06:05.483)\nTime: 375506.144 ms (06:15.506)\n\n# Profiled backend running INSERT of 100000000 rows\nsudo perf record -p 11951 --call-graph fp sleep 180\n\nPlease see attached perf diff between master and branch. We see that we\nsave on a bit of overhead from brinRevmapInitialize(),\nbrinRevmapTerminate() and lock routines.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Mon, 3 Jul 2023 15:21:58 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "brininsert optimization opportunity"
},
{
"msg_contents": "On 2023-Jul-03, Soumyadeep Chakraborty wrote:\n\n> My colleague, Ashwin, pointed out to me that brininsert's per-tuple init\n> of the revmap access struct can have non-trivial overhead.\n> \n> Turns out he is right. We are saving 24 bytes of memory per-call for\n> the access struct, and a bit on buffer/locking overhead, with the\n> attached patch.\n\nHmm, yeah, I remember being bit bothered by this repeated\ninitialization. Your patch looks reasonable to me. I would set\nbistate->bs_rmAccess to NULL in the cleanup callback, just to be sure.\nAlso, please add comments atop these two new functions, to explain what\nthey are.\n\nNice results.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 4 Jul 2023 13:23:58 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "\n\nOn 7/4/23 13:23, Alvaro Herrera wrote:\n> On 2023-Jul-03, Soumyadeep Chakraborty wrote:\n> \n>> My colleague, Ashwin, pointed out to me that brininsert's per-tuple init\n>> of the revmap access struct can have non-trivial overhead.\n>>\n>> Turns out he is right. We are saving 24 bytes of memory per-call for\n>> the access struct, and a bit on buffer/locking overhead, with the\n>> attached patch.\n> \n> Hmm, yeah, I remember being bit bothered by this repeated\n> initialization. Your patch looks reasonable to me. I would set\n> bistate->bs_rmAccess to NULL in the cleanup callback, just to be sure.\n> Also, please add comments atop these two new functions, to explain what\n> they are.\n> \n> Nice results.\n> \n\nYeah. I wonder how much of that runtime is the generate_series(),\nthough. What's the speedup if that part is subtracted. It's guaranteed\nto be even more significant, but by how much?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jul 2023 13:59:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Thank you both for reviewing!\n\nOn Tue, Jul 4, 2023 at 4:24AM Alvaro Herrera <[email protected]> wrote:\n\n> Hmm, yeah, I remember being bit bothered by this repeated\n> initialization. Your patch looks reasonable to me. I would set\n> bistate->bs_rmAccess to NULL in the cleanup callback, just to be sure.\n> Also, please add comments atop these two new functions, to explain what\n> they are.\n\nDone. Set bistate->bs_desc = NULL; as well. Added comments.\n\n\nOn Tue, Jul 4, 2023 at 4:59AM Tomas Vondra\n<[email protected]> wrote:\n\n> Yeah. I wonder how much of that runtime is the generate_series(),\n> though. What's the speedup if that part is subtracted. It's guaranteed\n> to be even more significant, but by how much?\n\nWhen trying COPY, I got tripped by the following:\n\nWe get a buffer leak WARNING for the meta page and a revmap page.\n\nWARNING: buffer refcount leak: [094] (rel=base/156912/206068,\nblockNum=1, flags=0x83000000, refcount=1 1)\nWARNING: buffer refcount leak: [093] (rel=base/156912/206068,\nblockNum=0, flags=0x83000000, refcount=1 1)\n\nPrintBufferLeakWarning bufmgr.c:3240\nResourceOwnerReleaseInternal resowner.c:554\nResourceOwnerRelease resowner.c:494\nPortalDrop portalmem.c:563\nexec_simple_query postgres.c:1284\n\nWe release the buffer during this resowner release and then we crash\nwith:\n\nTRAP: failed Assert(\"bufnum <= NBuffers\"), File:\n\"../../../../src/include/storage/bufmgr.h\", Line: 305, PID: 86833\npostgres: pivotal test4 [local] COPY(ExceptionalCondition+0xbb)[0x5572b55bcc79]\npostgres: pivotal test4 [local] COPY(+0x61ccfc)[0x5572b537dcfc]\npostgres: pivotal test4 [local] COPY(ReleaseBuffer+0x19)[0x5572b5384db2]\npostgres: pivotal test4 [local] COPY(brinRevmapTerminate+0x1e)[0x5572b4e3fd39]\npostgres: pivotal test4 [local] COPY(+0xcfc44)[0x5572b4e30c44]\npostgres: pivotal test4 [local] COPY(+0x89e7f2)[0x5572b55ff7f2]\npostgres: pivotal test4 [local] COPY(MemoryContextDelete+0xd7)[0x5572b55ff683]\npostgres: pivotal test4 [local] COPY(PortalDrop+0x374)[0x5572b5602dc7]\n\nUnfortunately, when we do COPY, the MemoryContext where makeIndexInfo\ngets called is PortalContext and that is what is set in ii_Context.\nFurthermore, we clean up the resource owner stuff before we can clean\nup the MemoryContexts in PortalDrop().\n\nThe CurrentMemoryContext when initialize_brin_insertstate() is called\ndepends. For CopyMultiInsertBufferFlush() -> ExecInsertIndexTuples()\nit is PortalContext, and for CopyFrom() -> ExecInsertIndexTuples() it is\nExecutorState/ExprContext. We can't rely on it to register the callback\nneither.\n\nWhat we can do is create a new MemoryContext for holding the\nBrinInsertState, and we tie the callback to that so that cleanup is not\naffected by all of these variables. See v2 patch attached. 
Passes make\ninstallcheck-world and make installcheck -C src/test/modules/brin.\n\nHowever, we do still have 1 issue with the v2 patch:\nWhen we try to cancel (Ctrl-c) a running COPY command:\nERROR: buffer 151 is not owned by resource owner TopTransaction\n\n#4 0x0000559cbc54a934 in ResourceOwnerForgetBuffer\n(owner=0x559cbd6fcf28, buffer=143) at resowner.c:997\n#5 0x0000559cbc2c45e7 in UnpinBuffer (buf=0x7f8d4a8f3f80) at bufmgr.c:2390\n#6 0x0000559cbc2c7e49 in ReleaseBuffer (buffer=143) at bufmgr.c:4488\n#7 0x0000559cbbd82d53 in brinRevmapTerminate (revmap=0x559cbd7a03b8)\nat brin_revmap.c:105\n#8 0x0000559cbbd73c44 in brininsertCleanupCallback\n(arg=0x559cbd7a5b68) at brin.c:168\n#9 0x0000559cbc54280c in MemoryContextCallResetCallbacks\n(context=0x559cbd7a5a50) at mcxt.c:506\n#10 0x0000559cbc54269d in MemoryContextDelete (context=0x559cbd7a5a50)\nat mcxt.c:421\n#11 0x0000559cbc54273e in MemoryContextDeleteChildren\n(context=0x559cbd69ae90) at mcxt.c:457\n#12 0x0000559cbc54625c in AtAbort_Portals () at portalmem.c:850\n\nHaven't found a way to fix this ^ yet.\n\nMaybe there is a better way of doing our cleanup? I'm not sure. Would\nlove your input!\n\nThe other alternative for all this is to introduce new AM callbacks for\ninsert_begin and insert_end. That might be a tougher sell?\n\nNow, to finally answer your question about the speedup without\ngenerate_series(). We do see an even higher speedup!\n\nseq 1 200000000 > /tmp/data.csv\n\\timing\nDROP TABLE heap;\nCREATE TABLE heap(i int);\nCREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\nCOPY heap FROM '/tmp/data.csv';\n\n-- 3 runs (master 29cf61ade3f245aa40f427a1d6345287ef77e622)\nCOPY 200000000\nTime: 205072.444 ms (03:25.072)\nTime: 215380.369 ms (03:35.380)\nTime: 203492.347 ms (03:23.492)\n\n-- 3 runs (branch v2)\n\nCOPY 200000000\nTime: 135052.752 ms (02:15.053)\nTime: 135093.131 ms (02:15.093)\nTime: 138737.048 ms (02:18.737)\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Tue, 4 Jul 2023 12:25:33 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 7/4/23 21:25, Soumyadeep Chakraborty wrote:\n> Thank you both for reviewing!\n> \n> On Tue, Jul 4, 2023 at 4:24AM Alvaro Herrera <[email protected]> wrote:\n> \n>> Hmm, yeah, I remember being bit bothered by this repeated\n>> initialization. Your patch looks reasonable to me. I would set\n>> bistate->bs_rmAccess to NULL in the cleanup callback, just to be sure.\n>> Also, please add comments atop these two new functions, to explain what\n>> they are.\n> \n> Done. Set bistate->bs_desc = NULL; as well. Added comments.\n> \n> \n> On Tue, Jul 4, 2023 at 4:59AM Tomas Vondra\n> <[email protected]> wrote:\n> \n>> Yeah. I wonder how much of that runtime is the generate_series(),\n>> though. What's the speedup if that part is subtracted. It's guaranteed\n>> to be even more significant, but by how much?\n> \n> When trying COPY, I got tripped by the following:\n> \n> We get a buffer leak WARNING for the meta page and a revmap page.\n> \n> WARNING: buffer refcount leak: [094] (rel=base/156912/206068,\n> blockNum=1, flags=0x83000000, refcount=1 1)\n> WARNING: buffer refcount leak: [093] (rel=base/156912/206068,\n> blockNum=0, flags=0x83000000, refcount=1 1)\n> \n> PrintBufferLeakWarning bufmgr.c:3240\n> ResourceOwnerReleaseInternal resowner.c:554\n> ResourceOwnerRelease resowner.c:494\n> PortalDrop portalmem.c:563\n> exec_simple_query postgres.c:1284\n> \n> We release the buffer during this resowner release and then we crash\n> with:\n> \n> TRAP: failed Assert(\"bufnum <= NBuffers\"), File:\n> \"../../../../src/include/storage/bufmgr.h\", Line: 305, PID: 86833\n> postgres: pivotal test4 [local] COPY(ExceptionalCondition+0xbb)[0x5572b55bcc79]\n> postgres: pivotal test4 [local] COPY(+0x61ccfc)[0x5572b537dcfc]\n> postgres: pivotal test4 [local] COPY(ReleaseBuffer+0x19)[0x5572b5384db2]\n> postgres: pivotal test4 [local] COPY(brinRevmapTerminate+0x1e)[0x5572b4e3fd39]\n> postgres: pivotal test4 [local] COPY(+0xcfc44)[0x5572b4e30c44]\n> postgres: pivotal test4 [local] COPY(+0x89e7f2)[0x5572b55ff7f2]\n> postgres: pivotal test4 [local] COPY(MemoryContextDelete+0xd7)[0x5572b55ff683]\n> postgres: pivotal test4 [local] COPY(PortalDrop+0x374)[0x5572b5602dc7]\n> \n> Unfortunately, when we do COPY, the MemoryContext where makeIndexInfo\n> gets called is PortalContext and that is what is set in ii_Context.\n> Furthermore, we clean up the resource owner stuff before we can clean\n> up the MemoryContexts in PortalDrop().\n> \n> The CurrentMemoryContext when initialize_brin_insertstate() is called\n> depends. For CopyMultiInsertBufferFlush() -> ExecInsertIndexTuples()\n> it is PortalContext, and for CopyFrom() -> ExecInsertIndexTuples() it is\n> ExecutorState/ExprContext. We can't rely on it to register the callback\n> neither.\n> > What we can do is create a new MemoryContext for holding the\n> BrinInsertState, and we tie the callback to that so that cleanup is not\n> affected by all of these variables. See v2 patch attached. Passes make\n> installcheck-world and make installcheck -C src/test/modules/brin.\n>> However, we do still have 1 issue with the v2 patch:\n> When we try to cancel (Ctrl-c) a running COPY command:\n> ERROR: buffer 151 is not owned by resource owner TopTransaction\n> \n\nI'm not sure if memory context callbacks are the right way to rely on\nfor this purpose. 
The primary purpose of memory contexts is to track\nmemory, so using them for this seems a bit weird.\n\nThere are cases that do something similar, like expandendrecord.c where\nwe track refcounted tuple slot, but IMHO there's a big difference\nbetween tracking one slot allocated right there, and unknown number of\nbuffers allocated much later.\n\nThe fact that even with the extra context is still doesn't handle query\ncancellations is another argument against that approach (I wonder how\nexpandedrecord.c handles that, but I haven't checked).\n\n> \n> Maybe there is a better way of doing our cleanup? I'm not sure. Would\n> love your input!\n> \n> The other alternative for all this is to introduce new AM callbacks for\n> insert_begin and insert_end. That might be a tougher sell?\n> \n\nThat's the approach I wanted to suggest, more or less - to do the\ncleanup from ExecCloseIndices() before index_close(). I wonder if it's\neven correct to do that later, once we release the locks etc.\n\nI don't think ii_AmCache was intended for stuff like this - GIN and GiST\nonly use it to cache stuff that can be just pfree-d, but for buffers\nthat's no enough. It's not surprising we need to improve this.\n\nFWIW while debugging this (breakpoint on MemoryContextDelete) I was\nrather annoyed the COPY keeps dropping and recreating the two BRIN\ncontexts - brininsert cxt / brin dtuple. I wonder if we could keep and\nreuse those too, but I don't know how much it'd help.\n\n> Now, to finally answer your question about the speedup without\n> generate_series(). We do see an even higher speedup!\n> \n> seq 1 200000000 > /tmp/data.csv\n> \\timing\n> DROP TABLE heap;\n> CREATE TABLE heap(i int);\n> CREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\n> COPY heap FROM '/tmp/data.csv';\n> \n> -- 3 runs (master 29cf61ade3f245aa40f427a1d6345287ef77e622)\n> COPY 200000000\n> Time: 205072.444 ms (03:25.072)\n> Time: 215380.369 ms (03:35.380)\n> Time: 203492.347 ms (03:23.492)\n> \n> -- 3 runs (branch v2)\n> \n> COPY 200000000\n> Time: 135052.752 ms (02:15.053)\n> Time: 135093.131 ms (02:15.093)\n> Time: 138737.048 ms (02:18.737)\n> \n\nThat's nice, but it still doesn't say how much of that is reading the\ndata. If you do just copy into a table without any indexes, how long\ndoes it take?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
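A rough sketch of what "cleanup from ExecCloseIndices() before index_close()" could look like, assuming a new index_insert_cleanup() entry point in indexam.c that dispatches to an optional AM callback. The function name and signature here are assumptions at this point in the thread, not the committed code.

/* in execIndexing.c, once the statement's index inserts are done */
void
ExecCloseIndices(ResultRelInfo *resultRelInfo)
{
    int         i;
    int         numIndices = resultRelInfo->ri_NumIndices;
    RelationPtr indexDescs = resultRelInfo->ri_IndexRelationDescs;
    IndexInfo **indexInfos = resultRelInfo->ri_IndexRelationInfo;

    for (i = 0; i < numIndices; i++)
    {
        if (indexDescs[i] == NULL)
            continue;

        /*
         * Let the index AM release anything it cached across aminsert
         * calls (e.g. pinned revmap buffers) while we still hold the
         * lock taken by ExecOpenIndices.
         */
        index_insert_cleanup(indexDescs[i], indexInfos[i]);

        /* Drop lock acquired by ExecOpenIndices */
        index_close(indexDescs[i], RowExclusiveLock);
    }
}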
"msg_date": "Tue, 4 Jul 2023 23:54:29 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 2:54 PM Tomas Vondra\n<[email protected]> wrote:\n\n> I'm not sure if memory context callbacks are the right way to rely on\n> for this purpose. The primary purpose of memory contexts is to track\n> memory, so using them for this seems a bit weird.\n\nYeah, this just kept getting dirtier and dirtier.\n\n> There are cases that do something similar, like expandendrecord.c where\n> we track refcounted tuple slot, but IMHO there's a big difference\n> between tracking one slot allocated right there, and unknown number of\n> buffers allocated much later.\n\nYeah, the following code in ER_mc_callbackis is there I think to prevent double\nfreeing the tupdesc (since it might be freed in ResourceOwnerReleaseInternal())\n(The part about: /* Ditto for tupdesc references */).\n\nif (tupdesc->tdrefcount > 0)\n{\n if (--tupdesc->tdrefcount == 0)\n FreeTupleDesc(tupdesc);\n}\nPlus the above code doesn't try anything with Resource owner stuff, whereas\nreleasing a buffer means:\nReleaseBuffer() -> UnpinBuffer() ->\nResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n\n> The fact that even with the extra context is still doesn't handle query\n> cancellations is another argument against that approach (I wonder how\n> expandedrecord.c handles that, but I haven't checked).\n>\n> >\n> > Maybe there is a better way of doing our cleanup? I'm not sure. Would\n> > love your input!\n> >\n> > The other alternative for all this is to introduce new AM callbacks for\n> > insert_begin and insert_end. That might be a tougher sell?\n> >\n>\n> That's the approach I wanted to suggest, more or less - to do the\n> cleanup from ExecCloseIndices() before index_close(). I wonder if it's\n> even correct to do that later, once we release the locks etc.\n\nI'll try this out and introduce a couple of new index AM callbacks. I\nthink it's best to do it before releasing the locks - otherwise it\nmight be weird\nto manipulate buffers of an index relation, without having some sort of lock on\nit. I'll think about it some more.\n\n> I don't think ii_AmCache was intended for stuff like this - GIN and GiST\n> only use it to cache stuff that can be just pfree-d, but for buffers\n> that's no enough. It's not surprising we need to improve this.\n\nHmmm, yes, although the docs state:\n\"If the index AM wishes to cache data across successive index insertions within\nan SQL statement, it can allocate space in indexInfo->ii_Context and\nstore a pointer\nto the data in indexInfo->ii_AmCache (which will be NULL initially).\"\nthey don't mention anything about buffer usage. Well we will fix it!\n\nPS: It should be possible to make GIN and GiST use the new index AM APIs\nas well.\n\n> FWIW while debugging this (breakpoint on MemoryContextDelete) I was\n> rather annoyed the COPY keeps dropping and recreating the two BRIN\n> contexts - brininsert cxt / brin dtuple. I wonder if we could keep and\n> reuse those too, but I don't know how much it'd help.\n>\n\nInteresting, I will investigate that.\n\n> > Now, to finally answer your question about the speedup without\n> > generate_series(). 
We do see an even higher speedup!\n> >\n> > seq 1 200000000 > /tmp/data.csv\n> > \\timing\n> > DROP TABLE heap;\n> > CREATE TABLE heap(i int);\n> > CREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\n> > COPY heap FROM '/tmp/data.csv';\n> >\n> > -- 3 runs (master 29cf61ade3f245aa40f427a1d6345287ef77e622)\n> > COPY 200000000\n> > Time: 205072.444 ms (03:25.072)\n> > Time: 215380.369 ms (03:35.380)\n> > Time: 203492.347 ms (03:23.492)\n> >\n> > -- 3 runs (branch v2)\n> >\n> > COPY 200000000\n> > Time: 135052.752 ms (02:15.053)\n> > Time: 135093.131 ms (02:15.093)\n> > Time: 138737.048 ms (02:18.737)\n> >\n>\n> That's nice, but it still doesn't say how much of that is reading the\n> data. If you do just copy into a table without any indexes, how long\n> does it take?\n\nSo, I loaded the same heap table without any indexes and at the same\nscale. I got:\n\npostgres=# COPY heap FROM '/tmp/data.csv';\nCOPY 200000000\nTime: 116161.545 ms (01:56.162)\nTime: 114182.745 ms (01:54.183)\nTime: 114975.368 ms (01:54.975)\n\nperf diff also attached between the three: w/ no indexes (baseline),\nmaster and v2.\n\nRegards,\nSoumyadeep (VMware)",
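Whether this ends up as a begin/end pair or a single cleanup hook, the mechanics amount to one more optional member of IndexAmRoutine. A sketch of the cleanup side, with the typedef and field names being assumptions at this stage of the discussion:

/* amapi.h: optional cleanup hook, paired with aminsert */
typedef void (*aminsertcleanup_function) (IndexInfo *indexInfo);

/* added to struct IndexAmRoutine, next to aminsert */
aminsertcleanup_function aminsertcleanup;   /* can be NULL */

/* brinhandler() then registers BRIN's implementation */
amroutine->aminsert = brininsert;
amroutine->aminsertcleanup = brininsertcleanup;

AMs with nothing to release, such as btree, would simply leave the new field NULL.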
"msg_date": "Tue, 4 Jul 2023 17:35:15 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "\n\nOn 7/5/23 02:35, Soumyadeep Chakraborty wrote:\n> On Tue, Jul 4, 2023 at 2:54 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>> I'm not sure if memory context callbacks are the right way to rely on\n>> for this purpose. The primary purpose of memory contexts is to track\n>> memory, so using them for this seems a bit weird.\n> \n> Yeah, this just kept getting dirtier and dirtier.\n> \n>> There are cases that do something similar, like expandendrecord.c where\n>> we track refcounted tuple slot, but IMHO there's a big difference\n>> between tracking one slot allocated right there, and unknown number of\n>> buffers allocated much later.\n> \n> Yeah, the following code in ER_mc_callbackis is there I think to prevent double\n> freeing the tupdesc (since it might be freed in ResourceOwnerReleaseInternal())\n> (The part about: /* Ditto for tupdesc references */).\n> \n> if (tupdesc->tdrefcount > 0)\n> {\n> if (--tupdesc->tdrefcount == 0)\n> FreeTupleDesc(tupdesc);\n> }\n> Plus the above code doesn't try anything with Resource owner stuff, whereas\n> releasing a buffer means:\n> ReleaseBuffer() -> UnpinBuffer() ->\n> ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n> \n\nWell, my understanding is the expandedrecord.c code increments the\nrefcount exactly to prevent the resource owner to release the slot too\nearly. My assumption is we'd need to do something similar for the revmap\nbuffers by calling IncrBufferRefCount or something. But that's going to\nbe messy, because the buffers are read elsewhere.\n\n>> The fact that even with the extra context is still doesn't handle query\n>> cancellations is another argument against that approach (I wonder how\n>> expandedrecord.c handles that, but I haven't checked).\n>>\n>>>\n>>> Maybe there is a better way of doing our cleanup? I'm not sure. Would\n>>> love your input!\n>>>\n>>> The other alternative for all this is to introduce new AM callbacks for\n>>> insert_begin and insert_end. That might be a tougher sell?\n>>>\n>>\n>> That's the approach I wanted to suggest, more or less - to do the\n>> cleanup from ExecCloseIndices() before index_close(). I wonder if it's\n>> even correct to do that later, once we release the locks etc.\n> \n> I'll try this out and introduce a couple of new index AM callbacks. I\n> think it's best to do it before releasing the locks - otherwise it\n> might be weird\n> to manipulate buffers of an index relation, without having some sort of lock on\n> it. I'll think about it some more.\n> \n\nI don't understand why would this need more than just a callback to\nrelease the cache.\n\n>> I don't think ii_AmCache was intended for stuff like this - GIN and GiST\n>> only use it to cache stuff that can be just pfree-d, but for buffers\n>> that's no enough. It's not surprising we need to improve this.\n> \n> Hmmm, yes, although the docs state:\n> \"If the index AM wishes to cache data across successive index insertions within\n> an SQL statement, it can allocate space in indexInfo->ii_Context and\n> store a pointer\n> to the data in indexInfo->ii_AmCache (which will be NULL initially).\"\n> they don't mention anything about buffer usage. Well we will fix it!\n> \n> PS: It should be possible to make GIN and GiST use the new index AM APIs\n> as well.\n> \n\nWhy should GIN/GiST use the new API? 
I think it's perfectly sensible to\nonly require the \"cleanup callback\" when just pfree() is not enough.\n\n>> FWIW while debugging this (breakpoint on MemoryContextDelete) I was\n>> rather annoyed the COPY keeps dropping and recreating the two BRIN\n>> contexts - brininsert cxt / brin dtuple. I wonder if we could keep and\n>> reuse those too, but I don't know how much it'd help.\n>>\n> \n> Interesting, I will investigate that.\n> \n>>> Now, to finally answer your question about the speedup without\n>>> generate_series(). We do see an even higher speedup!\n>>>\n>>> seq 1 200000000 > /tmp/data.csv\n>>> \\timing\n>>> DROP TABLE heap;\n>>> CREATE TABLE heap(i int);\n>>> CREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\n>>> COPY heap FROM '/tmp/data.csv';\n>>>\n>>> -- 3 runs (master 29cf61ade3f245aa40f427a1d6345287ef77e622)\n>>> COPY 200000000\n>>> Time: 205072.444 ms (03:25.072)\n>>> Time: 215380.369 ms (03:35.380)\n>>> Time: 203492.347 ms (03:23.492)\n>>>\n>>> -- 3 runs (branch v2)\n>>>\n>>> COPY 200000000\n>>> Time: 135052.752 ms (02:15.053)\n>>> Time: 135093.131 ms (02:15.093)\n>>> Time: 138737.048 ms (02:18.737)\n>>>\n>>\n>> That's nice, but it still doesn't say how much of that is reading the\n>> data. If you do just copy into a table without any indexes, how long\n>> does it take?\n> \n> So, I loaded the same heap table without any indexes and at the same\n> scale. I got:\n> \n> postgres=# COPY heap FROM '/tmp/data.csv';\n> COPY 200000000\n> Time: 116161.545 ms (01:56.162)\n> Time: 114182.745 ms (01:54.183)\n> Time: 114975.368 ms (01:54.975)\n> \n\nOK, so the baseline is 115 seconds. With the current code, an index\nmeans +100 seconds. With the optimization, it's just +20. Nice.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:16:48 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 3:16 AM Tomas Vondra\n<[email protected]> wrote:\n\n> > I'll try this out and introduce a couple of new index AM callbacks. I\n> > think it's best to do it before releasing the locks - otherwise it\n> > might be weird\n> > to manipulate buffers of an index relation, without having some sort of lock on\n> > it. I'll think about it some more.\n> >\n>\n> I don't understand why would this need more than just a callback to\n> release the cache.\n\nWe wouldn't. I thought that it would be slightly cleaner and slightly more\nperformant if we moved the (if !state) branches out of the XXXinsert()\nfunctions.\nBut I guess, let's minimize the changes here. One cleanup callback is enough.\n\n> > PS: It should be possible to make GIN and GiST use the new index AM APIs\n> > as well.\n> >\n>\n> Why should GIN/GiST use the new API? I think it's perfectly sensible to\n> only require the \"cleanup callback\" when just pfree() is not enough.\n\nYeah no need.\n\nAttached v3 of the patch w/ a single index AM callback.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Wed, 5 Jul 2023 11:57:58 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Attached v4 of the patch, rebased against latest HEAD.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Sat, 29 Jul 2023 09:28:34 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Created an entry for the Sep CF: https://commitfest.postgresql.org/44/4484/\n\nRegards,\nSoumyadeep (VMware)\n\nOn Sat, Jul 29, 2023 at 9:28 AM Soumyadeep Chakraborty\n<[email protected]> wrote:\n>\n> Attached v4 of the patch, rebased against latest HEAD.\n>\n> Regards,\n> Soumyadeep (VMware)\n\n\n",
"msg_date": "Sat, 5 Aug 2023 00:53:00 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Rebased against latest HEAD.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Mon, 4 Sep 2023 16:43:59 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Hi,\n\nI took a look at this patch today. I had to rebase the patch, due to\nsome minor bitrot related to 9f0602539d (but nothing major). I also did\na couple tiny cosmetic tweaks, but other than that the patch seems OK.\nSee the attached v6.\n\nI did some simple performance tests too, similar to those in the initial\nmessage:\n\n CREATE UNLOGGED TABLE heap (i int);\n CREATE INDEX ON heap USING brin(i) WITH (pages_per_range=1);\n --\n TRUNCATE heap;\n INSERT INTO heap SELECT 1 FROM generate_series(1, 20000000);\n\nAnd the results look like this (5 runs each):\n\n master: 16448.338 16066.473 16039.166 16067.420 16080.066\n patched: 13260.065 13229.800 13254.454 13265.479 13273.693\n\nSo that's a nice improvement, even though enabling WAL will make the\nrelative speedup somewhat smaller.\n\nThe one thing I'm not entirely sure about is adding new stuff to the\nIndexAmRoutine. I don't think we want to end up with too many callbacks\nthat all AMs have to initialize etc. I can't think of a different/better\nway to do this, though.\n\nBarring objections, I'll try to push this early next week, after another\nround of cleanup.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 3 Nov 2023 19:35:26 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Fri, 3 Nov 2023 at 19:37, Tomas Vondra <[email protected]> wrote:\n>\n> Hi,\n>\n> I took a look at this patch today. I had to rebase the patch, due to\n> some minor bitrot related to 9f0602539d (but nothing major). I also did\n> a couple tiny cosmetic tweaks, but other than that the patch seems OK.\n> See the attached v6.\n> [...]\n> Barring objections, I'll try to push this early next week, after another\n> round of cleanup.\n\nNo hard objections: The principle looks fine.\n\nI do think we should choose a better namespace than bs_* for the\nfields of BrinInsertState, as BrinBuildState already uses the bs_*\nnamespace for its fields in the same file, but that's only cosmetic.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 3 Nov 2023 20:16:22 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Fri, 3 Nov 2023 at 19:37, Tomas Vondra <[email protected]> wrote:\n\n> The one thing I'm not entirely sure about is adding new stuff to the\n> IndexAmRoutine. I don't think we want to end up with too many callbacks\n> that all AMs have to initialize etc. I can't think of a different/better\n> way to do this, though.\n\nYes there is not really an alternative. Also, aminsertcleanup() is very similar\nto amvacuumcleanup(), so it is not awkward. Why should vacuum be an\nexclusive VIP? :)\nAnd there are other indexam callbacks that not every AM implements. So this\naddition is not unprecedented in that sense.\n\n> Barring objections, I'll try to push this early next week, after another\n> round of cleanup.\n\nMany thanks for resurrecting this patch!\n\nOn Fri, Nov 3, 2023 at 12:16PM Matthias van de Meent\n<[email protected]> wrote:\n\n>\n> I do think we should choose a better namespace than bs_* for the\n> fields of BrinInsertState, as BrinBuildState already uses the bs_*\n> namespace for its fields in the same file, but that's only cosmetic.\n>\n\nbis_* then.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Sat, 4 Nov 2023 11:58:31 -0700",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "I've done a bit more cleanup on the last version of the patch (renamed\nthe fields to start with bis_ as agreed, rephrased the comments / docs /\ncommit message a bit) and pushed.\n\nThanks for the patch!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 25 Nov 2023 21:06:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Sat, Nov 25, 2023 at 12:06 PM Tomas Vondra <[email protected]>\nwrote:\n\n> I've done a bit more cleanup on the last version of the patch (renamed\n> the fields to start with bis_ as agreed, rephrased the comments / docs /\n> commit message a bit) and pushed.\n\n\nThanks a lot Tomas for helping to drive the patch to completion iteratively\nand realizing the benefits.\n\n- Ashwin\n\nOn Sat, Nov 25, 2023 at 12:06 PM Tomas Vondra <[email protected]> wrote:I've done a bit more cleanup on the last version of the patch (renamed\nthe fields to start with bis_ as agreed, rephrased the comments / docs /\ncommit message a bit) and pushed.Thanks a lot Tomas for helping to drive the patch to completion iteratively and realizing the benefits.- Ashwin",
"msg_date": "Sat, 25 Nov 2023 14:33:52 -0800",
"msg_from": "Ashwin Agrawal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Thanks a lot for reviewing and pushing! :)\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Sat, 25 Nov 2023 15:23:31 -0800",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Sun, Nov 26, 2023 at 4:06 AM Tomas Vondra <[email protected]>\nwrote:\n\n> I've done a bit more cleanup on the last version of the patch (renamed\n> the fields to start with bis_ as agreed, rephrased the comments / docs /\n> commit message a bit) and pushed.\n\n\nIt seems that we have an oversight in this commit. If there is no tuple\nthat has been inserted, we wouldn't have an available insert state in\nthe clean up phase. So the Assert in brininsertcleanup() is not always\nright. For example:\n\nregression=# update brin_summarize set value = brin_summarize.value;\nserver closed the connection unexpectedly\n\nSo I wonder if we should check 'bistate' and do the clean up only if\nthere is an available one, something like below.\n\n--- a/src/backend/access/brin/brin.c\n+++ b/src/backend/access/brin/brin.c\n@@ -359,7 +359,9 @@ brininsertcleanup(IndexInfo *indexInfo)\n {\n BrinInsertState *bistate = (BrinInsertState *) indexInfo->ii_AmCache;\n\n- Assert(bistate);\n+ /* We don't have an available insert state, nothing to do */\n+ if (!bistate)\n+ return;\n\nThanks\nRichard\n\nOn Sun, Nov 26, 2023 at 4:06 AM Tomas Vondra <[email protected]> wrote:I've done a bit more cleanup on the last version of the patch (renamed\nthe fields to start with bis_ as agreed, rephrased the comments / docs /\ncommit message a bit) and pushed.It seems that we have an oversight in this commit. If there is no tuplethat has been inserted, we wouldn't have an available insert state inthe clean up phase. So the Assert in brininsertcleanup() is not alwaysright. For example:regression=# update brin_summarize set value = brin_summarize.value;server closed the connection unexpectedlySo I wonder if we should check 'bistate' and do the clean up only ifthere is an available one, something like below.--- a/src/backend/access/brin/brin.c+++ b/src/backend/access/brin/brin.c@@ -359,7 +359,9 @@ brininsertcleanup(IndexInfo *indexInfo) { BrinInsertState *bistate = (BrinInsertState *) indexInfo->ii_AmCache;- Assert(bistate);+ /* We don't have an available insert state, nothing to do */+ if (!bistate)+ return;ThanksRichard",
"msg_date": "Mon, 27 Nov 2023 13:28:22 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Sun, Nov 26, 2023 at 9:28 PM Richard Guo <[email protected]> wrote:\n>\n>\n> On Sun, Nov 26, 2023 at 4:06 AM Tomas Vondra <[email protected]> wrote:\n>>\n>> I've done a bit more cleanup on the last version of the patch (renamed\n>> the fields to start with bis_ as agreed, rephrased the comments / docs /\n>> commit message a bit) and pushed.\n>\n>\n> It seems that we have an oversight in this commit. If there is no tuple\n> that has been inserted, we wouldn't have an available insert state in\n> the clean up phase. So the Assert in brininsertcleanup() is not always\n> right. For example:\n>\n> regression=# update brin_summarize set value = brin_summarize.value;\n> server closed the connection unexpectedly\n>\n\nI wasn't able to repro the issue on 86b64bafc19c4c60136a4038d2a8d1e6eecc59f2.\nwith UPDATE/INSERT:\n\npostgres=# drop table a;\nDROP TABLE\npostgres=# create table a(i int);\nCREATE TABLE\npostgres=# create index on a using brin(i);\nCREATE INDEX\npostgres=# insert into a select 1 where 1!=1;\nINSERT 0 0\npostgres=# update a set i = 2 where i = 0;\nUPDATE 0\npostgres=# update a set i = a.i;\nUPDATE 0\n\nThis could be because since c5b7ba4e67aeb5d6f824b74f94114d99ed6e42b7,\nwe have moved ExecOpenIndices()\nfrom ExecInitModifyTable() to ExecInsert(). Since we never open the\nindices if nothing is\ninserted, we would never attempt to close them with ExecCloseIndices()\nwhile the ii_AmCache\nis NULL (which is what causes this assertion failure).\n\nHowever, it is possible to repro the issue with:\n# create empty file\ntouch /tmp/a.csv\npostgres=# create table a(i int);\nCREATE TABLE\npostgres=# create index on a using brin(i);\nCREATE INDEX\npostgres=# copy a from '/tmp/a.csv';\nTRAP: failed Assert(\"bistate\"), File: \"brin.c\", Line: 362, PID: 995511\n\n> So I wonder if we should check 'bistate' and do the clean up only if\nthere is an available one, something like below.\n\nYes, this is the right way to go. Thanks for reporting!\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Sun, 26 Nov 2023 21:52:48 -0800",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 1:53 PM Soumyadeep Chakraborty <\[email protected]> wrote:\n\n> On Sun, Nov 26, 2023 at 9:28 PM Richard Guo <[email protected]>\n> wrote:\n> > It seems that we have an oversight in this commit. If there is no tuple\n> > that has been inserted, we wouldn't have an available insert state in\n> > the clean up phase. So the Assert in brininsertcleanup() is not always\n> > right. For example:\n> >\n> > regression=# update brin_summarize set value = brin_summarize.value;\n> > server closed the connection unexpectedly\n>\n> I wasn't able to repro the issue on\n> 86b64bafc19c4c60136a4038d2a8d1e6eecc59f2.\n> with UPDATE/INSERT:\n>\n> This could be because since c5b7ba4e67aeb5d6f824b74f94114d99ed6e42b7,\n> we have moved ExecOpenIndices()\n> from ExecInitModifyTable() to ExecInsert(). Since we never open the\n> indices if nothing is\n> inserted, we would never attempt to close them with ExecCloseIndices()\n> while the ii_AmCache\n> is NULL (which is what causes this assertion failure).\n\n\nAFAICS we would also open the indices from ExecUpdate(). So if we\nupdate the table in a way that no new tuples are inserted, we will have\nthis issue. As I showed previously, the query below crashes for me on\nlatest master (dc9f8a7983).\n\nregression=# update brin_summarize set value = brin_summarize.value;\nserver closed the connection unexpectedly\n\nThere are other code paths that call ExecOpenIndices(), such as\nExecMerge(). I believe it's not hard to create queries that trigger this\nAssert for those cases.\n\nThanks\nRichard\n\nOn Mon, Nov 27, 2023 at 1:53 PM Soumyadeep Chakraborty <[email protected]> wrote:On Sun, Nov 26, 2023 at 9:28 PM Richard Guo <[email protected]> wrote:\n> It seems that we have an oversight in this commit. If there is no tuple\n> that has been inserted, we wouldn't have an available insert state in\n> the clean up phase. So the Assert in brininsertcleanup() is not always\n> right. For example:\n>\n> regression=# update brin_summarize set value = brin_summarize.value;\n> server closed the connection unexpectedly\n\nI wasn't able to repro the issue on 86b64bafc19c4c60136a4038d2a8d1e6eecc59f2.\nwith UPDATE/INSERT:\n\nThis could be because since c5b7ba4e67aeb5d6f824b74f94114d99ed6e42b7,\nwe have moved ExecOpenIndices()\nfrom ExecInitModifyTable() to ExecInsert(). Since we never open the\nindices if nothing is\ninserted, we would never attempt to close them with ExecCloseIndices()\nwhile the ii_AmCache\nis NULL (which is what causes this assertion failure).AFAICS we would also open the indices from ExecUpdate(). So if weupdate the table in a way that no new tuples are inserted, we will havethis issue. As I showed previously, the query below crashes for me onlatest master (dc9f8a7983).regression=# update brin_summarize set value = brin_summarize.value;server closed the connection unexpectedlyThere are other code paths that call ExecOpenIndices(), such asExecMerge(). I believe it's not hard to create queries that trigger thisAssert for those cases.ThanksRichard",
"msg_date": "Mon, 27 Nov 2023 15:37:20 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "\n\nOn 11/27/23 08:37, Richard Guo wrote:\n> \n> On Mon, Nov 27, 2023 at 1:53 PM Soumyadeep Chakraborty\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On Sun, Nov 26, 2023 at 9:28 PM Richard Guo <[email protected]\n> <mailto:[email protected]>> wrote:\n> > It seems that we have an oversight in this commit. If there is no\n> tuple\n> > that has been inserted, we wouldn't have an available insert state in\n> > the clean up phase. So the Assert in brininsertcleanup() is not\n> always\n> > right. For example:\n> >\n> > regression=# update brin_summarize set value = brin_summarize.value;\n> > server closed the connection unexpectedly\n> \n> I wasn't able to repro the issue on\n> 86b64bafc19c4c60136a4038d2a8d1e6eecc59f2.\n> with UPDATE/INSERT:\n> \n> This could be because since c5b7ba4e67aeb5d6f824b74f94114d99ed6e42b7,\n> we have moved ExecOpenIndices()\n> from ExecInitModifyTable() to ExecInsert(). Since we never open the\n> indices if nothing is\n> inserted, we would never attempt to close them with ExecCloseIndices()\n> while the ii_AmCache\n> is NULL (which is what causes this assertion failure).\n> \n> \n> AFAICS we would also open the indices from ExecUpdate(). So if we\n> update the table in a way that no new tuples are inserted, we will have\n> this issue. As I showed previously, the query below crashes for me on\n> latest master (dc9f8a7983).\n> \n> regression=# update brin_summarize set value = brin_summarize.value;\n> server closed the connection unexpectedly\n> \n> There are other code paths that call ExecOpenIndices(), such as \n> ExecMerge(). I believe it's not hard to create queries that trigger\n> this Assert for those cases.\n> \n\nFWIW I can readily reproduce it like this:\n\n drop table t;\n create table t (a int);\n insert into t values (1);\n create index on t using brin (a);\n update t set a = a;\n\nI however wonder if maybe we should do the check in index_insert_cleanup\nand not in the AM callback. That seems simpler / better, because the AM\ncallbacks then can't make this mistake.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Nov 2023 11:34:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 11/27/23 11:34, Tomas Vondra wrote:\n> \n> \n> On 11/27/23 08:37, Richard Guo wrote:\n>>\n>> On Mon, Nov 27, 2023 at 1:53 PM Soumyadeep Chakraborty\n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> On Sun, Nov 26, 2023 at 9:28 PM Richard Guo <[email protected]\n>> <mailto:[email protected]>> wrote:\n>> > It seems that we have an oversight in this commit. If there is no\n>> tuple\n>> > that has been inserted, we wouldn't have an available insert state in\n>> > the clean up phase. So the Assert in brininsertcleanup() is not\n>> always\n>> > right. For example:\n>> >\n>> > regression=# update brin_summarize set value = brin_summarize.value;\n>> > server closed the connection unexpectedly\n>>\n>> I wasn't able to repro the issue on\n>> 86b64bafc19c4c60136a4038d2a8d1e6eecc59f2.\n>> with UPDATE/INSERT:\n>>\n>> This could be because since c5b7ba4e67aeb5d6f824b74f94114d99ed6e42b7,\n>> we have moved ExecOpenIndices()\n>> from ExecInitModifyTable() to ExecInsert(). Since we never open the\n>> indices if nothing is\n>> inserted, we would never attempt to close them with ExecCloseIndices()\n>> while the ii_AmCache\n>> is NULL (which is what causes this assertion failure).\n>>\n>>\n>> AFAICS we would also open the indices from ExecUpdate(). So if we\n>> update the table in a way that no new tuples are inserted, we will have\n>> this issue. As I showed previously, the query below crashes for me on\n>> latest master (dc9f8a7983).\n>>\n>> regression=# update brin_summarize set value = brin_summarize.value;\n>> server closed the connection unexpectedly\n>>\n>> There are other code paths that call ExecOpenIndices(), such as \n>> ExecMerge(). I believe it's not hard to create queries that trigger\n>> this Assert for those cases.\n>>\n> \n> FWIW I can readily reproduce it like this:\n> \n> drop table t;\n> create table t (a int);\n> insert into t values (1);\n> create index on t using brin (a);\n> update t set a = a;\n> \n> I however wonder if maybe we should do the check in index_insert_cleanup\n> and not in the AM callback. That seems simpler / better, because the AM\n> callbacks then can't make this mistake.\n> \n\nI did it this way (check in index_insert_cleanup) and pushed. Thanks for\nthe report!\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:54:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Hello Tomas and Soumyadeep,\n\n25.11.2023 23:06, Tomas Vondra wrote:\n> I've done a bit more cleanup on the last version of the patch (renamed\n> the fields to start with bis_ as agreed, rephrased the comments / docs /\n> commit message a bit) and pushed.\n\nPlease look at a query, which produces warnings similar to the ones\nobserved upthread:\nCREATE TABLE t(a INT);\nINSERT INTO t SELECT x FROM generate_series(1,10000) x;\nCREATE INDEX idx ON t USING brin (a);\nREINDEX index CONCURRENTLY idx;\n\nWARNING: resource was not closed: [1863] (rel=base/16384/16389, blockNum=1, flags=0x93800000, refcount=1 1)\nWARNING: resource was not closed: [1862] (rel=base/16384/16389, blockNum=0, flags=0x93800000, refcount=1 1)\n\nThe first bad commit for this anomaly is c1ec02be1.\n\nMay be you would also want to fix in passing some typos/inconsistencies\nintroduced with recent brin-related commits:\ns/bs_blkno/bt_blkno/\ns/emptry/empty/\ns/firt/first/\ns/indexinsertcleanup/aminsertcleanup/\ns/ maxRange/ nextRange/\ns/paga /page /\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 11 Dec 2023 18:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 12/11/23 16:00, Alexander Lakhin wrote:\n> Hello Tomas and Soumyadeep,\n> \n> 25.11.2023 23:06, Tomas Vondra wrote:\n>> I've done a bit more cleanup on the last version of the patch (renamed\n>> the fields to start with bis_ as agreed, rephrased the comments / docs /\n>> commit message a bit) and pushed.\n> \n> Please look at a query, which produces warnings similar to the ones\n> observed upthread:\n> CREATE TABLE t(a INT);\n> INSERT INTO t SELECT x FROM generate_series(1,10000) x;\n> CREATE INDEX idx ON t USING brin (a);\n> REINDEX index CONCURRENTLY idx;\n> \n> WARNING: resource was not closed: [1863] (rel=base/16384/16389,\n> blockNum=1, flags=0x93800000, refcount=1 1)\n> WARNING: resource was not closed: [1862] (rel=base/16384/16389,\n> blockNum=0, flags=0x93800000, refcount=1 1)\n> \n> The first bad commit for this anomaly is c1ec02be1.\n> \n\nThanks for the report. I haven't investigated what the issue is, but it\nseems we fail to release the buffers in some situations - I'd bet we\nfail to call the cleanup callback in some place, or something like that.\nI'll take a look tomorrow.\n\n> May be you would also want to fix in passing some typos/inconsistencies\n> introduced with recent brin-related commits:\n> s/bs_blkno/bt_blkno/\n> s/emptry/empty/\n> s/firt/first/\n> s/indexinsertcleanup/aminsertcleanup/\n> s/ maxRange/ nextRange/\n> s/paga /page /\n> \n\nDefinitely. Thanks for noticing those!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 11 Dec 2023 16:41:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 12/11/23 16:41, Tomas Vondra wrote:\n> On 12/11/23 16:00, Alexander Lakhin wrote:\n>> Hello Tomas and Soumyadeep,\n>>\n>> 25.11.2023 23:06, Tomas Vondra wrote:\n>>> I've done a bit more cleanup on the last version of the patch (renamed\n>>> the fields to start with bis_ as agreed, rephrased the comments / docs /\n>>> commit message a bit) and pushed.\n>>\n>> Please look at a query, which produces warnings similar to the ones\n>> observed upthread:\n>> CREATE TABLE t(a INT);\n>> INSERT INTO t SELECT x FROM generate_series(1,10000) x;\n>> CREATE INDEX idx ON t USING brin (a);\n>> REINDEX index CONCURRENTLY idx;\n>>\n>> WARNING: resource was not closed: [1863] (rel=base/16384/16389,\n>> blockNum=1, flags=0x93800000, refcount=1 1)\n>> WARNING: resource was not closed: [1862] (rel=base/16384/16389,\n>> blockNum=0, flags=0x93800000, refcount=1 1)\n>>\n>> The first bad commit for this anomaly is c1ec02be1.\n>>\n> \n> Thanks for the report. I haven't investigated what the issue is, but it\n> seems we fail to release the buffers in some situations - I'd bet we\n> fail to call the cleanup callback in some place, or something like that.\n> I'll take a look tomorrow.\n> \n\nYeah, just as I expected this seems to be a case of failing to call the\nindex_insert_cleanup after doing some inserts - in this case the inserts\nhappen in table_index_validate_scan, but validate_index has no idea it\nneeds to do the cleanup.\n\nThe attached 0001 patch fixes this by adding the call to validate_index,\nwhich seems like the proper place as it's where the indexInfo is\nallocated and destroyed.\n\nBut this makes me think - are there any other places that might call\nindex_insert without then also doing the cleanup? I did grep for the\nindex_insert() calls, and it seems OK except for this one.\n\nThere's a call in toast_internals.c, but that seems OK because that only\ndeals with btree indexes (and those don't need any cleanup). The same\nlogic applies to unique_key_recheck(). The rest goes through\nexecIndexing.c, which should do the cleanup in ExecCloseIndices().\n\nNote: We should probably call the cleanup even in the btree cases, even\nif only to make it clear it needs to be called after index_insert().\n\nI was thinking maybe we should have some explicit call to destroy the\nIndexInfo, but that seems rather inconvenient - it'd force everyone to\ncarefully track lifetimes of the IndexInfo instead of just relying on\nmemory context reset/destruction. That seems quite error-prone.\n\nI propose we do a much simpler thing instead - allow the cache may be\ninitialized / cleaned up repeatedly, and make sure it gets reset at\nconvenient place (typically after index_insert calls that don't go\nthrough execIndexing). That'd mean the cleanup does not need to happen\nvery far from the index_insert(), which makes the reasoning much easier.\n0002 does this.\n\nBut maybe there's a better way to ensure the cleanup gets called even\nwhen not using execIndexing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 12 Dec 2023 12:25:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Tue, Dec 12, 2023 at 3:25 AM Tomas Vondra\n<[email protected]> wrote:\n\n\n> The attached 0001 patch fixes this by adding the call to validate_index,\nwhich seems like the proper place as it's where the indexInfo is\nallocated and destroyed.\n\nYes, and by reading validate_index's header comment, there is a clear\nexpectation that we will be adding tuples to the index in the table AM call\ntable_index_validate_scan (albeit \"validating\" doesn't necessarily convey this\nside effect). So, it makes perfect sense to call it here.\n\n\n> But this makes me think - are there any other places that might call\n> index_insert without then also doing the cleanup? I did grep for the\n> index_insert() calls, and it seems OK except for this one.\n>\n> There's a call in toast_internals.c, but that seems OK because that only\n> deals with btree indexes (and those don't need any cleanup). The same\n> logic applies to unique_key_recheck(). The rest goes through\n> execIndexing.c, which should do the cleanup in ExecCloseIndices().\n>\n> Note: We should probably call the cleanup even in the btree cases, even\n> if only to make it clear it needs to be called after index_insert().\n\nAgreed. Doesn't feel great, but yeah all of these btree specific code does call\nthrough index_* functions, instead of calling btree functions directly. So,\nideally we should follow through with that pattern and call the cleanup\nexplicitly. But we are introducing per-tuple overhead that is totally wasted.\nMaybe we can add a comment instead like:\n\nvoid\ntoast_close_indexes(Relation *toastidxs, int num_indexes, LOCKMODE lock)\n{\nint i;\n\n/*\n* Save a few cycles by choosing not to call index_insert_cleanup(). Toast\n* indexes are btree, which don't have a aminsertcleanup() anyway.\n*/\n\n/* Close relations and clean up things */\n...\n}\n\nAnd add something similar for unique_key_recheck()? That said, I am also not\nopposed to just accepting these wasted cycles, if the commenting seems wonky.\n\n> I propose we do a much simpler thing instead - allow the cache may be\n> initialized / cleaned up repeatedly, and make sure it gets reset at\n> convenient place (typically after index_insert calls that don't go\n> through execIndexing). That'd mean the cleanup does not need to happen\n> very far from the index_insert(), which makes the reasoning much easier.\n> 0002 does this.\n\nThat kind of goes against the ethos of the ii_AmCache, which is meant to capture\nstate to be used across multiple index inserts. Also, quoting the current docs:\n\n\"If the index AM wishes to cache data across successive index insertions\nwithin an SQL statement, it can allocate space\nin <literal>indexInfo->ii_Context</literal> and store a pointer to the\ndata in <literal>indexInfo->ii_AmCache</literal> (which will be NULL\ninitially). After the index insertions complete, the memory will be freed\nautomatically. If additional cleanup is required (e.g. if the cache contains\nbuffers and tuple descriptors), the AM may define an optional function\n<literal>indexinsertcleanup</literal>, called before the memory is released.\"\n\nThe memory will be freed automatically (as soon as ii_Context goes away). So,\nwhy would we explicitly free it, like in the attached 0002 patch? 
And the whole\npoint of the cleanup function is to do something other than freeing memory, as\nthe docs note highlights so well.\n\nAlso, the 0002 patch does introduce per-tuple function call overhead in\nheapam_index_validate_scan().\n\nAlso, now we have split brain...in certain situations we want to call\nindex_insert_cleanup() per index insert and in certain cases we would like it\ncalled once for all inserts. Not very easy to understand IMO.\n\nFinally, I don't think that any index AM would have anything to clean up after\nevery insert.\n\nI had tried (abused) an approach with MemoryContextCallbacks upthread and that\nseems a no-go. And yes I agree, having a dual to makeIndexInfo() (like\ndestroyIndexInfo()) means that we lose the benefits of ii_Context. That could\nbe disruptive to existing AMs in-core and outside.\n\nAll said and done, v1 has my vote.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Wed, 13 Dec 2023 00:12:25 -0800",
"msg_from": "Soumyadeep Chakraborty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "\nOn 12/13/23 09:12, Soumyadeep Chakraborty wrote:\n> On Tue, Dec 12, 2023 at 3:25 AM Tomas Vondra\n> <[email protected]> wrote:\n> \n> \n>> The attached 0001 patch fixes this by adding the call to validate_index,\n> which seems like the proper place as it's where the indexInfo is\n> allocated and destroyed.\n> \n> Yes, and by reading validate_index's header comment, there is a clear\n> expectation that we will be adding tuples to the index in the table AM call\n> table_index_validate_scan (albeit \"validating\" doesn't necessarily convey this\n> side effect). So, it makes perfect sense to call it here.\n> \n\nOK\n\n> \n>> But this makes me think - are there any other places that might call\n>> index_insert without then also doing the cleanup? I did grep for the\n>> index_insert() calls, and it seems OK except for this one.\n>>\n>> There's a call in toast_internals.c, but that seems OK because that only\n>> deals with btree indexes (and those don't need any cleanup). The same\n>> logic applies to unique_key_recheck(). The rest goes through\n>> execIndexing.c, which should do the cleanup in ExecCloseIndices().\n>>\n>> Note: We should probably call the cleanup even in the btree cases, even\n>> if only to make it clear it needs to be called after index_insert().\n> \n> Agreed. Doesn't feel great, but yeah all of these btree specific code does call\n> through index_* functions, instead of calling btree functions directly. So,\n> ideally we should follow through with that pattern and call the cleanup\n> explicitly. But we are introducing per-tuple overhead that is totally wasted.\n\nI haven't tried but I very much doubt this will be measurable. It's just\na trivial check if a pointer is NULL. We do far more expensive stuff in\nthis code path.\n\n> Maybe we can add a comment instead like:\n> \n> void\n> toast_close_indexes(Relation *toastidxs, int num_indexes, LOCKMODE lock)\n> {\n> int i;\n> \n> /*\n> * Save a few cycles by choosing not to call index_insert_cleanup(). Toast\n> * indexes are btree, which don't have a aminsertcleanup() anyway.\n> */\n> \n> /* Close relations and clean up things */\n> ...\n> }\n> \n> And add something similar for unique_key_recheck()? That said, I am also not\n> opposed to just accepting these wasted cycles, if the commenting seems wonky.\n> \n\nI really don't want to do this sort of stuff unless we know it actually\nsaves something.\n\n>> I propose we do a much simpler thing instead - allow the cache may be\n>> initialized / cleaned up repeatedly, and make sure it gets reset at\n>> convenient place (typically after index_insert calls that don't go\n>> through execIndexing). That'd mean the cleanup does not need to happen\n>> very far from the index_insert(), which makes the reasoning much easier.\n>> 0002 does this.\n> \n> That kind of goes against the ethos of the ii_AmCache, which is meant\n> to capture state to be used across multiple index inserts.\n\nWhy would it be against the ethos? The point is that we reuse stuff over\nmultiple index_insert() calls. 
If we can do that for all inserts, cool.\nBut if that's difficult, it's maybe better to cache for smaller batches\nof inserts (instead of making it more complex for all index AMs, even\nthose not doing any caching).\n\n> Also, quoting the current docs:\n> \n> \"If the index AM wishes to cache data across successive index insertions\n> within an SQL statement, it can allocate space\n> in <literal>indexInfo->ii_Context</literal> and store a pointer to the\n> data in <literal>indexInfo->ii_AmCache</literal> (which will be NULL\n> initially). After the index insertions complete, the memory will be freed\n> automatically. If additional cleanup is required (e.g. if the cache contains\n> buffers and tuple descriptors), the AM may define an optional function\n> <literal>indexinsertcleanup</literal>, called before the memory is released.\"\n> \n> The memory will be freed automatically (as soon as ii_Context goes away). So,\n> why would we explicitly free it, like in the attached 0002 patch? And the whole\n> point of the cleanup function is to do something other than freeing memory, as\n> the docs note highlights so well.\n> \n\nNot sure I follow. The whole reason for having the index_insert_cleanup\ncallback is we can't rely on ii_Context going away, exactly because that\njust throws away the memory and we need to release buffers etc.\n\nThe only reason why the 0002 patch does pfree() is that it clearly\nindicates whether the cache is initialized. We could have a third state\n\"allocated but not initialized\", but that doesn't seem worth it.\n\nIf you're saying the docs are misleading in some way, then maybe we need\nto clarify that.\n\n> Also, the 0002 patch does introduce per-tuple function call overhead in\n> heapam_index_validate_scan().\n> \n\nHow come? The cleanup is done only once after the scan completes. Isn't\nthat exactly what we want to do?\n\n> Also, now we have split brain...in certain situations we want to call\n> index_insert_cleanup() per index insert and in certain cases we would like it\n> called once for all inserts. Not very easy to understand IMO.\n> \n> Finally, I don't think that any index AM would have anything to clean up after\n> every insert.\n> \n\nBut I didn't suggest to call the cleanup after every index insert. If a\nfunction does a bunch of index_insert calls, it'd do the cleanup only\nonce. The point is that it'd happen \"close\" to the inserts, when we know\nit needs to be done. Yes, it might happen multiple times for the same\nquery, but that still likely saves quite a bit of work (compared to\nper-insert init+cleanup).\n\nWe need to call the cleanup at some point, and the only alternative I\ncan think of is to call it in every place that calls BuildIndexInfo\n(unless we can guarantee the place can't do index_insert).\n\n> I had tried (abused) an approach with MemoryContextCallbacks upthread and that\n> seems a no-go. And yes I agree, having a dual to makeIndexInfo() (like\n> destroyIndexInfo()) means that we lose the benefits of ii_Context. That could\n> be disruptive to existing AMs in-core and outside.\n> \n\nWhat do you mean by \"dual makeIndexInfo\"?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Dec 2023 22:27:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Hi All, not sure how to \"Specify thread msgid\" - choose one which i think is close to my new feature request.\r\n\r\nquery:\r\n\r\nSELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t1.a_indexed_col='some_value' OR t2.a_indexed_col='some_vable';\r\n\r\ncan the server automatically replace the OR logic above with UNION please? i.e. replace it with:\r\n\r\n(SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t1.a_indexed_col='some_value' )\r\nUNION\r\n(SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t2.a_indexed_col='some_vable');\r\n\r\nThanks",
"msg_date": "Thu, 21 Dec 2023 10:05:11 +0000",
"msg_from": "James Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 2023-Dec-21, James Wang wrote:\n\n> Hi All, not sure how to \"Specify thread msgid\" - choose one which i think is close to my new feature request.\n\nHello James, based on the \"Specify thread msgid\" message it looks like\nyou were trying to request a feature using the Commitfest website? That\nwon't work; the commitfest website is only intended as a tracker of\nin-progress development work. Without a Postgres code patch, that\nwebsite doesn't help you any. What you have done amounts to hijacking\nan unrelated mailing list thread, which is discouraged and frowned upon.\n\nThat said, sadly we don't have any official feature request system,\nPlease start a new thread by composing an entirely new message to\[email protected], and don't use the commitfest\nwebsite for it.\n\nThat said,\n\n> query:\n> \n> SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t1.a_indexed_col='some_value' OR t2.a_indexed_col='some_vable';\n> \n> can the server automatically replace the OR logic above with UNION please? i.e. replace it with:\n> \n> (SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t1.a_indexed_col='some_value' )\n> UNION\n> (SELECT count(1) FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id WHERE t2.a_indexed_col='some_vable');\n\nI have the feeling that this has already been discussed, but I can't\nfind anything useful in the mailing list archives.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n",
"msg_date": "Mon, 8 Jan 2024 12:23:14 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 2023-Dec-12, Tomas Vondra wrote:\n\n> I propose we do a much simpler thing instead - allow the cache may be\n> initialized / cleaned up repeatedly, and make sure it gets reset at\n> convenient place (typically after index_insert calls that don't go\n> through execIndexing). That'd mean the cleanup does not need to happen\n> very far from the index_insert(), which makes the reasoning much easier.\n> 0002 does this.\n\nI'm not in love with this 0002 patch; I think the layering after 0001 is\ncorrect in that the insert_cleanup call should remain in validate_index\nand called after the whole thing is done, but 0002 changes things so\nthat now every table AM has to worry about doing this correctly; and a\nbug of omission will not be detected unless you have a BRIN index on\nsuch a table and happen to use CREATE INDEX CONCURRENTLY. So a\ndeveloper has essentially zero chance to do things correctly, which I\nthink we'd rather avoid.\n\nSo I think we should do 0001 and perhaps some further tweaks to the\noriginal brininsert optimization commit: I think the aminsertcleanup\ncallback should receive the indexRelation as first argument; and also I\nthink it's not index_insert_cleanup() job to worry about whether\nii_AmCache is NULL or not, but instead the callback should be invoked\nalways, and then it's aminsertcleanup job to do nothing if ii_AmCache is\nNULL. That way, the index AM API doesn't have to worry about which\nparts of IndexInfo (or the indexRelation) is aminsertcleanup going to\ncare about. If we decide to change this, then the docs also need a bit\nof tweaking I think.\n\nLastly, I kinda disagree with the notion that only some of the callers\nof aminsert should call aminsertcleanup, even though btree doesn't have\nan aminsertcleanup and thus it can't affect TOAST or catalogs. Maybe we\ncan turn index_insert_cleanup into an inline function, which can quickly\ndo nothing if aminsertcleanup isn't defined. Then we no longer have the\nlayering violation where we assume that btree doesn't care. But the\nproposed change in this paragraph can be maybe handled separately to\navoid confusing things with the bugfix in the two paragraphs above.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n\n\n",
"msg_date": "Mon, 8 Jan 2024 16:51:22 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 2024-Jan-08, Alvaro Herrera wrote:\n\n> So I think we should do 0001 and perhaps some further tweaks to the\n> original brininsert optimization commit: [...]\n\nSo I propose the attached patch, which should fix the reported bug and\nthe things I mentioned above, and also the typos Alexander mentioned\nelsewhere in the thread.\n\n> Lastly, I kinda disagree with the notion that only some of the callers\n> of aminsert should call aminsertcleanup, even though btree doesn't have\n> an aminsertcleanup and thus it can't affect TOAST or catalogs. [...]\n\nI didn't do anything about this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 9 Jan 2024 11:43:56 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 1/8/24 16:51, Alvaro Herrera wrote:\n> On 2023-Dec-12, Tomas Vondra wrote:\n> \n>> I propose we do a much simpler thing instead - allow the cache may be\n>> initialized / cleaned up repeatedly, and make sure it gets reset at\n>> convenient place (typically after index_insert calls that don't go\n>> through execIndexing). That'd mean the cleanup does not need to happen\n>> very far from the index_insert(), which makes the reasoning much easier.\n>> 0002 does this.\n> \n> I'm not in love with this 0002 patch; I think the layering after 0001 is\n> correct in that the insert_cleanup call should remain in validate_index\n> and called after the whole thing is done, but 0002 changes things so\n> that now every table AM has to worry about doing this correctly; and a\n> bug of omission will not be detected unless you have a BRIN index on\n> such a table and happen to use CREATE INDEX CONCURRENTLY. So a\n> developer has essentially zero chance to do things correctly, which I\n> think we'd rather avoid.\n> \n\nTrue. If the AM code does not need to worry about this kind of stuff,\nthat would be good / less error prone.\n\nOne thing that is not very clear to me is that I don't think there's a\nvery good way to determine which places need the cleanup call. Because\nit depends on (a) what kind of index is used and (b) what happens in the\ncode called earlier (which may easily do arbitrary stuff). Which means\nwe have to call the cleanup whenever the code *might* have done inserts\ninto the index. Maybe it's not such an issue in practice, though.\n\n> So I think we should do 0001 and perhaps some further tweaks to the\n> original brininsert optimization commit: I think the aminsertcleanup\n> callback should receive the indexRelation as first argument; and also I\n> think it's not index_insert_cleanup() job to worry about whether\n> ii_AmCache is NULL or not, but instead the callback should be invoked\n> always, and then it's aminsertcleanup job to do nothing if ii_AmCache is\n> NULL. That way, the index AM API doesn't have to worry about which\n> parts of IndexInfo (or the indexRelation) is aminsertcleanup going to\n> care about. If we decide to change this, then the docs also need a bit\n> of tweaking I think.\n> \n\nYeah, passing the indexRelation to the am callback seems reasonable.\nIt's more consistent what we do for other callbacks, and perhaps the\ncallback might need the indexRelation.\n\nI don't quite see why we should invoke the callback with ii_AmCache=NULL\nthough. If there's nothing cached, why bother? It just means all cleanup\ncallbacks have to do this NULL check on their own.\n\n> Lastly, I kinda disagree with the notion that only some of the callers\n> of aminsert should call aminsertcleanup, even though btree doesn't have\n> an aminsertcleanup and thus it can't affect TOAST or catalogs. Maybe we\n> can turn index_insert_cleanup into an inline function, which can quickly\n> do nothing if aminsertcleanup isn't defined. Then we no longer have the\n> layering violation where we assume that btree doesn't care. 
But the\n> proposed change in this paragraph can be maybe handled separately to\n> avoid confusing things with the bugfix in the two paragraphs above.\n> \n\nAfter thinking about this a bit more I agree with you - we should call\nthe cleanup from each place calling aminsert, even if it's for nbtree\n(or other index types that don't require cleanup at the moment).\n\nI wonder if there's a nice way to check this in assert-enabled builds?\nCould we tweak nbtree (or index AM in general) to check that all places\nthat called aminsert also called aminsertcleanup?\n\nFor example, I was thinking we might add a flag to IndexInfo (separate\nfrom the ii_AmCache), tracking if aminsert() was called, and then later\ncheck the aminsertcleanup() got called too. The problem however is\nthere's no obviously convenient place for this check, because IndexInfo\nis not freed explicitly ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 13 Feb 2024 16:08:16 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
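The inline-wrapper idea discussed in the message above can be made concrete with a minimal sketch. This is illustrative only: it assumes the callback shape debated in this thread (aminsertcleanup taking the index relation as its first argument) and is not the committed code.

```c
#include "postgres.h"
#include "access/amapi.h"
#include "nodes/execnodes.h"
#include "utils/rel.h"

/*
 * Sketch only: a wrapper cheap enough that every aminsert() caller can
 * invoke it unconditionally.  For AMs that define no aminsertcleanup
 * callback (e.g. btree) it does nothing, so callers need not know which
 * AMs cache state in the IndexInfo.
 */
static inline void
index_insert_cleanup(Relation indexRelation, IndexInfo *indexInfo)
{
	Assert(indexRelation->rd_indam != NULL);

	if (indexRelation->rd_indam->aminsertcleanup != NULL)
		indexRelation->rd_indam->aminsertcleanup(indexRelation, indexInfo);
}
```

With a wrapper of this shape, the rule "if aminsert was called, call aminsertcleanup afterwards" costs callers almost nothing.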
{
"msg_contents": "On 2024-Feb-13, Tomas Vondra wrote:\n\n> One thing that is not very clear to me is that I don't think there's a\n> very good way to determine which places need the cleanup call. Because\n> it depends on (a) what kind of index is used and (b) what happens in the\n> code called earlier (which may easily do arbitrary stuff). Which means\n> we have to call the cleanup whenever the code *might* have done inserts\n> into the index. Maybe it's not such an issue in practice, though.\n\nI think it's not an issue, or rather that we should not try to guess.\nInstead make it a simple rule: if aminsert is called, then\naminsertcleanup must be called afterwards, period.\n\n> On 1/8/24 16:51, Alvaro Herrera wrote:\n\n> > So I think we should do 0001 and perhaps some further tweaks to the\n> > original brininsert optimization commit: I think the aminsertcleanup\n> > callback should receive the indexRelation as first argument; and also I\n> > think it's not index_insert_cleanup() job to worry about whether\n> > ii_AmCache is NULL or not, but instead the callback should be invoked\n> > always, and then it's aminsertcleanup job to do nothing if ii_AmCache is\n> > NULL. [...]\n\n> I don't quite see why we should invoke the callback with ii_AmCache=NULL\n> though. If there's nothing cached, why bother? It just means all cleanup\n> callbacks have to do this NULL check on their own.\n\nGuessing that aminsertcleanup is not needed when ii_AmCache is NULL\nseems like a leaky abstraction. I propose to have only the AM know\nwhether the cleanup call is important or not, without\nindex_insert_cleanup assuming that it's related to ii_AmCache. Somebody\ncould decide to have something completely different during insert\ncleanup, which is not in ii_AmCache.\n\n> I wonder if there's a nice way to check this in assert-enabled builds?\n> Could we tweak nbtree (or index AM in general) to check that all places\n> that called aminsert also called aminsertcleanup?\n> \n> For example, I was thinking we might add a flag to IndexInfo (separate\n> from the ii_AmCache), tracking if aminsert() was called, and Then later\n> check the aminsertcleanup() got called too. The problem however is\n> there's no obviously convenient place for this check, because IndexInfo\n> is not freed explicitly ...\n\nI agree it would be nice to have a way to verify, but it doesn't seem\n100% essential. After all, it's not very common to add new calls to\naminsert.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 29 Feb 2024 13:20:39 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
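To make the "only the AM knows whether cleanup matters" rule concrete, a rough sketch of a BRIN-side callback under that convention might look as follows. The BrinInsertState/bis_rmAccess details mirror the cache that brininsert keeps, but the snippet is an assumption for illustration, not the final committed function.

```c
/*
 * Sketch: invoked unconditionally via index_insert_cleanup(); it is the
 * AM's own job to notice that nothing was cached and return early.
 */
void
brininsertcleanup(Relation index, IndexInfo *indexInfo)
{
	BrinInsertState *bistate = (BrinInsertState *) indexInfo->ii_AmCache;

	/* brininsert() never ran, or cached nothing: nothing to release */
	if (bistate == NULL)
		return;

	/* release the revmap access struct kept across the inserts */
	brinRevmapTerminate(bistate->bis_rmAccess);
	indexInfo->ii_AmCache = NULL;
}
```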
{
"msg_contents": "On Thu, Feb 29, 2024 at 01:20:39PM +0100, Alvaro Herrera wrote:\n> I think it's not an issue, or rather that we should not try to guess.\n> Instead make it a simple rule: if aminsert is called, then\n> aminsertcleanup must be called afterwards, period.\n> \n> I agree it would be nice to have a way to verify, but it doesn't seem\n> 100% essential. After all, it's not very common to add new calls to\n> aminsert.\n\nThis thread is listed as an open item. What's the follow-up plan?\nThe last email of this thread is dated as of the 29th of February,\nwhich was 6 weeks ago.\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 16:07:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On 4/18/24 09:07, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 01:20:39PM +0100, Alvaro Herrera wrote:\n>> I think it's not an issue, or rather that we should not try to guess.\n>> Instead make it a simple rule: if aminsert is called, then\n>> aminsertcleanup must be called afterwards, period.\n>>\n>> I agree it would be nice to have a way to verify, but it doesn't seem\n>> 100% essential. After all, it's not very common to add new calls to\n>> aminsert.\n> \n> This thread is listed as an open item. What's the follow-up plan?\n> The last email of this thread is dated as of the 29th of February,\n> which was 6 weeks ago.\n\nApologies, I got distracted by the other patches. The bug is still\nthere, I believe the patch shared by Alvaro in [1] is the right way to\ndeal with it. I'll take care of that today/tomorrow.\n\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2024 11:35:43 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "Hi,\n\nHere's two patched to deal with this open item. 0001 is a trivial fix of\ntypos and wording, I moved it into a separate commit for clarity. 0002\ndoes the actual fix - adding the index_insert_cleanup(). It's 99% the\npatch Alvaro shared some time ago, with only some minor formatting\ntweaks by me.\n\nI've also returned to this Alvaro's comment:\n\n> Lastly, I kinda disagree with the notion that only some of the callers\n> of aminsert should call aminsertcleanup, even though btree doesn't\n> have an aminsertcleanup and thus it can't affect TOAST or catalogs.\n\nwhich was a reaction to my earlier statement about places calling\nindex_insert():\n\n> There's a call in toast_internals.c, but that seems OK because that\n> only deals with btree indexes (and those don't need any cleanup). The\n> same logic applies to unique_key_recheck(). The rest goes through\n> execIndexing.c, which should do the cleanup in ExecCloseIndices().\n\nI think Alvaro is right, so I went through all index_insert() callers\nand checked which need the cleanup. Luckily there's not that many of\nthem, only 5 in total call index_insert() directly:\n\n1) toast_save_datum (src/backend/access/common/toast_internals.c)\n\n This is safe, because the index_insert() passes indexInfo=NULL, so\n there can't possibly be any cache. If we ever decide to pass a valid\n indexInfo, we can add the cleanup, now it seems pointless.\n\n Note: If someone created a BRIN index on a TOAST table, that'd already\n crash, because BRIN blindly dereferences the indexInfo. Maybe that\n should be fixed, but we don't support CREATE INDEX on TOAST tables.\n\n2) heapam_index_validate_scan (src/backend/access/heap/heapam_handler.c)\n\n Covered by the committed fix, adding cleanup to validate_index.\n\n3) CatalogIndexInsert (src/backend/catalog/indexing.c)\n\n Covered by all callers also calling CatalogCloseIndexes, which in turn\n calls ExecCloseIndices and cleanup.\n\n4) unique_key_recheck (src/backend/commands/constraint.c)\n\n This seems like the only place missing the cleanup call.\n\n5) ExecInsertIndexTuples (src/backend/executor/execIndexing.c)\n\n Should be covered by ExecCloseIndices, called after the insertions.\n\n\nSo it seems only (4) unique_key_recheck needs the extra call (it can't\nreally happen higher because the indexInfo is a local variable). So the\n0002 patch adds the call.\n\nThe patch also adds a test for this (or rather tweaks an existing one).\n\n\nIt's a bit too late for me to push this now, I'll do so early tomorrow.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 19 Apr 2024 00:13:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
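For the one call site identified above as missing the cleanup, the fix would presumably look roughly like the sketch below. It is abbreviated, and the variable names around the index_insert() call are illustrative rather than copied from constraint.c.

```c
	/* ... in unique_key_recheck(), after building the IndexInfo ... */
	indexInfo = BuildIndexInfo(indexRel);

	/* re-check the key with UNIQUE_CHECK_EXISTING */
	index_insert(indexRel, values, isnull, &tmptid,
				 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
				 false, indexInfo);

	/*
	 * The IndexInfo is a local variable here, so no caller higher up can
	 * release whatever the AM cached in ii_AmCache; do it ourselves.
	 */
	index_insert_cleanup(indexRel, indexInfo);

	index_close(indexRel, RowExclusiveLock);
```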
{
"msg_contents": "On 4/19/24 00:13, Tomas Vondra wrote:\n> ...\n> \n> It's a bit too late for me to push this now, I'll do so early tomorrow.\n> \n\nFWIW I've pushed both patches, which resolves the open item, so I've\nmoved it to the \"resolved\" part on wiki.\n\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Apr 2024 16:14:00 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 04:14:00PM +0200, Tomas Vondra wrote:\n> FWIW I've pushed both patches, which resolves the open item, so I've\n> moved it to the \"resolved\" part on wiki.\n\nThanks, Tomas!\n--\nMichael",
"msg_date": "Sat, 20 Apr 2024 08:25:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: brininsert optimization opportunity"
}
] |
[
{
"msg_contents": "Hello,\n\nThis is the first time I've run pgindent on my current machine, and it\ndoesn't seem to be making any modifications to source files. For\nexample this command:\n\n./src/tools/pgindent/pgindent src/backend/optimizer/path/allpaths.c\n\nleaves the allpaths.c file unchanged despite my having some very long\nfunction calls. I've downloaded the latest typedefs list, but I\nhaven't added any types anyway.\n\nWhat obvious thing am I missing?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 3 Jul 2023 21:12:58 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgindent (probably my missing something obvious)"
},
{
"msg_contents": "James Coleman <[email protected]> writes:\n> This is the first time I've run pgindent on my current machine, and it\n> doesn't seem to be making any modifications to source files. For\n> example this command:\n\n> ./src/tools/pgindent/pgindent src/backend/optimizer/path/allpaths.c\n\n> leaves the allpaths.c file unchanged despite my having some very long\n> function calls.\n\n\"Long function calls\" aren't necessarily something pgindent would\nchange. Have you tried intentionally misindenting some lines?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jul 2023 21:20:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent (probably my missing something obvious)"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 9:20 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > This is the first time I've run pgindent on my current machine, and it\n> > doesn't seem to be making any modifications to source files. For\n> > example this command:\n>\n> > ./src/tools/pgindent/pgindent src/backend/optimizer/path/allpaths.c\n>\n> > leaves the allpaths.c file unchanged despite my having some very long\n> > function calls.\n>\n> \"Long function calls\" aren't necessarily something pgindent would\n> change. Have you tried intentionally misindenting some lines?\n>\n> regards, tom lane\n\nHmm, yeah, that works.\n\nMy heuristic for what pgindent changes must be wrong. The long\nfunction calls (and 'if' conditions) seem obviously out of place to my\neyes with the surrounding code. Does that mean the surrounding code\nwas just hand-prettified?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 3 Jul 2023 21:41:10 -0400",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgindent (probably my missing something obvious)"
},
{
"msg_contents": "James Coleman <[email protected]> writes:\n> My heuristic for what pgindent changes must be wrong. The long\n> function calls (and 'if' conditions) seem obviously out of place to my\n> eyes with the surrounding code. Does that mean the surrounding code\n> was just hand-prettified?\n\npgindent won't usually editorialize on line breaks within C\nstatements. (It *will* re-flow comment text, if the comment block\nisn't at the left margin.) It seems to feel free to play with\nhorizontal whitespace, but not to add or remove newlines within a\nstatement. I do know that it will move curly braces around to meet\nformatting rules, but I've not seen it do similar changes within a\nfunction call or if-condition. So it's up to you to break the lines\nin a reasonable way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jul 2023 22:04:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent (probably my missing something obvious)"
}
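An illustrative before/after (made-up code, not taken from the tree) of the distinction Tom describes: pgindent corrects indentation and brace placement, but leaves hand-chosen line breaks inside a statement alone.

```c
/* Misindented code and misplaced braces like this ... */
if (needs_sort)
  {
      sorted = create_sort_path(root, rel, subpath, pathkeys, limit);
  }

/* ... are rewritten by pgindent to project style: */
if (needs_sort)
{
	sorted = create_sort_path(root, rel, subpath, pathkeys, limit);
}

/*
 * But a statement you broke across lines yourself is left exactly as
 * written, even if it would fit on fewer lines:
 */
sorted = create_sort_path(root,
						  rel,
						  subpath,
						  pathkeys,
						  limit);
```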
] |
[
{
"msg_contents": "Hello!\r\n\r\nI propose the attached patch to be applied on the 'master' branch\r\nof PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\r\n\r\nIt is not intended to be a replacement for Cirrus CI, but simply suggestion for the\r\nPostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\r\nit while developing/testing PostgreSQL.\r\n\r\nThe intent is to facilitate collaboration among GitLab users, promote standardization\r\nand consistency, and ultimately, improve testing and code quality.\r\n\r\nRobin Newhouse\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 4 Jul 2023 23:44:45 +0000",
"msg_from": "\"Newhouse, Robin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On 2023-07-04 Tu 19:44, Newhouse, Robin wrote:\n>\n> Hello!\n>\n> I propose the attached patch to be applied on the 'master' branch\n>\n> of PostgreSQL to add GitLab CI automation alongside Cirrus CI in the \n> PostgreSQL repository.\n>\n> It is not intended to be a replacement for Cirrus CI, but simply \n> suggestion for the\n>\n> PostgreSQL project to host centrally a Gitlab CI definition for those \n> who prefer to use\n>\n> it while developing/testing PostgreSQL.\n>\n> The intent is to facilitate collaboration among GitLab users, promote \n> standardization\n>\n> and consistency, and ultimately, improve testing and code quality.\n>\n\nThis seems very RedHat-centric, which I'm not sure is a good idea. Also, \nshouldn't at least some of these recipes call dnf and dnf-builddep \ninstead of yum and yum-build-dep?\n\nIf we're going to do this then arguably we should also be supporting \nGitHub Actions and who knows what other CI frameworks. There is a case \nfor us special casing Cirrus CI because it's used for the cfbot.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-04 Tu 19:44, Newhouse, Robin\n wrote:\n\n\n\n\n\n\nHello!\n \nI propose the attached patch to be applied\n on the 'master' branch\n \nof PostgreSQL to add GitLab CI automation\n alongside Cirrus CI in the PostgreSQL repository.\n \nIt is not intended to be a replacement for\n Cirrus CI, but simply suggestion for the\nPostgreSQL project to host centrally a\n Gitlab CI definition for those who prefer to use\nit while developing/testing PostgreSQL.\n \nThe intent is to facilitate collaboration\n among GitLab users, promote standardization\n \nand consistency, and ultimately, improve\n testing and code quality.\n\n\n\n\nThis seems very RedHat-centric, which I'm not sure is a good\n idea. Also, shouldn't at least some of these recipes call dnf and\n dnf-builddep instead of yum and yum-build-dep?\nIf we're going to do this then arguably we should also be\n supporting GitHub Actions and who knows what other CI frameworks.\n There is a case for us special casing Cirrus CI because it's used\n for the cfbot.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 5 Jul 2023 09:22:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 15:22, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-07-04 Tu 19:44, Newhouse, Robin wrote:\n>\n> > Hello!\n> >\n> > I propose the attached patch to be applied on the 'master' branch\n> > of PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\n\nCan you configure GitLab to use a .gitlab-ci.yml file that is not in\nthe root directory?\n\n> > It is not intended to be a replacement for Cirrus CI, but simply suggestion for the\n> > PostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\n> > it while developing/testing PostgreSQL.\n> >\n> > The intent is to facilitate collaboration among GitLab users, promote standardization\n> > and consistency, and ultimately, improve testing and code quality.\n> >\n> This seems very RedHat-centric, which I'm not sure is a good idea. Also, shouldn't at least some of these recipes call dnf and dnf-builddep instead of yum and yum-build-dep?\n\nI don't think it's bad to add an automated test suite for redhat-based images?\n\n> If we're going to do this then arguably we should also be supporting GitHub Actions and who knows what other CI frameworks. There is a case for us special casing Cirrus CI because it's used for the cfbot.\n\nI think there's a good case for _not_ using Cirrus CI, namely that the\nlicense may be prohibitive, self-management of the build system\n(storage of artifacts, UI, database) is missing for Cirrus CI, and it\nalso seems to be unable to run automated CI on repositories that\naren't hosted on Github.\nI think these are good arguments for adding a GitLab CI configuration.\n\nUnless the cfbot is entirely under management of the PostgreSQL\nproject (which it doesn't appear to be, as far as I know the URL is\nstill cfbot.cputube.org indicating some amount of external control)\nthe only special casing for Cirrus CI within the project seems to be\n\"people have experience with this tool\", which is good, but not\nexclusive to Cirrus CI - clearly there are also people here who have\nexperience with (or are interested in) maintaining GitLab CI\nconfigurations.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 17:58:34 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On 2023-07-05 We 11:58, Matthias van de Meent wrote:\n> On Wed, 5 Jul 2023 at 15:22, Andrew Dunstan<[email protected]> wrote:\n>>\n>> On 2023-07-04 Tu 19:44, Newhouse, Robin wrote:\n>>\n>>> Hello!\n>>>\n>>> I propose the attached patch to be applied on the 'master' branch\n>>> of PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\n> Can you configure GitLab to use a .gitlab-ci.yml file that is not in\n> the root directory?\n>\n>>> It is not intended to be a replacement for Cirrus CI, but simply suggestion for the\n>>> PostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\n>>> it while developing/testing PostgreSQL.\n>>>\n>>> The intent is to facilitate collaboration among GitLab users, promote standardization\n>>> and consistency, and ultimately, improve testing and code quality.\n>>>\n>> This seems very RedHat-centric, which I'm not sure is a good idea. Also, shouldn't at least some of these recipes call dnf and dnf-builddep instead of yum and yum-build-dep?\n> I don't think it's bad to add an automated test suite for redhat-based images?\n\n\nI didn't suggest it wasn't just that the coverage should be broader.\n\n\n>\n>> If we're going to do this then arguably we should also be supporting GitHub Actions and who knows what other CI frameworks. There is a case for us special casing Cirrus CI because it's used for the cfbot.\n> I think there's a good case for _not_ using Cirrus CI, namely that the\n> license may be prohibitive, self-management of the build system\n> (storage of artifacts, UI, database) is missing for Cirrus CI, and it\n> also seems to be unable to run automated CI on repositories that\n> aren't hosted on Github.\n> I think these are good arguments for adding a GitLab CI configuration.\n>\n> Unless the cfbot is entirely under management of the PostgreSQL\n> project (which it doesn't appear to be, as far as I know the URL is\n> still cfbot.cputube.org indicating some amount of external control)\n> the only special casing for Cirrus CI within the project seems to be\n> \"people have experience with this tool\", which is good, but not\n> exclusive to Cirrus CI - clearly there are also people here who have\n> experience with (or are interested in) maintaining GitLab CI\n> configurations.\n>\n\nI would not assume too much from the URL. The PostgreSQL BuildFarm \noperated for many years with a URL that was not under postgresql.org. I \nassume the URL is partly a function of the fact that Thomas started the \ncfbot as a bit of a skunkworks project. 
However it's run, the fact is \nthat the project relies to some extent on it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-07-05 We 11:58, Matthias van de\n Meent wrote:\n\n\nOn Wed, 5 Jul 2023 at 15:22, Andrew Dunstan <[email protected]> wrote:\n\n\n\n\nOn 2023-07-04 Tu 19:44, Newhouse, Robin wrote:\n\n\n\nHello!\n\nI propose the attached patch to be applied on the 'master' branch\nof PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\n\n\n\n\nCan you configure GitLab to use a .gitlab-ci.yml file that is not in\nthe root directory?\n\n\n\n\nIt is not intended to be a replacement for Cirrus CI, but simply suggestion for the\nPostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\nit while developing/testing PostgreSQL.\n\nThe intent is to facilitate collaboration among GitLab users, promote standardization\nand consistency, and ultimately, improve testing and code quality.\n\n\n\nThis seems very RedHat-centric, which I'm not sure is a good idea. Also, shouldn't at least some of these recipes call dnf and dnf-builddep instead of yum and yum-build-dep?\n\n\n\nI don't think it's bad to add an automated test suite for redhat-based images?\n\n\n\nI didn't suggest it wasn't just that the coverage should be\n broader.\n\n\n\n\n\n\n\n\nIf we're going to do this then arguably we should also be supporting GitHub Actions and who knows what other CI frameworks. There is a case for us special casing Cirrus CI because it's used for the cfbot.\n\n\n\nI think there's a good case for _not_ using Cirrus CI, namely that the\nlicense may be prohibitive, self-management of the build system\n(storage of artifacts, UI, database) is missing for Cirrus CI, and it\nalso seems to be unable to run automated CI on repositories that\naren't hosted on Github.\nI think these are good arguments for adding a GitLab CI configuration.\n\nUnless the cfbot is entirely under management of the PostgreSQL\nproject (which it doesn't appear to be, as far as I know the URL is\nstill cfbot.cputube.org indicating some amount of external control)\nthe only special casing for Cirrus CI within the project seems to be\n\"people have experience with this tool\", which is good, but not\nexclusive to Cirrus CI - clearly there are also people here who have\nexperience with (or are interested in) maintaining GitLab CI\nconfigurations.\n\n\n\n\n\nI would not assume too much from the URL. The PostgreSQL\n BuildFarm operated for many years with a URL that was not under\n postgresql.org. I assume the URL is partly a function of the fact\n that Thomas started the cfbot as a bit of a skunkworks project.\n However it's run, the fact is that the project relies to some\n extent on it.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 6 Jul 2023 07:32:14 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On 06.07.23 13:32, Andrew Dunstan wrote:\n>>> This seems very RedHat-centric, which I'm not sure is a good idea. Also, shouldn't at least some of these recipes call dnf and dnf-builddep instead of yum and yum-build-dep?\n>> I don't think it's bad to add an automated test suite for redhat-based images?\n> \n> I didn't suggest it wasn't just that the coverage should be broader.\n\nIf we were to accept this (or other providers besides Cirrus), then I \nthink they should run the exact same configurations that we have for \nCirrus right now (or possibly subsets or supersets, depending on \navailability and capabilities). Those have been put there with a lot of \ncare to get efficient and reasonably broad coverage. There is no point \nin starting that whole journey over again.\n\nIf someone thinks we should have more coverage for Red Hat-based \nplatforms, then let's put that into the Cirrus configuration. That \nshould be independent of the choice of CI provider.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 17:19:13 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 6:22 AM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-07-04 Tu 19:44, Newhouse, Robin wrote:\n>\n> Hello!\n>\n>\n>\n> I propose the attached patch to be applied on the 'master' branch\n>\n> of PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\n>\n>\n>\n> It is not intended to be a replacement for Cirrus CI, but simply suggestion for the\n>\n> PostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\n>\n> it while developing/testing PostgreSQL.\n>\n>\n>\n> The intent is to facilitate collaboration among GitLab users, promote standardization\n>\n> and consistency, and ultimately, improve testing and code quality.\n>\n>\n> This seems very RedHat-centric, which I'm not sure is a good idea.\n\nA few years ago, a proposal to use CentOS may not have been a bad\nproposal. But since Redhat changed CentOS to be an upstream distro\n(rather than a rebuild of RHEL), I do see a reason to consider RHEL as\na candidate in our CI.\n\nI think the idea of a pre-buildfarm CI is to enable contributors catch\nproblems before they're merged, or even before proposed as a patch to\nthe community. So if our CI includes support for a prominent Linux\ndistro, I think it's worth it, to provide coverage for the large\necosystem that's based on RHEL and its derivatives.\n\nWould using RockyLinux assuage these concerns? Perhaps, it would.\n\n> If we're going to do this then arguably we should also be supporting GitHub Actions and who knows what other CI frameworks. There is a case for us special casing Cirrus CI because it's used for the cfbot.\n\nWe've already lost that battle by supporting one\ncommercial/non-community provider. From Anrdres' email [1]:\n\n> An obvious criticism of the effort to put CI runner infrastructure into core\n> is that they are effectively all proprietary technology, and that we should be\n> hesistant to depend too much on one of them. I think that's a valid\n> concern. However, once one CI integration is done, a good chunk (but not all!)\n> the work is transferrable to another CI solution, which I do think reduces the\n> dependency sufficiently.\n\nSo it seems that supporting more than one CI was always on the cards.\nCirrus was chosen for its advantages that Andres mentions in the\nemail, but also for the reason that cfbot had chosen Cirrus. I can\nimagine if cfbot was developed against some other CI, it's very likely\nthat we'd be using that other CI instead of Cirrus.\n\n[1]: https://www.postgresql.org/message-id/20211001222752.wrz7erzh4cajvgp6%40alap3.anarazel.de\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:10:33 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "> On 6 Jul 2023, at 20:10, Gurjeet Singh <[email protected]> wrote:\n\n> I can\n> imagine if cfbot was developed against some other CI, it's very likely\n> that we'd be using that other CI instead of Cirrus.\n\nThe CFBot originally used Travis, but switched in late 2020 when Travis almost\nover night become hard to use for open source projects:\n\n https://github.com/macdice/cfbot/commit/a62aa6d77dd4cc7f0a5549db378cd6f1cf25c0e2\n\nThese systems come and go, and each have their quirks. Having options is good,\nbut maintaining multiple ones isn't necessarily a free fire-and-forget type of\nthing for the community.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 20:27:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-04 23:44:45 +0000, Newhouse, Robin wrote:\n> I propose the attached patch to be applied on the 'master' branch\n> of PostgreSQL to add GitLab CI automation alongside Cirrus CI in the PostgreSQL repository.\n> \n> It is not intended to be a replacement for Cirrus CI, but simply suggestion for the\n> PostgreSQL project to host centrally a Gitlab CI definition for those who prefer to use\n> it while developing/testing PostgreSQL.\n\nOne way to avoid duplicated CI definition could be to use for gitlab-ci to use\ncirrus-cli to run the cirrus CI tests within gitlab ci.\n\nRealistically I think adding a separate CI definition would entail committers\nneeding to run that CI at least occasionally. If we make the different CI envs\nmore similar, that becomes less of a necessity.\n\n\n> +default:\n> + # Base image for builds and tests unless otherwise defined\n> + image: fedora:latest\n> + # Extend build jobs to have longer timeout as the default GitLab\n> + # timeout (1h) is often not enough\n> + timeout: 3h\n\nIMO we shouldn't add CI that doesn't complete within well under an hour, it's\ntoo expensive workflow wise.\n\n\n\n> +fedora:\n> + stage: build\n> + variables:\n> + GIT_STRATEGY: fetch\n> + GIT_SUBMODULE_STRATEGY: normal\n> + script:\n> + # Install dependencies\n> + - yum install -y yum-utils perl\n> + - yum-builddep -y postgresql\n> + - *build-postgres-def\n\nMy experience is that installing dependencies on each run is way too slow to\nbe practical. I also found that it often causes temporary failures due to\nnetwork issues etc. For cirrus-ci we create VM and docker images on a regular\nschedule (three times a week right now) - if there's interest in building\nfedora containers that'd be easy.\n\nI'd be open to switching one of the cirrus-CI tasks over to fedora, fwiw.\n\n\n> +# From https://github.com/postgres/postgres/blob/master/.cirrus.yml\n> +.create-user: &create-user-def\n> + - useradd -m postgres\n> + - chown -R postgres:postgres .\n> + - mkdir -p ${CCACHE_DIR}\n> + - chown -R postgres:postgres ${CCACHE_DIR}\n> + - echo '* - memlock 134217728' > /etc/security/limits.d/postgres.conf\n> + - su postgres -c \"ulimit -l -H && ulimit -l -S\"\n> + # Can't change container's kernel.core_pattern. Postgres user can't write\n> + # to / normally. Change that.\n> + - chown root:postgres /\n> + - chmod g+rwx /\n\nIf we need duplicated stanzas like this, we should instead move them out into\nscripts that we can use from different CI environments.\n\n\n> +# Similar to https://github.com/postgres/postgres/blob/master/.cirrus.yml\n> +fedora meson:\n> + stage: build\n> + variables:\n> + GIT_STRATEGY: fetch\n> + GIT_SUBMODULE_STRATEGY: normal\n> + script:\n> + # Meson system only exists on master branch currently\n\nMaster and 16 now...\n\n\n> + - if [ ! 
-f meson.build ]; then exit 0; fi\n> + # Install dependencies\n> + - yum install -y yum-utils perl perl-IPC-Run meson ninja-build\n> + - yum-builddep -y postgresql\n> + # Create postgres user\n> + - *create-user-def\n> + # Configure\n> + - su postgres -c 'meson setup --buildtype=debug --auto-features=disabled -Dtap_tests=enabled build'\n> + # Build\n> + - su postgres -c 'ninja -C build -j 2'\n> + # Minimal test\n> + - su postgres -c 'meson test $MTEST_ARGS --num-processes 2 tmp_install cube/regress pg_ctl/001_start_stop'\n> + # Run all tests\n> + - su postgres -c 'meson test $MTEST_ARGS --num-processes 2'\n> + artifacts:\n> + when: always # Must be able to see logs\n> + paths:\n> + - build/meson-logs/testlog.txt\n\nFWIW, that's not enough to be able to debug problems. You really also need the\nlog files created by failing tests.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 12:01:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 11:27 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 6 Jul 2023, at 20:10, Gurjeet Singh <[email protected]> wrote:\n>\n> > I can\n> > imagine if cfbot was developed against some other CI, it's very likely\n> > that we'd be using that other CI instead of Cirrus.\n>\n> The CFBot originally used Travis, but switched in late 2020 when Travis almost\n> over night become hard to use for open source projects:\n>\n> https://github.com/macdice/cfbot/commit/a62aa6d77dd4cc7f0a5549db378cd6f1cf25c0e2\n\nThanks for providing the historical context! A for-profit entity,\ndespite their best intentions, and sometimes by no fault of their own,\nmay not survive. It's not that a non-profits are guaranteed to\nsurvive, but the conditions they operate in are drastically different\nthan those of for-profit ones.\n\n> These systems come and go, and each have their quirks.\n\nI'm sure the community has seen enough of such disappearances over the\nyears, which is why I was surprised to see the adoption of Cirrus in\ncore (after I had stopped paying attention to Postgres hackers list\nfor a few years). Having read that whole discussion, though, I do see\nthe immense value Cirrus CI provides.\n\n> Having options is good,\n> but maintaining multiple ones isn't necessarily a free fire-and-forget type of\n> thing for the community.\n\nBy not adopting at least one other CI, it'd seem like the community\nis favoring Cirrus over others; and that doesn't feel good.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 6 Jul 2023 12:19:42 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add GitLab CI to PostgreSQL"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed in [1], outputs of --help for some commands fits into 80 \ncolumns\nper line, while others do not.\n\nSince it seems preferable to have consistent line break policy and some \npeople\nuse 80-column terminal, wouldn't it be better to make all commands in 80\ncolumns per line?\n\nAttached patch which does this for src/bin commands.\n\nIf this is the way to go, I'll do same things for contrib commands.\n\n[1] \nhttps://www.postgresql.org/message-id/3fe4af5a0a81fc6a2ec01cb484c0a487%40oss.nttdata.com\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Wed, 05 Jul 2023 10:47:19 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On 2023-07-05 10:47, torikoshia wrote:\n> Hi,\n> \n> As discussed in [1], outputs of --help for some commands fits into 80 \n> columns\n> per line, while others do not.\n> \n> Since it seems preferable to have consistent line break policy and some \n> people\n> use 80-column terminal, wouldn't it be better to make all commands in \n> 80\n> columns per line?\n> \n> Attached patch which does this for src/bin commands.\n> \n> If this is the way to go, I'll do same things for contrib commands.\n> \n> [1] \n> https://www.postgresql.org/message-id/3fe4af5a0a81fc6a2ec01cb484c0a487%40oss.nttdata.com\n\nThanks for making the patches! I have some comments to v1 patch.\n\n(1)\n\nWhy don't you add test for the purpose? It could be overkill...\nI though the following function is the best place.\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Utils.pm \nb/src/test/perl/PostgreSQL/Test/Utils.pm\nindex 617caa022f..1bdb81ac56 100644\n--- a/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -843,6 +843,10 @@ sub program_help_ok\n ok($result, \"$cmd --help exit code 0\");\n isnt($stdout, '', \"$cmd --help goes to stdout\");\n is($stderr, '', \"$cmd --help nothing to stderr\");\n+ foreach my $line (split /\\n/, $stdout)\n+ {\n+ ok(length($line) <= 80, \"$cmd --help output fit within \n80 columns per line\");\n+ }\n return;\n }\n\n(2)\n\nIs there any reason that only src/bin commands are targeted? I found \nthat\nwe also need to fix vacuumlo with the above test. I think it's better to\nfix it because it's a contrib module.\n\n$ vacuumlo --help | while IFS='' read line; do echo $((`echo $line | wc \n-m` - 1)) $line; done | sort -n -r | head -n 2\n84 -n, --dry-run don't remove large objects, just show \nwhat would be done\n74 -l, --limit=LIMIT commit after removing each LIMIT large \nobjects\n\n(3)\n\nIs to delete '/mnt/server' intended? I though it better to leave it as\nis since archive_cleanup_command example uses the absolute path.\n\n-\t\t\t \" pg_archivecleanup /mnt/server/archiverdir \n000000010000000000000010.00000020.backup\\n\"));\n+\t\t\t \" pg_archivecleanup archiverdir \n000000010000000000000010.00000020.backup\\n\"));\n\nI will confirmed that the --help text are not changed and only\nthe line breaks are changed. But, currently the above change\nbreak it.\n\n(4)\n\nI found that some binaries, for example ecpg, are not tested with\nprogram_help_ok(). Is it better to add tests in the patch?\n\nBTW, I check the difference with the following commands\n# files include \"--help\"\n$ find -name \"*.c\" | xargs -I {} sh -c 'if [ `grep -e --help {} | wc -l` \n-gt 0 ]; then echo {}; fi'\n\n# programs which is tested with program_help_ok\n$ find -name \"*.pl\" | xargs -I {} sh -c 'grep -e program_help_ok {}'\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 21 Aug 2023 13:08:58 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On 2023-08-21 13:08, Masahiro Ikeda wrote:\nThanks for your review!\n\n> (1)\n> \n> Why don't you add test for the purpose? It could be overkill...\n> I though the following function is the best place.\n> \n> diff --git a/src/test/perl/PostgreSQL/Test/Utils.pm\n> b/src/test/perl/PostgreSQL/Test/Utils.pm\n> index 617caa022f..1bdb81ac56 100644\n> --- a/src/test/perl/PostgreSQL/Test/Utils.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n> @@ -843,6 +843,10 @@ sub program_help_ok\n> ok($result, \"$cmd --help exit code 0\");\n> isnt($stdout, '', \"$cmd --help goes to stdout\");\n> is($stderr, '', \"$cmd --help nothing to stderr\");\n> + foreach my $line (split /\\n/, $stdout)\n> + {\n> + ok(length($line) <= 80, \"$cmd --help output fit within\n> 80 columns per line\");\n> + }\n> return;\n> }\n\nAgreed.\n\n> (2)\n> \n> Is there any reason that only src/bin commands are targeted? I found \n> that\n> we also need to fix vacuumlo with the above test. I think it's better \n> to\n> fix it because it's a contrib module.\n> \n> $ vacuumlo --help | while IFS='' read line; do echo $((`echo $line |\n> wc -m` - 1)) $line; done | sort -n -r | head -n 2\n> 84 -n, --dry-run don't remove large objects, just show\n> what would be done\n> 74 -l, --limit=LIMIT commit after removing each LIMIT large \n> objects\n\nThis is because I wasn't sure making all --help outputs fit into 80 \ncolumns per line is right thing to do as described below:\n\n| If this is the way to go, I'll do same things for contrib commands.\n\nIf there are no objection, I'm going to make other commands fit within \n80 columns per line including (4).\n\n> (3)\n> \n> Is to delete '/mnt/server' intended? I though it better to leave it as\n> is since archive_cleanup_command example uses the absolute path.\n> \n> -\t\t\t \" pg_archivecleanup /mnt/server/archiverdir\n> 000000010000000000000010.00000020.backup\\n\"));\n> +\t\t\t \" pg_archivecleanup archiverdir\n> 000000010000000000000010.00000020.backup\\n\"));\n> \n> I will confirmed that the --help text are not changed and only\n> the line breaks are changed. But, currently the above change\n> break it.\n\nYes, it is intended as described in the thread.\n\nhttps://www.postgresql.org/message-id/20230615.152036.1556630042388070221.horikyota.ntt%40gmail.com\n\n| We could shorten it by removing the \"/mnt/server\" portion, but\nI'm not sure if it's worth doing.\n\nHowever, I feel it is acceptable to make an exception and exceed 80 \ncharacters for this line.\n\n> (4)\n> \n> I found that some binaries, for example ecpg, are not tested with\n> program_help_ok(). Is it better to add tests in the patch?\n> \nAgreed.\n\n> BTW, I check the difference with the following commands\n> # files include \"--help\"\n> $ find -name \"*.c\" | xargs -I {} sh -c 'if [ `grep -e --help {} | wc\n> -l` -gt 0 ]; then echo {}; fi'\n> \n> # programs which is tested with program_help_ok\n> $ find -name \"*.pl\" | xargs -I {} sh -c 'grep -e program_help_ok {}'\n\nThanks for sharing your procedure!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Tue, 22 Aug 2023 22:57:07 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-22 22:57, torikoshia wrote:\n> On 2023-08-21 13:08, Masahiro Ikeda wrote:\n>> (2)\n>> \n>> Is there any reason that only src/bin commands are targeted? I found \n>> that\n>> we also need to fix vacuumlo with the above test. I think it's better \n>> to\n>> fix it because it's a contrib module.\n>> \n>> $ vacuumlo --help | while IFS='' read line; do echo $((`echo $line |\n>> wc -m` - 1)) $line; done | sort -n -r | head -n 2\n>> 84 -n, --dry-run don't remove large objects, just show\n>> what would be done\n>> 74 -l, --limit=LIMIT commit after removing each LIMIT large \n>> objects\n> \n> This is because I wasn't sure making all --help outputs fit into 80\n> columns per line is right thing to do as described below:\n> \n> | If this is the way to go, I'll do same things for contrib commands.\n> \n> If there are no objection, I'm going to make other commands fit within\n> 80 columns per line including (4).\n\nOK. Sorry, I missed the sentence above.\nI'd like to hear what others comment too.\n\n>> (3)\n>> \n>> Is to delete '/mnt/server' intended? I though it better to leave it \n>> as\n>> is since archive_cleanup_command example uses the absolute path.\n>> \n>> -\t\t\t \" pg_archivecleanup /mnt/server/archiverdir\n>> 000000010000000000000010.00000020.backup\\n\"));\n>> +\t\t\t \" pg_archivecleanup archiverdir\n>> 000000010000000000000010.00000020.backup\\n\"));\n>> \n>> I will confirmed that the --help text are not changed and only\n>> the line breaks are changed. But, currently the above change\n>> break it.\n> \n> Yes, it is intended as described in the thread.\n> \n> https://www.postgresql.org/message-id/20230615.152036.1556630042388070221.horikyota.ntt%40gmail.com\n> \n> | We could shorten it by removing the \"/mnt/server\" portion, but\n> I'm not sure if it's worth doing.\n> \n> However, I feel it is acceptable to make an exception and exceed 80\n> characters for this line.\n\nThanks for sharing the thread. I understood.\n\nIt seems that the directory name should be consistent with the example\nof archive_cleanup_command. However, it does not seem appropriate to\nmake archive_cleanup_command to use a relative path.\n\n```\n> pg_archivecleanup --help\n(snip)\ne.g.\n archive_cleanup_command = 'pg_archivecleanup /mnt/server/archiverdir \n%r'\n\nOr for use as a standalone archive cleaner:\ne.g.\n pg_archivecleanup /mnt/server/archiverdir \n000000010000000000000010.00000020.backup\n```\n\nIMHO, is simply breaking the line acceptable?\n\n```\nOr for use as a standalone archive cleaner:\ne.g.\n pg_archivecleanup /mnt/server/archiverdir\n 000000010000000000000010.00000020.backup\n```\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 23 Aug 2023 09:45:50 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
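If the "simply break the line" option were chosen, the corresponding change in pg_archivecleanup's usage() would presumably just split the example across two output lines, along these lines (wording and indentation are assumptions, not a committed patch):

```c
	printf(_("\nOr for use as a standalone archive cleaner:\ne.g.\n"
			 "  pg_archivecleanup /mnt/server/archiverdir\n"
			 "    000000010000000000000010.00000020.backup\n"));
```

This keeps each displayed line comfortably under 80 columns while preserving the /mnt/server example used for archive_cleanup_command.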
{
"msg_contents": "On Mon, Aug 21, 2023 at 1:09 PM Masahiro Ikeda \n<[email protected]> wrote:\n> (1)\n> Why don't you add test for the purpose? It could be overkill...\n> I though the following function is the best place.\n\nAdded the test.\n\nBTW, psql --help outputs the content of PGHOST, which caused a failure \nin the test:\n\n```\n-h, --host=HOSTNAME database server host or socket directory\n (default: \n\"/var/folders/m7/9snkd5b54cx_b4lxkl9ljlcc0000gn/T/LobrmSUf7t\")\n```\n\nIt may be overkill, added a logic for removing the content of PGHOST.\n\n\nOn 2023-08-23 09:45, Masahiro Ikeda wrote:\n> Hi,\n> \n> On 2023-08-22 22:57, torikoshia wrote:\n>> On 2023-08-21 13:08, Masahiro Ikeda wrote:\n>>> (2)\n>>> \n>>> Is there any reason that only src/bin commands are targeted? I found \n>>> that\n>>> we also need to fix vacuumlo with the above test. I think it's better \n>>> to\n>>> fix it because it's a contrib module.\n>>> \n>>> $ vacuumlo --help | while IFS='' read line; do echo $((`echo $line |\n>>> wc -m` - 1)) $line; done | sort -n -r | head -n 2\n>>> 84 -n, --dry-run don't remove large objects, just show\n>>> what would be done\n>>> 74 -l, --limit=LIMIT commit after removing each LIMIT large \n>>> objects\n>> \n>> This is because I wasn't sure making all --help outputs fit into 80\n>> columns per line is right thing to do as described below:\n>> \n>> | If this is the way to go, I'll do same things for contrib commands.\n>> \n>> If there are no objection, I'm going to make other commands fit within\n>> 80 columns per line including (4).\n> \n> OK. Sorry, I missed the sentence above.\n> I'd like to hear what others comment too.\n\nAlthough there are no comments, attached patch modifies vaccumlo.\n\n>>> (3)\n>>> \n>>> Is to delete '/mnt/server' intended? I though it better to leave it \n>>> as\n>>> is since archive_cleanup_command example uses the absolute path.\n>>> \n>>> -\t\t\t \" pg_archivecleanup /mnt/server/archiverdir\n>>> 000000010000000000000010.00000020.backup\\n\"));\n>>> +\t\t\t \" pg_archivecleanup archiverdir\n>>> 000000010000000000000010.00000020.backup\\n\"));\n>>> \n>>> I will confirmed that the --help text are not changed and only\n>>> the line breaks are changed. But, currently the above change\n>>> break it.\n>> \n>> Yes, it is intended as described in the thread.\n>> \n>> https://www.postgresql.org/message-id/20230615.152036.1556630042388070221.horikyota.ntt%40gmail.com\n>> \n>> | We could shorten it by removing the \"/mnt/server\" portion, but\n>> I'm not sure if it's worth doing.\n>> \n>> However, I feel it is acceptable to make an exception and exceed 80\n>> characters for this line.\n> \n> Thanks for sharing the thread. I understood.\n> \n> It seems that the directory name should be consistent with the example\n> of archive_cleanup_command. However, it does not seem appropriate to\n> make archive_cleanup_command to use a relative path.\n> \n> ```\n>> pg_archivecleanup --help\n> (snip)\n> e.g.\n> archive_cleanup_command = 'pg_archivecleanup /mnt/server/archiverdir \n> %r'\n> \n> Or for use as a standalone archive cleaner:\n> e.g.\n> pg_archivecleanup /mnt/server/archiverdir\n> 000000010000000000000010.00000020.backup\n> ```\n> \n> IMHO, is simply breaking the line acceptable?\n\nAgreed.\n\n\n> (4)\n\n> I found that some binaries, for example ecpg, are not tested with\n> program_help_ok(). 
Is it better to add tests in the patch?\n\nAdded program_help_ok() to ecpg and pgbench.\nAlthough pg_regress and zic are not tested using program_help_ok, but \nleft as they are because they are not commands that users execute \ndirectly.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation",
"msg_date": "Thu, 31 Aug 2023 16:47:21 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "I like this work a lot. It's good to give developers easy to verify \nguidance about formatting the --help messages.\n\nHowever, I think instead of just adding a bunch of line breaks to \nsatisfy the test, we should really try to make the lines shorter by \nrewording. Otherwise, chances are we are making the usability worse for \nmany users, because it's more likely that part of the help will scroll \noff the screen. For example, in many cases, we could replace \"do not\" \nby \"don't\", or we could decrease the indentation of the second column by \na few spaces, or just reword altogether.\n\nAlso, it would be very useful if the TAP test function could print out \nthe violating lines if a test fails. (Similar to how is() and like() \nprint the failing values.) Maybe start with that, and then it's easier \nto play with different wording variants to try to make it fit better.\n\n\n",
"msg_date": "Tue, 12 Sep 2023 08:27:35 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
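As a concrete illustration of rewording and re-indenting instead of wrapping, using the vacuumlo --dry-run line measured at 84 columns earlier in the thread (the second variant is only a hypothetical rewrite, and the whole description column would of course move together for all options):

```c
/* current output line, 84 columns: */
printf(_("  -n, --dry-run             don't remove large objects, just show what would be done\n"));

/* pulling the description column in gets it to 79 columns: */
printf(_("  -n, --dry-run        don't remove large objects, just show what would be done\n"));
```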
{
"msg_contents": "On 2023-09-12 15:27, Peter Eisentraut wrote:\n> I like this work a lot. It's good to give developers easy to verify\n> guidance about formatting the --help messages.\n> \n> However, I think instead of just adding a bunch of line breaks to\n> satisfy the test, we should really try to make the lines shorter by\n> rewording. Otherwise, chances are we are making the usability worse\n> for many users, because it's more likely that part of the help will\n> scroll off the screen. For example, in many cases, we could replace\n> \"do not\" by \"don't\", or we could decrease the indentation of the\n> second column by a few spaces, or just reword altogether.\n> \n> Also, it would be very useful if the TAP test function could print out\n> the violating lines if a test fails. (Similar to how is() and like()\n> print the failing values.) Maybe start with that, and then it's\n> easier to play with different wording variants to try to make it fit\n> better.\n\nThanks for the review!\nI'll try to fix the patch according to your comments.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Wed, 13 Sep 2023 21:40:46 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 9:47 PM torikoshia <[email protected]>\nwrote:\n\n> Since it seems preferable to have consistent line break policy and some\n> people use 80-column terminal, wouldn't it be better to make all commands\n> in 80\n> columns per line?\n>\n\nAll this seems an awful lot of work to support this mythical 80-column\nterminal user. It's 2023, perhaps it's time to widen the default assumption\npast 80 characters?\n\nCheers,\nGreg\n\nOn Tue, Jul 4, 2023 at 9:47 PM torikoshia <[email protected]> wrote:Since it seems preferable to have consistent line break policy and some \npeople use 80-column terminal, wouldn't it be better to make all commands in 80\ncolumns per line?All this seems an awful lot of work to support this mythical 80-column terminal user. It's 2023, perhaps it's time to widen the default assumption past 80 characters?Cheers,Greg",
"msg_date": "Wed, 13 Sep 2023 13:46:23 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On 2023-09-12 15:27, Peter Eisentraut wrote:\n> Also, it would be very useful if the TAP test function could print out\n> the violating lines if a test fails. (Similar to how is() and like()\n> print the failing values.)\n\nAttached patch for this.\nBelow are the the outputs when test failed:\n\n```\n$ cd contrib/vacuumlo\n$ make check\n...(snip)...\nt/001_basic.pl .. 1/?\n # Failed test ' -n, --dry-run don't remove large \nobjects, just show what would be done'\n # at \n/home/atorik/postgres/contrib/vacuumlo/../../src/test/perl/PostgreSQL/Test/Utils.pm \nline 850.\n # Looks like you failed 1 test of 21.\n\n# Failed test 'vacuumlo --help outputs fit within 80 columns per line'\n# at t/001_basic.pl line 10.\n# Looks like you failed 1 test of 9.\nt/001_basic.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/9 subtests\n\nTest Summary Report\n-------------------\nt/001_basic.pl (Wstat: 256 (exited 1) Tests: 9 Failed: 1)\n Failed test: 4\n Non-zero exit status: 1\nFiles=1, Tests=9, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.04 cusr \n0.01 csys = 0.07 CPU)\nResult: FAIL\n```\n\n```\n$ cat tmp_check/log/regress_log_001_basic\n# Running: vacuumlo --help\n[23:11:10.378](0.230s) ok 1 - vacuumlo --help exit code 0\n[23:11:10.379](0.001s) ok 2 - vacuumlo --help goes to stdout\n[23:11:10.379](0.000s) ok 3 - vacuumlo --help nothing to stderr\n[23:11:10.380](0.000s) # Subtest: vacuumlo --help outputs fit within 80 \ncolumns per line\n[23:11:10.380](0.001s) ok 1 - vacuumlo removes unreferenced large \nobjects from databases.\n[23:11:10.380](0.000s) ok 2 -\n[23:11:10.381](0.000s) ok 3 - Usage:\n[23:11:10.381](0.000s) ok 4 - vacuumlo [OPTION]... DBNAME...\n[23:11:10.381](0.000s) ok 5 -\n[23:11:10.381](0.000s) ok 6 - Options:\n[23:11:10.381](0.000s) ok 7 - -l, --limit=LIMIT commit \nafter removing each LIMIT large objects\n[23:11:10.382](0.000s) ok 20 - Report bugs to \n<[email protected]>.\n[23:11:10.382](0.000s) ok 21 - PostgreSQL home page: \n<https://www.postgresql.org/>\n[23:11:10.382](0.000s) 1..21\n[23:11:10.382](0.000s) # Looks like you failed 1 test of 21.\n[23:11:10.382](0.000s) not ok 4 - vacuumlo --help outputs fit within 80 \ncolumns per line\n[23:11:10.382](0.000s)\n[23:11:10.382](0.000s) # Failed test 'vacuumlo --help outputs fit \nwithin 80 columns per line'\n# at t/001_basic.pl line 10.\n# Running: vacuumlo --version\n[23:11:10.388](0.005s) ok 5 - vacuumlo --version exit code 0\n[23:11:10.388](0.000s) ok 6 - vacuumlo --version goes to stdout\n[23:11:10.388](0.000s) ok 7 - vacuumlo --version nothing to stderr\n# Running: vacuumlo --not-a-valid-option\n[23:11:10.391](0.003s) ok 8 - vacuumlo with invalid option nonzero exit \ncode\n[23:11:10.391](0.000s) ok 9 - vacuumlo with invalid option prints error \nmessage\n[23:11:10.391](0.000s) 1..9\n[23:11:10.391](0.000s) # Looks like you failed 1 test of 9.\n```\n\nI feel using subtest in Test::More improves readability.\n\n\nOn 2023-09-14 02:46, Greg Sabino Mullane wrote:\n> All this seems an awful lot of work to support this mythical 80-column \n> terminal user.\n> It's 2023, perhaps it's time to widen the default assumption past 80 \n> characters?\n\nThat may be a good idea.\n\nHowever, from what I have seen some basic commands like `ls` in my Linux \nenvironments, the man command has over 100 characters per line, while \nthe output of the --help option seems to be within 80 characters per \nline.\nAlso, the current PostgreSQL commands follow the \"no more than 80 \ncharacters per line\".\n\nI do not intend to adhere to 
this rule(my terminals are usually bigger \nthan 80 chars per line), but wouldn't it be a not bad direction to use \n80 characters for all commands?\n\nThoughts?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation",
"msg_date": "Sat, 16 Sep 2023 00:10:59 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 11:11 AM torikoshia <[email protected]>\nwrote:\n\n> I do not intend to adhere to this rule(my terminals are usually bigger\n> than 80 chars per line), but wouldn't it be a not bad direction to use\n> 80 characters for all commands?\n>\n\nWell, that's the question du jour, isn't it? The 80 character limit is\nbased on punch cards, and really has no place in modern systems. While gnu\nsystems are stuck in the past, many other ones have moved on to more\nsensible defaults:\n\n$ wget --help | wc -L\n110\n\n$ gcloud --help | wc -L\n122\n\n$ yum --help | wc -L\n122\n\ngit is an interesting one, as they force things through a pager for their\nhelp, but if you look at their raw help text files, they have plenty of\ntimes they go past 80 when needed:\n\n$ wc -L git/Documentation/git-*.txt | sort -g | tail -20\n 109 git-filter-branch.txt\n 109 git-rebase.txt\n 116 git-diff-index.txt\n 116 git-http-fetch.txt\n 117 git-restore.txt\n 122 git-checkout.txt\n 122 git-ls-tree.txt\n 129 git-init-db.txt\n 131 git-push.txt\n 132 git-update-ref.txt\n 142 git-maintenance.txt\n 144 git-interpret-trailers.txt\n 146 git-cat-file.txt\n 148 git-repack.txt\n 161 git-config.txt\n 162 git-notes.txt\n 205 git-stash.txt\n 251 git-submodule.txt\n\nSo in summary, I think 80 is a decent soft limit, but let's not stress out\nabout some lines going over that, and make a hard limit of perhaps 120.\n\nSee also: https://hilton.org.uk/blog/source-code-line-length\n\nCheers,\nGreg\n\nP.S. I know this won't change anything right away, but it will get the\nconversation started, so we can escape the inertia of punch cards / VT100\nterminals someday. :)\n\nOn Fri, Sep 15, 2023 at 11:11 AM torikoshia <[email protected]> wrote:I do not intend to adhere to this rule(my terminals are usually bigger \nthan 80 chars per line), but wouldn't it be a not bad direction to use \n80 characters for all commands?Well, that's the question du jour, isn't it? The 80 character limit is based on punch cards, and really has no place in modern systems. While gnu systems are stuck in the past, many other ones have moved on to more sensible defaults:$ wget --help | wc -L110$ gcloud --help | wc -L122$ yum --help | wc -L122git is an interesting one, as they force things through a pager for their help, but if you look at their raw help text files, they have plenty of times they go past 80 when needed:$ wc -L git/Documentation/git-*.txt | sort -g | tail -20 109 git-filter-branch.txt 109 git-rebase.txt 116 git-diff-index.txt 116 git-http-fetch.txt 117 git-restore.txt 122 git-checkout.txt 122 git-ls-tree.txt 129 git-init-db.txt 131 git-push.txt 132 git-update-ref.txt 142 git-maintenance.txt 144 git-interpret-trailers.txt 146 git-cat-file.txt 148 git-repack.txt 161 git-config.txt 162 git-notes.txt 205 git-stash.txt 251 git-submodule.txtSo in summary, I think 80 is a decent soft limit, but let's not stress out about some lines going over that, and make a hard limit of perhaps 120.See also: https://hilton.org.uk/blog/source-code-line-lengthCheers,GregP.S. I know this won't change anything right away, but it will get the conversation started, so we can escape the inertia of punch cards / VT100 terminals someday. :)",
"msg_date": "Mon, 18 Sep 2023 11:45:01 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
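The same kind of wc -L survey can be pointed at PostgreSQL's own client programs. A minimal Perl sketch of that measurement is below; the program list is only illustrative and it assumes the binaries are reachable via PATH, so treat it as a quick probe rather than anything resembling committed test code.

use strict;
use warnings;

my @programs = qw(psql pg_dump pg_restore vacuumdb reindexdb clusterdb);

foreach my $prog (@programs)
{
    # mimic "prog --help | wc -L": find the longest output line
    my $help = `$prog --help 2>&1`;
    next if $? != 0;    # skip anything not installed in this environment

    my $max = 0;
    foreach my $line (split /\n/, $help)
    {
        $max = length($line) if length($line) > $max;
    }
    printf "%-12s %d\n", $prog, $max;
}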
{
"msg_contents": "On 31.08.23 09:47, torikoshia wrote:\n> BTW, psql --help outputs the content of PGHOST, which caused a failure \n> in the test:\n> \n> ```\n> -h, --host=HOSTNAME database server host or socket directory\n> (default: \n> \"/var/folders/m7/9snkd5b54cx_b4lxkl9ljlcc0000gn/T/LobrmSUf7t\")\n> ```\n> \n> It may be overkill, added a logic for removing the content of PGHOST.\n\nI wonder, should we remove this? We display the \nenvironment-variable-based defaults in psql --help, but not for any \nother programs. This is potentially misleading.\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 09:45:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 3:23 AM Greg Sabino Mullane <[email protected]> \nwrote:\nThanks for your investigation!\n\n> On Fri, Sep 15, 2023 at 11:11 AM torikoshia \n> <[email protected]> wrote:\n>> I do not intend to adhere to this rule(my terminals are usually bigger\n>> than 80 chars per line), but wouldn't it be a not bad direction to use\n>> 80 characters for all commands?\n> \n> Well, that's the question du jour, isn't it? The 80 character limit is \n> based on punch cards, and really has no place in modern systems. While \n> gnu systems are stuck in the past, many other ones have moved on to \n> more sensible defaults:\n> \n> $ wget --help | wc -L\n> 110\n> \n> $ gcloud --help | wc -L\n> 122\n> \n> $ yum --help | wc -L\n> 122\n> \n> git is an interesting one, as they force things through a pager for \n> their help, but if you look at their raw help text files, they have \n> plenty of times they go past 80 when needed:\n> \n> $ wc -L git/Documentation/git-*.txt | sort -g | tail -20\n> 109 git-filter-branch.txt\n> 109 git-rebase.txt\n> 116 git-diff-index.txt\n> 116 git-http-fetch.txt\n> 117 git-restore.txt\n> 122 git-checkout.txt\n> 122 git-ls-tree.txt\n> 129 git-init-db.txt\n> 131 git-push.txt\n> 132 git-update-ref.txt\n> 142 git-maintenance.txt\n> 144 git-interpret-trailers.txt\n> 146 git-cat-file.txt\n> 148 git-repack.txt\n> 161 git-config.txt\n> 162 git-notes.txt\n> 205 git-stash.txt\n> 251 git-submodule.txt\n> \n> So in summary, I think 80 is a decent soft limit, but let's not stress \n> out about some lines going over that, and make a hard limit of perhaps \n> 120.\n\n+1. It may be a good compromise.\nFor enforcing the hard limit, is it better to add a regression test like \npatch 0001?\n\n\nOn 2023-09-21 16:45, Peter Eisentraut wrote:\n> On 31.08.23 09:47, torikoshia wrote:\n>> BTW, psql --help outputs the content of PGHOST, which caused a failure \n>> in the test:\n>> \n>> ```\n>> -h, --host=HOSTNAME database server host or socket directory\n>> (default: \n>> \"/var/folders/m7/9snkd5b54cx_b4lxkl9ljlcc0000gn/T/LobrmSUf7t\")\n>> ```\n>> \n>> It may be overkill, added a logic for removing the content of PGHOST.\n> \n> I wonder, should we remove this? We display the\n> environment-variable-based defaults in psql --help, but not for any\n> other programs. This is potentially misleading.\n\nAgreed. It seems inconsistent with other commands.\nPatch 0002 removed environment-variable-based defaults in psql --help.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation",
"msg_date": "Mon, 25 Sep 2023 15:27:12 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On 2023-09-25 15:27, torikoshia wrote:\n> On Tue, Sep 19, 2023 at 3:23 AM Greg Sabino Mullane \n> <[email protected]> wrote:\n> Thanks for your investigation!\n> \n>> On Fri, Sep 15, 2023 at 11:11 AM torikoshia \n>> <[email protected]> wrote:\n>>> I do not intend to adhere to this rule(my terminals are usually \n>>> bigger\n>>> than 80 chars per line), but wouldn't it be a not bad direction to \n>>> use\n>>> 80 characters for all commands?\n>> \n>> Well, that's the question du jour, isn't it? The 80 character limit is \n>> based on punch cards, and really has no place in modern systems. While \n>> gnu systems are stuck in the past, many other ones have moved on to \n>> more sensible defaults:\n>> \n>> $ wget --help | wc -L\n>> 110\n>> \n>> $ gcloud --help | wc -L\n>> 122\n>> \n>> $ yum --help | wc -L\n>> 122\n>> \n>> git is an interesting one, as they force things through a pager for \n>> their help, but if you look at their raw help text files, they have \n>> plenty of times they go past 80 when needed:\n>> \n>> $ wc -L git/Documentation/git-*.txt | sort -g | tail -20\n>> 109 git-filter-branch.txt\n>> 109 git-rebase.txt\n>> 116 git-diff-index.txt\n>> 116 git-http-fetch.txt\n>> 117 git-restore.txt\n>> 122 git-checkout.txt\n>> 122 git-ls-tree.txt\n>> 129 git-init-db.txt\n>> 131 git-push.txt\n>> 132 git-update-ref.txt\n>> 142 git-maintenance.txt\n>> 144 git-interpret-trailers.txt\n>> 146 git-cat-file.txt\n>> 148 git-repack.txt\n>> 161 git-config.txt\n>> 162 git-notes.txt\n>> 205 git-stash.txt\n>> 251 git-submodule.txt\n>> \n>> So in summary, I think 80 is a decent soft limit, but let's not stress \n>> out about some lines going over that, and make a hard limit of perhaps \n>> 120.\n> \n> +1. It may be a good compromise.\n> For enforcing the hard limit, is it better to add a regression test\n> like patch 0001?\n\nUgh, regression tests failed and it appears to be due to reasons related \nto meson.\nI'm going to investigate it.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Mon, 25 Sep 2023 19:13:35 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
{
"msg_contents": "On 25.09.23 08:27, torikoshia wrote:\n>> So in summary, I think 80 is a decent soft limit, but let's not stress \n>> out about some lines going over that, and make a hard limit of perhaps \n>> 120.\n> \n> +1. It may be a good compromise.\n> For enforcing the hard limit, is it better to add a regression test like \n> patch 0001?\n\n> Agreed. It seems inconsistent with other commands.\n> Patch 0002 removed environment-variable-based defaults in psql --help.\n\nI have committed 0002 and a different implementation of 0001. I set the \nmaximum line length to 95, which is the current maximum in use.\n\nI'm open to discussing other line lengths, but\n\n1) If we make it longer, then we should also adjust the existing \nwrapping so that we don't have a mix of lines wrapped at 80 and some \nsignificantly longer lines.\n\n2) There are some general readability guidelines that suggest like 66 or \n72 characters per line. If you take that and add the option name itself \nand some indentation, then around 90 does seem like a sensible limit.\n\n3) The examples from other tools posted earlier don't convince me. Some \nof their --help outputs look like junk and poorly taken care of.\n\nSo I think nudging people to aim for 80..95 seems like a good target \nright now. But I'm not against adjustments.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 12:49:32 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
},
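For what it's worth, a check along these lines can be expressed as a plain TAP test. The sketch below is only an illustration of the idea, not the implementation that was actually committed; the 95-column limit, the program list, and the reliance on PATH are assumptions.

use strict;
use warnings;
use Test::More;

my $max_width = 95;
my @programs  = qw(psql pg_dump pg_dumpall vacuumdb reindexdb);

foreach my $prog (@programs)
{
    my $help = `$prog --help 2>&1`;

    # collect any --help lines exceeding the width limit
    my @too_long = grep { length($_) > $max_width } split /\n/, $help;

    is(scalar(@too_long), 0,
        "$prog --help output fits within $max_width columns")
      or diag("over-long lines:\n" . join("\n", @too_long));
}

done_testing();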
{
"msg_contents": "On 2023-09-25 15:27, torikoshia wrote:\n> Ugh, regression tests failed and it appears to be due to reasons \n> related to meson.\n> I'm going to investigate it.\n\nISTM\n\nOn 2023-10-06 19:49, Peter Eisentraut wrote:\n> On 25.09.23 08:27, torikoshia wrote:\n>>> So in summary, I think 80 is a decent soft limit, but let's not \n>>> stress out about some lines going over that, and make a hard limit of \n>>> perhaps 120.\n>> \n>> +1. It may be a good compromise.\n>> For enforcing the hard limit, is it better to add a regression test \n>> like patch 0001?\n> \n>> Agreed. It seems inconsistent with other commands.\n>> Patch 0002 removed environment-variable-based defaults in psql --help.\n> \n> I have committed 0002 and a different implementation of 0001. I set\n> the maximum line length to 95, which is the current maximum in use.\n\nThanks!\n\nBTW as far as I investigated, the original 0002 patch failed because\ncurrent meson doesn't accept subtest outputs.\n\nAs I commented below thread a few days ago, they once modified to\naccept subtest outputs, but not anymore.\nhttps://github.com/mesonbuild/meson/issues/10032\n\n> I'm open to discussing other line lengths, but\n> \n> 1) If we make it longer, then we should also adjust the existing\n> wrapping so that we don't have a mix of lines wrapped at 80 and some\n> significantly longer lines.\n> \n> 2) There are some general readability guidelines that suggest like 66\n> or 72 characters per line. If you take that and add the option name\n> itself and some indentation, then around 90 does seem like a sensible\n> limit.\n> \n> 3) The examples from other tools posted earlier don't convince me.\n> Some of their --help outputs look like junk and poorly taken care of.\n> \n> So I think nudging people to aim for 80..95 seems like a good target\n> right now. But I'm not against adjustments.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Fri, 06 Oct 2023 21:44:48 +0900",
"msg_from": "torikoshia <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make --help output fit within 80 columns per line"
}
] |
[
{
"msg_contents": "Generate automatically code and documentation related to wait events\n\nThe documentation and the code is generated automatically from a new\nfile called wait_event_names.txt, formatted in sections dedicated to\neach wait event class (Timeout, Lock, IO, etc.) with three tab-separated\nfields:\n- C symbol in enums\n- Format in the system views\n- Description in the docs\n\nUsing this approach has several advantages, as we have proved to be\nrather bad in maintaining this area of the tree across the years:\n- The order of each item in the documentation and the code, which should\nbe alphabetical, has become incorrect multiple times, and the script\ngenerating the code and documentation has a few rules to enforce that,\nmaking the maintenance a no-brainer.\n- Some wait events were added to the code, but not documented, so this\ncannot be missed now.\n- The order of the tables for each wait event class is enforced in the\ndocumentation (the input .txt file does so as well for clarity, though\nthis is not mandatory).\n- Less code, shaving 1.2k lines from the tree, with 1/3 of the savings\ncoming from the code, the rest from the documentation.\n\nThe wait event types \"Lock\" and \"LWLock\" still have their own code path\nfor their code, hence only the documentation is created for them. These\nclasses are listed with a special marker called WAIT_EVENT_DOCONLY in\nthe input file.\n\nAdding a new wait event now requires only an update of\nwait_event_names.txt, with \"Lock\" and \"LWLock\" treated as exceptions.\n\nThis commit has been tested with configure/Makefile, the CI and VPATH\nbuild. clean, distclean and maintainer-clean were working fine.\n\nAuthor: Bertrand Drouvot, Michael Paquier\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/fa88928470b538c0ec0289e4d69ee12356c5a8ce\n\nModified Files\n--------------\ndoc/src/sgml/.gitignore | 1 +\ndoc/src/sgml/Makefile | 4 +-\ndoc/src/sgml/filelist.sgml | 1 +\ndoc/src/sgml/meson.build | 12 +\ndoc/src/sgml/monitoring.sgml | 1271 +-------------------\nsrc/backend/Makefile | 13 +-\nsrc/backend/storage/lmgr/lwlocknames.txt | 4 +-\nsrc/backend/utils/activity/.gitignore | 2 +\nsrc/backend/utils/activity/Makefile | 12 +\n.../utils/activity/generate-wait_event_types.pl | 263 ++++\nsrc/backend/utils/activity/meson.build | 14 +\nsrc/backend/utils/activity/wait_event.c | 611 +---------\nsrc/backend/utils/activity/wait_event_names.txt | 371 ++++++\nsrc/backend/utils/adt/lockfuncs.c | 3 +-\nsrc/common/meson.build | 7 +-\nsrc/include/Makefile | 2 +-\nsrc/include/utils/.gitignore | 1 +\nsrc/include/utils/meson.build | 19 +\nsrc/include/utils/wait_event.h | 232 +---\nsrc/test/ssl/t/002_scram.pl | 3 +-\nsrc/tools/msvc/Solution.pm | 19 +\nsrc/tools/msvc/clean.bat | 3 +\n22 files changed, 757 insertions(+), 2111 deletions(-)",
"msg_date": "Wed, 05 Jul 2023 01:55:34 +0000",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Generate automatically code and documentation related to wait\n ev"
},
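To make the file format described above concrete: each entry in wait_event_names.txt carries three tab-separated fields (the C enum symbol, the name shown in the system views, and the documentation text). The Perl sketch below parses lines of that shape; the two sample entries are illustrative of the format rather than verbatim lines from the real file.

use strict;
use warnings;

# illustrative entries in the three-field, tab-separated format
my @sample = (
    "WAIT_EVENT_CLIENT_READ\t\"ClientRead\"\t\"Waiting to read data from the client.\"",
    "WAIT_EVENT_CLIENT_WRITE\t\"ClientWrite\"\t\"Waiting to write data to the client.\"",
);

foreach my $line (@sample)
{
    next if $line =~ /^#/ || $line =~ /^\s*$/;    # skip comments and blanks

    # split into C symbol, system-view name, and documentation text
    my ($enum_symbol, $view_name, $description) = split /\t/, $line, 3;
    printf "enum: %-25s view: %-15s doc: %s\n",
      $enum_symbol, $view_name, $description;
}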
{
"msg_contents": "Re: Michael Paquier\n> Generate automatically code and documentation related to wait events\n\nHi,\n\nI'm not entirely sure this is the commit to blame, but it's certainly\nclose:\n\nA Debian user is complaining that in PG17, the installed\n/usr/include/postgresql/17/server/utils/wait_event.h file is\nreferencing utils/wait_event_types.h, but that file doesn't get\ninstalled by the (autoconf) build sytem.\n\nSee\nhttps://pgdgbuild.dus.dg-i.net/job/postgresql-17-binaries-snapshot/242/architecture=amd64,distribution=sid/consoleText\nfor a build log with file listings near the end.\n\nChristoph\n\n\n",
"msg_date": "Wed, 18 Oct 2023 17:59:19 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Generate automatically code and documentation related to\n wait ev"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 05:59:19PM +0200, Christoph Berg wrote:\n> I'm not entirely sure this is the commit to blame, but it's certainly\n> close:\n\nThat would be in this area, thanks for the report.\n\n> A Debian user is complaining that in PG17, the installed\n> /usr/include/postgresql/17/server/utils/wait_event.h file is\n> referencing utils/wait_event_types.h, but that file doesn't get\n> installed by the (autoconf) build sytem.\n\nOn a fresh install of HEAD (3f9b1f26ca), I get:\n$ cd $(pg_config --includedir-server)\n$ find . -name wait_event.h\n./utils/wait_event.h\n$ find . -name wait_event_types.h\n./utils/wait_event_types.h\n\nBut I have missed a piece related to VPATH builds for\nwait_event_types.h in src/include/Makefile (see around probes.h).\nWill fix as per the attached. Thanks for the report.\n--\nMichael",
"msg_date": "Thu, 19 Oct 2023 08:45:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Generate automatically code and documentation related to\n wait ev"
},
{
"msg_contents": "Re: Michael Paquier\n> Will fix as per the attached. Thanks for the report.\n\nThanks for the lightning-fast fix :)\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 Oct 2023 09:38:30 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Generate automatically code and documentation related to\n wait ev"
},
{
"msg_contents": "On Thu, Oct 19, 2023 at 09:38:30AM +0200, Christoph Berg wrote:\n> Thanks for the lightning-fast fix :)\n\nNo pb. Not the first one, not the last one. ;)\n--\nMichael",
"msg_date": "Thu, 19 Oct 2023 17:46:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Generate automatically code and documentation related to\n wait ev"
}
] |
[
{
"msg_contents": "Hi all,\n\nAfter removing --with-openssl from its build of HEAD, snapper has\nbegun failing in the pg_upgrade path 11->HEAD, because it attempts\npg_upgrade from binaries that have OpenSSL to builds without it:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=snapper&br=HEAD\n\nUsing the TAP tests of pg_upgrade, I can get the same failure with the\nfollowing steps:\n1) Setup instance based on Postgres 11, compiled with OpenSSL.\n2) Run a few tests and tap a dump:\n# From 11 source tree:\nmake installcheck\ncd contrib/pgcrypto/\nUSE_MODULE_DB=1 make installcheck\n~/path/to/11/bin/pg_dumpall -f /tmp/olddump.sql\n3) From 16~ source tree, compiled without OpenSSL:\ncd src/bin/pg_upgrade\nolddump=/tmp/olddump.sql oldinstall=~/path/to/11/ make check\n\nAnd then you would get:\ncould not load library \"$libdir/pgcrypto\": ERROR: could not access\nfile \"$libdir/pgcrypto\": No such file or directory\nIn database: contrib_regression_pgcrypto\n\nThe same thing as HEAD could be done on its back-branches by removing\n--with-openssl and bring more consistency, but pg_upgrade has never\nbeen good at handling upgrades with different library requirements.\nSomething I am wondering is if AdjustUpgrade.pm could gain more\nknowledge in this area, though I am unsure how much could be achieved\nas this module has only object-level knowledge.\n\nAnyway, that's not really limited to pgcrypto, any extension with\ndifferent cross-library requirements would see that. One example,\nxml2 could be compiled with libxml and without libxslt. It is less\npopular than pgcrypto, but the failure should be the same.\n\nI'd rather choose the shortcut of removing --with-openssl from snapper\nin the short term, but that does nothing for the root issue in the\nlong-term.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 5 Jul 2023 13:29:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade and cross-library upgrades"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 4:29 PM Michael Paquier <[email protected]> wrote:\n> Thoughts?\n\nI am grateful for all the bug discoveries that these Debian 7 animals\nprovided in their time, but at this point we're unlikely to learn\nthings that are useful to our mission from them. It costs our\ncommunity time to talk about each of these issues, re-discovering old\nGCC bugs etc. If this were my animal and if the hardware couldn't be\nupgraded to a modern distro for technical reasons like a de-supported\narchitecture, I would now disable HEAD, or more likely give the whole\nmachine a respectful send-off ceremony and move on.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 18:00:25 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 06:00:25PM +1200, Thomas Munro wrote:\n> If this were my animal and if the hardware couldn't be\n> upgraded to a modern distro for technical reasons like a de-supported\n> architecture, I would now disable HEAD, or more likely give the whole\n> machine a respectful send-off ceremony and move on.\n\nAmen.\n--\nMichael",
"msg_date": "Wed, 5 Jul 2023 15:42:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
},
{
"msg_contents": "On 2023-Jul-05, Michael Paquier wrote:\n\n> The same thing as HEAD could be done on its back-branches by removing\n> --with-openssl and bring more consistency, but pg_upgrade has never\n> been good at handling upgrades with different library requirements.\n> Something I am wondering is if AdjustUpgrade.pm could gain more\n> knowledge in this area, though I am unsure how much could be achieved\n> as this module has only object-level knowledge.\n\nHmm, let's explore the AdjustUpgrade.pm path a bit more:\n\n002_pg_upgrade.pl can test for presence or absence of pgcrypto by\ngrepping pg_config --configure for --with-ssl or --with-openssl. If the\nold cluster has it but the new doesn't, we must drop the\ncontrib_regression_pgcrypto database. I think we would need a new\nfunction in AdjustUpgrade.pm (or an API change to\nadjust_database_contents) so that we can add the DROP DATABASE command\nconditionally.\n\nThis seems easily extended to contrib/xml2 also.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 10:49:35 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
},
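A rough Perl sketch of that proposal for 002_pg_upgrade.pl follows. The helper name, the environment variables, and the idea of grepping pg_config --configure are assumptions for illustration only; a meson build would need a different detection method.

use strict;
use warnings;

sub built_with_openssl
{
    my ($bindir) = @_;

    # detect OpenSSL support from the configure switches reported by pg_config
    my $configure = `"$bindir/pg_config" --configure 2>/dev/null`;
    return defined($configure) && $configure =~ /--with-(?:open)?ssl/;
}

# In the real TAP test these would come from the test environment; the
# "newinstall" variable here is purely hypothetical.
my $old_bindir = $ENV{oldinstall} ? "$ENV{oldinstall}/bin" : '';
my $new_bindir = $ENV{newinstall} ? "$ENV{newinstall}/bin" : '';

if (built_with_openssl($old_bindir) && !built_with_openssl($new_bindir))
{
    # In 002_pg_upgrade.pl this would become something like:
    #   $old_node->safe_psql('postgres',
    #       'DROP DATABASE IF EXISTS contrib_regression_pgcrypto');
    print "old build has OpenSSL but new one does not; ",
      "the pgcrypto regression database would have to be dropped first\n";
}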
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> 002_pg_upgrade.pl can test for presence or absence of pgcrypto by\n> grepping pg_config --configure for --with-ssl or --with-openssl. If the\n> old cluster has it but the new doesn't, we must drop the\n> contrib_regression_pgcrypto database.\n\nHmm, but you'd also need code to handle meson builds no? Could it\nbe easier to look for the pgcrypto library in the new installation?\n\nNot entirely sure this is worth the effort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jul 2023 07:03:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 07:03:56AM -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > 002_pg_upgrade.pl can test for presence or absence of pgcrypto by\n> > grepping pg_config --configure for --with-ssl or --with-openssl. If the\n> > old cluster has it but the new doesn't, we must drop the\n> > contrib_regression_pgcrypto database.\n> \n> Hmm, but you'd also need code to handle meson builds no?\n\nYes. It is worth noting that pg_config (or its SQL function) would\nreport the switches for ./configure in its CONFIGURE field, but, err..\nWe report nothing under meson. That's a problem.\n\n> Could it\n> be easier to look for the pgcrypto library in the new installation?\n\nIf all the contrib/ modules are handled, we'd need mapping rules for\neverything listed in contrib/Makefile.\n\n> Not entirely sure this is worth the effort.\n\nI am not sure either.. Anyway, the buildfarm code does similar things\nalready, see around $bad_module in TestUpgradeXversion.pm, for\ninstance. So this kind of workaround exists already. It seems to me\nthat we should try to pull that out of the buildfarm code and have\nthat in the core module instead as a routine that would be used by the\nin-core TAP tests of pg_upgrade and the buildfarm code.\n--\nMichael",
"msg_date": "Thu, 6 Jul 2023 09:19:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
},
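To illustrate what pulling that logic out of TestUpgradeXversion.pm could look like, here is a purely hypothetical sketch of a shared routine: given the new installation's pkglibdir and a mapping from regression databases to the modules they require, it reports the databases that cannot survive the upgrade. The routine name, the mapping, and the .so suffix are all assumptions, not existing code.

use strict;
use warnings;
use File::Spec;

sub databases_missing_modules
{
    my ($new_pkglibdir, $db_to_module) = @_;
    my @drop;

    while (my ($db, $module) = each %$db_to_module)
    {
        # a database must be dropped if its module's library is absent
        my $lib = File::Spec->catfile($new_pkglibdir, "$module.so");
        push @drop, $db unless -e $lib;
    }
    return @drop;
}

my %db_to_module = (
    contrib_regression_pgcrypto => 'pgcrypto',
    contrib_regression_xml2     => 'pgxml',
);

my @to_drop =
  databases_missing_modules('/usr/local/pgsql/lib', \%db_to_module);
print "databases to drop before upgrading: @to_drop\n" if @to_drop;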
{
"msg_contents": "> On 6 Jul 2023, at 02:19, Michael Paquier <[email protected]> wrote:\n> On Wed, Jul 05, 2023 at 07:03:56AM -0400, Tom Lane wrote:\n\n>> Not entirely sure this is worth the effort.\n> \n> I am not sure either..\n\nI can't see off the cuff that it would bring better test coverage or find bugs\nrelative to the effort of making it stable.\n\n> Anyway, the buildfarm code does similar things\n> already, see around $bad_module in TestUpgradeXversion.pm, for\n> instance. So this kind of workaround exists already. It seems to me\n> that we should try to pull that out of the buildfarm code and have\n> that in the core module instead as a routine that would be used by the\n> in-core TAP tests of pg_upgrade and the buildfarm code.\n\nThat however, would be a more interesting outcome.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Jul 2023 14:51:16 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade and cross-library upgrades"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a ROW LEVEL SECURITY policy on the table part of an extension, and\nwhile\ndumping the database where that extension is installed, dumps the policy of\nthat table too even though not dumpling that table .\n\nHere is quick tests, where I have added following SQL to adminpack--1.0.sql\nextension file:\n\nCREATE TABLE public.foo (i int CHECK(i > 0));\nALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;\nCREATE POLICY foo_policy ON public.foo USING (true);\n\nAfter installation and creation of this extension, the dump output will have\npolicy without that table:\n\n--\n-- Name: foo; Type: ROW SECURITY; Schema: public; Owner: amul\n--\n\nALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;\n\n--\n-- Name: foo foo_policy; Type: POLICY; Schema: public; Owner: amul\n--\n\nCREATE POLICY foo_policy ON public.foo USING (true);\n\n\nI am not sure if that is expected behaviour. The code comment in\ncheckExtensionMembership() seems to be doing intentionally:\n\n * In 9.6 and above, mark the member object to have any non-initial ACL,\n * policies, and security labels dumped.\n\nThe question is why were we doing this? Shouldn't skip this policy if it is\npart of the create-extension script?\n\nAlso, If you try to drop this policy, get dropped without any warning/error\nunlike tables or other objects which are not allowed to drop at all.\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n\nHi,I have a ROW LEVEL SECURITY policy on the table part of an extension, and whiledumping the database where that extension is installed, dumps the policy ofthat table too even though not dumpling that table . Here is quick tests, where I have added following SQL to adminpack--1.0.sqlextension file: CREATE TABLE public.foo (i int CHECK(i > 0));ALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;CREATE POLICY foo_policy ON public.foo USING (true);After installation and creation of this extension, the dump output will havepolicy without that table:---- Name: foo; Type: ROW SECURITY; Schema: public; Owner: amul--ALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;---- Name: foo foo_policy; Type: POLICY; Schema: public; Owner: amul--CREATE POLICY foo_policy ON public.foo USING (true);I am not sure if that is expected behaviour. The code comment incheckExtensionMembership() seems to be doing intentionally: * In 9.6 and above, mark the member object to have any non-initial ACL, * policies, and security labels dumped.The question is why were we doing this? Shouldn't skip this policy if it ispart of the create-extension script? Also, If you try to drop this policy, get dropped without any warning/errorunlike tables or other objects which are not allowed to drop at all.-- Regards,Amul SulEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 5 Jul 2023 11:35:15 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dumping policy on a table belonging to an extension is expected?"
},
{
"msg_contents": "Greetings,\n\n* Amul Sul ([email protected]) wrote:\n> I have a ROW LEVEL SECURITY policy on the table part of an extension, and\n> while\n> dumping the database where that extension is installed, dumps the policy of\n> that table too even though not dumpling that table .\n> \n> Here is quick tests, where I have added following SQL to adminpack--1.0.sql\n> extension file:\n> \n> CREATE TABLE public.foo (i int CHECK(i > 0));\n> ALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;\n> CREATE POLICY foo_policy ON public.foo USING (true);\n> \n> After installation and creation of this extension, the dump output will have\n> policy without that table:\n> \n> --\n> -- Name: foo; Type: ROW SECURITY; Schema: public; Owner: amul\n> --\n> \n> ALTER TABLE public.foo ENABLE ROW LEVEL SECURITY;\n> \n> --\n> -- Name: foo foo_policy; Type: POLICY; Schema: public; Owner: amul\n> --\n> \n> CREATE POLICY foo_policy ON public.foo USING (true);\n> \n> \n> I am not sure if that is expected behaviour. The code comment in\n> checkExtensionMembership() seems to be doing intentionally:\n> \n> * In 9.6 and above, mark the member object to have any non-initial ACL,\n> * policies, and security labels dumped.\n> \n> The question is why were we doing this? Shouldn't skip this policy if it is\n> part of the create-extension script?\n> \n> Also, If you try to drop this policy, get dropped without any warning/error\n> unlike tables or other objects which are not allowed to drop at all.\n\nAt least at the time, it wasn't really envisioned that policies would be\npart of an extension's creation script. That was probably short-sighted\nand it'd be better if we handled that cleanly, but to do so we'd need\nsomething akin to pg_init_privs where we track what policies are created\nas part of the creation script vs. what are created afterwards and then\ndump out the post-installation policy changes (note that we'd need to\ntrack if any installation-time policies were dropped or changed too...)\nas part of the pg_dump.\n\nIt'd be helpful if you could maybe provide some use-cases around this?\nPermission changes such as those handled by pg_init_privs are a bit more\nunderstandable since an extension script might want to revoke access\nfrom PUBLIC for functions or to GRANT access to PUBLIC for other things,\nor to leverage predefined roles, but how does that work for policies?\nMost extensions aren't likely to be creating their own roles or\ndepending on non-predefined roles to already exist in the system as\notherwise they'd end up failing to install if those roles didn't\nexist...\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Jul 2023 20:12:11 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dumping policy on a table belonging to an extension is expected?"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, there is no support for CHECK constraint DEFERRABLE in a create\ntable statement.\nSQL standard specifies that CHECK constraint can be defined as DEFERRABLE.\n\nThe attached patch is having implementation for CHECK constraint Deferrable\nas below:\n\n‘postgres[579453]=#’CREATE TABLE t1 (i int CHECK(i<>0) DEFERRABLE, t text);\nCREATE TABLE\n‘postgres[579453]=#’\\d t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\n t | text | | |\nCheck constraints:\n \"t1_i_check\" CHECK (i <> 0) DEFERRABLE\n\nNow we can have a deferrable CHECK constraint, and we can defer the\nconstraint validation:\n\n‘postgres[579453]=#’BEGIN;\nBEGIN\n‘postgres[579453]=#*’SET CONSTRAINTS t1_i_check DEFERRED;\nSET CONSTRAINTS\n‘postgres[579453]=#*’INSERT INTO t1 VALUES (0, 'one'); -- should succeed\nINSERT 0 1\n‘postgres[579453]=#*’UPDATE t1 SET i = 1 WHERE t = 'one';\nUPDATE 1\n‘postgres[579453]=#*’COMMIT; -- should succeed\nCOMMIT\n\nAttaching the initial patch, I will improve it with documentation in my\nnext version of the patch.\n\nthoughts?\n\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 5 Jul 2023 15:07:44 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "CHECK Constraint Deferrable"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Currently, there is no support for CHECK constraint DEFERRABLE in a create table statement.\n> SQL standard specifies that CHECK constraint can be defined as DEFERRABLE.\n\nI think this is a valid argument that this is part of SQL standard so\nit would be good addition to PostgreSQL. So +1 for the feature.\n\nBut I am wondering whether there are some real-world use cases for\ndeferred CHECK/NOT NULL constraints? I mean like for foreign key\nconstraints if there is a cyclic dependency between two tables then\ndeferring the constraint is the simplest way to insert without error.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 17:20:44 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "I can think of one scenario, as below\n\n1) any department should have an employee\n2)any employee should be assigned to a department\nso, the employee table has a FK to the department table, and another check\nconstraint should be added to the department table to ensure there should\nbe one/more employees in this department. It's kind of a deadlock\nsituation, each one depends on the other one. We cant insert a new\ndepartment, coz there is no employee. Also, we can't insert new employee\nbelongs to this new department, coz the department hasn't been and cant be\nadded. So if we have a check constraint defined as deferrable we can solve\nthis problem.\n\n‘postgres[685143]=#’CREATE FUNCTION checkEmpPresent(did int) RETURNS int AS\n$$ SELECT count(*) from emp where emp.deptno = did $$ IMMUTABLE LANGUAGE\nSQL;\nCREATE FUNCTION\n‘postgres[685143]=#’alter table dept add constraint check_cons check\n(checkEmpPresent(deptno) > 0);\nALTER TABLE\n‘postgres[685143]=#’\\d dept;\n Table \"public.dept\"\n Column | Type | Collation | Nullable | Default\n----------+---------------+-----------+----------+---------\n deptno | integer | | not null |\n deptname | character(20) | | |\nIndexes:\n \"dept_pkey\" PRIMARY KEY, btree (deptno)\nCheck constraints:\n \"check_cons\" CHECK (checkemppresent(deptno) > 0)\nReferenced by:\n TABLE \"emp\" CONSTRAINT \"fk_cons\" FOREIGN KEY (deptno) REFERENCES\ndept(deptno)\n\n‘postgres[685143]=#’insert into dept values (1, 'finance');\nERROR: 23514: new row for relation \"dept\" violates check constraint\n\"check_cons\"\nDETAIL: Failing row contains (1, finance ).\nSCHEMA NAME: public\nTABLE NAME: dept\nCONSTRAINT NAME: check_cons\nLOCATION: ExecConstraints, execMain.c:2069\n‘postgres[685143]=#’\\d emp;\n Table \"public.emp\"\n Column | Type | Collation | Nullable | Default\n--------+---------------+-----------+----------+---------\n empno | integer | | |\n ename | character(20) | | |\n deptno | integer | | |\nForeign-key constraints:\n \"fk_cons\" FOREIGN KEY (deptno) REFERENCES dept(deptno)\n\n‘postgres[685143]=#’insert into emp values (1001, 'test', 1);\nERROR: 23503: insert or update on table \"emp\" violates foreign key\nconstraint \"fk_cons\"\nDETAIL: Key (deptno)=(1) is not present in table \"dept\".\nSCHEMA NAME: public\nTABLE NAME: emp\nCONSTRAINT NAME: fk_cons\nLOCATION: ri_ReportViolation, ri_triggers.c:2608\n\nI have tried with v1 patch as below;\n\n‘postgres[685143]=#’alter table dept drop constraint check_cons;\nALTER TABLE\n‘postgres[685143]=#’alter table dept add constraint check_cons check\n(checkEmpPresent(deptno) > 0) DEFERRABLE INITIALLY DEFERRED;\nALTER TABLE\n‘postgres[685143]=#’BEGIN;\nBEGIN\n‘postgres[685143]=#*’insert into dept values (1, 'finance');\nINSERT 0 1\n‘postgres[685143]=#*’insert into emp values (1001, 'test', 1);\nINSERT 0 1\n‘postgres[685143]=#*’commit;\nCOMMIT\n‘postgres[685143]=#’select * from dept;\n deptno | deptname\n--------+----------------------\n 1 | finance\n(1 row)\n\n‘postgres[685143]=#’select * from emp;\n empno | ename | deptno\n-------+----------------------+--------\n 1001 | test | 1\n(1 row)\n\nThanks,\nHimanshu\n\nOn Fri, Jul 7, 2023 at 5:21 PM Dilip Kumar <[email protected]> wrote:\n\n> On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Currently, there is no support for CHECK constraint DEFERRABLE in a\n> create table statement.\n> > SQL standard specifies that CHECK constraint can be defined as\n> DEFERRABLE.\n>\n> I think this is a valid argument 
that this is part of SQL standard so\n> it would be good addition to PostgreSQL. So +1 for the feature.\n>\n> But I am wondering whether there are some real-world use cases for\n> deferred CHECK/NOT NULL constraints? I mean like for foreign key\n> constraints if there is a cyclic dependency between two tables then\n> deferring the constraint is the simplest way to insert without error.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nI can think of one scenario, as below1) any department should have an employee2)any employee should be assigned to a departmentso, the employee table has a FK to the department table, and another check constraint should be added to the department table to ensure there should be one/more employees in this department. It's kind of a deadlock situation, each one depends on the other one. We cant insert a new department, coz there is no employee. Also, we can't insert new employee belongs to this new department, coz the department hasn't been and cant be added. So if we have a check constraint defined as deferrable we can solve this problem.‘postgres[685143]=#’CREATE FUNCTION checkEmpPresent(did int) RETURNS int AS $$ SELECT count(*) from emp where emp.deptno = did $$ IMMUTABLE LANGUAGE SQL;CREATE FUNCTION‘postgres[685143]=#’alter table dept add constraint check_cons check (checkEmpPresent(deptno) > 0);ALTER TABLE‘postgres[685143]=#’\\d dept; Table \"public.dept\" Column | Type | Collation | Nullable | Default ----------+---------------+-----------+----------+--------- deptno | integer | | not null | deptname | character(20) | | | Indexes: \"dept_pkey\" PRIMARY KEY, btree (deptno)Check constraints: \"check_cons\" CHECK (checkemppresent(deptno) > 0)Referenced by: TABLE \"emp\" CONSTRAINT \"fk_cons\" FOREIGN KEY (deptno) REFERENCES dept(deptno)‘postgres[685143]=#’insert into dept values (1, 'finance');ERROR: 23514: new row for relation \"dept\" violates check constraint \"check_cons\"DETAIL: Failing row contains (1, finance ).SCHEMA NAME: publicTABLE NAME: deptCONSTRAINT NAME: check_consLOCATION: ExecConstraints, execMain.c:2069‘postgres[685143]=#’\\d emp; Table \"public.emp\" Column | Type | Collation | Nullable | Default --------+---------------+-----------+----------+--------- empno | integer | | | ename | character(20) | | | deptno | integer | | | Foreign-key constraints: \"fk_cons\" FOREIGN KEY (deptno) REFERENCES dept(deptno)‘postgres[685143]=#’insert into emp values (1001, 'test', 1);ERROR: 23503: insert or update on table \"emp\" violates foreign key constraint \"fk_cons\"DETAIL: Key (deptno)=(1) is not present in table \"dept\".SCHEMA NAME: publicTABLE NAME: empCONSTRAINT NAME: fk_consLOCATION: ri_ReportViolation, ri_triggers.c:2608I have tried with v1 patch as below;‘postgres[685143]=#’alter table dept drop constraint check_cons;ALTER TABLE‘postgres[685143]=#’alter table dept add constraint check_cons check (checkEmpPresent(deptno) > 0) DEFERRABLE INITIALLY DEFERRED;ALTER TABLE‘postgres[685143]=#’BEGIN;BEGIN‘postgres[685143]=#*’insert into dept values (1, 'finance');INSERT 0 1‘postgres[685143]=#*’insert into emp values (1001, 'test', 1);INSERT 0 1‘postgres[685143]=#*’commit;COMMIT‘postgres[685143]=#’select * from dept; deptno | deptname --------+---------------------- 1 | finance (1 row)‘postgres[685143]=#’select * from emp; empno | ename | deptno -------+----------------------+-------- 1001 | test | 1(1 row)Thanks,HimanshuOn Fri, Jul 7, 
2023 at 5:21 PM Dilip Kumar <[email protected]> wrote:On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Currently, there is no support for CHECK constraint DEFERRABLE in a create table statement.\n> SQL standard specifies that CHECK constraint can be defined as DEFERRABLE.\n\nI think this is a valid argument that this is part of SQL standard so\nit would be good addition to PostgreSQL. So +1 for the feature.\n\nBut I am wondering whether there are some real-world use cases for\ndeferred CHECK/NOT NULL constraints? I mean like for foreign key\nconstraints if there is a cyclic dependency between two tables then\ndeferring the constraint is the simplest way to insert without error.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 7 Jul 2023 19:30:10 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Friday, July 7, 2023, Himanshu Upadhyaya <[email protected]>\nwrote:\n\n> I can think of one scenario, as below\n>\n> 1) any department should have an employee\n> 2)any employee should be assigned to a department\n> so, the employee table has a FK to the department table, and another check\n> constraint should be added to the department table to ensure there should\n> be one/more employees in this department.\n>\n>\nThat isn’t a valid/allowed check constraint - it contains a prohibited\nreference to another table.\n\nDavid J.\n\nOn Friday, July 7, 2023, Himanshu Upadhyaya <[email protected]> wrote:I can think of one scenario, as below1) any department should have an employee2)any employee should be assigned to a departmentso, the employee table has a FK to the department table, and another check constraint should be added to the department table to ensure there should be one/more employees in this department. That isn’t a valid/allowed check constraint - it contains a prohibited reference to another table.David J.",
"msg_date": "Fri, 7 Jul 2023 07:04:41 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "Attached is v2 of the patch, rebased against the latest HEAD.\n\nThanks,\nHimanshu",
"msg_date": "Thu, 7 Sep 2023 13:25:05 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 1:25 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Attached is v2 of the patch, rebased against the latest HEAD.\n\nI have done some initial reviews, and here are my comments. More\ndetailed review later. Meanwhile, you can work on these comments and\nfix all the cosmetics especially 80 characters per line\n\n1.\n+\n+ (void) CreateTrigger(trigger, NULL, RelationGetRelid(rel),\n+ InvalidOid, constrOid, InvalidOid, InvalidOid,\n+ InvalidOid, NULL, true, false);\n\nheap.c is calling CreateTrigger but the inclusion of\n\"commands/trigger.h\" is missing.\n\n2.\n- if ((failed = ExecRelCheck(resultRelInfo, slot, estate)) != NULL)\n+ if ((failed = ExecRelCheck(resultRelInfo, slot, estate,\ncheckConstraint, &recheckConstraints)) != NULL && !recheckConstraints)\n\n\nWhy recheckConstraints need to get as output from ExecRelCheck? I\nmean whether it will be rechecked or nor is already known by the\ncaller and\nWhether the constraint failed or passed is known based on the return\nvalue so why do you need to extra parameter here?\n\n3.\n-void\n+bool\n ExecConstraints(ResultRelInfo *resultRelInfo,\n- TupleTableSlot *slot, EState *estate)\n+ TupleTableSlot *slot, EState *estate, checkConstraintRecheck checkConstraint)\n {\n\n- if ((failed = ExecRelCheck(resultRelInfo, slot, estate)) != NULL)\n+ if ((failed = ExecRelCheck(resultRelInfo, slot, estate,\ncheckConstraint, &recheckConstraints)) != NULL && !recheckConstraints)\n\n take care of postgres coding style and break line after 80\ncharacters. Check other places as well in the patch.\n\n4.\n+ if (checkConstraint == CHECK_RECHECK_ENABLED && check[i].ccdeferrable)\n+ {\n+ *recheckConstraints = true;\n+ }\n\nRemove curly brackets around single-line block\n\n5.\n+typedef enum checkConstraintRecheck\n+{\n+ CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint is disabled, so\n+ * DEFERRED CHECK constraint will be\n+ * considered as non-deferrable check\n+ * constraint. */\n+ CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint is enabled, so\n+ * CHECK constraint will be validated but\n+ * error will not be reported for deferred\n+ * CHECK constraint. */\n+ CHECK_RECHECK_EXISTING /* Recheck of existing violated CHECK\n+ * constraint, indicates that this is a\n+ * deferred recheck of a row that was reported\n+ * as a potential violation of CHECK\n+ * CONSTRAINT */\n+} checkConstraintRecheck;\n\nI do not like the naming convention here, especially the words\nRECHECK, DISABLE, and ENABLE. And also the name of the enum is a bit\noff. We can name it more like a unique constraint\nYES, PARTIAL, EXISTING\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Sep 2023 13:22:42 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 1:23 PM Dilip Kumar <[email protected]> wrote:\n\n> 2.\n> - if ((failed = ExecRelCheck(resultRelInfo, slot, estate)) != NULL)\n> + if ((failed = ExecRelCheck(resultRelInfo, slot, estate,\n> checkConstraint, &recheckConstraints)) != NULL && !recheckConstraints)\n>\n>\n> Why recheckConstraints need to get as output from ExecRelCheck? I\n> mean whether it will be rechecked or nor is already known by the\n> caller and\n>\nYes it will be known to the caller but ExecRelCheck will set this new\nparameter only if any one of the constraint is defined as Deferrable (in\ncreate table statement) and there is a potential constraint violation.\n\n> Whether the constraint failed or passed is known based on the return\n> value so why do you need to extra parameter here?\n>\n> Because if normal CHECK constraint(non Deferrable) is violated, no need to\nproceed with the insertion and in that case recheckConstraints will hold\n\"false\" but if Deferrable check constraint is violated, we need to\nrevalidate the constraint at commit time and recheckConstraints will hold\n\"true\".\n\n>\n> 5.\n> +typedef enum checkConstraintRecheck\n> +{\n> + CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint is disabled, so\n> + * DEFERRED CHECK constraint will be\n> + * considered as non-deferrable check\n> + * constraint. */\n> + CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint is enabled, so\n> + * CHECK constraint will be validated but\n> + * error will not be reported for deferred\n> + * CHECK constraint. */\n> + CHECK_RECHECK_EXISTING /* Recheck of existing violated CHECK\n> + * constraint, indicates that this is a\n> + * deferred recheck of a row that was reported\n> + * as a potential violation of CHECK\n> + * CONSTRAINT */\n> +} checkConstraintRecheck;\n>\n> I do not like the naming convention here, especially the words\n> RECHECK, DISABLE, and ENABLE. And also the name of the enum is a bit\n> off. We can name it more like a unique constraint\n> YES, PARTIAL, EXISTING\n>\n> I can think of alternative ENUM name as \"EnforceDeferredCheck\" and member\nas “CHECK_DEFERRED_YES”, “CHECK_DEFRRED_NO” and “CHECK_DEFERRED_EXISTING”.\n\nThoughts?\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Fri, Sep 8, 2023 at 1:23 PM Dilip Kumar <[email protected]> wrote:\n2.\n- if ((failed = ExecRelCheck(resultRelInfo, slot, estate)) != NULL)\n+ if ((failed = ExecRelCheck(resultRelInfo, slot, estate,\ncheckConstraint, &recheckConstraints)) != NULL && !recheckConstraints)\n\n\nWhy recheckConstraints need to get as output from ExecRelCheck? I\nmean whether it will be rechecked or nor is already known by the\ncaller andYes it will be known to the caller but ExecRelCheck will set this new parameter only if any one of the constraint is defined as Deferrable (in create table statement) and there is a potential constraint violation.\nWhether the constraint failed or passed is known based on the return\nvalue so why do you need to extra parameter here?\nBecause if normal CHECK constraint(non Deferrable) is violated, no need to proceed with the insertion and in that case recheckConstraints will hold \"false\" but if Deferrable check constraint is violated, we need to revalidate the constraint at commit time and recheckConstraints will hold \"true\". \n\n5.\n+typedef enum checkConstraintRecheck\n+{\n+ CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint is disabled, so\n+ * DEFERRED CHECK constraint will be\n+ * considered as non-deferrable check\n+ * constraint. 
*/\n+ CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint is enabled, so\n+ * CHECK constraint will be validated but\n+ * error will not be reported for deferred\n+ * CHECK constraint. */\n+ CHECK_RECHECK_EXISTING /* Recheck of existing violated CHECK\n+ * constraint, indicates that this is a\n+ * deferred recheck of a row that was reported\n+ * as a potential violation of CHECK\n+ * CONSTRAINT */\n+} checkConstraintRecheck;\n\nI do not like the naming convention here, especially the words\nRECHECK, DISABLE, and ENABLE. And also the name of the enum is a bit\noff. We can name it more like a unique constraint\nYES, PARTIAL, EXISTING\nI can think of alternative ENUM name as \"EnforceDeferredCheck\" and member as “CHECK_DEFERRED_YES”, “CHECK_DEFRRED_NO” and “CHECK_DEFERRED_EXISTING”.Thoughts?-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Sep 2023 17:45:14 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Attached is v2 of the patch, rebased against the latest HEAD.\n\nThanks for working on this, few comments:\n1) \"CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t\ntext)\" is crashing in windows, the same was noticed in CFBot too:\n2023-09-11 08:11:36.585 UTC [58563][client backend]\n[pg_regress/constraints][13/880:0] LOG: statement: CREATE TABLE\ncheck_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n2023-09-11 08:11:36.586 UTC [58560][client backend]\n[pg_regress/inherit][15/391:0] LOG: statement: drop table c1;\n../src/backend/commands/trigger.c:220:26: runtime error: member access\nwithin null pointer of type 'struct CreateTrigStmt'\n==58563==Using libbacktrace symbolizer.\n\nThe details of CFBot failure can be seen at [1]\n\n2) Alter of check constraint deferrable is not handled, is this intentional?\nCREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\npostgres=# alter table check_constr_tbl alter constraint\ncheck_constr_tbl_i_check not deferrable;\nERROR: constraint \"check_constr_tbl_i_check\" of relation\n\"check_constr_tbl\" is not a foreign key constraint\n\n3) Should we handle this scenario for domains too:\nCREATE DOMAIN c1_check AS INT CHECK(VALUE > 10);\ncreate table test(c1 c1_check);\nalter domain c1_check ADD check (VALUE > 20) DEFERRABLE INITIALLY DEFERRED;\n\nbegin;\n-- should this be deffered\ninsert into test values(19);\nERROR: value for domain c1_check violates check constraint \"c1_check_check1\"\n\n4) There is one warning:\nheap.c: In function ‘StoreRelCheck’:\nheap.c:2178:24: warning: implicit declaration of function\n‘CreateTrigger’ [-Wimplicit-function-declaration]\n 2178 | (void) CreateTrigger(trigger, NULL,\nRelationGetRelid(rel),\n | ^~~~~~~~~~~~~\n\n5) This should be added to typedefs.list file:\n+typedef enum checkConstraintRecheck\n+{\n+ CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint\nis disabled, so\n+ *\nDEFERRED CHECK constraint will be\n+ *\nconsidered as non-deferrable check\n+ *\nconstraint. */\n+ CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint\nis enabled, so\n+ *\nCHECK constraint will be validated but\n+ *\nerror will not be reported for deferred\n+ *\nCHECK constraint. */\n+ CHECK_RECHECK_EXISTING /* Recheck of existing violated CHECK\n+ *\nconstraint, indicates that this is a\n+ *\ndeferred recheck of a row that was reported\n+ * as\na potential violation of CHECK\n+ * CONSTRAINT */\n+} checkConstraintRecheck;\n\n[1] - https://api.cirrus-ci.com/v1/artifact/task/4855966353588224/testrun/build-32/testrun/pg_upgrade/002_pg_upgrade/log/002_pg_upgrade_old_node.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 12 Sep 2023 14:55:48 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Attached is v2 of the patch, rebased against the latest HEAD.\n\nFew issues:\n1) Create domain fails but alter domain is successful, I feel we\nshould support create domain too:\npostgres=# create domain d1 as int check(value<>0) deferrable;\nERROR: specifying constraint deferrability not supported for domains\npostgres=# create domain d1 as int check(value<>0);\nCREATE DOMAIN\npostgres=# alter domain d1 add constraint con_2 check(value<>1) deferrable;\nALTER DOMAIN\n\n2) I was not sure, if the error message change was intentional:\n2a)\nIn Head:\nCREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\nERROR: misplaced DEFERRABLE clause\nLINE 1: CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER...\n ^\npostgres=# CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\nERROR: \"t9\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\n\n2b)\nIn Head:\npostgres=# CREATE FOREIGN TABLE t2(a int CHECK(a<>0)) SERVER s1;\nCREATE FOREIGN TABLE\npostgres=# ALTER FOREIGN TABLE t2 ADD CONSTRAINT t2_chk_1 CHECK(a<>1)\nDEFERRABLE;\nERROR: CHECK constraints cannot be marked DEFERRABLE\n\nWith patch:\npostgres=# ALTER FOREIGN TABLE t8 ADD CONSTRAINT t8_chk_1 CHECK(a<>1)\nDEFERRABLE;\nERROR: \"t8\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\n\n3) Insert check is not deferred to commit:\nThis insert check here is deferred to commit:\npostgres=# CREATE TABLE tbl (i int ) partition by range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nINSERT 0 1\npostgres=*# commit;\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\n\nBut the check here is not deferred to commit:\npostgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\nby range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\n\n4) There is a new warning popping up now:\nCREATE TABLE tbl_new_3 (i int check(i<>0)) partition by range (i);\nCREATE FOREIGN TABLE ftbl_new_3 PARTITION OF tbl_new_3 FOR VALUES FROM\n(40) TO (50) server s1;\npostgres=# ALTER TABLE tbl_new_3 ADD CONSTRAINT tbl_new_3_chk\nCHECK(i<>1) DEFERRABLE;\nWARNING: unexpected pg_constraint record found for relation \"tbl_new_3\"\nERROR: \"ftbl_new_3\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 14 Sep 2023 09:57:15 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "Thanks for the review comments.\n\nOn Tue, Sep 12, 2023 at 2:56 PM vignesh C <[email protected]> wrote:\n\n> On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n> <[email protected]> wrote:\n> >\n> > Attached is v2 of the patch, rebased against the latest HEAD.\n>\n> Thanks for working on this, few comments:\n> 1) \"CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t\n> text)\" is crashing in windows, the same was noticed in CFBot too:\n> 2023-09-11 08:11:36.585 UTC [58563][client backend]\n> [pg_regress/constraints][13/880:0] LOG: statement: CREATE TABLE\n> check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n> 2023-09-11 08:11:36.586 UTC [58560][client backend]\n> [pg_regress/inherit][15/391:0] LOG: statement: drop table c1;\n> ../src/backend/commands/trigger.c:220:26: runtime error: member access\n> within null pointer of type 'struct CreateTrigStmt'\n> ==58563==Using libbacktrace symbolizer.\n>\n> Will Fix this in my next patch.\n\n\n> The details of CFBot failure can be seen at [1]\n>\n> 2) Alter of check constraint deferrable is not handled, is this\n> intentional?\n> CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n> postgres=# alter table check_constr_tbl alter constraint\n> check_constr_tbl_i_check not deferrable;\n> ERROR: constraint \"check_constr_tbl_i_check\" of relation\n> \"check_constr_tbl\" is not a foreign key constraint\n>\n> ALTER CONSTRAINT is currently only supported for FOREIGN KEY, it's even\nnot supported for UNIQUE constraint as below:\n‘postgres[1271421]=#’CREATE TABLE unique_constr_tbl (i int unique\nDEFERRABLE, t text);\nCREATE TABLE\n‘postgres[1271421]=#’\\d unique_constr_tbl;\n Table \"public.unique_constr_tbl\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\n t | text | | |\nIndexes:\n \"unique_constr_tbl_i_key\" UNIQUE CONSTRAINT, btree (i) DEFERRABLE\n‘postgres[1271421]=#’alter table unique_constr_tbl alter constraint\nunique_constr_tbl_i_key not deferrable;\nERROR: 42809: constraint \"unique_constr_tbl_i_key\" of relation\n\"unique_constr_tbl\" is not a foreign key constraint\nLOCATION: ATExecAlterConstraint, tablecmds.c:11183\n\nI still need to understand the design restriction here, please let me know\nif anyone is aware of this?\nis it because of dependency on Indexes?\n\n3) Should we handle this scenario for domains too:\n> CREATE DOMAIN c1_check AS INT CHECK(VALUE > 10);\n> create table test(c1 c1_check);\n> alter domain c1_check ADD check (VALUE > 20) DEFERRABLE INITIALLY DEFERRED;\n>\n> begin;\n> -- should this be deffered\n> insert into test values(19);\n> ERROR: value for domain c1_check violates check constraint\n> \"c1_check_check1\"\n>\n> Yes, thanks for notifying, I missed this for CREATE DOMAIN, will analyse\nand include in next revision.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nThanks for the review comments.On Tue, Sep 12, 2023 at 2:56 PM vignesh C <[email protected]> wrote:On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Attached is v2 of the patch, rebased against the latest HEAD.\n\nThanks for working on this, few comments:\n1) \"CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t\ntext)\" is crashing in windows, the same was noticed in CFBot too:\n2023-09-11 08:11:36.585 UTC [58563][client backend]\n[pg_regress/constraints][13/880:0] LOG: statement: CREATE TABLE\ncheck_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t 
text);\n2023-09-11 08:11:36.586 UTC [58560][client backend]\n[pg_regress/inherit][15/391:0] LOG: statement: drop table c1;\n../src/backend/commands/trigger.c:220:26: runtime error: member access\nwithin null pointer of type 'struct CreateTrigStmt'\n==58563==Using libbacktrace symbolizer.\nWill Fix this in my next patch. \nThe details of CFBot failure can be seen at [1]\n\n2) Alter of check constraint deferrable is not handled, is this intentional?\nCREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\npostgres=# alter table check_constr_tbl alter constraint\ncheck_constr_tbl_i_check not deferrable;\nERROR: constraint \"check_constr_tbl_i_check\" of relation\n\"check_constr_tbl\" is not a foreign key constraint\nALTER CONSTRAINT is currently only supported for FOREIGN KEY, it's even not supported for UNIQUE constraint as below:‘postgres[1271421]=#’CREATE TABLE unique_constr_tbl (i int unique DEFERRABLE, t text);CREATE TABLE‘postgres[1271421]=#’\\d unique_constr_tbl; Table \"public.unique_constr_tbl\" Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- i | integer | | | t | text | | | Indexes: \"unique_constr_tbl_i_key\" UNIQUE CONSTRAINT, btree (i) DEFERRABLE‘postgres[1271421]=#’alter table unique_constr_tbl alter constraint unique_constr_tbl_i_key not deferrable;ERROR: 42809: constraint \"unique_constr_tbl_i_key\" of relation \"unique_constr_tbl\" is not a foreign key constraintLOCATION: ATExecAlterConstraint, tablecmds.c:11183I still need to understand the design restriction here, please let me know if anyone is aware of this?is it because of dependency on Indexes?\n3) Should we handle this scenario for domains too:\nCREATE DOMAIN c1_check AS INT CHECK(VALUE > 10);\ncreate table test(c1 c1_check);\nalter domain c1_check ADD check (VALUE > 20) DEFERRABLE INITIALLY DEFERRED;\n\nbegin;\n-- should this be deffered\ninsert into test values(19);\nERROR: value for domain c1_check violates check constraint \"c1_check_check1\"\nYes, thanks for notifying, I missed this for CREATE DOMAIN, will analyse and include in next revision. -- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Sep 2023 11:44:34 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:\n\n> 3) Insert check is not deferred to commit:\n> This insert check here is deferred to commit:\n> postgres=# CREATE TABLE tbl (i int ) partition by range (i);\n> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n> ALTER TABLE\n> postgres=# begin;\n> BEGIN\n> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n> SET CONSTRAINTS\n> postgres=*# INSERT INTO tbl values (1);\n> INSERT 0 1\n> postgres=*# commit;\n> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> DETAIL: Failing row contains (1).\n>\n> But the check here is not deferred to commit:\n> postgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\n> by range (i);\n> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n> ALTER TABLE\n> postgres=# begin;\n> BEGIN\n> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n> SET CONSTRAINTS\n> postgres=*# INSERT INTO tbl values (1);\n> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> DETAIL: Failing row contains (1).\n>\n> I dont think it's a problem, in the second case there are two DEFERRABLE\nCHECK constraints and you are marking one as DEFERRED but other one will be\nINITIALLY IMMEDIATE. so we can use \"SET CONSTRAINTS ALL DEFERRED;\".\n‘postgres[1271421]=#’CREATE TABLE tbl (i int check(i<>0) DEFERRABLE)\npartition\n‘...>’by range (i);\nCREATE TABLE\n‘postgres[1271421]=#’CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM\n(0) TO (10);\nCREATE TABLE\n‘postgres[1271421]=#’CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM\n(20) TO (30);\nCREATE TABLE\n‘postgres[1271421]=#’ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1)\nDEFERRABLE;\nALTER TABLE\n‘postgres[1271421]=#’\\d tbl\n Partitioned table \"public.tbl\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\nPartition key: RANGE (i)\nCheck constraints:\n \"tbl_chk_1\" CHECK (i <> 1) DEFERRABLE\n \"tbl_i_check\" CHECK (i <> 0) DEFERRABLE\nNumber of partitions: 2 (Use \\d+ to list them.)\n ‘postgres[1271421]=#’begin;\nBEGIN\n‘postgres[1271421]=#*’SET CONSTRAINTS ALL DEFERRED;\nSET CONSTRAINTS\n‘postgres[1271421]=#*’INSERT INTO tbl values (1);\nINSERT 0 1\n‘postgres[1271421]=#*’commit;\nERROR: 23514: new row for relation \"tbl_1\" violates check constraint\n\"tbl_chk_1\"\nDETAIL: Failing row contains (1).\nSCHEMA NAME: public\nTABLE NAME: tbl_1\nCONSTRAINT NAME: tbl_chk_1\nLOCATION: ExecConstraints, execMain.c:2077\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:\n3) Insert check is not deferred to commit:\nThis insert check here is deferred to commit:\npostgres=# CREATE TABLE tbl (i int ) partition by range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# 
begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nINSERT 0 1\npostgres=*# commit;\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\n\nBut the check here is not deferred to commit:\npostgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\nby range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\nI dont think it's a problem, in the second case there are two DEFERRABLE CHECK constraints and you are marking one as DEFERRED but other one will be INITIALLY IMMEDIATE. so we can use \"SET CONSTRAINTS ALL DEFERRED;\".‘postgres[1271421]=#’CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition‘...>’by range (i);CREATE TABLE‘postgres[1271421]=#’CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);CREATE TABLE‘postgres[1271421]=#’CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);CREATE TABLE‘postgres[1271421]=#’ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;ALTER TABLE‘postgres[1271421]=#’\\d tbl Partitioned table \"public.tbl\" Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- i | integer | | | Partition key: RANGE (i)Check constraints: \"tbl_chk_1\" CHECK (i <> 1) DEFERRABLE \"tbl_i_check\" CHECK (i <> 0) DEFERRABLENumber of partitions: 2 (Use \\d+ to list them.) ‘postgres[1271421]=#’begin;BEGIN‘postgres[1271421]=#*’SET CONSTRAINTS ALL DEFERRED;SET CONSTRAINTS‘postgres[1271421]=#*’INSERT INTO tbl values (1);INSERT 0 1‘postgres[1271421]=#*’commit;ERROR: 23514: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"DETAIL: Failing row contains (1).SCHEMA NAME: publicTABLE NAME: tbl_1CONSTRAINT NAME: tbl_chk_1LOCATION: ExecConstraints, execMain.c:2077-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Sep 2023 15:33:19 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, 14 Sept 2023 at 15:33, Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n>\n>\n> On Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:\n>>\n>> 3) Insert check is not deferred to commit:\n>> This insert check here is deferred to commit:\n>> postgres=# CREATE TABLE tbl (i int ) partition by range (i);\n>> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n>> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n>> CREATE TABLE\n>> CREATE TABLE\n>> CREATE TABLE\n>> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n>> ALTER TABLE\n>> postgres=# begin;\n>> BEGIN\n>> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n>> SET CONSTRAINTS\n>> postgres=*# INSERT INTO tbl values (1);\n>> INSERT 0 1\n>> postgres=*# commit;\n>> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n>> DETAIL: Failing row contains (1).\n>>\n>> But the check here is not deferred to commit:\n>> postgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\n>> by range (i);\n>> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n>> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n>> CREATE TABLE\n>> CREATE TABLE\n>> CREATE TABLE\n>> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n>> ALTER TABLE\n>> postgres=# begin;\n>> BEGIN\n>> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n>> SET CONSTRAINTS\n>> postgres=*# INSERT INTO tbl values (1);\n>> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n>> DETAIL: Failing row contains (1).\n>>\n> I dont think it's a problem, in the second case there are two DEFERRABLE CHECK constraints and you are marking one as DEFERRED but other one will be INITIALLY IMMEDIATE. 
so we can use \"SET CONSTRAINTS ALL DEFERRED;\".\n> ‘postgres[1271421]=#’CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\n> ‘...>’by range (i);\n> CREATE TABLE\n> ‘postgres[1271421]=#’CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n> CREATE TABLE\n> ‘postgres[1271421]=#’CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n> CREATE TABLE\n> ‘postgres[1271421]=#’ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n> ALTER TABLE\n> ‘postgres[1271421]=#’\\d tbl\n> Partitioned table \"public.tbl\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> i | integer | | |\n> Partition key: RANGE (i)\n> Check constraints:\n> \"tbl_chk_1\" CHECK (i <> 1) DEFERRABLE\n> \"tbl_i_check\" CHECK (i <> 0) DEFERRABLE\n> Number of partitions: 2 (Use \\d+ to list them.)\n> ‘postgres[1271421]=#’begin;\n> BEGIN\n> ‘postgres[1271421]=#*’SET CONSTRAINTS ALL DEFERRED;\n> SET CONSTRAINTS\n> ‘postgres[1271421]=#*’INSERT INTO tbl values (1);\n> INSERT 0 1\n> ‘postgres[1271421]=#*’commit;\n> ERROR: 23514: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> DETAIL: Failing row contains (1).\n> SCHEMA NAME: public\n> TABLE NAME: tbl_1\n> CONSTRAINT NAME: tbl_chk_1\n> LOCATION: ExecConstraints, execMain.c:2077\n\nI think we should be able to defer one constraint like in the case of\nforeign key constraint:\ncreate table t1(c1 int primary key);\ninsert into t1 values(10);\ncreate table t2(c1 int primary key);\ninsert into t2 values(10);\ncreate table t3(c1 int, c2 int references t1(c1) deferrable, c3 int\nreferences t2(c1) deferrable);\n\n-- Set only one constraint as deferred\nbegin;\nset CONSTRAINTS t3_c2_fkey deferred;\n-- c2 column constraint is deferred, we need not set all constraints\ndeferred in this case, insert was successful\npostgres=*# insert into t3 values(1,11,10);\nINSERT 0 1\n-- Throws error for the constraint that is not deferred\npostgres=*# insert into t3 values(1,10,11);\nERROR: insert or update on table \"t3\" violates foreign key constraint\n\"t3_c3_fkey\"\nDETAIL: Key (c3)=(11) is not present in table \"t2\".\n\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:28:47 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Fri, 15 Sept 2023 at 08:00, vignesh C <[email protected]> wrote:\n>\n> On Thu, 14 Sept 2023 at 15:33, Himanshu Upadhyaya\n> <[email protected]> wrote:\n> >\n> > On Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:\n> >>\n> >> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n> >> SET CONSTRAINTS\n> >> postgres=*# INSERT INTO tbl values (1);\n> >> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> >> DETAIL: Failing row contains (1).\n> >>\n> > I dont think it's a problem, in the second case there are two DEFERRABLE CHECK constraints and you are marking one as DEFERRED but other one will be INITIALLY IMMEDIATE.\n>\n> I think we should be able to defer one constraint like in the case of\n> foreign key constraint\n>\n\nAgreed. It should be possible to have a mix of deferred and immediate\nconstraint checks. In the example, the tbl_chk_1 is set deferred, but\nit fails immediately, which is clearly not right.\n\nI would say that it's reasonable to limit the scope of this patch to\ntable constraints only, and leave domain constraints to a possible\nfollow-up patch.\n\nA few other review comments:\n\n1. The following produces a WARNING (possibly the same issue already reported):\n\nCREATE TABLE foo (a int, b int);\nALTER TABLE foo ADD CONSTRAINT a_check CHECK (a > 0);\nALTER TABLE foo ADD CONSTRAINT b_check CHECK (b > 0) DEFERRABLE;\n\nWARNING: unexpected pg_constraint record found for relation \"foo\"\n\n2. I think that equalTupleDescs() should compare the new fields, when\ncomparing the 2 sets of check constraints.\n\n3. The constraint exclusion code in the planner should ignore\ndeferrable check constraints (see get_relation_constraints() in\nsrc/backend/optimizer/util/plancat.c), otherwise it might incorrectly\nexclude a relation on the basis of a constraint that is temporarily\nviolated, and return incorrect query results. For example:\n\nCREATE TABLE foo (a int);\nCREATE TABLE foo_c1 () INHERITS (foo);\nCREATE TABLE foo_c2 () INHERITS (foo);\nALTER TABLE foo_c2 ADD CONSTRAINT cc CHECK (a != 5) INITIALLY DEFERRED;\n\nBEGIN;\nINSERT INTO foo_c2 VALUES (5);\nSET LOCAL constraint_exclusion TO off;\nSELECT * FROM foo WHERE a = 5;\nSET LOCAL constraint_exclusion TO on;\nSELECT * FROM foo WHERE a = 5;\nROLLBACK;\n\n4. The code in MergeWithExistingConstraint() should prevent inherited\nconstraints being merged if their deferrable properties don't match\n(as MergeConstraintsIntoExisting() does, since\nconstraints_equivalent() tests the deferrable fields). I.e., the\nfollowing should fail to merge the constraints, since they don't\nmatch:\n\nDROP TABLE IF EXISTS p,c;\n\nCREATE TABLE p (a int, b int);\nALTER TABLE p ADD CONSTRAINT c1 CHECK (a > 0) DEFERRABLE;\nALTER TABLE p ADD CONSTRAINT c2 CHECK (b > 0);\n\nCREATE TABLE c () INHERITS (p);\nALTER TABLE c ADD CONSTRAINT c1 CHECK (a > 0);\nALTER TABLE c ADD CONSTRAINT c2 CHECK (b > 0) DEFERRABLE;\n\nI.e., that should produce an error, as happens if c is made to inherit\np *after* the constraints have been added.\n\n5. Instead of just adding the new fields to the end of the ConstrCheck\nstruct, and to the end of lists of function parameters like\nStoreRelCheck(), and other related code, it would be more logical to\nput them immediately before the valid/invalid entries, to match the\norder of constraint properties in pg_constraint, and functions like\nCreateConstraintEntry().\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 19 Sep 2023 11:44:04 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 2:56 PM vignesh C <[email protected]> wrote:\n\n> On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n> <[email protected]> wrote:\n> >\n> > Attached is v2 of the patch, rebased against the latest HEAD.\n>\n> Thanks for working on this, few comments:\n> 1) \"CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t\n> text)\" is crashing in windows, the same was noticed in CFBot too:\n> 2023-09-11 08:11:36.585 UTC [58563][client backend]\n> [pg_regress/constraints][13/880:0] LOG: statement: CREATE TABLE\n> check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n> 2023-09-11 08:11:36.586 UTC [58560][client backend]\n> [pg_regress/inherit][15/391:0] LOG: statement: drop table c1;\n> ../src/backend/commands/trigger.c:220:26: runtime error: member access\n> within null pointer of type 'struct CreateTrigStmt'\n> ==58563==Using libbacktrace symbolizer.\n>\n> The details of CFBot failure can be seen at [1]\n>\n> I have tried it with my latest patch on windows environment and not\ngetting any crash with the above statement, will do further analysis if\nthis patch also has the same issue.\n\n> 2) Alter of check constraint deferrable is not handled, is this\n> intentional?\n> CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n> postgres=# alter table check_constr_tbl alter constraint\n> check_constr_tbl_i_check not deferrable;\n> ERROR: constraint \"check_constr_tbl_i_check\" of relation\n> \"check_constr_tbl\" is not a foreign key constraint\n>\n> This is not allowed for any constraint type but FOREIGN key. I am not very\nsure about if there is any limitation with this so wanted to take opinion\nfrom other hackers on this.\n\n> 3) Should we handle this scenario for domains too:\n> CREATE DOMAIN c1_check AS INT CHECK(VALUE > 10);\n> create table test(c1 c1_check);\n> alter domain c1_check ADD check (VALUE > 20) DEFERRABLE INITIALLY DEFERRED;\n>\n> begin;\n> -- should this be deffered\n> insert into test values(19);\n> ERROR: value for domain c1_check violates check constraint\n> \"c1_check_check1\"\n>\n> We are planning to have a follow-up patch once this initial patch is\ncommitted.\n\n> 4) There is one warning:\n> heap.c: In function ‘StoreRelCheck’:\n> heap.c:2178:24: warning: implicit declaration of function\n> ‘CreateTrigger’ [-Wimplicit-function-declaration]\n> 2178 | (void) CreateTrigger(trigger, NULL,\n> RelationGetRelid(rel),\n> |\n\nFixed in V3 patch.\n\n> ^~~~~~~~~~~~~\n>\n> 5) This should be added to typedefs.list file:\n> +typedef enum checkConstraintRecheck\n> +{\n> + CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint\n> is disabled, so\n> + *\n> DEFERRED CHECK constraint will be\n> + *\n> considered as non-deferrable check\n> + *\n> constraint. */\n> + CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint\n> is enabled, so\n> + *\n> CHECK constraint will be validated but\n> + *\n> error will not be reported for deferred\n> + *\n> CHECK constraint. 
*/\n> + CHECK_RECHECK_EXISTING /* Recheck of existing violated\n> CHECK\n> + *\n> constraint, indicates that this is a\n> + *\n> deferred recheck of a row that was reported\n> + * as\n> a potential violation of CHECK\n> + *\n> CONSTRAINT */\n> +} checkConstraintRecheck;\n>\n> Fixed in V3 patch.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Sep 12, 2023 at 2:56 PM vignesh C <[email protected]> wrote:On Thu, 7 Sept 2023 at 17:26, Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Attached is v2 of the patch, rebased against the latest HEAD.\n\nThanks for working on this, few comments:\n1) \"CREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t\ntext)\" is crashing in windows, the same was noticed in CFBot too:\n2023-09-11 08:11:36.585 UTC [58563][client backend]\n[pg_regress/constraints][13/880:0] LOG: statement: CREATE TABLE\ncheck_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\n2023-09-11 08:11:36.586 UTC [58560][client backend]\n[pg_regress/inherit][15/391:0] LOG: statement: drop table c1;\n../src/backend/commands/trigger.c:220:26: runtime error: member access\nwithin null pointer of type 'struct CreateTrigStmt'\n==58563==Using libbacktrace symbolizer.\n\nThe details of CFBot failure can be seen at [1]\nI have tried it with my latest patch on windows environment and not getting any crash with the above statement, will do further analysis if this patch also has the same issue.\n2) Alter of check constraint deferrable is not handled, is this intentional?\nCREATE TABLE check_constr_tbl (i int CHECK(i<>0) DEFERRABLE, t text);\npostgres=# alter table check_constr_tbl alter constraint\ncheck_constr_tbl_i_check not deferrable;\nERROR: constraint \"check_constr_tbl_i_check\" of relation\n\"check_constr_tbl\" is not a foreign key constraint\nThis is not allowed for any constraint type but FOREIGN key. I am not very sure about if there is any limitation with this so wanted to take opinion from other hackers on this.\n3) Should we handle this scenario for domains too:\nCREATE DOMAIN c1_check AS INT CHECK(VALUE > 10);\ncreate table test(c1 c1_check);\nalter domain c1_check ADD check (VALUE > 20) DEFERRABLE INITIALLY DEFERRED;\n\nbegin;\n-- should this be deffered\ninsert into test values(19);\nERROR: value for domain c1_check violates check constraint \"c1_check_check1\"\nWe are planning to have a follow-up patch once this initial patch is committed. \n4) There is one warning:\nheap.c: In function ‘StoreRelCheck’:\nheap.c:2178:24: warning: implicit declaration of function\n‘CreateTrigger’ [-Wimplicit-function-declaration]\n 2178 | (void) CreateTrigger(trigger, NULL,\nRelationGetRelid(rel),\n | Fixed in V3 patch. ^~~~~~~~~~~~~\n\n5) This should be added to typedefs.list file:\n+typedef enum checkConstraintRecheck\n+{\n+ CHECK_RECHECK_DISABLED, /* Recheck of CHECK constraint\nis disabled, so\n+ *\nDEFERRED CHECK constraint will be\n+ *\nconsidered as non-deferrable check\n+ *\nconstraint. */\n+ CHECK_RECHECK_ENABLED, /* Recheck of CHECK constraint\nis enabled, so\n+ *\nCHECK constraint will be validated but\n+ *\nerror will not be reported for deferred\n+ *\nCHECK constraint. */\n+ CHECK_RECHECK_EXISTING /* Recheck of existing violated CHECK\n+ *\nconstraint, indicates that this is a\n+ *\ndeferred recheck of a row that was reported\n+ * as\na potential violation of CHECK\n+ * CONSTRAINT */\n+} checkConstraintRecheck;\nFixed in V3 patch.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 20:31:17 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:\n\n> 2) I was not sure, if the error message change was intentional:\n> 2a)\n> In Head:\n> CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\n> ERROR: misplaced DEFERRABLE clause\n> LINE 1: CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER...\n> ^\n> postgres=# CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\n> ERROR: \"t9\" is a foreign table\n> DETAIL: Foreign tables cannot have constraint triggers.\n>\n> 2b)\n> In Head:\n> postgres=# CREATE FOREIGN TABLE t2(a int CHECK(a<>0)) SERVER s1;\n> CREATE FOREIGN TABLE\n> postgres=# ALTER FOREIGN TABLE t2 ADD CONSTRAINT t2_chk_1 CHECK(a<>1)\n> DEFERRABLE;\n> ERROR: CHECK constraints cannot be marked DEFERRABLE\n>\n> With patch:\n> postgres=# ALTER FOREIGN TABLE t8 ADD CONSTRAINT t8_chk_1 CHECK(a<>1)\n> DEFERRABLE;\n> ERROR: \"t8\" is a foreign table\n> DETAIL: Foreign tables cannot have constraint triggers.\n>\n> We are creating a constraint trigger for DEFERRED check constraint and as\nper implementation of FOREIGN table we are restricting to have a constraint\ntrigger. I need to do more analysis before reaching to any conclusion, I\nthink we can restrict this gram.y itself.\n\n> 3) Insert check is not deferred to commit:\n> This insert check here is deferred to commit:\n> postgres=# CREATE TABLE tbl (i int ) partition by range (i);\n> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n> ALTER TABLE\n> postgres=# begin;\n> BEGIN\n> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n> SET CONSTRAINTS\n> postgres=*# INSERT INTO tbl values (1);\n> INSERT 0 1\n> postgres=*# commit;\n> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> DETAIL: Failing row contains (1).\n>\n> But the check here is not deferred to commit:\n> postgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\n> by range (i);\n> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\n> CREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> postgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\n> ALTER TABLE\n> postgres=# begin;\n> BEGIN\n> postgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\n> SET CONSTRAINTS\n> postgres=*# INSERT INTO tbl values (1);\n> ERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\n> DETAIL: Failing row contains (1).\n>\n> Fixed in V3 patch.\n\n> 4) There is a new warning popping up now:\n> CREATE TABLE tbl_new_3 (i int check(i<>0)) partition by range (i);\n> CREATE FOREIGN TABLE ftbl_new_3 PARTITION OF tbl_new_3 FOR VALUES FROM\n> (40) TO (50) server s1;\n> postgres=# ALTER TABLE tbl_new_3 ADD CONSTRAINT tbl_new_3_chk\n> CHECK(i<>1) DEFERRABLE;\n> WARNING: unexpected pg_constraint record found for relation \"tbl_new_3\"\n> ERROR: \"ftbl_new_3\" is a foreign table\n> DETAIL: Foreign tables cannot have constraint triggers.\n>\n> Fixed in V3 patch.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Sep 14, 2023 at 9:57 AM vignesh C <[email protected]> wrote:2) I was not sure, if the error message change was intentional:\n2a)\nIn Head:\nCREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\nERROR: misplaced DEFERRABLE 
clause\nLINE 1: CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER...\n ^\npostgres=# CREATE FOREIGN TABLE t9(a int CHECK(a<>0) DEFERRABLE) SERVER s1;\nERROR: \"t9\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\n\n2b)\nIn Head:\npostgres=# CREATE FOREIGN TABLE t2(a int CHECK(a<>0)) SERVER s1;\nCREATE FOREIGN TABLE\npostgres=# ALTER FOREIGN TABLE t2 ADD CONSTRAINT t2_chk_1 CHECK(a<>1)\nDEFERRABLE;\nERROR: CHECK constraints cannot be marked DEFERRABLE\n\nWith patch:\npostgres=# ALTER FOREIGN TABLE t8 ADD CONSTRAINT t8_chk_1 CHECK(a<>1)\nDEFERRABLE;\nERROR: \"t8\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\nWe are creating a constraint trigger for DEFERRED check constraint and as per implementation of FOREIGN table we are restricting to have a constraint trigger. I need to do more analysis before reaching to any conclusion, I think we can restrict this gram.y itself.\n3) Insert check is not deferred to commit:\nThis insert check here is deferred to commit:\npostgres=# CREATE TABLE tbl (i int ) partition by range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nINSERT 0 1\npostgres=*# commit;\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\n\nBut the check here is not deferred to commit:\npostgres=# CREATE TABLE tbl (i int check(i<>0) DEFERRABLE) partition\nby range (i);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES FROM (0) TO (10);\nCREATE TABLE tbl_2 PARTITION OF tbl FOR VALUES FROM (20) TO (30);\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\npostgres=# ALTER TABLE tbl ADD CONSTRAINT tbl_chk_1 CHECK(i<>1) DEFERRABLE;\nALTER TABLE\npostgres=# begin;\nBEGIN\npostgres=*# SET CONSTRAINTS tbl_chk_1 DEFERRED;\nSET CONSTRAINTS\npostgres=*# INSERT INTO tbl values (1);\nERROR: new row for relation \"tbl_1\" violates check constraint \"tbl_chk_1\"\nDETAIL: Failing row contains (1).\nFixed in V3 patch.\n4) There is a new warning popping up now:\nCREATE TABLE tbl_new_3 (i int check(i<>0)) partition by range (i);\nCREATE FOREIGN TABLE ftbl_new_3 PARTITION OF tbl_new_3 FOR VALUES FROM\n(40) TO (50) server s1;\npostgres=# ALTER TABLE tbl_new_3 ADD CONSTRAINT tbl_new_3_chk\nCHECK(i<>1) DEFERRABLE;\nWARNING: unexpected pg_constraint record found for relation \"tbl_new_3\"\nERROR: \"ftbl_new_3\" is a foreign table\nDETAIL: Foreign tables cannot have constraint triggers.\nFixed in V3 patch.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 20:31:22 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 4:14 PM Dean Rasheed <[email protected]>\nwrote:\n\n> > I think we should be able to defer one constraint like in the case of\n> > foreign key constraint\n> >\n>\n> Agreed. It should be possible to have a mix of deferred and immediate\n> constraint checks. In the example, the tbl_chk_1 is set deferred, but\n> it fails immediately, which is clearly not right.\n>\n> Fixed in V3 patch.\n\n> I would say that it's reasonable to limit the scope of this patch to\n> table constraints only, and leave domain constraints to a possible\n> follow-up patch.\n>\n> Sure, Agree.\n\n> A few other review comments:\n>\n> 1. The following produces a WARNING (possibly the same issue already\n> reported):\n>\n> CREATE TABLE foo (a int, b int);\n> ALTER TABLE foo ADD CONSTRAINT a_check CHECK (a > 0);\n> ALTER TABLE foo ADD CONSTRAINT b_check CHECK (b > 0) DEFERRABLE;\n>\n> WARNING: unexpected pg_constraint record found for relation \"foo\"\n>\n> fixed in V3 patch.\n\n> 2. I think that equalTupleDescs() should compare the new fields, when\n> comparing the 2 sets of check constraints.\n>\n> Fixed in V3 patch.\n\n> 3. The constraint exclusion code in the planner should ignore\n> deferrable check constraints (see get_relation_constraints() in\n> src/backend/optimizer/util/plancat.c), otherwise it might incorrectly\n> exclude a relation on the basis of a constraint that is temporarily\n> violated, and return incorrect query results. For example:\n>\n> CREATE TABLE foo (a int);\n> CREATE TABLE foo_c1 () INHERITS (foo);\n> CREATE TABLE foo_c2 () INHERITS (foo);\n> ALTER TABLE foo_c2 ADD CONSTRAINT cc CHECK (a != 5) INITIALLY DEFERRED;\n>\n> BEGIN;\n> INSERT INTO foo_c2 VALUES (5);\n> SET LOCAL constraint_exclusion TO off;\n> SELECT * FROM foo WHERE a = 5;\n> SET LOCAL constraint_exclusion TO on;\n> SELECT * FROM foo WHERE a = 5;\n> ROLLBACK;\n>\n> Fixed in V3 patch.\n\n> 4. The code in MergeWithExistingConstraint() should prevent inherited\n> constraints being merged if their deferrable properties don't match\n> (as MergeConstraintsIntoExisting() does, since\n> constraints_equivalent() tests the deferrable fields). I.e., the\n> following should fail to merge the constraints, since they don't\n> match:\n>\n> DROP TABLE IF EXISTS p,c;\n>\n> CREATE TABLE p (a int, b int);\n> ALTER TABLE p ADD CONSTRAINT c1 CHECK (a > 0) DEFERRABLE;\n> ALTER TABLE p ADD CONSTRAINT c2 CHECK (b > 0);\n>\n> CREATE TABLE c () INHERITS (p);\n> ALTER TABLE c ADD CONSTRAINT c1 CHECK (a > 0);\n> ALTER TABLE c ADD CONSTRAINT c2 CHECK (b > 0) DEFERRABLE;\n>\n> I.e., that should produce an error, as happens if c is made to inherit\n> p *after* the constraints have been added.\n>\n> Fixed in V3 patch.\n\n> 5. Instead of just adding the new fields to the end of the ConstrCheck\n> struct, and to the end of lists of function parameters like\n> StoreRelCheck(), and other related code, it would be more logical to\n> put them immediately before the valid/invalid entries, to match the\n> order of constraint properties in pg_constraint, and functions like\n> CreateConstraintEntry().\n>\n> Fixed in V3 patch.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Sep 19, 2023 at 4:14 PM Dean Rasheed <[email protected]> wrote:\n> I think we should be able to defer one constraint like in the case of\n> foreign key constraint\n>\n\nAgreed. It should be possible to have a mix of deferred and immediate\nconstraint checks. 
In the example, the tbl_chk_1 is set deferred, but\nit fails immediately, which is clearly not right.\nFixed in V3 patch. \nI would say that it's reasonable to limit the scope of this patch to\ntable constraints only, and leave domain constraints to a possible\nfollow-up patch.\nSure, Agree. \nA few other review comments:\n\n1. The following produces a WARNING (possibly the same issue already reported):\n\nCREATE TABLE foo (a int, b int);\nALTER TABLE foo ADD CONSTRAINT a_check CHECK (a > 0);\nALTER TABLE foo ADD CONSTRAINT b_check CHECK (b > 0) DEFERRABLE;\n\nWARNING: unexpected pg_constraint record found for relation \"foo\"\nfixed in V3 patch. \n2. I think that equalTupleDescs() should compare the new fields, when\ncomparing the 2 sets of check constraints.\nFixed in V3 patch. \n3. The constraint exclusion code in the planner should ignore\ndeferrable check constraints (see get_relation_constraints() in\nsrc/backend/optimizer/util/plancat.c), otherwise it might incorrectly\nexclude a relation on the basis of a constraint that is temporarily\nviolated, and return incorrect query results. For example:\n\nCREATE TABLE foo (a int);\nCREATE TABLE foo_c1 () INHERITS (foo);\nCREATE TABLE foo_c2 () INHERITS (foo);\nALTER TABLE foo_c2 ADD CONSTRAINT cc CHECK (a != 5) INITIALLY DEFERRED;\n\nBEGIN;\nINSERT INTO foo_c2 VALUES (5);\nSET LOCAL constraint_exclusion TO off;\nSELECT * FROM foo WHERE a = 5;\nSET LOCAL constraint_exclusion TO on;\nSELECT * FROM foo WHERE a = 5;\nROLLBACK;\nFixed in V3 patch. \n4. The code in MergeWithExistingConstraint() should prevent inherited\nconstraints being merged if their deferrable properties don't match\n(as MergeConstraintsIntoExisting() does, since\nconstraints_equivalent() tests the deferrable fields). I.e., the\nfollowing should fail to merge the constraints, since they don't\nmatch:\n\nDROP TABLE IF EXISTS p,c;\n\nCREATE TABLE p (a int, b int);\nALTER TABLE p ADD CONSTRAINT c1 CHECK (a > 0) DEFERRABLE;\nALTER TABLE p ADD CONSTRAINT c2 CHECK (b > 0);\n\nCREATE TABLE c () INHERITS (p);\nALTER TABLE c ADD CONSTRAINT c1 CHECK (a > 0);\nALTER TABLE c ADD CONSTRAINT c2 CHECK (b > 0) DEFERRABLE;\n\nI.e., that should produce an error, as happens if c is made to inherit\np *after* the constraints have been added.\nFixed in V3 patch.\n5. Instead of just adding the new fields to the end of the ConstrCheck\nstruct, and to the end of lists of function parameters like\nStoreRelCheck(), and other related code, it would be more logical to\nput them immediately before the valid/invalid entries, to match the\norder of constraint properties in pg_constraint, and functions like\nCreateConstraintEntry().\nFixed in V3 patch.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 20:31:34 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 8:31 PM Himanshu Upadhyaya <\[email protected]> wrote:\n\nV3 patch attached.\n\n>\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Oct 2023 20:33:11 +0530",
"msg_from": "Himanshu Upadhyaya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "Himanshu Upadhyaya <[email protected]> writes:\n> V3 patch attached.\n\nSorry for not weighing in on this before, but ... is this a feature\nwe want at all? We are very clear in the existing docs that CHECK\nconditions must be immutable [1], and that's not something we can\neasily relax because if they are not then it's unclear when we need\nto recheck them to ensure they stay satisfied. But here we have a\nfeature whose only possible use is with constraints that *aren't*\nimmutable; else we might as well just check them immediately.\nSo that gives rise to a bunch of subtle questions about exactly what\nproperties a user-written constraint would need to have to guarantee\nsane semantics given this implementation. Can we define what those\nproperties are, or what the ensuing semantic guarantees are exactly?\nCan we explain those things clearly enough that the average user would\nhave a shot at writing a valid deferred constraint? Is a deferred\nconstraint having those properties likely to be actually useful?\n\nI don't know the answers to these questions, but it troubles me a\nlot that zero consideration appears to have been given to them.\nI do not think we should put more effort into this patch unless\nsatisfactory answers are forthcoming.\n\n\t\t\tregards, tom lane\n\n[1] See Note at the bottom of \"5.4.1. Check Constraints\" here:\nhttps://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-CHECK-CONSTRAINTS\n\n\n",
"msg_date": "Mon, 02 Oct 2023 15:25:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 12:25 PM Tom Lane <[email protected]> wrote:\n\n> Himanshu Upadhyaya <[email protected]> writes:\n> > V3 patch attached.\n>\n> Sorry for not weighing in on this before, but ... is this a feature\n> we want at all? We are very clear in the existing docs that CHECK\n> conditions must be immutable [1], and that's not something we can\n> easily relax because if they are not then it's unclear when we need\n> to recheck them to ensure they stay satisfied.\n\n\nAgreed. I'm not sold on conforming to the standard being an appropriate\nideal here. Either we already don't because our check constraints are\nimmutable, or I'm missing what use case the committee had in mind when they\ndesigned this feature. In any case, its absence doesn't seem that sorely\nmissed, and the OP's only actual example would require relaxing the\nimmutable property which I disagree with. We have deferrable triggers to\nserve that posited use case.\n\nDavid J.\n\nOn Mon, Oct 2, 2023 at 12:25 PM Tom Lane <[email protected]> wrote:Himanshu Upadhyaya <[email protected]> writes:\n> V3 patch attached.\n\nSorry for not weighing in on this before, but ... is this a feature\nwe want at all? We are very clear in the existing docs that CHECK\nconditions must be immutable [1], and that's not something we can\neasily relax because if they are not then it's unclear when we need\nto recheck them to ensure they stay satisfied.Agreed. I'm not sold on conforming to the standard being an appropriate ideal here. Either we already don't because our check constraints are immutable, or I'm missing what use case the committee had in mind when they designed this feature. In any case, its absence doesn't seem that sorely missed, and the OP's only actual example would require relaxing the immutable property which I disagree with. We have deferrable triggers to serve that posited use case.David J.",
"msg_date": "Mon, 2 Oct 2023 13:16:41 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On 10/2/23 21:25, Tom Lane wrote:\n> Himanshu Upadhyaya <[email protected]> writes:\n>> V3 patch attached.\n> \n> Sorry for not weighing in on this before, but ... is this a feature\n> we want at all?\n\nFor standards conformance, I vote yes.\n\n> We are very clear in the existing docs that CHECK\n> conditions must be immutable [1], and that's not something we can\n> easily relax because if they are not then it's unclear when we need\n> to recheck them to ensure they stay satisfied.\n\nThat is what the *user* documentation says, but we both know it isn't true.\n\nHere is a short conversation you and I had about five years ago where \nyou defended the non-immutability of CHECK constraints:\nhttps://www.postgresql.org/message-id/flat/12539.1544107316%40sss.pgh.pa.us\n\n> But here we have a\n> feature whose only possible use is with constraints that *aren't*\n> immutable; else we might as well just check them immediately.\n\nI disagree with this. The whole point of deferring constraints is to be \nable to do some cleanup before the consistency is checked.\n\n> So that gives rise to a bunch of subtle questions about exactly what\n> properties a user-written constraint would need to have to guarantee\n> sane semantics given this implementation. Can we define what those\n> properties are, or what the ensuing semantic guarantees are exactly?\n> Can we explain those things clearly enough that the average user would\n> have a shot at writing a valid deferred constraint?\n\nA trivial example is CHECK (c IS NOT NULL) which, according to the \nstandard, is the only way to check for such a condition. The NOT NULL \nsyntax is explicitly translated to that by 11.4 <column definition> SR \n17.a. We implement it a bit differently, but that does not negate the \nusefulness of being able to defer it. In fact, all of the work Álvaro \nis currently doing is mainly (or even fully) to be able to defer such a \nconstraint.\n\n> Is a deferred\n> constraint having those properties likely to be actually useful?\n\nI believe the answer is yes.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 3 Oct 2023 00:56:20 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "På fredag 07. juli 2023 kl. 13:50:44, skrev Dilip Kumar <[email protected] \n<mailto:[email protected]>>:\nOn Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Currently, there is no support for CHECK constraint DEFERRABLE in a create \ntable statement.\n> SQL standard specifies that CHECK constraint can be defined as DEFERRABLE.\n\nI think this is a valid argument that this is part of SQL standard so\nit would be good addition to PostgreSQL. So +1 for the feature.\n\nBut I am wondering whether there are some real-world use cases for\ndeferred CHECK/NOT NULL constraints? I mean like for foreign key\nconstraints if there is a cyclic dependency between two tables then\ndeferring the constraint is the simplest way to insert without error.\n\n\nThe real-world use case, at least for me, is when using an ORM. For large \nobject-graphs ORMs have a tendency to INSERT first with NULLs then UPDATE the \n“NOT NULLs” later.\n\n“Rewrite the ORM” is not an option for most of us…\n\n\n\n--\n\nAndreas Joseph Krogh",
"msg_date": "Tue, 3 Oct 2023 02:06:00 +0200 (CEST)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "Vik Fearing <[email protected]> writes:\n> On 10/2/23 21:25, Tom Lane wrote:\n>> Sorry for not weighing in on this before, but ... is this a feature\n>> we want at all?\n\n> For standards conformance, I vote yes.\n\nOnly if we can actually implement it in a defensible way, which this\npatch is far short of accomplishing.\n\n>> We are very clear in the existing docs that CHECK\n>> conditions must be immutable [1],\n\n> That is what the *user* documentation says, but we both know it isn't true.\n> Here is a short conversation you and I had about five years ago where \n> you defended the non-immutability of CHECK constraints:\n> https://www.postgresql.org/message-id/flat/12539.1544107316%40sss.pgh.pa.us\n\nWhat I intended to defend was not *checking* immutability strictly.\nOur CHECK constraint implementation is based very much on the assumption\nthat the constraints are immutable, and nobody has proposed that we\ntry to remove that assumption AFAIR. So I think the docs are fine\nas-is; anybody who wants to get into monotonically-weakening constraints\nis probably smart enough to work out for themselves whether it will\nfly or not.\n\nSo my problem with this patch is that it does nothing about that\nassumption, and yet the feature it adds seems useless without\nweakening the assumption. So what weaker assumption could we\nmake, and how would we modify the when-to-check rules to match\nthat, and what would it cost us in performance? Without good\nanswers to those questions, this patch is just a facade.\n\n> I disagree with this. The whole point of deferring constraints is to be \n> able to do some cleanup before the consistency is checked.\n\nWhat cleanup would you need that couldn't be performed beforehand\n(e.g. in a BEFORE INSERT/UPDATE trigger)? All the practical\nexamples that occur to me involve cross-row conditions, which\nCHECK is unsuitable to enforce --- at least, without doing a\nthorough implementation rethink.\n\nI continue to assert that basing this feature on the current\nCHECK implementation will produce nothing but a toy feature,\nthat's not only of little practical use but will be an active\nfoot-gun for people who expect it to do more than it can.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Oct 2023 20:08:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Monday, October 2, 2023, Andreas Joseph Krogh <[email protected]> wrote:\n\n> På fredag 07. juli 2023 kl. 13:50:44, skrev Dilip Kumar <\n> [email protected]>:\n>\n> On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Currently, there is no support for CHECK constraint DEFERRABLE in a\n> create table statement.\n> > SQL standard specifies that CHECK constraint can be defined as\n> DEFERRABLE.\n>\n> I think this is a valid argument that this is part of SQL standard so\n> it would be good addition to PostgreSQL. So +1 for the feature.\n>\n> But I am wondering whether there are some real-world use cases for\n> deferred CHECK/NOT NULL constraints? I mean like for foreign key\n> constraints if there is a cyclic dependency between two tables then\n> deferring the constraint is the simplest way to insert without error.\n>\n>\n>\n> The real-world use case, at least for me, is when using an ORM. For large\n> object-graphs ORMs have a tendency to INSERT first with NULLs then UPDATE\n> the “NOT NULLs” later.\n>\n> “Rewrite the ORM” is not an option for most of us…\n>\nBetween this and Vik comment it sounds like we should probably require a\npatch in this area to solve both the not null and check constraint deferral\nomissions then, not just one of them (alternatively, let’s solve the not\nnull one first).\n\nDavid J.\n\nOn Monday, October 2, 2023, Andreas Joseph Krogh <[email protected]> wrote:På fredag 07. juli 2023 kl. 13:50:44, skrev Dilip Kumar <[email protected]>:On Wed, Jul 5, 2023 at 3:08 PM Himanshu Upadhyaya<[email protected]> wrote:>> Hi,>> Currently, there is no support for CHECK constraint DEFERRABLE in a create table statement.> SQL standard specifies that CHECK constraint can be defined as DEFERRABLE.I think this is a valid argument that this is part of SQL standard soit would be good addition to PostgreSQL. So +1 for the feature.But I am wondering whether there are some real-world use cases fordeferred CHECK/NOT NULL constraints? I mean like for foreign keyconstraints if there is a cyclic dependency between two tables thendeferring the constraint is the simplest way to insert without error. The real-world use case, at least for me, is when using an ORM. For large object-graphs ORMs have a tendency to INSERT first with NULLs then UPDATE the “NOT NULLs” later.“Rewrite the ORM” is not an option for most of us…Between this and Vik comment it sounds like we should probably require a patch in this area to solve both the not null and check constraint deferral omissions then, not just one of them (alternatively, let’s solve the not null one first).David J.",
"msg_date": "Tue, 3 Oct 2023 06:53:35 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Mon, Oct 2, 2023 at 10:25 PM Tom Lane <[email protected]> wrote:\n> But here we have a\n> feature whose only possible use is with constraints that *aren't*\n> immutable; else we might as well just check them immediately.\n\nI'm a little bit confused by this whole discussion because surely this\nstatement is just completely false. The example in the original post\ndemonstrates that clearly.\n\nThe use case for a deferred check constraint is exactly the same as\nthe use case for a deferred foreign key constraint or a deferred\nuniqueness constraint, which is that you might have a constraint that\nwill be temporarily false while the transaction is in progress, but\ntrue by the time the transaction actually commits, and you might like\nthe transaction to succeed instead of failing in such a case. You seem\nto be imagining that the constraint itself might be returning mutable\nanswers on the same inputs, but that's not what this is about at all.\n\nI'm not here - at least, not right now - to take a position on whether\nthe patch itself is any good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 16:11:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 10:05 AM David G. Johnston\n<[email protected]> wrote:\n>> The real-world use case, at least for me, is when using an ORM. For large object-graphs ORMs have a tendency to INSERT first with NULLs then UPDATE the “NOT NULLs” later.\n>>\n>> “Rewrite the ORM” is not an option for most of us…\n>\n> Between this and Vik comment it sounds like we should probably require a patch in this area to solve both the not null and check constraint deferral omissions then, not just one of them (alternatively, let’s solve the not null one first).\n\nI have a couple of problems with this comment:\n\n1. I don't know which of Vik's comments you're talking about here, but\nit seems to me that Vik is generally in favor of this feature, so I'm\na bit surprised to hear that one of his comments led you to think that\nit should be burdened with additional requirements.\n\n2. I don't think it's a good idea for the same patch to try to solve\ntwo problems unless they are so closely related that solving one\nwithout solving the other is not sensible. It is a good policy for the\ncommunity to accept incremental progress provided it doesn't break\nthings along the way. Smaller patches are way easier to get committed,\nand then we get some of the feature sooner instead of all of it some\nmore distant point in the future or never. Furthemore, forcing\nadditional requirements onto patch submitters as a condition of patch\nacceptance is extremely demoralizing to submitters, and we should not\ndo it without an excellent reason.\n\nMind you, I'm not against this patch handling both CHECK and NOT NULL\nconstraints if that's the most sensible way forward, especially in\nview of Álvaro's recent work in that area. But it sort of sounds like\nyou're just trying to sink the patch despite it being a feature that\nis both in the SQL standard and has real use cases which have been\nmentioned on the thread, and I don't really like that. In the interest\nof full disclosure, I do work at the same company as Dilip and\nHimanshu, so I might be biased. But I feel like I would be in favor of\nthis feature no matter who proposed it, as long as it was\nwell-implemented. It's always struck me as odd that we allow deferring\nsome types of constraints but not others, and I don't understand why\nwe'd want to block closing that gap unless there is some concrete\ndownside to so doing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 16:26:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 1:27 PM Robert Haas <[email protected]> wrote:\n\n> On Tue, Oct 3, 2023 at 10:05 AM David G. Johnston\n> <[email protected]> wrote:\n> >> The real-world use case, at least for me, is when using an ORM. For\n> large object-graphs ORMs have a tendency to INSERT first with NULLs then\n> UPDATE the “NOT NULLs” later.\n> >>\n> >> “Rewrite the ORM” is not an option for most of us…\n> >\n> > Between this and Vik comment it sounds like we should probably require a\n> patch in this area to solve both the not null and check constraint deferral\n> omissions then, not just one of them (alternatively, let’s solve the not\n> null one first).\n>\n> I have a couple of problems with this comment:\n>\n> 1. I don't know which of Vik's comments you're talking about here, but\n> it seems to me that Vik is generally in favor of this feature, so I'm\n> a bit surprised to hear that one of his comments led you to think that\n> it should be burdened with additional requirements.\n>\n\nSpecifically, Vik commented that the standard requires implementing NOT\nNULL as a check constraint and thus needs to allow for deferrable check\nconstraints in order for not null checks to be deferrable. I agree fully\nthat deferring a not null check makes for an excellent use case. But we've\nalso decided to make NOT NULL its own thing, contrary to the standard.\nThus my understanding for why this behavior is standard mandated is that it\nis to allow for deferrable not null constraints and thus our goal should be\nto make our not null constraints deferrable.\n\nThe only other example case of wanting a deferrable check constraint\ninvolved the usage of a function that we expressly prohibit as a check\nconstraint. The argument, which I weakly support, is that if our adding\ndeferrable check constraints increases the frequency of such functions\nbeing created and used by our users, then we should continue to prohibit\nsuch deferrability and require those users to properly implement triggers\nwhich can then be deferred. With a deferrable not null constraint any\nother reasonable check constraints can simply evaluate to null during the\nperiod where they should be deferred - because their column inputs are\ndeferred nulls - and then we be fully evaluated when the inputs ultimately\nend up non-null.\n\n2. I don't think it's a good idea for the same patch to try to solve\n> two problems unless they are so closely related that solving one\n> without solving the other is not sensible.\n\n\nA NOT NULL constraint apparently is just a special case of a check\nconstraint which seems closely related enough to match your definition.\n\nBut I guess you are right, I was trying to say no to this patch, and yes to\nthe not null deferral idea, without being so explicit and final about it.\n\nWhile the coders are welcome to work on whatever they wish, the effort\nspent on this just doesn't seem that valuable compared to what is already\nin the queue being worked on needing reviews and commits. I can live with a\ngap in our standards conformance here since I haven't observed any uses\ncases that are impossible to accomplish except by adding this specific\nfeature which would only cover NOT NULL constraints if the syntactical form\nfor creating them were not used (which I suppose if we fail to provide NOT\nNULL DEFERRABLE that would argue for at least giving this work-around...)\n\nDavid J.\n\nOn Mon, Oct 9, 2023 at 1:27 PM Robert Haas <[email protected]> wrote:On Tue, Oct 3, 2023 at 10:05 AM David G. 
Johnston\n<[email protected]> wrote:\n>> The real-world use case, at least for me, is when using an ORM. For large object-graphs ORMs have a tendency to INSERT first with NULLs then UPDATE the “NOT NULLs” later.\n>>\n>> “Rewrite the ORM” is not an option for most of us…\n>\n> Between this and Vik comment it sounds like we should probably require a patch in this area to solve both the not null and check constraint deferral omissions then, not just one of them (alternatively, let’s solve the not null one first).\n\nI have a couple of problems with this comment:\n\n1. I don't know which of Vik's comments you're talking about here, but\nit seems to me that Vik is generally in favor of this feature, so I'm\na bit surprised to hear that one of his comments led you to think that\nit should be burdened with additional requirements.Specifically, Vik commented that the standard requires implementing NOT NULL as a check constraint and thus needs to allow for deferrable check constraints in order for not null checks to be deferrable. I agree fully that deferring a not null check makes for an excellent use case. But we've also decided to make NOT NULL its own thing, contrary to the standard. Thus my understanding for why this behavior is standard mandated is that it is to allow for deferrable not null constraints and thus our goal should be to make our not null constraints deferrable.The only other example case of wanting a deferrable check constraint involved the usage of a function that we expressly prohibit as a check constraint. The argument, which I weakly support, is that if our adding deferrable check constraints increases the frequency of such functions being created and used by our users, then we should continue to prohibit such deferrability and require those users to properly implement triggers which can then be deferred. With a deferrable not null constraint any other reasonable check constraints can simply evaluate to null during the period where they should be deferred - because their column inputs are deferred nulls - and then we be fully evaluated when the inputs ultimately end up non-null.\n2. I don't think it's a good idea for the same patch to try to solve\ntwo problems unless they are so closely related that solving one\nwithout solving the other is not sensible.A NOT NULL constraint apparently is just a special case of a check constraint which seems closely related enough to match your definition.But I guess you are right, I was trying to say no to this patch, and yes to the not null deferral idea, without being so explicit and final about it.While the coders are welcome to work on whatever they wish, the effort spent on this just doesn't seem that valuable compared to what is already in the queue being worked on needing reviews and commits. I can live with a gap in our standards conformance here since I haven't observed any uses cases that are impossible to accomplish except by adding this specific feature which would only cover NOT NULL constraints if the syntactical form for creating them were not used (which I suppose if we fail to provide NOT NULL DEFERRABLE that would argue for at least giving this work-around...)David J.",
"msg_date": "Mon, 9 Oct 2023 14:07:35 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
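A minimal SQL sketch of the pattern being debated above. The DEFERRABLE clause on NOT NULL is hypothetical (PostgreSQL does not accept it today), and the table and column names are invented purely for illustration.

-- Hypothetical syntax under discussion; not valid in PostgreSQL today.
CREATE TABLE orders (
    id          bigint PRIMARY KEY,
    customer_id bigint NOT NULL DEFERRABLE INITIALLY DEFERRED,
    -- An ordinary CHECK constraint passes while customer_id is still NULL,
    -- because it evaluates to NULL rather than false.
    CHECK (customer_id IS NULL OR customer_id > 0)
);

BEGIN;
INSERT INTO orders (id) VALUES (1);              -- ORM inserts with NULLs first
UPDATE orders SET customer_id = 42 WHERE id = 1; -- fills in the "NOT NULLs" later
COMMIT;                                          -- the deferred check would fire here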
{
"msg_contents": "On Mon, Oct 9, 2023 at 5:07 PM David G. Johnston\n<[email protected]> wrote:\n>> 2. I don't think it's a good idea for the same patch to try to solve\n>> two problems unless they are so closely related that solving one\n>> without solving the other is not sensible.\n>\n> A NOT NULL constraint apparently is just a special case of a check constraint which seems closely related enough to match your definition.\n\nYes, that might be true. I suppose I'd like to hear from the patch\nauthor(s) about that. I'm somewhat coming around to your idea that\nmaybe both should be covered together, but I'm not the one writing the\npatch.\n\n> But I guess you are right, I was trying to say no to this patch, and yes to the not null deferral idea, without being so explicit and final about it.\n\nBut this, I dislike, for reasons which I'm sure you can appreciate. As\nyou say, people are free to choose their own development priorities. I\ndon't need this feature for anything either, personally, but my need\nor lack of it for some particular feature doesn't define the objective\nusefulness thereof. And to be honest, if I were trying to step back\nfrom my personal needs, I'd say this seems likely to be more useful\nthan 75% of what's in the CommitFest. Your judgement can be different\nand that's fine too, but I think the argument for calling this useless\nis weak, especially given that several people have already mentioned\nways that they would like to use it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:12:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
},
{
"msg_contents": "On 10/10/23 15:12, Robert Haas wrote:\n> On Mon, Oct 9, 2023 at 5:07 PM David G. Johnston\n> <[email protected]> wrote:\n>>> 2. I don't think it's a good idea for the same patch to try to solve\n>>> two problems unless they are so closely related that solving one\n>>> without solving the other is not sensible.\n>>\n>> A NOT NULL constraint apparently is just a special case of a check constraint which seems closely related enough to match your definition.\n> \n> Yes, that might be true. I suppose I'd like to hear from the patch\n> author(s) about that. I'm somewhat coming around to your idea that\n> maybe both should be covered together, but I'm not the one writing the\n> patch.\n\nÁlvaro Herrera has put (and is still putting) immense effort into \nturning NOT NULL into a CHECK constraint.\n\nHonestly, I don't see why the two patches need to be combined.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 02:36:02 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CHECK Constraint Deferrable"
}
] |
[
{
"msg_contents": "Hi,\n\nHeap-Only Tuple (HOT) updates are a significant performance\nenhancement, as they prevent unnecessary page writes. However, HOT\ncomes with a caveat: it means that if we have lots of available space\nearlier on in the relation, it can only be used for new tuples or in\ncases where there's insufficient space on a page for an UPDATE to use\nHOT.\n\nThis mechanism limits our options for condensing tables, forcing us to\nresort to methods like running VACUUM FULL/CLUSTER or using external\ntools like pg_repack. These either require exclusive locks (which will\nbe a deal-breaker on large tables on a production system), or there's\nrisks involved. Of course we can always flood pages with new versions\nof a row until it's forced onto an early page, but that shouldn't be\nnecessary.\n\nConsidering these trade-offs, I'd like to propose an option to allow\nsuperusers to disable HOT on tables. The intent is to trade some\nperformance benefits for the ability to reduce the size of a table\nwithout the typical locking associated with it.\n\nThis feature could be used to shrink tables in one of two ways:\ntemporarily disabling HOT until DML operations have compacted the data\ninto a smaller area, or performing a mass update on later rows to\nrelocate them to an earlier location, probably in stages. Of course,\nthis would need to be used in conjunction with a VACUUM operation.\n\nAdmittedly this isn't ideal, and it would be better if we had an\noperation that could do this (e.g. VACUUM COMPACT <table_name>), or an\noption that causes some operations to avoid HOT when it detects an\namount of free space over a threshold, but in lieu of those, I thought\nthis would at least allow users to help themselves when running into\ndisk space issues.\n\nThoughts?\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:44:31 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> Heap-Only Tuple (HOT) updates are a significant performance\n> enhancement, as they prevent unnecessary page writes. However, HOT\n> comes with a caveat: it means that if we have lots of available space\n> earlier on in the relation, it can only be used for new tuples or in\n> cases where there's insufficient space on a page for an UPDATE to use\n> HOT.\n>\n> This mechanism limits our options for condensing tables, forcing us to\n> resort to methods like running VACUUM FULL/CLUSTER or using external\n> tools like pg_repack. These either require exclusive locks (which will\n> be a deal-breaker on large tables on a production system), or there's\n> risks involved. Of course we can always flood pages with new versions\n> of a row until it's forced onto an early page, but that shouldn't be\n> necessary.\n>\n> Considering these trade-offs, I'd like to propose an option to allow\n> superusers to disable HOT on tables. The intent is to trade some\n> performance benefits for the ability to reduce the size of a table\n> without the typical locking associated with it.\n\nInteresting use case, but I think that disabling HOT would be missing\nthe forest for the trees. I think that a feature that disables\nblock-local updates for pages > some offset would be a better solution\nto your issue: Normal updates also prefer the new tuple to be stored\nin the same pages as the old tuple if at all possible, so disabling\nHOT wouldn't solve the issue of tuples residing in the tail of your\ntable - at least not while there is still empty space in those pages.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:57:40 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > Heap-Only Tuple (HOT) updates are a significant performance\n> > enhancement, as they prevent unnecessary page writes. However, HOT\n> > comes with a caveat: it means that if we have lots of available space\n> > earlier on in the relation, it can only be used for new tuples or in\n> > cases where there's insufficient space on a page for an UPDATE to use\n> > HOT.\n> >\n> > This mechanism limits our options for condensing tables, forcing us to\n> > resort to methods like running VACUUM FULL/CLUSTER or using external\n> > tools like pg_repack. These either require exclusive locks (which will\n> > be a deal-breaker on large tables on a production system), or there's\n> > risks involved. Of course we can always flood pages with new versions\n> > of a row until it's forced onto an early page, but that shouldn't be\n> > necessary.\n> >\n> > Considering these trade-offs, I'd like to propose an option to allow\n> > superusers to disable HOT on tables. The intent is to trade some\n> > performance benefits for the ability to reduce the size of a table\n> > without the typical locking associated with it.\n>\n> Interesting use case, but I think that disabling HOT would be missing\n> the forest for the trees. I think that a feature that disables\n> block-local updates for pages > some offset would be a better solution\n> to your issue: Normal updates also prefer the new tuple to be stored\n> in the same pages as the old tuple if at all possible, so disabling\n> HOT wouldn't solve the issue of tuples residing in the tail of your\n> table - at least not while there is still empty space in those pages.\n\nHmm... I see your point. It's when an UPDATE isn't going to land on\nthe same page that it relocates to the earlier available page. So I\nguess I'm after whatever mechanism would allow that to happen reliably\nand predictably.\n\nSo $subject should really be \"Allow forcing UPDATEs off the same page\".\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:02:55 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 13:03, Thom Brown <[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > > Heap-Only Tuple (HOT) updates are a significant performance\n> > > enhancement, as they prevent unnecessary page writes. However, HOT\n> > > comes with a caveat: it means that if we have lots of available space\n> > > earlier on in the relation, it can only be used for new tuples or in\n> > > cases where there's insufficient space on a page for an UPDATE to use\n> > > HOT.\n> > >\n> > > This mechanism limits our options for condensing tables, forcing us to\n> > > resort to methods like running VACUUM FULL/CLUSTER or using external\n> > > tools like pg_repack. These either require exclusive locks (which will\n> > > be a deal-breaker on large tables on a production system), or there's\n> > > risks involved. Of course we can always flood pages with new versions\n> > > of a row until it's forced onto an early page, but that shouldn't be\n> > > necessary.\n> > >\n> > > Considering these trade-offs, I'd like to propose an option to allow\n> > > superusers to disable HOT on tables. The intent is to trade some\n> > > performance benefits for the ability to reduce the size of a table\n> > > without the typical locking associated with it.\n> >\n> > Interesting use case, but I think that disabling HOT would be missing\n> > the forest for the trees. I think that a feature that disables\n> > block-local updates for pages > some offset would be a better solution\n> > to your issue: Normal updates also prefer the new tuple to be stored\n> > in the same pages as the old tuple if at all possible, so disabling\n> > HOT wouldn't solve the issue of tuples residing in the tail of your\n> > table - at least not while there is still empty space in those pages.\n>\n> Hmm... I see your point. It's when an UPDATE isn't going to land on\n> the same page that it relocates to the earlier available page. So I\n> guess I'm after whatever mechanism would allow that to happen reliably\n> and predictably.\n>\n> So $subject should really be \"Allow forcing UPDATEs off the same page\".\n\nYou'd probably want to do that only for a certain range of the table -\nfor a table with 1GB of data and 3GB of bloat there is no good reason\nto force page-crossing updates in the first 1GB of the table - all\ntuples of the table will eventually reside there, so why would you\ntake a performance penalty and move the tuples from inside that range\nto inside that same range?\n\nSomething else to note: Indexes would suffer some (large?) amount of\nbloat in this process, as you would be updating a lot of tuples\nwithout the HOT optimization, thus increasing the work to be done by\nVACUUM.\nThis may result in more bloat in indexes than what you get back from\nshrinking the table.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 14:12:15 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 13:12, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 13:03, Thom Brown <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent\n> > <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > > > Heap-Only Tuple (HOT) updates are a significant performance\n> > > > enhancement, as they prevent unnecessary page writes. However, HOT\n> > > > comes with a caveat: it means that if we have lots of available space\n> > > > earlier on in the relation, it can only be used for new tuples or in\n> > > > cases where there's insufficient space on a page for an UPDATE to use\n> > > > HOT.\n> > > >\n> > > > This mechanism limits our options for condensing tables, forcing us to\n> > > > resort to methods like running VACUUM FULL/CLUSTER or using external\n> > > > tools like pg_repack. These either require exclusive locks (which will\n> > > > be a deal-breaker on large tables on a production system), or there's\n> > > > risks involved. Of course we can always flood pages with new versions\n> > > > of a row until it's forced onto an early page, but that shouldn't be\n> > > > necessary.\n> > > >\n> > > > Considering these trade-offs, I'd like to propose an option to allow\n> > > > superusers to disable HOT on tables. The intent is to trade some\n> > > > performance benefits for the ability to reduce the size of a table\n> > > > without the typical locking associated with it.\n> > >\n> > > Interesting use case, but I think that disabling HOT would be missing\n> > > the forest for the trees. I think that a feature that disables\n> > > block-local updates for pages > some offset would be a better solution\n> > > to your issue: Normal updates also prefer the new tuple to be stored\n> > > in the same pages as the old tuple if at all possible, so disabling\n> > > HOT wouldn't solve the issue of tuples residing in the tail of your\n> > > table - at least not while there is still empty space in those pages.\n> >\n> > Hmm... I see your point. It's when an UPDATE isn't going to land on\n> > the same page that it relocates to the earlier available page. So I\n> > guess I'm after whatever mechanism would allow that to happen reliably\n> > and predictably.\n> >\n> > So $subject should really be \"Allow forcing UPDATEs off the same page\".\n>\n> You'd probably want to do that only for a certain range of the table -\n> for a table with 1GB of data and 3GB of bloat there is no good reason\n> to force page-crossing updates in the first 1GB of the table - all\n> tuples of the table will eventually reside there, so why would you\n> take a performance penalty and move the tuples from inside that range\n> to inside that same range?\n\nI'm thinking more of a case of:\n\n<magic to stop UPDATES from landing on same page>\n\nUPDATE bigtable\nSET primary key = primary key\nWHERE ctid IN (\n SELECT ctid\n FROM bigtable\n ORDER BY ctid DESC\n LIMIT 100000);\n\n> Something else to note: Indexes would suffer some (large?) amount of\n> bloat in this process, as you would be updating a lot of tuples\n> without the HOT optimization, thus increasing the work to be done by\n> VACUUM.\n> This may result in more bloat in indexes than what you get back from\n> shrinking the table.\n\nThis could be the case, but I guess indexes are expendable to an\nextent, unlike tables.\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 13:38:44 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 14:39, Thom Brown <[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 13:12, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 13:03, Thom Brown <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > > > > Heap-Only Tuple (HOT) updates are a significant performance\n> > > > > enhancement, as they prevent unnecessary page writes. However, HOT\n> > > > > comes with a caveat: it means that if we have lots of available space\n> > > > > earlier on in the relation, it can only be used for new tuples or in\n> > > > > cases where there's insufficient space on a page for an UPDATE to use\n> > > > > HOT.\n> > > > >\n> > > > > This mechanism limits our options for condensing tables, forcing us to\n> > > > > resort to methods like running VACUUM FULL/CLUSTER or using external\n> > > > > tools like pg_repack. These either require exclusive locks (which will\n> > > > > be a deal-breaker on large tables on a production system), or there's\n> > > > > risks involved. Of course we can always flood pages with new versions\n> > > > > of a row until it's forced onto an early page, but that shouldn't be\n> > > > > necessary.\n> > > > >\n> > > > > Considering these trade-offs, I'd like to propose an option to allow\n> > > > > superusers to disable HOT on tables. The intent is to trade some\n> > > > > performance benefits for the ability to reduce the size of a table\n> > > > > without the typical locking associated with it.\n> > > >\n> > > > Interesting use case, but I think that disabling HOT would be missing\n> > > > the forest for the trees. I think that a feature that disables\n> > > > block-local updates for pages > some offset would be a better solution\n> > > > to your issue: Normal updates also prefer the new tuple to be stored\n> > > > in the same pages as the old tuple if at all possible, so disabling\n> > > > HOT wouldn't solve the issue of tuples residing in the tail of your\n> > > > table - at least not while there is still empty space in those pages.\n> > >\n> > > Hmm... I see your point. It's when an UPDATE isn't going to land on\n> > > the same page that it relocates to the earlier available page. So I\n> > > guess I'm after whatever mechanism would allow that to happen reliably\n> > > and predictably.\n> > >\n> > > So $subject should really be \"Allow forcing UPDATEs off the same page\".\n> >\n> > You'd probably want to do that only for a certain range of the table -\n> > for a table with 1GB of data and 3GB of bloat there is no good reason\n> > to force page-crossing updates in the first 1GB of the table - all\n> > tuples of the table will eventually reside there, so why would you\n> > take a performance penalty and move the tuples from inside that range\n> > to inside that same range?\n>\n> I'm thinking more of a case of:\n>\n> <magic to stop UPDATES from landing on same page>\n>\n> UPDATE bigtable\n> SET primary key = primary key\n> WHERE ctid IN (\n> SELECT ctid\n> FROM bigtable\n> ORDER BY ctid DESC\n> LIMIT 100000);\n\nSo what were you thinking of? A session GUC? A table option?\n\nThe benefit of a table option is that it is retained across sessions\nand thus allows tables that get enough updates to eventually get to a\ncleaner state. 
The main downside of such a table option is that it\nrequires a temporary table-level lock to update the parameter.\n\nThe benefit of a session GUC is that you can set it without impacting\nother sessions, but the downside is that you need to do the\nmaintenance in that session, and risk that cascading updates to other\ntables (e.g. through AFTER UPDATE triggers) are also impacted by this\nnon-local update GUC.\n\n> > Something else to note: Indexes would suffer some (large?) amount of\n> > bloat in this process, as you would be updating a lot of tuples\n> > without the HOT optimization, thus increasing the work to be done by\n> > VACUUM.\n> > This may result in more bloat in indexes than what you get back from\n> > shrinking the table.\n>\n> This could be the case, but I guess indexes are expendable to an\n> extent, unlike tables.\n\nI don't think that's accurate - index rebuilds are quite expensive.\nBut, that's besides the point of this thread.\n\nSomewhat related: did you consider using pg_repack instead of this\npotential feature?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 19:05:36 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 14:39, Thom Brown <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 13:12, Matthias van de Meent\n> > <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 13:03, Thom Brown <[email protected]> wrote:\n> > > >\n> > > > On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > > > > > Heap-Only Tuple (HOT) updates are a significant performance\n> > > > > > enhancement, as they prevent unnecessary page writes. However, HOT\n> > > > > > comes with a caveat: it means that if we have lots of available space\n> > > > > > earlier on in the relation, it can only be used for new tuples or in\n> > > > > > cases where there's insufficient space on a page for an UPDATE to use\n> > > > > > HOT.\n> > > > > >\n> > > > > > This mechanism limits our options for condensing tables, forcing us to\n> > > > > > resort to methods like running VACUUM FULL/CLUSTER or using external\n> > > > > > tools like pg_repack. These either require exclusive locks (which will\n> > > > > > be a deal-breaker on large tables on a production system), or there's\n> > > > > > risks involved. Of course we can always flood pages with new versions\n> > > > > > of a row until it's forced onto an early page, but that shouldn't be\n> > > > > > necessary.\n> > > > > >\n> > > > > > Considering these trade-offs, I'd like to propose an option to allow\n> > > > > > superusers to disable HOT on tables. The intent is to trade some\n> > > > > > performance benefits for the ability to reduce the size of a table\n> > > > > > without the typical locking associated with it.\n> > > > >\n> > > > > Interesting use case, but I think that disabling HOT would be missing\n> > > > > the forest for the trees. I think that a feature that disables\n> > > > > block-local updates for pages > some offset would be a better solution\n> > > > > to your issue: Normal updates also prefer the new tuple to be stored\n> > > > > in the same pages as the old tuple if at all possible, so disabling\n> > > > > HOT wouldn't solve the issue of tuples residing in the tail of your\n> > > > > table - at least not while there is still empty space in those pages.\n> > > >\n> > > > Hmm... I see your point. It's when an UPDATE isn't going to land on\n> > > > the same page that it relocates to the earlier available page. So I\n> > > > guess I'm after whatever mechanism would allow that to happen reliably\n> > > > and predictably.\n> > > >\n> > > > So $subject should really be \"Allow forcing UPDATEs off the same page\".\n> > >\n> > > You'd probably want to do that only for a certain range of the table -\n> > > for a table with 1GB of data and 3GB of bloat there is no good reason\n> > > to force page-crossing updates in the first 1GB of the table - all\n> > > tuples of the table will eventually reside there, so why would you\n> > > take a performance penalty and move the tuples from inside that range\n> > > to inside that same range?\n> >\n> > I'm thinking more of a case of:\n> >\n> > <magic to stop UPDATES from landing on same page>\n> >\n> > UPDATE bigtable\n> > SET primary key = primary key\n> > WHERE ctid IN (\n> > SELECT ctid\n> > FROM bigtable\n> > ORDER BY ctid DESC\n> > LIMIT 100000);\n>\n> So what were you thinking of? A session GUC? 
A table option?\n\nBoth.\n\n> The benefit of a table option is that it is retained across sessions\n> and thus allows tables that get enough updates to eventually get to a\n> cleaner state. The main downside of such a table option is that it\n> requires a temporary table-level lock to update the parameter.\n\nYes, but the maintenance window to make such a change would be extremely brief.\n\n> The benefit of a session GUC is that you can set it without impacting\n> other sessions, but the downside is that you need to do the\n> maintenance in that session, and risk that cascading updates to other\n> tables (e.g. through AFTER UPDATE triggers) are also impacted by this\n> non-local update GUC.\n>\n> > > Something else to note: Indexes would suffer some (large?) amount of\n> > > bloat in this process, as you would be updating a lot of tuples\n> > > without the HOT optimization, thus increasing the work to be done by\n> > > VACUUM.\n> > > This may result in more bloat in indexes than what you get back from\n> > > shrinking the table.\n> >\n> > This could be the case, but I guess indexes are expendable to an\n> > extent, unlike tables.\n>\n> I don't think that's accurate - index rebuilds are quite expensive.\n> But, that's besides the point of this thread.\n>\n> Somewhat related: did you consider using pg_repack instead of this\n> potential feature?\n\npg_repack isn't exactly innocuous, and can leave potentially the\ndatabase in an irrevocable state. Plus, if disk space is an issue, it\ndoesn't help.\n\nThom\n\n\n",
"msg_date": "Wed, 5 Jul 2023 18:54:53 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 2023-07-05 at 12:02 +0100, Thom Brown wrote:\n> On Wed, 5 Jul 2023 at 11:57, Matthias van de Meent <[email protected]> wrote:\n> > On Wed, 5 Jul 2023 at 12:45, Thom Brown <[email protected]> wrote:\n> > > Heap-Only Tuple (HOT) updates are a significant performance\n> > > enhancement, as they prevent unnecessary page writes. However, HOT\n> > > comes with a caveat: it means that if we have lots of available space\n> > > earlier on in the relation, it can only be used for new tuples or in\n> > > cases where there's insufficient space on a page for an UPDATE to use\n> > > HOT.\n> > > \n> > > Considering these trade-offs, I'd like to propose an option to allow\n> > > superusers to disable HOT on tables. The intent is to trade some\n> > > performance benefits for the ability to reduce the size of a table\n> > > without the typical locking associated with it.\n> > \n> > Interesting use case, but I think that disabling HOT would be missing\n> > the forest for the trees. I think that a feature that disables\n> > block-local updates for pages > some offset would be a better solution\n> > to your issue: Normal updates also prefer the new tuple to be stored\n> > in the same pages as the old tuple if at all possible, so disabling\n> > HOT wouldn't solve the issue of tuples residing in the tail of your\n> > table - at least not while there is still empty space in those pages.\n> \n> Hmm... I see your point. It's when an UPDATE isn't going to land on\n> the same page that it relocates to the earlier available page. So I\n> guess I'm after whatever mechanism would allow that to happen reliably\n> and predictably.\n> \n> So $subject should really be \"Allow forcing UPDATEs off the same page\".\n\nI've been thinking about the same thing - an option that changes the update\nstrategy to always use the lowest block with enough free space.\n\nThat would allow to consolidate bloated tables with no down time.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 05 Jul 2023 21:47:38 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> <[email protected]> wrote:\n> > So what were you thinking of? A session GUC? A table option?\n>\n> Both.\n\nHere's a small patch implementing a new table option max_local_update\n(name very much bikesheddable). Value is -1 (default, disabled) or the\nsize of the table in MiB that you still want to allow to update on the\nsame page. I didn't yet go for a GUC as I think that has too little\ncontrol on the impact on the system.\n\nI decided that max_local_update would be in MB because there is no\nreloption value that can contain MaxBlockNumber and -1/disabled; and 1\nMiB seems like enough granularity for essentially all use cases.\n\nThe added regression tests show how this feature works, that the new\nfeature works, and validate that lock levels are acceptable\n(ShareUpdateExclusiveLock, same as for updating fillfactor).\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)",
"msg_date": "Thu, 6 Jul 2023 22:18:07 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
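A minimal usage sketch of the proposed knob, assuming the reloption name (max_local_update) and MiB units described in the message above. The patch is not committed, so the exact syntax and semantics may differ.

-- Allow same-page updates only within the first 512 MiB of the table;
-- updates to tuples beyond that point go through the FSM instead and can
-- therefore land on earlier pages with free space.
ALTER TABLE bigtable SET (max_local_update = 512);

-- Back to the default (-1 = disabled, i.e. the usual same-page/HOT behaviour).
ALTER TABLE bigtable RESET (max_local_update);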
{
"msg_contents": "On Fri, Jul 7, 2023 at 1:48 AM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > So what were you thinking of? A session GUC? A table option?\n> >\n> > Both.\n>\n> Here's a small patch implementing a new table option max_local_update\n> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> size of the table in MiB that you still want to allow to update on the\n> same page. I didn't yet go for a GUC as I think that has too little\n> control on the impact on the system.\n\nSo IIUC, this parameter we can control that instead of putting the new\nversion of the tuple on the same page, it should choose using\nRelationGetBufferForTuple(), and that can reduce the fragmentation\nbecause now if there is space then most of the updated tuple will be\ninserted in same pages. But this still can not truncate the pages\nfrom the heap right? because we can not guarantee that the new page\nselected by RelationGetBufferForTuple() is not from the end of the\nheap, and until we free the pages from the end of the heap, the vacuum\ncan not truncate any page. Is my understanding correct?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 10:23:32 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Fri, 7 Jul 2023 at 06:53, Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Jul 7, 2023 at 1:48 AM Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > > So what were you thinking of? A session GUC? A table option?\n> > >\n> > > Both.\n> >\n> > Here's a small patch implementing a new table option max_local_update\n> > (name very much bikesheddable). Value is -1 (default, disabled) or the\n> > size of the table in MiB that you still want to allow to update on the\n> > same page. I didn't yet go for a GUC as I think that has too little\n> > control on the impact on the system.\n>\n> So IIUC, this parameter we can control that instead of putting the new\n> version of the tuple on the same page, it should choose using\n> RelationGetBufferForTuple(), and that can reduce the fragmentation\n> because now if there is space then most of the updated tuple will be\n> inserted in same pages. But this still can not truncate the pages\n> from the heap right? because we can not guarantee that the new page\n> selected by RelationGetBufferForTuple() is not from the end of the\n> heap, and until we free the pages from the end of the heap, the vacuum\n> can not truncate any page. Is my understanding correct?\n\nYes. If you don't have pages with (enough) free space for the updated\ntuples in your table, or if the FSM doesn't accurately reflect the\nactual state of free space in your table, this won't help (which is\nalso the reason why I run vacuum in the tests). It also won't help if\nyou don't update the tuples physically located at the end of your\ntable, but in the targeted workload this would introduce a bias where\nnew tuple versions are moved to the front of the table.\n\nSomething to note is that this may result in very bad bloat when this\nis combined with a low fillfactor: All blocks past max_local_update\nwill be unable to use space reserved by fillfactor because FSM lookups\nalways take fillfactor into account, and all updates (which ignore\nfillfactor when local) would go through the FSM instead, thus reducing\nthe space available on each block to exactly the fillfactor. So, this\nmight need some extra code to make sure we don't accidentally blow up\nthe table's size with UPDATEs when max_local_update is combined with\nlow fillfactors. I'm not sure where that would fit best.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Fri, 7 Jul 2023 11:55:28 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
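As a practical aside, the free-space precondition described above can be checked before attempting this kind of compaction. The sketch below relies on the standard pgstattuple and pg_freespacemap contrib extensions; the table name is only an example.

-- Make sure the FSM reflects reality before relying on it for tuple relocation.
VACUUM bigtable;

CREATE EXTENSION IF NOT EXISTS pgstattuple;
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- Overall free space in the table.
SELECT free_space, free_percent FROM pgstattuple('bigtable');

-- Per-block free space recorded in the FSM, earliest blocks first:
-- these are the pages that relocated tuples could move into.
SELECT blkno, avail
FROM pg_freespace('bigtable')
WHERE avail > 0
ORDER BY blkno
LIMIT 20;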
{
"msg_contents": "\n\nOn 7/7/23 11:55, Matthias van de Meent wrote:\n> On Fri, 7 Jul 2023 at 06:53, Dilip Kumar <[email protected]> wrote:\n>>\n>> On Fri, Jul 7, 2023 at 1:48 AM Matthias van de Meent\n>> <[email protected]> wrote:\n>>>\n>>> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n>>>>\n>>>> On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n>>>> <[email protected]> wrote:\n>>>>> So what were you thinking of? A session GUC? A table option?\n>>>>\n>>>> Both.\n>>>\n>>> Here's a small patch implementing a new table option max_local_update\n>>> (name very much bikesheddable). Value is -1 (default, disabled) or the\n>>> size of the table in MiB that you still want to allow to update on the\n>>> same page. I didn't yet go for a GUC as I think that has too little\n>>> control on the impact on the system.\n>>\n>> So IIUC, this parameter we can control that instead of putting the new\n>> version of the tuple on the same page, it should choose using\n>> RelationGetBufferForTuple(), and that can reduce the fragmentation\n>> because now if there is space then most of the updated tuple will be\n>> inserted in same pages. But this still can not truncate the pages\n>> from the heap right? because we can not guarantee that the new page\n>> selected by RelationGetBufferForTuple() is not from the end of the\n>> heap, and until we free the pages from the end of the heap, the vacuum\n>> can not truncate any page. Is my understanding correct?\n> \n> Yes. If you don't have pages with (enough) free space for the updated\n> tuples in your table, or if the FSM doesn't accurately reflect the\n> actual state of free space in your table, this won't help (which is\n> also the reason why I run vacuum in the tests). It also won't help if\n> you don't update the tuples physically located at the end of your\n> table, but in the targeted workload this would introduce a bias where\n> new tuple versions are moved to the front of the table.\n> \n> Something to note is that this may result in very bad bloat when this\n> is combined with a low fillfactor: All blocks past max_local_update\n> will be unable to use space reserved by fillfactor because FSM lookups\n> always take fillfactor into account, and all updates (which ignore\n> fillfactor when local) would go through the FSM instead, thus reducing\n> the space available on each block to exactly the fillfactor. So, this\n> might need some extra code to make sure we don't accidentally blow up\n> the table's size with UPDATEs when max_local_update is combined with\n> low fillfactors. I'm not sure where that would fit best.\n> \n\nI know the thread started as \"let's disable HOT\" and this essentially\njust proposes to do that using a table option. But I wonder if that's\nfar too simple to be reliable, because hoping RelationGetBufferForTuple\nhappens to do the right thing does not seem great.\n\nI wonder if we should invent some definition of \"strategy\" that would\ntell RelationGetBufferForTuple what it should aim for ...\n\nI'm imagining either a table option with a couple possible values\n(default, non-hot, first-page, ...) or maybe something even more\nelaborate (perhaps even a callback?).\n\nNow, it's not my intention to hijack this thread, but this discussion\nreminds me one of the ideas from my \"BRIN improvements\" talk, about\nmaybe using BRIN indexes for routing. UPDATEs may be a major issue for\nBRIN, making them gradually worse over time. 
If we could \"tell\"\nRelationGetBufferForTuple() which buffers are more suitable (by looking\nat an index, histogram or some approximate mapping), that might help.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jul 2023 12:18:04 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 3:48 PM Tomas Vondra\n<[email protected]> wrote:\n>\n\n> On 7/7/23 11:55, Matthias van de Meent wrote:\n> > On Fri, 7 Jul 2023 at 06:53, Dilip Kumar <[email protected]> wrote:\n> >>\n> >> On Fri, Jul 7, 2023 at 1:48 AM Matthias van de Meent\n> >> <[email protected]> wrote:\n> >>>\n> >>> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> >>>>\n> >>>> On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> >>>> <[email protected]> wrote:\n> >>>>> So what were you thinking of? A session GUC? A table option?\n> >>>>\n> >>>> Both.\n> >>>\n> >>> Here's a small patch implementing a new table option max_local_update\n> >>> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> >>> size of the table in MiB that you still want to allow to update on the\n> >>> same page. I didn't yet go for a GUC as I think that has too little\n> >>> control on the impact on the system.\n> >>\n> >> So IIUC, this parameter we can control that instead of putting the new\n> >> version of the tuple on the same page, it should choose using\n> >> RelationGetBufferForTuple(), and that can reduce the fragmentation\n> >> because now if there is space then most of the updated tuple will be\n> >> inserted in same pages. But this still can not truncate the pages\n> >> from the heap right? because we can not guarantee that the new page\n> >> selected by RelationGetBufferForTuple() is not from the end of the\n> >> heap, and until we free the pages from the end of the heap, the vacuum\n> >> can not truncate any page. Is my understanding correct?\n> >\n> > Yes. If you don't have pages with (enough) free space for the updated\n> > tuples in your table, or if the FSM doesn't accurately reflect the\n> > actual state of free space in your table, this won't help (which is\n> > also the reason why I run vacuum in the tests). It also won't help if\n> > you don't update the tuples physically located at the end of your\n> > table, but in the targeted workload this would introduce a bias where\n> > new tuple versions are moved to the front of the table.\n> >\n> > Something to note is that this may result in very bad bloat when this\n> > is combined with a low fillfactor: All blocks past max_local_update\n> > will be unable to use space reserved by fillfactor because FSM lookups\n> > always take fillfactor into account, and all updates (which ignore\n> > fillfactor when local) would go through the FSM instead, thus reducing\n> > the space available on each block to exactly the fillfactor. So, this\n> > might need some extra code to make sure we don't accidentally blow up\n> > the table's size with UPDATEs when max_local_update is combined with\n> > low fillfactors. I'm not sure where that would fit best.\n> >\n>\n> I know the thread started as \"let's disable HOT\" and this essentially\n> just proposes to do that using a table option. But I wonder if that's\n> far too simple to be reliable, because hoping RelationGetBufferForTuple\n> happens to do the right thing does not seem great.\n>\n> I wonder if we should invent some definition of \"strategy\" that would\n> tell RelationGetBufferForTuple what it should aim for ...\n>\n> I'm imagining either a table option with a couple possible values\n> (default, non-hot, first-page, ...) 
or maybe something even more\n> elaborate (perhaps even a callback?).\n>\n> Now, it's not my intention to hijack this thread, but this discussion\n> reminds me one of the ideas from my \"BRIN improvements\" talk, about\n> maybe using BRIN indexes for routing. UPDATEs may be a major issue for\n> BRIN, making them gradually worse over time. If we could \"tell\"\n> RelationGetBufferForTuple() which buffers are more suitable (by looking\n> at an index, histogram or some approximate mapping), that might help.\n\nIMHO that seems like the right direction for this feature to be\nuseful. Otherwise just forcing it to select a page using\nRelationGetBufferForTuple() without any input or direction to this\nfunction can behave pretty randomly. In fact, there should be some\nway to say insert a new tuple in a smaller block number first\n(provided they have free space) and with that, we might get an\nopportunity to truncate some heap pages by vacuum.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 16:27:45 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Fri, 2023-07-07 at 16:27 +0530, Dilip Kumar wrote:\n> On Fri, Jul 7, 2023 at 3:48 PM Tomas Vondra <[email protected]> wrote:\n> > I'm imagining either a table option with a couple possible values\n> > (default, non-hot, first-page, ...) or maybe something even more\n> > elaborate (perhaps even a callback?).\n> > \n> > Now, it's not my intention to hijack this thread, but this discussion\n> > reminds me one of the ideas from my \"BRIN improvements\" talk, about\n> > maybe using BRIN indexes for routing. UPDATEs may be a major issue for\n> > BRIN, making them gradually worse over time. If we could \"tell\"\n> > RelationGetBufferForTuple() which buffers are more suitable (by looking\n> > at an index, histogram or some approximate mapping), that might help.\n> \n> IMHO that seems like the right direction for this feature to be\n> useful.\n\nRight, I agree. A GUC/storage parameter like \"update_strategy\"\nthat is an enum (try-hot | first-page | ...).\n\nTo preserve BRIN indexes or CLUSTERed tables, there could be an additional\n\"insert_strategy\", but that would somehow have to be tied to a certain\nindex. I think that is out of scope for this effort.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 07 Jul 2023 13:10:48 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
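Purely to illustrate the enum-style storage parameter floated above: none of this syntax exists, and both the option name and its values are invented.

-- Hypothetical: neither update_strategy nor these values exist today.
ALTER TABLE bigtable SET (update_strategy = 'first-page');  -- prefer the earliest page with space
ALTER TABLE bigtable SET (update_strategy = 'try-hot');     -- today's default behaviour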
{
"msg_contents": "On Thu, 6 Jul 2023 at 21:18, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> >\n> > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > So what were you thinking of? A session GUC? A table option?\n> >\n> > Both.\n>\n> Here's a small patch implementing a new table option max_local_update\n> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> size of the table in MiB that you still want to allow to update on the\n> same page. I didn't yet go for a GUC as I think that has too little\n> control on the impact on the system.\n>\n> I decided that max_local_update would be in MB because there is no\n> reloption value that can contain MaxBlockNumber and -1/disabled; and 1\n> MiB seems like enough granularity for essentially all use cases.\n>\n> The added regression tests show how this feature works, that the new\n> feature works, and validate that lock levels are acceptable\n> (ShareUpdateExclusiveLock, same as for updating fillfactor).\n\nWow, thanks for working on this.\n\nI've given it a test, and it does what I would expect it to do.\n\nI'm aware of the concerns about the potential for the relocation to\nland in an undesirable location, so perhaps that needs addressing.\nBut this is already considerably better than the current need to\nupdate a row until it gets pushed off its current page. Ideally there\nwould be tooling built around this where the user wouldn't need to\nfigure out how much of the table to UPDATE, or deal with VACUUMing\nconcerns.\n\nBut here's my quick test:\n\nCREATE OR REPLACE FUNCTION compact_table(table_name IN TEXT)\nRETURNS VOID AS $$\nDECLARE\n current_row RECORD;\n old_ctid TID;\n new_ctid TID;\n keys TEXT;\n update_query TEXT;\n row_counter INTEGER := 0;\nBEGIN\n SELECT string_agg(a.attname || ' = ' || a.attname, ', ')\n INTO keys\n FROM\n pg_index i\n JOIN\n pg_attribute a ON a.attnum = ANY(i.indkey)\n WHERE\n i.indrelid = table_name::regclass\n AND a.attrelid = table_name::regclass\n AND i.indisprimary;\n\n IF keys IS NULL THEN\n RAISE EXCEPTION 'Table % does not have a primary key.', table_name;\n END IF;\n\n FOR current_row IN\n EXECUTE FORMAT('SELECT ctid, * FROM %I ORDER BY ctid DESC', table_name)\n LOOP\n old_ctid := current_row.ctid;\n\n update_query := FORMAT('UPDATE %I SET %s WHERE ctid = $1\nRETURNING ctid', table_name, keys);\n EXECUTE update_query USING old_ctid INTO new_ctid;\n\n row_counter := row_counter + 1;\n\n IF row_counter % 1000 = 0 THEN\n RAISE NOTICE '% rows relocated.', row_counter;\n END IF;\n\n IF new_ctid <= old_ctid THEN\n CONTINUE;\n ELSE\n RAISE NOTICE 'All non-contiguous rows relocated.';\n EXIT;\n END IF;\n END LOOP;\nEND; $$\nLANGUAGE plpgsql;\n\n\npostgres=# CREATE TABLE bigtable (id int, content text);\nCREATE TABLE\npostgres=# INSERT INTO bigtable SELECT x, 'This is just a way to fill\nup space.' 
FROM generate_series(1,10000000) a(x);\nINSERT 0 10000000\npostgres=# DELETE FROM bigtable WHERE id % 7 = 0;\nDELETE 1428571\npostgres=# VACUUM bigtable;\nVACUUM\npostgres=# ALTER TABLE bigtable SET (max_local_update = 0);\nALTER TABLE\npostgres=# ALTER TABLE bigtable ADD PRIMARY KEY (id);\nALTER TABLE\npostgres=# \\dt+ bigtable\n List of relations\n Schema | Name | Type | Owner | Persistence | Access method |\nSize | Description\n--------+----------+-------+-------+-------------+---------------+--------+-------------\n public | bigtable | table | thom | permanent | heap | 730 MB |\n(1 row)\n\npostgres=# SELECT * FROM pgstattuple('bigtable');\n table_len | tuple_count | tuple_len | tuple_percent |\ndead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space |\nfree_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 765607936 | 8571429 | 557142885 | 72.77 |\n0 | 0 | 0 | 105901628 | 13.83\n(1 row)\n\npostgres=# SELECT compact_table('bigtable');\nNOTICE: 1000 rows relocated.\nNOTICE: 2000 rows relocated.\nNOTICE: 3000 rows relocated.\nNOTICE: 4000 rows relocated.\n...\nNOTICE: 1221000 rows relocated.\nNOTICE: 1222000 rows relocated.\nNOTICE: 1223000 rows relocated.\nNOTICE: 1224000 rows relocated.\nNOTICE: All non-contiguous rows relocated.\n compact_table\n---------------\n\n(1 row)\n\npostgres=# VACUUM bigtable;\nVACUUM\npostgres=# \\dt+ bigtable;\n List of relations\n Schema | Name | Type | Owner | Persistence | Access method |\nSize | Description\n--------+----------+-------+-------+-------------+---------------+--------+-------------\n public | bigtable | table | thom | permanent | heap | 626 MB |\n(1 row)\n\npostgres=# SELECT * FROM pgstattuple('bigtable');\n table_len | tuple_count | tuple_len | tuple_percent |\ndead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space |\nfree_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 656236544 | 8571429 | 557142885 | 84.9 |\n0 | 0 | 0 | 2564888 | 0.39\n(1 row)\n\nWorks for me.\n\nThom\n\n\n",
"msg_date": "Fri, 7 Jul 2023 12:21:03 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Fri, 7 Jul 2023 at 13:18, Tomas Vondra <[email protected]> wrote:\n> On 7/7/23 11:55, Matthias van de Meent wrote:\n> > On Fri, 7 Jul 2023 at 06:53, Dilip Kumar <[email protected]> wrote:\n> >>\n> >> On Fri, Jul 7, 2023 at 1:48 AM Matthias van de Meent\n> >> <[email protected]> wrote:\n> >>>\n> >>> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> >>>>\n> >>>> On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> >>>> <[email protected]> wrote:\n> >>>>> So what were you thinking of? A session GUC? A table option?\n> >>>>\n> >>>> Both.\n> >>>\n> >>> Here's a small patch implementing a new table option max_local_update\n> >>> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> >>> size of the table in MiB that you still want to allow to update on the\n> >>> same page. I didn't yet go for a GUC as I think that has too little\n> >>> control on the impact on the system.\n> >>\n> >> So IIUC, this parameter we can control that instead of putting the new\n> >> version of the tuple on the same page, it should choose using\n> >> RelationGetBufferForTuple(), and that can reduce the fragmentation\n> >> because now if there is space then most of the updated tuple will be\n> >> inserted in same pages. But this still can not truncate the pages\n> >> from the heap right? because we can not guarantee that the new page\n> >> selected by RelationGetBufferForTuple() is not from the end of the\n> >> heap, and until we free the pages from the end of the heap, the vacuum\n> >> can not truncate any page. Is my understanding correct?\n> >\n> > Yes. If you don't have pages with (enough) free space for the updated\n> > tuples in your table, or if the FSM doesn't accurately reflect the\n> > actual state of free space in your table, this won't help (which is\n> > also the reason why I run vacuum in the tests). It also won't help if\n> > you don't update the tuples physically located at the end of your\n> > table, but in the targeted workload this would introduce a bias where\n> > new tuple versions are moved to the front of the table.\n> >\n> > Something to note is that this may result in very bad bloat when this\n> > is combined with a low fillfactor: All blocks past max_local_update\n> > will be unable to use space reserved by fillfactor because FSM lookups\n> > always take fillfactor into account, and all updates (which ignore\n> > fillfactor when local) would go through the FSM instead, thus reducing\n> > the space available on each block to exactly the fillfactor. So, this\n> > might need some extra code to make sure we don't accidentally blow up\n> > the table's size with UPDATEs when max_local_update is combined with\n> > low fillfactors. I'm not sure where that would fit best.\n> >\n>\n> I know the thread started as \"let's disable HOT\" and this essentially\n> just proposes to do that using a table option. But I wonder if that's\n> far too simple to be reliable, because hoping RelationGetBufferForTuple\n> happens to do the right thing does not seem great.\n>\n> I wonder if we should invent some definition of \"strategy\" that would\n> tell RelationGetBufferForTuple what it should aim for ...\n>\n> I'm imagining either a table option with a couple possible values\n> (default, non-hot, first-page, ...) 
or maybe something even more\n> elaborate (perhaps even a callback?).\n>\n> Now, it's not my intention to hijack this thread, but this discussion\n> reminds me one of the ideas from my \"BRIN improvements\" talk, about\n> maybe using BRIN indexes for routing. UPDATEs may be a major issue for\n> BRIN, making them gradually worse over time. If we could \"tell\"\n> RelationGetBufferForTuple() which buffers are more suitable (by looking\n> at an index, histogram or some approximate mapping), that might help.\n\nJust as another point in support of strategy based/extensible tuple\nplacement, I would at some point try out placing INSERT ON CONFLICT\ntuples on the same page as the preceding key in the index. Use case is\nin tables with (series, timestamp) primary key to get locality of\naccess range scanning for a single series. Placement will always be a\ntradeoff that is dependent on hardware and workload, and the effect\ncan be pretty large. For the mentioned use case, if placement can\nmaintain some semblance of clustering, there will be a 10-100x\nreduction in buffers accessed for a relatively minor increase in\nbloat.\n\n--\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 15:43:14 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Thu, 2023-07-06 at 22:18 +0200, Matthias van de Meent wrote:\n> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> > \n> > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > So what were you thinking of? A session GUC? A table option?\n> > \n> > Both.\n> \n> Here's a small patch implementing a new table option max_local_update\n> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> size of the table in MiB that you still want to allow to update on the\n> same page. I didn't yet go for a GUC as I think that has too little\n> control on the impact on the system.\n> \n> I decided that max_local_update would be in MB because there is no\n> reloption value that can contain MaxBlockNumber and -1/disabled; and 1\n> MiB seems like enough granularity for essentially all use cases.\n> \n> The added regression tests show how this feature works, that the new\n> feature works, and validate that lock levels are acceptable\n> (ShareUpdateExclusiveLock, same as for updating fillfactor).\n\nI have looked at your patch, and I must say that I like it. Having\na size limit is better than my original idea of just \"on\" or \"off\".\nEssentially, it is \"try to shrink the table if it grows above a limit\".\n\nThe patch builds fine and passes all regression tests.\n\nDocumentation is missing.\n\nI agree that the name \"max_local_update\" could be improved.\nPerhaps \"avoid_hot_above_size_mb\".\n\n--- a/src/include/utils/rel.h\n+++ b/src/include/utils/rel.h\n@@ -342,6 +342,7 @@ typedef struct StdRdOptions\n int parallel_workers; /* max number of parallel workers */\n StdRdOptIndexCleanup vacuum_index_cleanup; /* controls index vacuuming */\n bool vacuum_truncate; /* enables vacuum to truncate a relation */\n+ int max_local_update; /* Updates to pages after this block must go through the VM */\n } StdRdOptions;\n \n #define HEAP_MIN_FILLFACTOR 10\n\nIn the comment, it should be FSM, not VM, right?\n\nOther than that, I see nothing wrong.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:58:51 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 19 Jul 2023, 13:58 Laurenz Albe, <[email protected]> wrote:\n\n> On Thu, 2023-07-06 at 22:18 +0200, Matthias van de Meent wrote:\n> > On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > > So what were you thinking of? A session GUC? A table option?\n> > >\n> > > Both.\n> >\n> > Here's a small patch implementing a new table option max_local_update\n> > (name very much bikesheddable). Value is -1 (default, disabled) or the\n> > size of the table in MiB that you still want to allow to update on the\n> > same page. I didn't yet go for a GUC as I think that has too little\n> > control on the impact on the system.\n> >\n> > I decided that max_local_update would be in MB because there is no\n> > reloption value that can contain MaxBlockNumber and -1/disabled; and 1\n> > MiB seems like enough granularity for essentially all use cases.\n> >\n> > The added regression tests show how this feature works, that the new\n> > feature works, and validate that lock levels are acceptable\n> > (ShareUpdateExclusiveLock, same as for updating fillfactor).\n>\n> I have looked at your patch, and I must say that I like it. Having\n> a size limit is better than my original idea of just \"on\" or \"off\".\n> Essentially, it is \"try to shrink the table if it grows above a limit\".\n>\n> The patch builds fine and passes all regression tests.\n>\n> Documentation is missing.\n>\n> I agree that the name \"max_local_update\" could be improved.\n> Perhaps \"avoid_hot_above_size_mb\".\n>\n\nOr \"hot_table_size_threshold\" or \"hot_update_limit\"?\n\nThom\n\nOn Wed, 19 Jul 2023, 13:58 Laurenz Albe, <[email protected]> wrote:On Thu, 2023-07-06 at 22:18 +0200, Matthias van de Meent wrote:\n> On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> > \n> > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > So what were you thinking of? A session GUC? A table option?\n> > \n> > Both.\n> \n> Here's a small patch implementing a new table option max_local_update\n> (name very much bikesheddable). Value is -1 (default, disabled) or the\n> size of the table in MiB that you still want to allow to update on the\n> same page. I didn't yet go for a GUC as I think that has too little\n> control on the impact on the system.\n> \n> I decided that max_local_update would be in MB because there is no\n> reloption value that can contain MaxBlockNumber and -1/disabled; and 1\n> MiB seems like enough granularity for essentially all use cases.\n> \n> The added regression tests show how this feature works, that the new\n> feature works, and validate that lock levels are acceptable\n> (ShareUpdateExclusiveLock, same as for updating fillfactor).\n\nI have looked at your patch, and I must say that I like it. Having\na size limit is better than my original idea of just \"on\" or \"off\".\nEssentially, it is \"try to shrink the table if it grows above a limit\".\n\nThe patch builds fine and passes all regression tests.\n\nDocumentation is missing.\n\nI agree that the name \"max_local_update\" could be improved.\nPerhaps \"avoid_hot_above_size_mb\".Or \"hot_table_size_threshold\" or \"hot_update_limit\"?Thom",
"msg_date": "Wed, 19 Jul 2023 14:13:41 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Fri, 7 Jul 2023 at 12:18, Tomas Vondra <[email protected]> wrote:\n>\n> On 7/7/23 11:55, Matthias van de Meent wrote:\n>> On Fri, 7 Jul 2023 at 06:53, Dilip Kumar <[email protected]> wrote:\n>>>\n>>>\n>>> So IIUC, this parameter we can control that instead of putting the new\n>>> version of the tuple on the same page, it should choose using\n>>> RelationGetBufferForTuple(), and that can reduce the fragmentation\n>>> because now if there is space then most of the updated tuple will be\n>>> inserted in same pages. But this still can not truncate the pages\n>>> from the heap right? because we can not guarantee that the new page\n>>> selected by RelationGetBufferForTuple() is not from the end of the\n>>> heap, and until we free the pages from the end of the heap, the vacuum\n>>> can not truncate any page. Is my understanding correct?\n>>\n>> Yes. If you don't have pages with (enough) free space for the updated\n>> tuples in your table, or if the FSM doesn't accurately reflect the\n>> actual state of free space in your table, this won't help (which is\n>> also the reason why I run vacuum in the tests). It also won't help if\n>> you don't update the tuples physically located at the end of your\n>> table, but in the targeted workload this would introduce a bias where\n>> new tuple versions are moved to the front of the table.\n>>\n>> Something to note is that this may result in very bad bloat when this\n>> is combined with a low fillfactor: All blocks past max_local_update\n>> will be unable to use space reserved by fillfactor because FSM lookups\n>> always take fillfactor into account, and all updates (which ignore\n>> fillfactor when local) would go through the FSM instead, thus reducing\n>> the space available on each block to exactly the fillfactor. So, this\n>> might need some extra code to make sure we don't accidentally blow up\n>> the table's size with UPDATEs when max_local_update is combined with\n>> low fillfactors. I'm not sure where that would fit best.\n>>\n>\n> I know the thread started as \"let's disable HOT\" and this essentially\n> just proposes to do that using a table option. But I wonder if that's\n> far too simple to be reliable, because hoping RelationGetBufferForTuple\n> happens to do the right thing does not seem great.\n>\n> I wonder if we should invent some definition of \"strategy\" that would\n> tell RelationGetBufferForTuple what it should aim for ...\n>\n> I'm imagining either a table option with a couple possible values\n> (default, non-hot, first-page, ...) or maybe something even more\n> elaborate (perhaps even a callback?).\n\nI mostly agree, but the point is that first we have to get the update\naway from the page. Once we've done that, we can start getting smart\nabout placement in RelationGetBufferForTuple, but unless we decide to\nnot put the tuple on the old tuple's page no code from\nRelationGetBufferForTuple is executed.\n\nWe could change the update code to always go through\nRelationGetBufferForTuple to determine the target buffer, and make\nthat function consider page-local updates (instead of heap_update, who\ndoes that now), but I think that'd need significant extra work in\nother callsites of RelationGetBufferForTuple as well as that function\nitself.\n\n> Now, it's not my intention to hijack this thread, but this discussion\n> reminds me one of the ideas from my \"BRIN improvements\" talk, about\n> maybe using BRIN indexes for routing. UPDATEs may be a major issue for\n> BRIN, making them gradually worse over time. 
If we could \"tell\"\n> RelationGetBufferForTuple() which buffers are more suitable (by looking\n> at an index, histogram or some approximate mapping), that might help.\n\nImproved tuple routing sounds like a great idea, and I've thought\nabout it as well. I'm not sure whether BRIN (as-is) is the best\ncandidate though, considering its O(N) scan complexity - 100GB-scale\ntables can reasonably have BRIN indexes of MBs, and running a scan on\nthat is not likely to have good performance.\nIf BRIN had hierarchical summaries (e.g. if we had range summaries for\ndata stored in every nonnegative power of 16 of page ranges) then we\ncould reduce that to something more reasonable, but that's not\ncurrently implemented and so I don't think that's quite relevant yet.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Thu, 24 Aug 2023 17:22:34 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 15:13, Thom Brown <[email protected]> wrote:\n>\n> On Wed, 19 Jul 2023, 13:58 Laurenz Albe, <[email protected]> wrote:\n>> I agree that the name \"max_local_update\" could be improved.\n>> Perhaps \"avoid_hot_above_size_mb\".\n>\n> Or \"hot_table_size_threshold\" or \"hot_update_limit\"?\n\nAlthough I like these names, it doesn't quite cover the use of the\nparameter for me, as updated tuples prefer to be inserted on the same\npage as the old tuple regardless of whether HOT applies.\n\nExample: a bloated table test(\n id int primary key,\n num_updates int,\n unique (id, num_updates)\n)\nwould be assumed to remain bloated if I'd set a parameter named\nsomething_hot_something, as all updates would be non-hot and thus\nshould not be influenced by the GUC/parameter.\n\nHow about 'local_update_limit'?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 24 Aug 2023 18:23:18 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Thu, 2023-08-24 at 18:23 +0200, Matthias van de Meent wrote:\n> On Wed, 19 Jul 2023 at 15:13, Thom Brown <[email protected]> wrote:\n> > \n> > On Wed, 19 Jul 2023, 13:58 Laurenz Albe, <[email protected]> wrote:\n> > > I agree that the name \"max_local_update\" could be improved.\n> > > Perhaps \"avoid_hot_above_size_mb\".\n> > \n> > Or \"hot_table_size_threshold\" or \"hot_update_limit\"?\n> \n> Although I like these names, it doesn't quite cover the use of the\n> parameter for me, as updated tuples prefer to be inserted on the same\n> page as the old tuple regardless of whether HOT applies.\n> \n> How about 'local_update_limit'?\n\nI agree with your concern. I cannot think of a better name than yours.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 28 Aug 2023 14:20:17 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 19 Jul 2023 at 14:58, Laurenz Albe <[email protected]> wrote:\n>\n> On Thu, 2023-07-06 at 22:18 +0200, Matthias van de Meent wrote:\n> > On Wed, 5 Jul 2023 at 19:55, Thom Brown <[email protected]> wrote:\n> > >\n> > > On Wed, 5 Jul 2023 at 18:05, Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > > So what were you thinking of? A session GUC? A table option?\n> > >\n> > > Both.\n> >\n> > Here's a small patch implementing a new table option max_local_update\n> > (name very much bikesheddable). Value is -1 (default, disabled) or the\n> > size of the table in MiB that you still want to allow to update on the\n> > same page. I didn't yet go for a GUC as I think that has too little\n> > control on the impact on the system.\n> >\n> > I decided that max_local_update would be in MB because there is no\n> > reloption value that can contain MaxBlockNumber and -1/disabled; and 1\n> > MiB seems like enough granularity for essentially all use cases.\n> >\n> > The added regression tests show how this feature works, that the new\n> > feature works, and validate that lock levels are acceptable\n> > (ShareUpdateExclusiveLock, same as for updating fillfactor).\n>\n> I have looked at your patch, and I must say that I like it. Having\n> a size limit is better than my original idea of just \"on\" or \"off\".\n> Essentially, it is \"try to shrink the table if it grows above a limit\".\n>\n> The patch builds fine and passes all regression tests.\n>\n> Documentation is missing.\n\nYes, the first patch was a working proof-of-concept. Here's a new one\nwith documentation.\n\n> I agree that the name \"max_local_update\" could be improved.\n> Perhaps \"avoid_hot_above_size_mb\".\n>\n> --- a/src/include/utils/rel.h\n> +++ b/src/include/utils/rel.h\n> @@ -342,6 +342,7 @@ typedef struct StdRdOptions\n> int parallel_workers; /* max number of parallel workers */\n> StdRdOptIndexCleanup vacuum_index_cleanup; /* controls index vacuuming */\n> bool vacuum_truncate; /* enables vacuum to truncate a relation */\n> + int max_local_update; /* Updates to pages after this block must go through the VM */\n> } StdRdOptions;\n>\n> #define HEAP_MIN_FILLFACTOR 10\n>\n> In the comment, it should be FSM, not VM, right?\n\nGood catch.\n\nIn this new patch, I've updated a few comments to get mostly within\nline length limits; the name of the storage parameter is now\n\"local_update_limit\", as per discussion on naming.\nI've also added local_update_limit to psql's autocomplete file, and\nadded documentation on how the parameter behaves - including warnings\n- in create_table.sgml.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 28 Aug 2023 15:51:07 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 10:52 AM Matthias van de Meent\n<[email protected]> wrote:\n> In this new patch, I've updated a few comments to get mostly within\n> line length limits; the name of the storage parameter is now\n> \"local_update_limit\", as per discussion on naming.\n> I've also added local_update_limit to psql's autocomplete file, and\n> added documentation on how the parameter behaves - including warnings\n> - in create_table.sgml.\n\nI feel like this is the sort of setting that experts will sometimes be\nable to use to improve the situation, and non-experts will have great\ndifficulty using. It relies on the user to know what size limit will\nwork out well, which probably involves knowing how much real data is\nin the table, and how that's going to change over time, and probably\nalso some things about how PostgreSQL does space management\ninternally. I don't know that I'd be able to guide a non-expert user\nin how to make effective use of this as a tool.\n\nI don't know exactly what to propose, but I would definitely like it\nif we could come up with something with which a casual user would be\nless likely to shoot themselves in the foot and more likely to derive\na benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Aug 2023 11:14:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 17:14, Robert Haas <[email protected]> wrote:\n>\n> On Mon, Aug 28, 2023 at 10:52 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > In this new patch, I've updated a few comments to get mostly within\n> > line length limits; the name of the storage parameter is now\n> > \"local_update_limit\", as per discussion on naming.\n> > I've also added local_update_limit to psql's autocomplete file, and\n> > added documentation on how the parameter behaves - including warnings\n> > - in create_table.sgml.\n>\n> I feel like this is the sort of setting that experts will sometimes be\n> able to use to improve the situation, and non-experts will have great\n> difficulty using. It relies on the user to know what size limit will\n> work out well, which probably involves knowing how much real data is\n> in the table, and how that's going to change over time, and probably\n> also some things about how PostgreSQL does space management\n> internally. I don't know that I'd be able to guide a non-expert user\n> in how to make effective use of this as a tool.\n\nAgreed on all points. But isn't that true for most most tools on bloat\nprevention and/or detection? E.g. fillfactor, autovacuum_*, ...\n\n> I don't know exactly what to propose, but I would definitely like it\n> if we could come up with something with which a casual user would be\n> less likely to shoot themselves in the foot and more likely to derive\n> a benefit.\n\nI'd prefer that too, but by lack of other work in this area this seems\nlike it fills a niche that would otherwise require extremely expensive\nlocking over a long time for CLUSTER, superuser+pg_repack, or manual\nscripts that update tuples until they're located on a different page\n(begin; update tuple WHERE ctid > '(12,0)' returning ctid; ...;\ncommit;). I agree this is very minimal and can definitely be used as a\nfootgun, but with the description that it can be a footgun I don't\nthink it's (much) worse than the current situation - a user should\nonly reach for this once they've realized they actually have an issue.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:49:50 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
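For illustration, the manual workaround mentioned in the message above (repeatedly updating rows until they leave their page) could be spelled out roughly as below. This is only a sketch under assumptions: the table tuple_tab, its column id, and block 12 as the cut-off are placeholders, and without the proposed parameter a new row version often stays on the same page, so several passes followed by a VACUUM may be needed before the tail of the table empties out.

-- Rewrite the rows that still live beyond block 12, then let VACUUM
-- truncate the empty tail of the table.
BEGIN;
UPDATE tuple_tab
   SET id = id                -- no-op change that forces a new row version
 WHERE ctid > '(12,0)'
RETURNING ctid;               -- shows where the new versions ended up
COMMIT;
VACUUM tuple_tab;             -- trailing empty pages can now be returned to the OS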
{
"msg_contents": "On Mon, Aug 28, 2023 at 11:50 AM Matthias van de Meent\n<[email protected]> wrote:\n> Agreed on all points. But isn't that true for most most tools on bloat\n> prevention and/or detection? E.g. fillfactor, autovacuum_*, ...\n\nNot nearly to the same extent, IMHO. A lot of those parameters can be\nleft alone forever and you lose nothing. That's not so here.\n\n> I'd prefer that too, but by lack of other work in this area this seems\n> like it fills a niche that would otherwise require extremely expensive\n> locking over a long time for CLUSTER, superuser+pg_repack, or manual\n> scripts that update tuples until they're located on a different page\n> (begin; update tuple WHERE ctid > '(12,0)' returning ctid; ...;\n> commit;). I agree this is very minimal and can definitely be used as a\n> footgun, but with the description that it can be a footgun I don't\n> think it's (much) worse than the current situation - a user should\n> only reach for this once they've realized they actually have an issue.\n\nWell, I sort of expected that counter-argument, but I'm not sure that I buy it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Aug 2023 11:57:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Mon, 28 Aug 2023 at 17:57, Robert Haas <[email protected]> wrote:\n>\n> On Mon, Aug 28, 2023 at 11:50 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > Agreed on all points. But isn't that true for most most tools on bloat\n> > prevention and/or detection? E.g. fillfactor, autovacuum_*, ...\n>\n> Not nearly to the same extent, IMHO. A lot of those parameters can be\n> left alone forever and you lose nothing. That's not so here.\n\nI've reworked the patch a bit to remove the \"excessive bloat with low\nfillfactors when local space is available\" issue that this parameter\ncould cause - local updates are now done if the selected page we would\nbe inserting into is after the old tuple's page and the old tuple's\npage still (or: now) has space available.\n\nDoes that alleviate your concerns?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Wed, 30 Aug 2023 15:01:36 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:01 AM Matthias van de Meent\n<[email protected]> wrote:\n> I've reworked the patch a bit to remove the \"excessive bloat with low\n> fillfactors when local space is available\" issue that this parameter\n> could cause - local updates are now done if the selected page we would\n> be inserting into is after the old tuple's page and the old tuple's\n> page still (or: now) has space available.\n>\n> Does that alleviate your concerns?\n\nThat seems like a good chance, but my core concern is around people\nhaving to micromanage local_update_limit, and probably either not\nknowing how to do it properly, or not being able or willing to keep\nupdating it as things change.\n\nIn a way, this parameter is a lot like work_mem, which is notoriously\nvery difficult to tune. If you set it too high, you run out of memory.\nIf you set it too low, you get bad plans. You can switch from having\none of those problems to having the other very quickly as load changs,\nand sometimes you can have both at the same time. If an omniscient\noracle could set work_mem properly for every query based not only on\nwhat the query does but the state of the system at that moment, it\nwould still be a very crude parameter, and since omniscient oracles\nare rare in practice, problems are reasonably common. I think that if\nwe add this parameter, it's going to end up in the same category. A\nlot of people will ignore it, and they'll be OK, but 30% of the people\nwho do try to use it will shoot themselves in the foot, or something\nlike that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:31:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 9:31 AM Robert Haas <[email protected]> wrote:\n> That seems like a good chance, but\n\n*change\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Aug 2023 09:31:41 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 30 Aug 2023 at 15:31, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Aug 30, 2023 at 9:01 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > I've reworked the patch a bit to remove the \"excessive bloat with low\n> > fillfactors when local space is available\" issue that this parameter\n> > could cause - local updates are now done if the selected page we would\n> > be inserting into is after the old tuple's page and the old tuple's\n> > page still (or: now) has space available.\n> >\n> > Does that alleviate your concerns?\n>\n> That seems like a good chance, but my core concern is around people\n> having to micromanage local_update_limit, and probably either not\n> knowing how to do it properly, or not being able or willing to keep\n> updating it as things change.\n\nAssuming you do want to provide a way to users to solve the issue of\n\"there is a lot of free space in the table, but I don't want to take\nan access exclusive lock or wait for new inserts to fix the issue\",\nhow would you suggest we do that then?\n\nAlternative approaches that I can think of are:\n\n- A %-based parameter.\n This does scale with the table, but doesn't stop being a performance\nhog once you've reached the optimal table size, and thus also needs to\nbe disabled.\n\n- Measure the parameter from the end of the table, instead of from the\nfront; i.e. \"try to empty the last X=50 MBs of the table\".\n Scales with the table, but same issue as above - once the table has\nan optimal size, it doesn't stop.\n\n- Install one more dynamic system to move the tuples to a better page,\none the users don't directly control (yet to be designed).\n I don't know if or when this will be implemented and what benefits\nit will have, but we don't have access to a lot of state in\ntable_tuple_update or heap_update, so any data needs special lookup.\n\n- Let users keep using VACUUM FULL and CLUSTER instead.\n I don't think this is a reasonable solution.\n\n> In a way, this parameter is a lot like work_mem, which is notoriously\n> very difficult to tune. If you set it too high, you run out of memory.\n> If you set it too low, you get bad plans. You can switch from having\n> one of those problems to having the other very quickly as load changs,\n> and sometimes you can have both at the same time. If an omniscient\n> oracle could set work_mem properly for every query based not only on\n> what the query does but the state of the system at that moment, it\n> would still be a very crude parameter, and since omniscient oracles\n> are rare in practice, problems are reasonably common. I think that if\n> we add this parameter, it's going to end up in the same category. A\n> lot of people will ignore it, and they'll be OK, but 30% of the people\n> who do try to use it will shoot themselves in the foot, or something\n> like that.\n\nThe \"shoot yourself in the foot\" in this case is limited to \"your\nUPDATE statement's performance is potentially Y times worse due to\nforced FSM lookups for every update at the end of the table\". I'll\nadmit that this is not great, but I'd say it is also not the end of\nthe world, and still much better than the performance differences that\nyou can see when the plan changes due to an updated work_mem.\n\nI'd love to have more contextual information available on the table's\nfree space distribution so that this decision could be made by the\nsystem, but that info just isn't available right now. 
We don't really\nhave infrastructure in place that would handle such information\neither, and table_tuple_update does not get to use reuse state across\ntuples, so any use of information will add cost for every update. With\nthis patch, the FSM cost is gated behind the storage parameter, and\nthus only limited, but I don't think we can store much more than\nstorage parameters in the Relation data.\n\nVACUUM / ANALYZE could probably create and store sketches about the\nfree space distribution in the relation, but that would widen the\nscope significantly, and I have only limited bandwidth available for\nthis.\nSo, while I do plan to implement any small changes or fixes required\nto get this in, a major change in direction for this patch won't put\nit anywhere high on my active items list.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 30 Aug 2023 18:11:05 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 2023-08-30 at 09:31 -0400, Robert Haas wrote:\n> On Wed, Aug 30, 2023 at 9:01 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > I've reworked the patch a bit to remove the \"excessive bloat with low\n> > fillfactors when local space is available\" issue that this parameter\n> > could cause - local updates are now done if the selected page we would\n> > be inserting into is after the old tuple's page and the old tuple's\n> > page still (or: now) has space available.\n> > \n> > Does that alleviate your concerns?\n> \n> That seems like a good chance, but my core concern is around people\n> having to micromanage local_update_limit, and probably either not\n> knowing how to do it properly, or not being able or willing to keep\n> updating it as things change.\n> \n> In a way, this parameter is a lot like work_mem, which is notoriously\n> very difficult to tune.\n\nI don't think that is a good comparison. While most people probably\nnever need to touch \"local_update_limit\", \"work_mem\" is something everybody\nhas to consider.\n\nAnd it is not so hard to tune: the setting would be the desired table\nsize, and you could use pgstattuple to find a good value.\n\nI don't know what other use cases come to mind, but I see it as a tool to\nshrink a table after it has grown big holes, perhaps after a mass delete.\nToday, you can only VACUUM (FULL) or play with the likes of pg_squeeze and\npg_repack.\n\nI think this is useful.\n\nTo alleviate your concerns, perhaps it would help to describe the use case\nand ideas for a good setting in the documentation.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 06 Sep 2023 05:15:45 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
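To make the pgstattuple suggestion above concrete, one rough way to derive a candidate value is sketched below. The table name bloated_tab and the 20% headroom are assumptions rather than anything prescribed by the patch, and the result would still need rounding to whole megabytes for the proposed reloption.

-- Compare the current on-disk size with the live data plus ~20% headroom.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT pg_size_pretty(table_len)                    AS current_size,
       pg_size_pretty((tuple_len * 1.2)::bigint)    AS rough_target,
       ceil(tuple_len * 1.2 / (1024 * 1024))        AS candidate_limit_mb
FROM pgstattuple('bloated_tab');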
{
"msg_contents": "On Tue, Sep 5, 2023 at 11:15 PM Laurenz Albe <[email protected]> wrote:\n> I don't think that is a good comparison. While most people probably\n> never need to touch \"local_update_limit\", \"work_mem\" is something everybody\n> has to consider.\n>\n> And it is not so hard to tune: the setting would be the desired table\n> size, and you could use pgstattuple to find a good value.\n\nWhat I suspect would happen, though, is that you'd end up tuning the\nvalue over and over. You'd set it to some value and after some number\nof vacuums maybe you'd realize that you could save even more disk\nspace if you reduced it a bit further or maybe your data set would\ngrow a bit and you'd have to increase it a little (or a lot). And if\nyou didn't keep adjusting it then maybe something quite bad would\nhappen to your database.\n\nwork_mem isn't quite the same in the sense that most people don't need\nto keep on iteratively tuning work_mem, at least not in my experience.\nYou figure out a value that works OK in practice and then leave it\nalone. The problem is mostly discovering what that initial value ought\nto be, which is often hard. But what is the same here and in the case\nof work_mem is that you can suddenly get hosed if the situation\nchanges substantially and you don't respond by updating the parameter\nsetting. In the case of work_mem, again in my experience, it's quite\ncommon for people to suddenly find themselves in a lot of trouble if\nthey have a load spike, because now they're running a lot more copies\nof the same query and the machine runs out of memory. The equivalent\nproblem here would be if the table suddenly gets a lot bigger due to a\nload spike or some change in the way the application is used. Then\nsuddenly, a setting that was previously serving to keep the table\npleasantly small and un-bloated on disk is instead causing tons of\nupdates that would have been HOT to become non-HOT, which could very\neasily result in both the table and its indexes bloating quite\nrapidly. I really don't like the idea of an anti-bloat feature that,\nwhen set to the wrong value, becomes a bloat-amplification feature. I\ndon't know how to describe that other than \"fragile and dangerous.\"\n\nImagine a hypothetical feature that knew how small the table could\nreasonably be kept, say by magic, and did non-HOT updates instead of\nHOT updates whenever doing so would allow moving a tuple from a page\nbeyond that magical boundary to an earlier page. Such a feature would\nnot have the downsides that this one does -- if there were\nopportunities to make the table smaller, the system would take\nadvantage of them automatically, and if the table grew, the system\nwould automatically become more relaxed to stay out of trouble. Such a\nfeature is clearly more work to design and implement than what is\nproposed here, but it would also work a lot better in practice. In\nfact, I daresay that if we accept the feature as proposed, somebody's\ngoing to go out and write a tool to calculate what the threshold ought\nto be and automatically adjust it as things change. Users of the tool\nwill then divide into two camps:\n\n- People who try to tune it manually and get burned if anything\nchanges on their system.\n- People who use that out-of-core tool.\n\nSo the out-of-core tool that does this tuning becomes a stealth\ndependency for any user who is facing this problem. Gosh, don't we\nhave enough of those already? 
Connection pooling being perhaps the\nmost obvious example, but far from the only one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 12:22:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Mon, 2023-09-18 at 12:22 -0400, Robert Haas wrote:\n> On Tue, Sep 5, 2023 at 11:15 PM Laurenz Albe <[email protected]> wrote:\n> > I don't think that is a good comparison. While most people probably\n> > never need to touch \"local_update_limit\", \"work_mem\" is something everybody\n> > has to consider.\n> > \n> > And it is not so hard to tune: the setting would be the desired table\n> > size, and you could use pgstattuple to find a good value.\n> \n> What I suspect would happen, though, is that you'd end up tuning the\n> value over and over. You'd set it to some value and after some number\n> of vacuums maybe you'd realize that you could save even more disk\n> space if you reduced it a bit further or maybe your data set would\n> grow a bit and you'd have to increase it a little (or a lot). And if\n> you didn't keep adjusting it then maybe something quite bad would\n> happen to your database.\n\nThere is that risk, yes.\n\n> work_mem isn't quite the same [...] But what is the same here and in the case\n> of work_mem is that you can suddenly get hosed if the situation\n> changes substantially and you don't respond by updating the parameter\n> setting. In the case of work_mem, again in my experience, it's quite\n> common for people to suddenly find themselves in a lot of trouble if\n> they have a load spike, because now they're running a lot more copies\n> of the same query and the machine runs out of memory.\n\nSo the common ground is \"both parameters are not so easy to get right,\nand if you get them wrong, it's a problem\". For me the big difference is\nthat while you pretty much have to tune \"work_mem\", you can normally just ignore\n\"local_update_limit\".\n\n> The equivalent\n> problem here would be if the table suddenly gets a lot bigger due to a\n> load spike or some change in the way the application is used. Then\n> suddenly, a setting that was previously serving to keep the table\n> pleasantly small and un-bloated on disk is instead causing tons of\n> updates that would have been HOT to become non-HOT, which could very\n> easily result in both the table and its indexes bloating quite\n> rapidly. I really don't like the idea of an anti-bloat feature that,\n> when set to the wrong value, becomes a bloat-amplification feature. I\n> don't know how to describe that other than \"fragile and dangerous.\"\n\nYes, you can hurt yourself that way. But that applies to many other\nsettings as well. You can tank your performance with a bad value for\n\"commit_delay\", \"hot_standby_feedback\" can bloat your primary, and\nso on. Still we consider these useful parameters.\n\n> Imagine a hypothetical feature that knew how small the table could\n> reasonably be kept, say by magic, and did non-HOT updates instead of\n> HOT updates whenever doing so would allow moving a tuple from a page\n> beyond that magical boundary to an earlier page. Such a feature would\n> not have the downsides that this one does -- if there were\n> opportunities to make the table smaller, the system would take\n> advantage of them automatically, and if the table grew, the system\n> would automatically become more relaxed to stay out of trouble. Such a\n> feature is clearly more work to design and implement than what is\n> proposed here, but it would also work a lot better in practice.\n\nThat sounds a bit like we should not have \"shared_buffers\" unless we\nhave a magical tool built in that gets the value right automatically.\nYes, the better is the enemy of the good. 
You can kill everything with\na line of reasoning like that.\n\n> In\n> fact, I daresay that if we accept the feature as proposed, somebody's\n> going to go out and write a tool to calculate what the threshold ought\n> to be and automatically adjust it as things change. Users of the tool\n> will then divide into two camps:\n> \n> - People who try to tune it manually and get burned if anything\n> changes on their system.\n> - People who use that out-of-core tool.\n> \n> So the out-of-core tool that does this tuning becomes a stealth\n> dependency for any user who is facing this problem. Gosh, don't we\n> have enough of those already? Connection pooling being perhaps the\n> most obvious example, but far from the only one.\n\nI cannot follow you there. What I envision is that \"local_update_limit\"\nis not set permanently on a table. You set it when you realize your table\ngot bloated. Then you wait until the bloat goes away or you launch a\ncouple of UPDATEs that eventually shrink the table. Then you reset\n\"local_update_limit\" again.\nIt's a more difficult, but less invasive alternative to VACUUM (FULL).\n\nIf a setting is hard to understand and hard to get right, we could invest\nin good documentation that explains the use cases and pitfalls.\nWouldn't that go a long way towards defusing this perceived footgun?\nI am aware that a frightening number of users don't read documentation,\nbut I find it hard to believe that anyone would twiddle a non-obvious\nknob like \"local_update_limit\" without first trying to figure out what\nit actually does.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 18 Sep 2023 22:02:04 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
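The temporary set-and-reset workflow described in the message above would, with the patch from this thread, look roughly like the sketch below. The table name bloated_tab and the 512 MB target are placeholders, and the statements assume local_update_limit is set and cleared like other storage parameters such as fillfactor.

-- Engage the limit only while deflating the table, then remove it again.
ALTER TABLE bloated_tab SET (local_update_limit = 512);  -- steer new row versions below ~512 MB
-- ... let the regular UPDATE workload run, or touch rows near the end of the table ...
VACUUM bloated_tab;                                      -- truncate the pages that emptied out
ALTER TABLE bloated_tab RESET (local_update_limit);      -- restore normal behaviour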
{
"msg_contents": "On 2023-Sep-18, Robert Haas wrote:\n\n> On Tue, Sep 5, 2023 at 11:15 PM Laurenz Albe <[email protected]> wrote:\n> > I don't think that is a good comparison. While most people probably\n> > never need to touch \"local_update_limit\", \"work_mem\" is something everybody\n> > has to consider.\n> >\n> > And it is not so hard to tune: the setting would be the desired table\n> > size, and you could use pgstattuple to find a good value.\n> \n> What I suspect would happen, though, is that you'd end up tuning the\n> value over and over. You'd set it to some value and after some number\n> of vacuums maybe you'd realize that you could save even more disk\n> space if you reduced it a bit further or maybe your data set would\n> grow a bit and you'd have to increase it a little (or a lot). And if\n> you didn't keep adjusting it then maybe something quite bad would\n> happen to your database.\n\nAs I understand it, the setting being proposed is useful as an emergency\nfor removing excessive bloat -- a substitute for VACUUM FULL when you\ndon't want to lock the table for long. Trying to use it as a permanent\ngadget is going to be misguided. So my first thought is that we should\ntell people to use it that way: if you're not in the irrecoverable-space\nsituation, just do not use this. Then we don't have to worry about\npeople misusing it the way you imagine.\n\nSecond, I think we should make it auto-reset. That is, have the user\nset some value; later, when some condition triggers (say, the table size\nis 1.2x the limit value you configured), then the local_update_limit is\nautomatically removed from the table options. From that point onwards,\nthe table is operated normally.\n\nThis removes the other concern that makes the system behaves\nsuboptimally because some DBA in the past decade left this set for no\ngood reason: if you run into an emergency, then you activate the\nemergency escape hatch, and it will close on its own as soon as the\nemergency is over.\n\nThis also dissuades people from using it for these other things you\ndescribe. It just won't work.\n\n\nThe point here is that third-party tools such as pg_repack or pg_squeeze\nexist, which work in a way we don't like, yet we offer no alternative.\nThis proposal is a mechanism that essentially replaces those tools with\na simple in-core feature, without having to include the tool itself in\ncore.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:26:36 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 6:26 AM Alvaro Herrera <[email protected]> wrote:\n> Second, I think we should make it auto-reset. That is, have the user\n> set some value; later, when some condition triggers (say, the table size\n> is 1.2x the limit value you configured), then the local_update_limit is\n> automatically removed from the table options. From that point onwards,\n> the table is operated normally.\n\nThat's an interesting idea. It would require taking AEL on the table.\nAnd also, what do you mean by 1.2x the limit value? Is that supposed\nto be a >= condition or a <= condition? It can't really be a >=\ncondition, but you wouldn't set it in the first place unless the table\nwere significantly bigger than it could be. But if it's a <= condition\nit doesn't really protect you from hosing yourself. You just have to\ninsert a bit more data before enough of the bloat gets removed, and\nnow the table just bloats infinitely and probably rather quickly. The\ncorrect value of the setting depends on the amount of real data\n(non-bloat) in the table, not the actual table size.\n\n> The point here is that third-party tools such as pg_repack or pg_squeeze\n> exist, which work in a way we don't like, yet we offer no alternative.\n> This proposal is a mechanism that essentially replaces those tools with\n> a simple in-core feature, without having to include the tool itself in\n> core.\n\nI agree that it would be nice to have something in core that can be\nused to help with this problem, but this feature isn't the same thing\nas pg_repack or pg_squeeze, either. In some ways, it's better, because\nit can shrink the table without rewriting it, which is very desirable.\nBut in other ways, it's worse, and the fact that it seems like it can\nbackfire spectacularly if you set the wrong value seems like one big\nway that it is a lot worse. If there is a way that we can make this a\nmode that you activate for a table, and the system calculates and\nupdates the threshold, I think that would actually be a pretty good\nfeature. It would be tricky to use it to recover from acute\nemergencies, because it doesn't actually do anything until updates\nhappen, but you could use it for that in a pinch. And even without\nthat it would be useful if you have a table that is sometimes very\nlarge and sometimes very small and you want to get the space back from\nthe OS when it is in the small phase of its lifecycle.\n\nBut without any kind of auto-tuning, in my opinion, it's a fairly poor\nfeature. Sure, some people will get use out of it, if they're\nsufficiently knowledgeable and sufficiently determined. But I think\nfor most people in most situations, it will be a struggle.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:09:09 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On 2023-Sep-19, Robert Haas wrote:\n\n> On Tue, Sep 19, 2023 at 6:26 AM Alvaro Herrera <[email protected]> wrote:\n> > Second, I think we should make it auto-reset. That is, have the user\n> > set some value; later, when some condition triggers (say, the table size\n> > is 1.2x the limit value you configured), then the local_update_limit is\n> > automatically removed from the table options. From that point onwards,\n> > the table is operated normally.\n> \n> That's an interesting idea. It would require taking AEL on the table.\n> And also, what do you mean by 1.2x the limit value? Is that supposed\n> to be a >= condition or a <= condition? It can't really be a >=\n> condition, but you wouldn't set it in the first place unless the table\n> were significantly bigger than it could be. But if it's a <= condition\n> it doesn't really protect you from hosing yourself. You just have to\n> insert a bit more data before enough of the bloat gets removed, and\n> now the table just bloats infinitely and probably rather quickly. The\n> correct value of the setting depends on the amount of real data\n> (non-bloat) in the table, not the actual table size.\n\nI was thinking something vaguely like \"a table size that's roughly what\nan optimal autovacuuming schedule would leave the table at\" assuming 0.2\nvacuum_scale_factor. You would determine the absolute minimum size for\nthe table given the current live tuples in the table, then add 20% to\naccount for a steady state of dead tuples and vacuumed space. So it's\nnot 1.2x of the \"current\" table size at the time the local_update_limit\nfeature is installed, but 1.2x of the optimal table size.\n\nThis makes me think that maybe the logic needs to be a little more\ncomplex to avoid the problem you describe: if an UPDATE is prevented\nfrom being HOT because of this setting, but then it goes and consults\nFSM and it gives the update a higher block number than the tuple's\ncurrent block (or it fails to give a block number at all so it is forced\nto extend the relation), then the update should give up on that strategy\nand use a HOT update after all. (I have not read the actual patch;\nmaybe it already does this? It sounds kinda obvious.)\n\n\nHaving to set AEL is not nice for sure, but wouldn't\nShareUpdateExclusiveLock be sufficient? We have a bunch of reloptions\nfor which that is sufficient.\n\n\n> But without any kind of auto-tuning, in my opinion, it's a fairly poor\n> feature. Sure, some people will get use out of it, if they're\n> sufficiently knowledgeable and sufficiently determined. But I think\n> for most people in most situations, it will be a struggle.\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n",
"msg_date": "Tue, 19 Sep 2023 18:30:44 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 12:30 PM Alvaro Herrera <[email protected]> wrote:\n> I was thinking something vaguely like \"a table size that's roughly what\n> an optimal autovacuuming schedule would leave the table at\" assuming 0.2\n> vacuum_scale_factor. You would determine the absolute minimum size for\n> the table given the current live tuples in the table, then add 20% to\n> account for a steady state of dead tuples and vacuumed space. So it's\n> not 1.2x of the \"current\" table size at the time the local_update_limit\n> feature is installed, but 1.2x of the optimal table size.\n\nRight, that would be great. And honestly if that's something we can\nfigure out, then why does the parameter even need to be an integer\ninstead of a Boolean? If the system knows the optimal table size, then\nthe user can just say \"try to compact this table\" and need not say to\nwhat size. The 1.2 multiplier is probably situation dependent and\nmaybe the multiplier should indeed be a configuration parameter, but\nwe would be way better off if the absolute size didn't need to be.\n\n> This makes me think that maybe the logic needs to be a little more\n> complex to avoid the problem you describe: if an UPDATE is prevented\n> from being HOT because of this setting, but then it goes and consults\n> FSM and it gives the update a higher block number than the tuple's\n> current block (or it fails to give a block number at all so it is forced\n> to extend the relation), then the update should give up on that strategy\n> and use a HOT update after all. (I have not read the actual patch;\n> maybe it already does this? It sounds kinda obvious.)\n\n+1 to all of that. Anything we can do to reduce the chance of the\nparameter doing the opposite of what it's intended to do is, IMHO,\nreally, really valuable. If you're in the situation where you really\nneed something like this, you're probably having a pretty bad day\nalready.\n\nJust to be more clear about my position, I don't think that having\nsome kind of a feature along these lines is a bad idea. I do think\nthat this is one of those cases where the perfect is the enemy of the\ngood, and we can fall into the trap of saying that since we can't do\nthe perfect thing let's not do anything at all. At the same time, just\nbecause we need to do something doesn't mean we should do exactly the\nfirst thing that anybody thought up, or that we shouldn't try as hard\nas we can to mitigate the downsides. If we add something like this I\nbet it will get a lot of use. Even a minor improvement to the design\nthat removes one pitfall of many could turn out to help a lot of\npeople. If we could get to the point where most people have a positive\nuser experience without too much effort, this could turn out to be one\nof the most impactful features in years.\n\n> Having to set AEL is not nice for sure, but wouldn't\n> ShareUpdateExclusiveLock be sufficient? We have a bunch of reloptions\n> for which that is sufficient.\n\nHmm, yeah, I think you're right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:52:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-19 18:30:44 +0200, Alvaro Herrera wrote:\n> This makes me think that maybe the logic needs to be a little more\n> complex to avoid the problem you describe: if an UPDATE is prevented\n> from being HOT because of this setting, but then it goes and consults\n> FSM and it gives the update a higher block number than the tuple's\n> current block (or it fails to give a block number at all so it is forced\n> to extend the relation), then the update should give up on that strategy\n> and use a HOT update after all. (I have not read the actual patch;\n> maybe it already does this? It sounds kinda obvious.)\n\nYea, a setting like what's discussed here seems, uh, not particularly useful\nfor achieving the goal of compacting tables. I don't think guiding this\nthrough SQL makes a lot of sense. For decent compaction you'd want to scan the\ntable backwards, and move rows from the end to earlier, but stop once\neverything is filled up. You can somewhat do that from SQL, but it's going to\nbe awkward and slow. I doubt you even want to use the normal UPDATE WAL\nlogging.\n\nI think having explicit compaction support in VACUUM or somewhere similar\nwould make sense, but I don't think the proposed GUC is a useful stepping\nstone.\n\n\n> > But without any kind of auto-tuning, in my opinion, it's a fairly poor\n> > feature. Sure, some people will get use out of it, if they're\n> > sufficiently knowledgeable and sufficiently determined. But I think\n> > for most people in most situations, it will be a struggle.\n\nIndeed. I think it'd often just explode table and index sizes, because HOT\npruning won't be able to make usable space in pages anymore (due to dead\nitems).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Sep 2023 09:56:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, 19 Sept 2023 at 18:56, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-09-19 18:30:44 +0200, Alvaro Herrera wrote:\n> > This makes me think that maybe the logic needs to be a little more\n> > complex to avoid the problem you describe: if an UPDATE is prevented\n> > from being HOT because of this setting, but then it goes and consults\n> > FSM and it gives the update a higher block number than the tuple's\n> > current block (or it fails to give a block number at all so it is forced\n> > to extend the relation), then the update should give up on that strategy\n> > and use a HOT update after all. (I have not read the actual patch;\n> > maybe it already does this? It sounds kinda obvious.)\n>\n> Yea, a setting like what's discussed here seems, uh, not particularly useful\n> for achieving the goal of compacting tables. I don't think guiding this\n> through SQL makes a lot of sense. For decent compaction you'd want to scan the\n> table backwards, and move rows from the end to earlier, but stop once\n> everything is filled up. You can somewhat do that from SQL, but it's going to\n> be awkward and slow. I doubt you even want to use the normal UPDATE WAL\n> logging.\n\nWe can't move tuples around (or, not that I know of) without using a\ntransaction ID to control the visibility of the two locations of that\ntuple. Doing table compaction would thus likely require using\ntransactions to move these tuples around. Using a single backend and\nbulk operations, it'll still lock each tuple that is being moved, and\nthat can be noticed by user DML queries. I'd rather make the user's\nqueries move the data around than this long-duration, locking\nbackground operation.\n\n> I think having explicit compaction support in VACUUM or somewhere similar\n> would make sense, but I don't think the proposed GUC is a useful stepping\n> stone.\n\nThe point of this GUC is that the compaction can happen organically in\nthe user's UPDATE workflow, so that there is no long locking operation\ngoing on (as you would see with VACUUM FULL / CLUSTER / pg_repack).\n\n> > > But without any kind of auto-tuning, in my opinion, it's a fairly poor\n> > > feature. Sure, some people will get use out of it, if they're\n> > > sufficiently knowledgeable and sufficiently determined. But I think\n> > > for most people in most situations, it will be a struggle.\n>\n> Indeed. I think it'd often just explode table and index sizes, because HOT\n> pruning won't be able to make usable space in pages anymore (due to dead\n> items).\n\nYou seem to misunderstand the latest patch. It explicitly only blocks\nlocal updates if the update can then move the new tuple to an earlier\npage. If that is not possible, then it'll insert locally (assuming\nthat is still possible) and HOT can then still apply.\n\nAnd yes, moving tuples to earlier pages will indeed increase index\nbloat, because it does create dead tuples where previously we could've\napplied HOT. But we do have VACUUM and REINDEX CONCURRENTLY to clean\nthat up without serious long-duration stop-the-world actions, while\nthe other builtin cleanup methods don't.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Sep 2023 19:33:22 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, 19 Sept 2023 at 18:52, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Sep 19, 2023 at 12:30 PM Alvaro Herrera <[email protected]> wrote:\n> > I was thinking something vaguely like \"a table size that's roughly what\n> > an optimal autovacuuming schedule would leave the table at\" assuming 0.2\n> > vacuum_scale_factor. You would determine the absolute minimum size for\n> > the table given the current live tuples in the table, then add 20% to\n> > account for a steady state of dead tuples and vacuumed space. So it's\n> > not 1.2x of the \"current\" table size at the time the local_update_limit\n> > feature is installed, but 1.2x of the optimal table size.\n>\n> Right, that would be great. And honestly if that's something we can\n> figure out, then why does the parameter even need to be an integer\n> instead of a Boolean? If the system knows the optimal table size, then\n> the user can just say \"try to compact this table\" and need not say to\n> what size. The 1.2 multiplier is probably situation dependent and\n> maybe the multiplier should indeed be a configuration parameter, but\n> we would be way better off if the absolute size didn't need to be.\n\nMostly agreed, but I think there's a pitfall here. You seem to assume\nwe have a perfect oracle that knows the optimal data size, but we\nalready know that our estimates can be significantly off. I don't\nquite trust the statistics enough to do any calculations based on the\nnumber of tuples in the relation. That also ignores the fact that we\ndon't actually have any good information about the average size of the\ntuples in the table. So with current statistics, any automated \"this\nis how large the table should be\" decisions would result in an\nautomated footgun, instead of the current patch's where the user has\nto decide to configure it to an explicit value.\n\nBut about that: I'm not sure what the \"footgun\" is that you've\nmentioned recently?\nThe issue with excessive bloat (when the local_update_limit is set too\nsmall and fillfactor is low) was fixed in the latest patch nearly\nthree weeks ago, so the only remaining issue with misconfiguration is\nslower updates. Sure, that's not great, but in my opinion not a\n\"footgun\": performance returns immediately after resetting\nlocal_update_limit, and no space was lost.\n\n> > This makes me think that maybe the logic needs to be a little more\n> > complex to avoid the problem you describe: if an UPDATE is prevented\n> > from being HOT because of this setting, but then it goes and consults\n> > FSM and it gives the update a higher block number than the tuple's\n> > current block (or it fails to give a block number at all so it is forced\n> > to extend the relation), then the update should give up on that strategy\n> > and use a HOT update after all. (I have not read the actual patch;\n> > maybe it already does this? It sounds kinda obvious.)\n>\n> +1 to all of that. Anything we can do to reduce the chance of the\n> parameter doing the opposite of what it's intended to do is, IMHO,\n> really, really valuable. 
If you're in the situation where you really\n> need something like this, you're probably having a pretty bad day\n> already.\n\nYes, it does that with the latest patch, from not quite 3 weeks ago.\n\n> Just to be more clear about my position, I don't think that having\n> some kind of a feature along these lines is a bad idea.\n\nThanks for clarifying.\n\n> I do think\n> that this is one of those cases where the perfect is the enemy of the\n> good, and we can fall into the trap of saying that since we can't do\n> the perfect thing let's not do anything at all. At the same time, just\n> because we need to do something doesn't mean we should do exactly the\n> first thing that anybody thought up, or that we shouldn't try as hard\n> as we can to mitigate the downsides. If we add something like this I\n> bet it will get a lot of use. Even a minor improvement to the design\n> that removes one pitfall of many could turn out to help a lot of\n> people.\n\n100% agreed.\n\n> > Having to set AEL is not nice for sure, but wouldn't\n> > ShareUpdateExclusiveLock be sufficient? We have a bunch of reloptions\n> > for which that is sufficient.\n>\n> Hmm, yeah, I think you're right.\n\nUpdating the reloption after relation truncation implies having the\nsame lock as relation truncation, i.e. AEL (if the vacuum docs are to\nbe believed). So the AEL is not required for updating the storage\noption (that would require SUEL), but for the block truncation\noperation.\n\nKind regards,\n\nMatthias van de Meent\nNeon (http://neon.tech)\n\n\n",
"msg_date": "Tue, 19 Sep 2023 20:20:06 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 12:56 PM Andres Freund <[email protected]> wrote:\n> Yea, a setting like what's discussed here seems, uh, not particularly useful\n> for achieving the goal of compacting tables. I don't think guiding this\n> through SQL makes a lot of sense. For decent compaction you'd want to scan the\n> table backwards, and move rows from the end to earlier, but stop once\n> everything is filled up. You can somewhat do that from SQL, but it's going to\n> be awkward and slow. I doubt you even want to use the normal UPDATE WAL\n> logging.\n>\n> I think having explicit compaction support in VACUUM or somewhere similar\n> would make sense, but I don't think the proposed GUC is a useful stepping\n> stone.\n\nI think there's a difference between wanting to compact instantly and\nwanting to compact over time. I think that this kind of thing is\nreasonably well-suited to the latter, if we can engineer away the\ncases where it backfires.\n\nBut I know people will try to use it for instant compaction too, and\nthere it's worth remembering why we removed old-style VACUUM FULL. The\nmain problem is that it was mind-bogglingly slow. The other really bad\nproblem is that it caused massive index bloat. I think any system\nthat's based on moving around my tuples right now to make my table\nsmaller right now is likely to have similar issues.\n\nIn the case where you're trying to compact gradually, I think there\nare potentially serious issues with index bloat, but only potentially.\nIt seems like there are reasonable cases where it's fine.\nSpecifically, if you have relatively few indexes per table, relatively\nfew long-running transactions, and all tuples get updated on a\nsemi-regular basis, I'm thinking that you're more likely to win than\nlose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 14:50:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-19 19:33:22 +0200, Matthias van de Meent wrote:\n> On Tue, 19 Sept 2023 at 18:56, Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-09-19 18:30:44 +0200, Alvaro Herrera wrote:\n> > > This makes me think that maybe the logic needs to be a little more\n> > > complex to avoid the problem you describe: if an UPDATE is prevented\n> > > from being HOT because of this setting, but then it goes and consults\n> > > FSM and it gives the update a higher block number than the tuple's\n> > > current block (or it fails to give a block number at all so it is forced\n> > > to extend the relation), then the update should give up on that strategy\n> > > and use a HOT update after all. (I have not read the actual patch;\n> > > maybe it already does this? It sounds kinda obvious.)\n> >\n> > Yea, a setting like what's discussed here seems, uh, not particularly useful\n> > for achieving the goal of compacting tables. I don't think guiding this\n> > through SQL makes a lot of sense. For decent compaction you'd want to scan the\n> > table backwards, and move rows from the end to earlier, but stop once\n> > everything is filled up. You can somewhat do that from SQL, but it's going to\n> > be awkward and slow. I doubt you even want to use the normal UPDATE WAL\n> > logging.\n>\n> We can't move tuples around (or, not that I know of) without using a\n> transaction ID to control the visibility of the two locations of that\n> tuple.\n\nCorrect, otherwise you'd end up with broken visibility in scans (seeing the\nsame tuple twice or never).\n\n\n> Doing table compaction would thus likely require using transactions to move\n> these tuples around.\n\nYes - but I don't think that has to be a problem. I'd expect something like\nthis to use multiple transactions internally. Possibly optimizing xid usage by\nchecking if other transactions are currently waiting on the xid and committing\nif that's the case. Processing a single page should be quite fast, so the\nmaximum delay on other sessions is quite small.\n\n\n> Using a single backend and bulk operations, it'll still lock each tuple that\n> is being moved, and that can be noticed by user DML queries. I'd rather make\n> the user's queries move the data around than this long-duration, locking\n> background operation.\n\nI doubt that works well enough in practice. It's very common to have tuples\nthat aren't updated after some point. So you then end up with needing tooling\nthat triggers UPDATEs for tuples at the end of the relation.\n\n\n> > I think having explicit compaction support in VACUUM or somewhere similar\n> > would make sense, but I don't think the proposed GUC is a useful stepping\n> > stone.\n>\n> The point of this GUC is that the compaction can happen organically in\n> the user's UPDATE workflow, so that there is no long locking operation\n> going on (as you would see with VACUUM FULL / CLUSTER / pg_repack).\n\nIt certainly shouldn't use an AEL. I think we could even get away without an\nSUE (it's basically just UPDATEs after all), but whether it's worth doing that\nI'm not sure.\n\n\n> > > > But without any kind of auto-tuning, in my opinion, it's a fairly poor\n> > > > feature. Sure, some people will get use out of it, if they're\n> > > > sufficiently knowledgeable and sufficiently determined. But I think\n> > > > for most people in most situations, it will be a struggle.\n> >\n> > Indeed. 
I think it'd often just explode table and index sizes, because HOT\n> > pruning won't be able to make usable space in pages anymore (due to dead\n> > items).\n>\n> You seem to misunderstand the latest patch. It explicitly only blocks\n> local updates if the update can then move the new tuple to an earlier\n> page. If that is not possible, then it'll insert locally (assuming\n> that is still possible) and HOT can then still apply.\n\nI indeed apparently had looked at the wrong patch. But I still don't think\nthis is a useful way of controlling this. I guess it could be a small part of\nsomething larger, but you are going to need something that actively updates\ntuples at the end of the table, otherwise it's very unlikely in practice that\nyou'll ever be able to shrink the table.\n\n\nLeaving aside what process \"moves\" tuples, I doubt that controlling \"moving\"\nvia the table size is useful. Controlling via the amount free space in the FSM\nwould make more sense. If there's no known free space in the FSM, this\napproach can't compact. Using the table size to control also means that the\nvalue needs to be updated with the growth of the table. Whereas controlling\nmoving via a percentage of free space in the FSM would allow the same setting\nto be used even for a growing (or shrinking) table.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Sep 2023 11:55:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 2:20 PM Matthias van de Meent\n<[email protected]> wrote:\n> Mostly agreed, but I think there's a pitfall here. You seem to assume\n> we have a perfect oracle that knows the optimal data size, but we\n> already know that our estimates can be significantly off. I don't\n> quite trust the statistics enough to do any calculations based on the\n> number of tuples in the relation. That also ignores the fact that we\n> don't actually have any good information about the average size of the\n> tuples in the table. So with current statistics, any automated \"this\n> is how large the table should be\" decisions would result in an\n> automated footgun, instead of the current patch's where the user has\n> to decide to configure it to an explicit value.\n\nI'm not assuming that there's an oracle here. I'm hoping that there's\nsome way that we can construct one. If we can't, then I think we're\nasking the user to figure out a value that we don't have any idea how\nto compute ourselves. And I think that kind of thing is usually a bad\nidea. It's reasonable to ask the user for input when they know\nsomething relevant that we can't know, like how large they think their\ndatabase will get, or what hardware they're using. But it's not\nreasonable to essentially hope that the user is smarter than we are.\nThat's leaving our job half-undone and forcing the user into coping\nwith the result. And note that the value we need here is largely about\nthe present, not the future. The question is \"how small can the table\nbe practically made right now?\". And there is no reason at all to\nsuppose that the user is better-placed to answer that question than\nthe database itself.\n\n> But about that: I'm not sure what the \"footgun\" is that you've\n> mentioned recently?\n> The issue with excessive bloat (when the local_update_limit is set too\n> small and fillfactor is low) was fixed in the latest patch nearly\n> three weeks ago, so the only remaining issue with misconfiguration is\n> slower updates. Sure, that's not great, but in my opinion not a\n> \"footgun\": performance returns immediately after resetting\n> local_update_limit, and no space was lost.\n\nThat does seem like a very good change, but I'm not convinced that it\nsolves the whole problem. I would agree with your argument if the only\ndownside of enabling the feature were searching the FSM, failing to\nfind a suitable free page, and falling back to a HOT update. Such a\nthing might be slow, but it won't cause any bloat, and as you say, if\nthe feature doesn't do what you want, don't use it. But I think the\nfeature can still cause bloat.\n\nIf we're using this feature on a reasonably heavily-updated table,\nthen sometimes when we check whether any low-numbered pages have free\nspace, it will turn out that one of them does. This will happen even\nif local_update_limit is set far too low, because the table is\nheavily-updated, and sometimes that means tuples are moving around,\nleaving holes. So when there is a hole, i.e. 
just by luck we happen to\nfind some space on a low-numbered page, we'll suffer the cost of a\nnon-HOT update to move that tuple to an earlier page of the relation.\nHowever, there's a good chance that the next time we update that\ntuple, the page will have become completely full, because everybody's\nfuriously trying to jam as many tuples as possible into those\nlow-numbered pages, so now the tuple will have to bounce to some\nhigher-numbered page.\n\nSo I think what will happen if the local update limit is set too low,\nand the table is actually being updated a lot, is that we'll just\nuselessly do a bunch of HOT updates on high-numbered pages as non-HOT,\nwhich will fill up low-numbered pages turning even potentially HOT\nupdates on those pages to non-HOT as well. Doing a bunch of updates\nthat could have been HOT as non-HOT can for sure cause index bloat. It\ncould maybe also cause table bloat, because if we'd done the updates\nas HOT, we would have been able to recover the line pointers via\nHOT-pruning, but since we turned them into non-HOT updates, we have to\nwait for vacuum, which is comparatively much less frequent.\n\nI'm not quite sure how bad this residual problem is. It's certainly a\nlot better if a failed attempt to move a tuple earlier can turn into a\nnormal HOT update instead of a non-HOT update. But I don't think it\ncompletely eliminates the problem of useless tuple movement either.\n\nAs Andres points out, I think rightly, we should really be thinking\nabout ways to guide this behavior other than a page number. As you\npoint out, there's no guarantee that we can know the right page\nnumber. If we can, cool. But there are other approaches too. He\nmentions looking at how full the FSM is, which seems like an\ninteresting idea although surely we don't want every backend\nrepeatedly iterating over the FSM to recompute statistics. I wonder if\nthere are other good ideas we haven't thought of yet. Certainly, if\nyou found that you were frequently being forced to move tuples to\nhigher-numbered pages for lack of space anywhere else, that would be a\ngood sign that you were trying to squeeze the relation into too few\npages. But ideally you'd like to realize that you have a problem\nbefore things get to that point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 17:08:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, 2023-09-19 at 14:50 -0400, Robert Haas wrote:\n> But I know people will try to use it for instant compaction too, and\n> there it's worth remembering why we removed old-style VACUUM FULL. The\n> main problem is that it was mind-bogglingly slow. The other really bad\n> problem is that it caused massive index bloat. I think any system\n> that's based on moving around my tuples right now to make my table\n> smaller right now is likely to have similar issues.\n\nI had the same feeling that this is sort of bringing back old-style\nVACUUM (FULL). But I don't think that index bloat is a show stopper\nthese days, when we have REINDEX CONCURRENTLY, so I am not worried.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 20 Sep 2023 05:02:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Tue, 2023-09-19 at 12:52 -0400, Robert Haas wrote:\n> On Tue, Sep 19, 2023 at 12:30 PM Alvaro Herrera <[email protected]> wrote:\n> > I was thinking something vaguely like \"a table size that's roughly what\n> > an optimal autovacuuming schedule would leave the table at\" assuming 0.2\n> > vacuum_scale_factor. You would determine the absolute minimum size for\n> > the table given the current live tuples in the table, then add 20% to\n> > account for a steady state of dead tuples and vacuumed space. So it's\n> > not 1.2x of the \"current\" table size at the time the local_update_limit\n> > feature is installed, but 1.2x of the optimal table size.\n> \n> Right, that would be great. And honestly if that's something we can\n> figure out, then why does the parameter even need to be an integer\n> instead of a Boolean? If the system knows the optimal table size, then\n> the user can just say \"try to compact this table\" and need not say to\n> what size. The 1.2 multiplier is probably situation dependent and\n> maybe the multiplier should indeed be a configuration parameter, but\n> we would be way better off if the absolute size didn't need to be.\n\nI don't have high hopes for a reliable way to automatically determine\nthe target table size. There are these queries floating around to estimate\ntable bloat, which are used by various monitoring systems. I find that they\nget it right a lot of the time, but sometimes they get it wrong. Perhaps\nwe can do better than that, but I vastly prefer a setting that I can control\n(even at the danger that I can misconfigure it) over an automatism that I\ncannot control and that sometimes gets it wrong.\n\nI like Alvaro's idea to automatically reset \"local_update_limit\" when the\ntable has shrunk enough. Why not perform that task during vacuum truncation?\nIf vacuum truncation has taken place, check if the table size is no bigger\nthan \"local_update_limit\" * (1 + \"autovacuum_vacuum_scale_factor\"), and if\nit is no bigger, reset \"local_update_limit\". That way, we would not have\nto worry about a lock, because vacuum truncation already has the table locked.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 20 Sep 2023 05:18:20 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
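A rough sketch of the size check described in the message above, for illustration only: local_update_limit is just the proposed reloption (it does not exist in released PostgreSQL), so the value 1000 below is an arbitrary stand-in for a limit expressed in blocks, and some_table is a placeholder name.

    -- has the table shrunk below limit * (1 + autovacuum_vacuum_scale_factor)?
    SELECT pg_relation_size('some_table') / current_setting('block_size')::int
               AS current_size_in_blocks,
           1000 * (1 + current_setting('autovacuum_vacuum_scale_factor')::float)
               AS reset_threshold_in_blocks;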
{
"msg_contents": "Greetings,\n\n* Laurenz Albe ([email protected]) wrote:\n> On Tue, 2023-09-19 at 12:52 -0400, Robert Haas wrote:\n> > On Tue, Sep 19, 2023 at 12:30 PM Alvaro Herrera <[email protected]> wrote:\n> > > I was thinking something vaguely like \"a table size that's roughly what\n> > > an optimal autovacuuming schedule would leave the table at\" assuming 0.2\n> > > vacuum_scale_factor. You would determine the absolute minimum size for\n> > > the table given the current live tuples in the table, then add 20% to\n> > > account for a steady state of dead tuples and vacuumed space. So it's\n> > > not 1.2x of the \"current\" table size at the time the local_update_limit\n> > > feature is installed, but 1.2x of the optimal table size.\n> > \n> > Right, that would be great. And honestly if that's something we can\n> > figure out, then why does the parameter even need to be an integer\n> > instead of a Boolean? If the system knows the optimal table size, then\n> > the user can just say \"try to compact this table\" and need not say to\n> > what size. The 1.2 multiplier is probably situation dependent and\n> > maybe the multiplier should indeed be a configuration parameter, but\n> > we would be way better off if the absolute size didn't need to be.\n> \n> I don't have high hopes for a reliable way to automatically determine\n> the target table size. There are these queries floating around to estimate\n> table bloat, which are used by various monitoring systems. I find that they\n> get it right a lot of the time, but sometimes they get it wrong. Perhaps\n> we can do better than that, but I vastly prefer a setting that I can control\n> (even at the danger that I can misconfigure it) over an automatism that I\n> cannot control and that sometimes gets it wrong.\n\nNot completely against a setting- but would certainly prefer that this\nbe done in a more automated way, if possible.\n\nTo that end, my thought would be some kind of regular review of the FSM,\nor maybe actual review by walking through the table (as VACUUM already\ndoes...) to get an idea of where there's space and where there's used up\nareas and then use that to inform various operations (either VACUUM\nitself or perhaps UPDATEs from SQL). We could also try to 'start\nsimple' and look for cases that we can say \"well, that's definitely not\ngood\" and address those initially.\n\nConsider (imagine as a histogram; X is used space, . is empty):\n\n 1: XXXXXXX\n 2: XXX\n 3: XXXXXXX\n 4: XXX\n 5: X\n 6: X\n 7: .\n 8: .\n 9: .\n10: .\n11: .\n12: .\n13: .\n14: .\n15: .\n16: .\n17: .\n18: .\n19: .\n20: X\n\nWell, obviously there's tons of free space in the middle and if we could\njust move those few tuples/pages/whatever that are near the end to\nearlier in the table then we'd be able to truncate off and shrink a\nlot of the table.\n\n> I like Alvaro's idea to automatically reset \"local_update_limit\" when the\n> table has shrunk enough. Why not perform that task during vacuum truncation?\n> If vacuum truncation has taken place, check if the table size is no bigger\n> than \"local_update_limit\" * (1 + \"autovacuum_vacuum_scale_factor\"), and if\n> it is no bigger, reset \"local_update_limit\". That way, we would not have\n> to worry about a lock, because vacuum truncation already has the table locked.\n\nAgreed on this too. 
Essentially, once we've done some truncation, we\nshould 'reset'.\n\nI've no doubt that there's some better algorithm for this, but I keep\ncoming back to something as simple as- if the entire second half of the\ntable will fit into the entire first half then the table is twice as\nlarge as it needs to be and perhaps that triggers a preference for\nplacing tuples in the first half of the table. As for what handles\nthis- maybe have both UPDATE and VACUUM able to, but prefer for UPDATE\nto do so and only have VACUUM kick in once the tuples at the end of the\nrelation are older than some xid-based threshold (perhaps all of the\ntuples on a given page have to be old enough?).\n\nWhile it feels a bit 'late' in terms of when to start taking this\naction, we could possibly start with 'all frozen' as an indicator of\n'old enough'? Then, between the FSM and the VM, VACUUM could decide\nthat pages at the end of the table should be moved to be earlier and go\nabout making that happen. I'm a bit concerned about the risk of some\nkind of deadlock or similar happening between VACUUM and user processes\nif we're trying to do this with multiple tuples at a time but hopefully\nwe could come up with a way to avoid that. This process naturally would\nhave to involve updating indexes and the VM and FSM as the tuples get\nmoved.\n\nIn terms of what this would look like, my thinking is that VACUUM would\nscan the table and the FSM and perhaps the VM and then say \"ok, this\ntable is bigger than it needs to be, let's try to fix that\" and then set\na flag on the table, which a user could also explicitly set to give them\ncontrol over this process happening sooner or not happening at all, and\nthat would indicate to UPDATE to prefer earlier pages over the current\npage or HOT updates, while VACUUM would also look at the flag to decide\nif it should try to move tuples itself to earlier. Then, once a VACUUM\nhas been able to come through and truncate the table, the flag would be\nreset (maybe even if the user set it? Or perhaps we'd have a way for\nthe user to indicate if they want VACUUM to reset the flag on truncation\nor not).\n\nBroadly speaking, I agree with the points made that we should be trying\nto design a way for this to all happen both automatically and from a\nbackground process without requiring the user to issue UPDATE statements\nto make it happen- but I do like the idea of making it work with user\nissued UPDATE statements if the right conditions are met, to avoid the\ncase of VACUUM getting in the way of user activity due to locking or\ncreating excess writes.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 20 Sep 2023 10:02:23 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
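The free-space picture sketched in the message above can already be inspected with the pg_freespacemap contrib module. A sketch along these lines (the table name is a placeholder, and avail is only as precise as the FSM's bucketing) shows where the unused space sits:

    CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
    -- group blocks into buckets of 100 and show the average free bytes per block
    SELECT blkno / 100 AS block_bucket,
           round(avg(avail)) AS avg_free_bytes_per_block
    FROM pg_freespace('some_table')
    GROUP BY 1
    ORDER BY 1;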
{
"msg_contents": "Hi,\n\nOn 2023-09-19 14:50:13 -0400, Robert Haas wrote:\n> On Tue, Sep 19, 2023 at 12:56 PM Andres Freund <[email protected]> wrote:\n> > Yea, a setting like what's discussed here seems, uh, not particularly useful\n> > for achieving the goal of compacting tables. I don't think guiding this\n> > through SQL makes a lot of sense. For decent compaction you'd want to scan the\n> > table backwards, and move rows from the end to earlier, but stop once\n> > everything is filled up. You can somewhat do that from SQL, but it's going to\n> > be awkward and slow. I doubt you even want to use the normal UPDATE WAL\n> > logging.\n> >\n> > I think having explicit compaction support in VACUUM or somewhere similar\n> > would make sense, but I don't think the proposed GUC is a useful stepping\n> > stone.\n> \n> I think there's a difference between wanting to compact instantly and\n> wanting to compact over time. I think that this kind of thing is\n> reasonably well-suited to the latter, if we can engineer away the\n> cases where it backfires.\n> \n> But I know people will try to use it for instant compaction too, and\n> there it's worth remembering why we removed old-style VACUUM FULL. The\n> main problem is that it was mind-bogglingly slow.\n\nI think some of the slowness was implementation related, rather than\nfundamental. But more importantly, storage was something entirely different\nback then than it is now.\n\n\n> The other really bad problem is that it caused massive index bloat. I think\n> any system that's based on moving around my tuples right now to make my\n> table smaller right now is likely to have similar issues.\n\nI think the problem of exploding WAL usage exists both for compaction being\ndone in VACUUM (or a dedicated command) and being done by backends. I think to\nmake using a facility like this realistic, you really need some form of rate\nlimiting, regardless of when compaction is performed. Even leaving WAL volume\naside, naively doing on-update compaction will cause lots of additional\ncontention on early FSM pages.\n\n\n> In the case where you're trying to compact gradually, I think there\n> are potentially serious issues with index bloat, but only potentially.\n> It seems like there are reasonable cases where it's fine.\n\n> Specifically, if you have relatively few indexes per table, relatively\n> few long-running transactions, and all tuples get updated on a\n> semi-regular basis, I'm thinking that you're more likely to win than\n> lose.\n\nMaybe - but are you going to have a significant bloat issue in that case?\nSure, if the updates update most of the table, youre are going to - but then\non-update compaction won't really be needed either, since you're going to run\nout of space on pages on a regular basis.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Sep 2023 15:33:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-19 20:20:06 +0200, Matthias van de Meent wrote:\n> Mostly agreed, but I think there's a pitfall here. You seem to assume\n> we have a perfect oracle that knows the optimal data size, but we\n> already know that our estimates can be significantly off. I don't\n> quite trust the statistics enough to do any calculations based on the\n> number of tuples in the relation. That also ignores the fact that we\n> don't actually have any good information about the average size of the\n> tuples in the table. So with current statistics, any automated \"this\n> is how large the table should be\" decisions would result in an\n> automated footgun, instead of the current patch's where the user has\n> to decide to configure it to an explicit value.\n\nThe proposed patch already relies on the FSM being reasonably up2date, no? If\nthe FSM doesn't know about free space, the patch won't be able to place tuples\nearlier in the relation. And if the FSM wrongly thinks there's lots of free\nspace, it'll make updates very expensive.\n\nWe obviously don't want to scan the whole FSM on an ongoing basis, but\nvisiting the top-level FSM pages and/or having vacuum/analyze update some\nstatistic based on a more thorough analysis of the FSM doesn't seem insane.\n\n\nA related issue is that an accurate tuple size and accurate number of tuples\nisn't really sufficient - if tuples are wider, there can be plenty space on\npages without updates being able to reuse that space. And the width of tuples\ndoesn't have to be evenly distributed, so a simple approach of calculating how\nmany tuples of the average width fit in a page and then using that to come up\nwith the overall number of required pages isn't necessarily accurate either.\n\n\n> But about that: I'm not sure what the \"footgun\" is that you've\n> mentioned recently?\n> The issue with excessive bloat (when the local_update_limit is set too\n> small and fillfactor is low) was fixed in the latest patch nearly\n> three weeks ago, so the only remaining issue with misconfiguration is\n> slower updates.\n\nThere seem to be plenty footguns. Just to name a few:\n\n- The user has to determine a good value for local_update_limit, without\n really any good way of doing so.\n\n- A \"too low\" local_update_limit will often succeed in finding some space in\n earlier pages, without that providing useful progress on compaction -\n e.g. because subsequently tuples on the earlier page will be updated and\n there's now no space anymore. Leading to index bloat.\n\n- Configuring local_update_limit as a fixed size will be fragile when the data\n actually grows, leading to lots of pointless out-of-page updates.\n\n\nI think a minimal working approach could be to have the configuration be based\non the relation size vs space known to the FSM. If the target block of an\nupdate is higher than ((relation_size - fsm_free_space) *\nnew_reloption_or_guc), try finding the target block via the FSM, even if\nthere's space on the page.\n\n\n> Sure, that's not great, but in my opinion not a\n> \"footgun\": performance returns immediately after resetting\n> local_update_limit, and no space was lost.\n\nI think there's plenty ways to get pointless out-of-page updates, and\ntherefore index bloat, with local_update_limit as-proposed (see earlier in the\nemail). 
Once you have such pointless out-of-page updates, disabling\nlocal_update_limit won't bring performance back immediately (space usage due\nto index bloat and lookup performance issues due to the additional index\nentries).\n\n\n> Updating the reloption after relation truncation implies having the\n> same lock as relation truncation, i.e. AEL (if the vacuum docs are to\n> be believed).\n\nAside: We really need to get rid of the AEL for relation trunction - it's\nquite painful for hot standby workloads...\n\nThomas has been talking about a patch (and perhaps even posted it) that adds\ninfrastructure providing a \"shared smgrrelation\". Once we have that I think we\ncould lower the required lock level for truncation, by having storing both the\nfilesystem size and the \"valid\" size. There's a few potential models:\n\n- Vacuum truncation could lower the valid size in-memory, end its transaction,\n wait for concurrent accesses to the relation to finish, check if/where to\n the relation has been extended since, acquire the extension lock and\n truncate down to the \"valid\" size.\n\n The danger with that is that the necessary waiting can be long, threatening\n to starve autovacuum of workers.\n\n- Instead of making a single vacuum wait, we could have one vacuum update the\n valid size of the relation and also store an xid horizon. Later vacuums can\n truncate the physical size down the to valid size if there are no snapshot\n conflicts with said xid anymore.\n\n\nIf we had such an shared smgrrel, we could also make relation extension a lot\nmore efficient, because we would not need to pin all pages that a relation\nextension \"covers\" - the reason that we need to pin the to-be-extended-pages\nis to prevent concurrent scans from reading \"new\" blocks while the extension\nis in progress, as otherwise such a buffer can be dirtied and written out,\npotentially leading to lost writes and other fun issues. But with the shared\nsmgrrel, we can store the size-currently-being-extended-to separately from the\nfilesystem size. If it's not allowed to read the block range covered by those\nblocks into s_b, the race doesn't exist anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:18:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
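For illustration, the cutoff in the rule quoted above could be approximated from the catalogs and the FSM roughly like this, assuming the pg_freespacemap extension from the earlier sketch is installed. This is a sketch only: new_reloption_or_guc is merely proposed, so the factor 1.1 is an arbitrary stand-in, and both relpages and the summed avail are approximations.

    -- update targets above cutoff_block would prefer an FSM-chosen page
    SELECT c.relpages AS relation_size_in_blocks,
           sum(f.avail) / current_setting('block_size')::int AS fsm_free_blocks,
           (c.relpages - sum(f.avail) / current_setting('block_size')::int) * 1.1
               AS cutoff_block
    FROM pg_class c, pg_freespace('some_table') f
    WHERE c.oid = 'some_table'::regclass
    GROUP BY c.relpages;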
{
"msg_contents": "On Thu, 2023-09-21 at 16:18 -0700, Andres Freund wrote:\n> I think a minimal working approach could be to have the configuration be based\n> on the relation size vs space known to the FSM. If the target block of an\n> update is higher than ((relation_size - fsm_free_space) *\n> new_reloption_or_guc), try finding the target block via the FSM, even if\n> there's space on the page.\n\nThat sounds like a good way forward.\n\nThe patch is in state \"needs review\", but it got review. I'll change it to\n\"waiting for author\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 13 Mar 2024 14:27:27 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 14:27, Laurenz Albe <[email protected]> wrote:\n>\n> On Thu, 2023-09-21 at 16:18 -0700, Andres Freund wrote:\n> > I think a minimal working approach could be to have the configuration be based\n> > on the relation size vs space known to the FSM. If the target block of an\n> > update is higher than ((relation_size - fsm_free_space) *\n> > new_reloption_or_guc), try finding the target block via the FSM, even if\n> > there's space on the page.\n>\n> That sounds like a good way forward.\n>\n> The patch is in state \"needs review\", but it got review. I'll change it to\n> \"waiting for author\".\n\nThen I'll withdraw this patch as I don't currently have (nor expect to\nhave anytime soon) the bandwitdh or expertise to rewrite this patch to\ninclude a system that calculates the free space available in a\nrelation.\n\nI've added a TODO item in the UPDATE section with a backlink to this\nthread so the discussion isn't lost.\n\n-Matthias\n\n\n",
"msg_date": "Fri, 15 Mar 2024 12:06:31 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disabling Heap-Only Tuples"
}
] |
[
{
"msg_contents": "Hi,\n\n(patch proposal below).\n\nConsider a table with a FK pointing to a partitioned table.\n\n CREATE TABLE p ( id bigint PRIMARY KEY )\n PARTITION BY list (id);\n CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n\n CREATE TABLE r_1 (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n );\n\nNow, attach this table \"refg_1\" as partition of another one having the same FK:\n\n CREATE TABLE r (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n ) PARTITION BY list (id);\n\n ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1); \n\nThe old sub-FKs (below 18289) created in this table to enforce the action\ntriggers on referenced partitions are not deleted when the table becomes a\npartition. Because of this, we have additional and useless triggers on the\nreferenced partitions and we can not DETACH this partition on the referencing\nside anymore:\n\n => ALTER TABLE r DETACH PARTITION r_1;\n ERROR: could not find ON INSERT check triggers of foreign key\n constraint 18289\n\n => SELECT c.oid, conparentid, \n conrelid::regclass, \n confrelid::regclass, \n t.tgfoid::regproc\n FROM pg_constraint c \n JOIN pg_trigger t ON t.tgconstraint = c.oid\n WHERE confrelid::regclass = 'p_1'::regclass;\n oid │ conparentid │ conrelid │ confrelid │ tgfoid \n ───────┼─────────────┼──────────┼───────────┼────────────────────────\n 18289 │ 18286 │ r_1 │ p_1 │ \"RI_FKey_noaction_del\"\n 18289 │ 18286 │ r_1 │ p_1 │ \"RI_FKey_noaction_upd\"\n 18302 │ 18299 │ r │ p_1 │ \"RI_FKey_noaction_del\"\n 18302 │ 18299 │ r │ p_1 │ \"RI_FKey_noaction_upd\"\n (4 rows)\n\nThe legitimate constraint and triggers here are 18302. The old sub-FK\n18289 having 18286 as parent should have gone during the ATTACH PARTITION.\n\nPlease, find in attachment a patch dropping old \"sub-FK\" during the ATTACH\nPARTITION command and adding a regression test about it. At the very least, it\nhelp understanding the problem and sketch a possible solution.\n\nRegards,",
"msg_date": "Wed, 5 Jul 2023 23:30:28 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2023-Jul-05, Jehan-Guillaume de Rorthais wrote:\n\n> ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1); \n> \n> The old sub-FKs (below 18289) created in this table to enforce the action\n> triggers on referenced partitions are not deleted when the table becomes a\n> partition. Because of this, we have additional and useless triggers on the\n> referenced partitions and we can not DETACH this partition on the referencing\n> side anymore:\n\nOh, hm, interesting. Thanks for the report and patch. I found a couple\nof minor issues with it (most serious one: nkeys should be 3, not 2;\nalso sysscan should use conrelid index), but I'll try and complete it so\nthat it's ready for 2023-08-10's releases.\n\nRegards\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 31 Jul 2023 14:57:53 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "I think old \"sub-FK\" should not be dropped, that will be violates foreign\nkey constraint. For example :\npostgres=# insert into r values(1,1);\nINSERT 0 1\npostgres=# ALTER TABLE r DETACH PARTITION r_1;\nALTER TABLE\npostgres=# delete from p_1 where id = 1;\nDELETE 1\npostgres=# select * from r_1;\n id | p_id\n----+------\n 1 | 1\n(1 row)\n\nIf I run above SQLs on pg12.12, it will report error below:\npostgres=# delete from p_1 where id = 1;\nERROR: update or delete on table \"p_1\" violates foreign key constraint\n\"r_1_p_id_fkey1\" on table \"r_1\"\nDETAIL: Key (id)=(1) is still referenced from table \"r_1\".\n\nAlvaro Herrera <[email protected]> 于2023年7月31日周一 20:58写道:\n\n> On 2023-Jul-05, Jehan-Guillaume de Rorthais wrote:\n>\n> > ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\n> >\n> > The old sub-FKs (below 18289) created in this table to enforce the action\n> > triggers on referenced partitions are not deleted when the table becomes\n> a\n> > partition. Because of this, we have additional and useless triggers on\n> the\n> > referenced partitions and we can not DETACH this partition on the\n> referencing\n> > side anymore:\n>\n> Oh, hm, interesting. Thanks for the report and patch. I found a couple\n> of minor issues with it (most serious one: nkeys should be 3, not 2;\n> also sysscan should use conrelid index), but I'll try and complete it so\n> that it's ready for 2023-08-10's releases.\n>\n> Regards\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n>\n>\n>\n\nI think \n\nold \"sub-FK\" should not be dropped, that will be violates foreign key constraint. For example :postgres=# insert into r values(1,1);INSERT 0 1postgres=# ALTER TABLE r DETACH PARTITION r_1;ALTER TABLEpostgres=# delete from p_1 where id = 1;DELETE 1postgres=# select * from r_1; id | p_id ----+------ 1 | 1(1 row)If I run above SQLs on pg12.12, it will report error below:postgres=# delete from p_1 where id = 1;ERROR: update or delete on table \"p_1\" violates foreign key constraint \"r_1_p_id_fkey1\" on table \"r_1\"DETAIL: Key (id)=(1) is still referenced from table \"r_1\".Alvaro Herrera <[email protected]> 于2023年7月31日周一 20:58写道:On 2023-Jul-05, Jehan-Guillaume de Rorthais wrote:\n\n> ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1); \n> \n> The old sub-FKs (below 18289) created in this table to enforce the action\n> triggers on referenced partitions are not deleted when the table becomes a\n> partition. Because of this, we have additional and useless triggers on the\n> referenced partitions and we can not DETACH this partition on the referencing\n> side anymore:\n\nOh, hm, interesting. Thanks for the report and patch. I found a couple\nof minor issues with it (most serious one: nkeys should be 3, not 2;\nalso sysscan should use conrelid index), but I'll try and complete it so\nthat it's ready for 2023-08-10's releases.\n\nRegards\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 3 Aug 2023 14:55:03 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2023-Aug-03, tender wang wrote:\n\n> I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> key constraint.\n\nYeah, I've been playing more with the patch and it is definitely not\ndoing the right things. Just eyeballing the contents of pg_trigger and\npg_constraint for partitions added by ALTER...ATTACH shows that the\ncatalog contents are inconsistent with those added by CREATE TABLE\nPARTITION OF.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 3 Aug 2023 11:02:43 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "I think the code to determine that fk of a partition is inherited or not is\nnot enough.\nFor example, in this case, foreign key r_1_p_id_fkey1 is not inherited\nfrom parent.\n\nIf conform->conparentid(in DetachPartitionFinalize func) is valid, we\nshould recheck confrelid(pg_constraint) field.\n\nI try to fix this problem in the attached patch.\nAny thoughts.\n\nAlvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:\n\n> On 2023-Aug-03, tender wang wrote:\n>\n> > I think old \"sub-FK\" should not be dropped, that will be violates\n> foreign\n> > key constraint.\n>\n> Yeah, I've been playing more with the patch and it is definitely not\n> doing the right things. Just eyeballing the contents of pg_trigger and\n> pg_constraint for partitions added by ALTER...ATTACH shows that the\n> catalog contents are inconsistent with those added by CREATE TABLE\n> PARTITION OF.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>",
"msg_date": "Thu, 3 Aug 2023 17:34:40 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Oversight the DetachPartitionFinalize(), I found another bug below:\n\npostgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);\nCREATE TABLE\npostgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\nCREATE TABLE\npostgres=# CREATE TABLE r_1 (\npostgres(# id bigint PRIMARY KEY,\npostgres(# p_id bigint NOT NULL\npostgres(# );\nCREATE TABLE\npostgres=# CREATE TABLE r (\npostgres(# id bigint PRIMARY KEY,\npostgres(# p_id bigint NOT NULL,\npostgres(# FOREIGN KEY (p_id) REFERENCES p (id)\npostgres(# ) PARTITION BY list (id);\nCREATE TABLE\npostgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\nALTER TABLE\npostgres=# ALTER TABLE r DETACH PARTITION r_1;\nALTER TABLE\npostgres=# insert into r_1 values(1,1);\nERROR: insert or update on table \"r_1\" violates foreign key constraint\n\"r_p_id_fkey\"\nDETAIL: Key (p_id)=(1) is not present in table \"p\".\n\nAfter detach operation, r_1 is normal relation and the inherited foreign\nkey 'r_p_id_fkey' should be removed.\n\n\ntender wang <[email protected]> 于2023年8月3日周四 17:34写道:\n\n> I think the code to determine that fk of a partition is inherited or not\n> is not enough.\n> For example, in this case, foreign key r_1_p_id_fkey1 is not inherited\n> from parent.\n>\n> If conform->conparentid(in DetachPartitionFinalize func) is valid, we\n> should recheck confrelid(pg_constraint) field.\n>\n> I try to fix this problem in the attached patch.\n> Any thoughts.\n>\n> Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:\n>\n>> On 2023-Aug-03, tender wang wrote:\n>>\n>> > I think old \"sub-FK\" should not be dropped, that will be violates\n>> foreign\n>> > key constraint.\n>>\n>> Yeah, I've been playing more with the patch and it is definitely not\n>> doing the right things. Just eyeballing the contents of pg_trigger and\n>> pg_constraint for partitions added by ALTER...ATTACH shows that the\n>> catalog contents are inconsistent with those added by CREATE TABLE\n>> PARTITION OF.\n>>\n>> --\n>> Álvaro Herrera PostgreSQL Developer —\n>> https://www.EnterpriseDB.com/\n>>\n>\n\nOversight the DetachPartitionFinalize(), I found another bug below:postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);CREATE TABLEpostgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);CREATE TABLEpostgres=# CREATE TABLE r_1 (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULLpostgres(# );CREATE TABLEpostgres=# CREATE TABLE r (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULL,postgres(# FOREIGN KEY (p_id) REFERENCES p (id)postgres(# ) PARTITION BY list (id);CREATE TABLEpostgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);ALTER TABLEpostgres=# ALTER TABLE r DETACH PARTITION r_1;ALTER TABLEpostgres=# insert into r_1 values(1,1);ERROR: insert or update on table \"r_1\" violates foreign key constraint \"r_p_id_fkey\"DETAIL: Key (p_id)=(1) is not present in table \"p\".After detach operation, r_1 is normal relation and the inherited foreign key 'r_p_id_fkey' should be removed. tender wang <[email protected]> 于2023年8月3日周四 17:34写道:I think the code to determine that fk of a partition is inherited or not is not enough. 
For example, in this case, foreign key r_1_p_id_fkey1 is not inherited from parent.If conform->conparentid(in DetachPartitionFinalize func) is valid, we should recheck confrelid(pg_constraint) field.I try to fix this problem in the attached patch.Any thoughts.Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:On 2023-Aug-03, tender wang wrote:\n\n> I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> key constraint.\n\nYeah, I've been playing more with the patch and it is definitely not\ndoing the right things. Just eyeballing the contents of pg_trigger and\npg_constraint for partitions added by ALTER...ATTACH shows that the\ncatalog contents are inconsistent with those added by CREATE TABLE\nPARTITION OF.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 4 Aug 2023 17:04:29 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Oversight the DetachPartitionFinalize() again, I found the root cause why\n'r_p_id_fkey' wat not removed.\n\nDetachPartitionFinalize() call the GetParentedForeignKeyRefs() func to get\ntuple from pg_constraint that will be delete but failed.\n according to the comments, the GetParentedForeignKeyRefs() func get the\ntuple reference me not I reference others.\n\nI try to fix this bug :\ni. ConstraintSetParentConstraint() should not be called in\nDetachPartitionFinalize(), because after conparentid was set to 0,\nwe can not find inherited foreign keys.\nii. create another function like GetParentedForeignKeyRefs(), but the\nScanKey should be conrelid field not confrelid.\n\nI quickly test on my above solution in my env, can be solve above issue.\n\ntender wang <[email protected]> 于2023年8月4日周五 17:04写道:\n\n> Oversight the DetachPartitionFinalize(), I found another bug below:\n>\n> postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);\n> CREATE TABLE\n> postgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n> CREATE TABLE\n> postgres=# CREATE TABLE r_1 (\n> postgres(# id bigint PRIMARY KEY,\n> postgres(# p_id bigint NOT NULL\n> postgres(# );\n> CREATE TABLE\n> postgres=# CREATE TABLE r (\n> postgres(# id bigint PRIMARY KEY,\n> postgres(# p_id bigint NOT NULL,\n> postgres(# FOREIGN KEY (p_id) REFERENCES p (id)\n> postgres(# ) PARTITION BY list (id);\n> CREATE TABLE\n> postgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\n> ALTER TABLE\n> postgres=# ALTER TABLE r DETACH PARTITION r_1;\n> ALTER TABLE\n> postgres=# insert into r_1 values(1,1);\n> ERROR: insert or update on table \"r_1\" violates foreign key constraint\n> \"r_p_id_fkey\"\n> DETAIL: Key (p_id)=(1) is not present in table \"p\".\n>\n> After detach operation, r_1 is normal relation and the inherited foreign\n> key 'r_p_id_fkey' should be removed.\n>\n>\n> tender wang <[email protected]> 于2023年8月3日周四 17:34写道:\n>\n>> I think the code to determine that fk of a partition is inherited or not\n>> is not enough.\n>> For example, in this case, foreign key r_1_p_id_fkey1 is not inherited\n>> from parent.\n>>\n>> If conform->conparentid(in DetachPartitionFinalize func) is valid, we\n>> should recheck confrelid(pg_constraint) field.\n>>\n>> I try to fix this problem in the attached patch.\n>> Any thoughts.\n>>\n>> Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:\n>>\n>>> On 2023-Aug-03, tender wang wrote:\n>>>\n>>> > I think old \"sub-FK\" should not be dropped, that will be violates\n>>> foreign\n>>> > key constraint.\n>>>\n>>> Yeah, I've been playing more with the patch and it is definitely not\n>>> doing the right things. Just eyeballing the contents of pg_trigger and\n>>> pg_constraint for partitions added by ALTER...ATTACH shows that the\n>>> catalog contents are inconsistent with those added by CREATE TABLE\n>>> PARTITION OF.\n>>>\n>>> --\n>>> Álvaro Herrera PostgreSQL Developer —\n>>> https://www.EnterpriseDB.com/\n>>>\n>>\n\nOversight the DetachPartitionFinalize() again, I found the root cause why 'r_p_id_fkey' wat not removed.DetachPartitionFinalize() call the GetParentedForeignKeyRefs() func to get tuple from pg_constraint that will be delete but failed. according to the comments, the GetParentedForeignKeyRefs() func get the tuple reference me not I reference others.I try to fix this bug :i. ConstraintSetParentConstraint() should not be called in DetachPartitionFinalize(), because after conparentid was set to 0, we can not find inherited foreign keys.ii. 
create another function like GetParentedForeignKeyRefs(), but the ScanKey should be conrelid field not confrelid.I quickly test on my above solution in my env, can be solve above issue. tender wang <[email protected]> 于2023年8月4日周五 17:04写道:Oversight the DetachPartitionFinalize(), I found another bug below:postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);CREATE TABLEpostgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);CREATE TABLEpostgres=# CREATE TABLE r_1 (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULLpostgres(# );CREATE TABLEpostgres=# CREATE TABLE r (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULL,postgres(# FOREIGN KEY (p_id) REFERENCES p (id)postgres(# ) PARTITION BY list (id);CREATE TABLEpostgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);ALTER TABLEpostgres=# ALTER TABLE r DETACH PARTITION r_1;ALTER TABLEpostgres=# insert into r_1 values(1,1);ERROR: insert or update on table \"r_1\" violates foreign key constraint \"r_p_id_fkey\"DETAIL: Key (p_id)=(1) is not present in table \"p\".After detach operation, r_1 is normal relation and the inherited foreign key 'r_p_id_fkey' should be removed. tender wang <[email protected]> 于2023年8月3日周四 17:34写道:I think the code to determine that fk of a partition is inherited or not is not enough. For example, in this case, foreign key r_1_p_id_fkey1 is not inherited from parent.If conform->conparentid(in DetachPartitionFinalize func) is valid, we should recheck confrelid(pg_constraint) field.I try to fix this problem in the attached patch.Any thoughts.Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:On 2023-Aug-03, tender wang wrote:\n\n> I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> key constraint.\n\nYeah, I've been playing more with the patch and it is definitely not\ndoing the right things. Just eyeballing the contents of pg_trigger and\npg_constraint for partitions added by ALTER...ATTACH shows that the\ncatalog contents are inconsistent with those added by CREATE TABLE\nPARTITION OF.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 4 Aug 2023 18:10:53 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "The foreign key still works even though partition was detached. Is this\nbehavior expected?\nI can't find the answer in the document. If it is expected behavior ,\nplease ignore the bug I reported a few days ago.\n\ntender wang <[email protected]> 于2023年8月4日周五 17:04写道:\n\n> Oversight the DetachPartitionFinalize(), I found another bug below:\n>\n> postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);\n> CREATE TABLE\n> postgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n> CREATE TABLE\n> postgres=# CREATE TABLE r_1 (\n> postgres(# id bigint PRIMARY KEY,\n> postgres(# p_id bigint NOT NULL\n> postgres(# );\n> CREATE TABLE\n> postgres=# CREATE TABLE r (\n> postgres(# id bigint PRIMARY KEY,\n> postgres(# p_id bigint NOT NULL,\n> postgres(# FOREIGN KEY (p_id) REFERENCES p (id)\n> postgres(# ) PARTITION BY list (id);\n> CREATE TABLE\n> postgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\n> ALTER TABLE\n> postgres=# ALTER TABLE r DETACH PARTITION r_1;\n> ALTER TABLE\n> postgres=# insert into r_1 values(1,1);\n> ERROR: insert or update on table \"r_1\" violates foreign key constraint\n> \"r_p_id_fkey\"\n> DETAIL: Key (p_id)=(1) is not present in table \"p\".\n>\n> After detach operation, r_1 is normal relation and the inherited foreign\n> key 'r_p_id_fkey' should be removed.\n>\n>\n> tender wang <[email protected]> 于2023年8月3日周四 17:34写道:\n>\n>> I think the code to determine that fk of a partition is inherited or not\n>> is not enough.\n>> For example, in this case, foreign key r_1_p_id_fkey1 is not inherited\n>> from parent.\n>>\n>> If conform->conparentid(in DetachPartitionFinalize func) is valid, we\n>> should recheck confrelid(pg_constraint) field.\n>>\n>> I try to fix this problem in the attached patch.\n>> Any thoughts.\n>>\n>> Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:\n>>\n>>> On 2023-Aug-03, tender wang wrote:\n>>>\n>>> > I think old \"sub-FK\" should not be dropped, that will be violates\n>>> foreign\n>>> > key constraint.\n>>>\n>>> Yeah, I've been playing more with the patch and it is definitely not\n>>> doing the right things. Just eyeballing the contents of pg_trigger and\n>>> pg_constraint for partitions added by ALTER...ATTACH shows that the\n>>> catalog contents are inconsistent with those added by CREATE TABLE\n>>> PARTITION OF.\n>>>\n>>> --\n>>> Álvaro Herrera PostgreSQL Developer —\n>>> https://www.EnterpriseDB.com/\n>>>\n>>\n\nThe foreign key still works even though partition was detached. Is this behavior expected?I can't find the answer in the document. If it is expected behavior , please ignore the bug I reported a few days ago. 
tender wang <[email protected]> 于2023年8月4日周五 17:04写道:Oversight the DetachPartitionFinalize(), I found another bug below:postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);CREATE TABLEpostgres=# CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);CREATE TABLEpostgres=# CREATE TABLE r_1 (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULLpostgres(# );CREATE TABLEpostgres=# CREATE TABLE r (postgres(# id bigint PRIMARY KEY,postgres(# p_id bigint NOT NULL,postgres(# FOREIGN KEY (p_id) REFERENCES p (id)postgres(# ) PARTITION BY list (id);CREATE TABLEpostgres=# ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);ALTER TABLEpostgres=# ALTER TABLE r DETACH PARTITION r_1;ALTER TABLEpostgres=# insert into r_1 values(1,1);ERROR: insert or update on table \"r_1\" violates foreign key constraint \"r_p_id_fkey\"DETAIL: Key (p_id)=(1) is not present in table \"p\".After detach operation, r_1 is normal relation and the inherited foreign key 'r_p_id_fkey' should be removed. tender wang <[email protected]> 于2023年8月3日周四 17:34写道:I think the code to determine that fk of a partition is inherited or not is not enough. For example, in this case, foreign key r_1_p_id_fkey1 is not inherited from parent.If conform->conparentid(in DetachPartitionFinalize func) is valid, we should recheck confrelid(pg_constraint) field.I try to fix this problem in the attached patch.Any thoughts.Alvaro Herrera <[email protected]> 于2023年8月3日周四 17:02写道:On 2023-Aug-03, tender wang wrote:\n\n> I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> key constraint.\n\nYeah, I've been playing more with the patch and it is definitely not\ndoing the right things. Just eyeballing the contents of pg_trigger and\npg_constraint for partitions added by ALTER...ATTACH shows that the\ncatalog contents are inconsistent with those added by CREATE TABLE\nPARTITION OF.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 7 Aug 2023 19:15:54 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2023-Aug-07, tender wang wrote:\n\n> The foreign key still works even though partition was detached. Is this\n> behavior expected?\n\nWell, there's no reason for it not to, right? For example, if you\ndetach a partition and then attach it again, you don't have to scan the\npartition on attach, because you know the constraint has remained valid\nall along.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Mon, 7 Aug 2023 13:25:04 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Thu, 3 Aug 2023 11:02:43 +0200\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2023-Aug-03, tender wang wrote:\n> \n> > I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> > key constraint. \n> \n> Yeah, I've been playing more with the patch and it is definitely not\n> doing the right things. Just eyeballing the contents of pg_trigger and\n> pg_constraint for partitions added by ALTER...ATTACH shows that the\n> catalog contents are inconsistent with those added by CREATE TABLE\n> PARTITION OF.\n\nWell, as stated in my orignal message, at the patch helps understanding the\nproblem and sketch a possible solution. It definitely is not complete.\n\nAfter DETACHing the table, we surely needs to check everything again and\nrecreating what is needed to keep the FK consistent.\n\nBut should we keep the FK after DETACH? Did you check the two other discussions\nrelated to FK, self-FK & partition? Unfortunately, as Tender experienced, the\nmore we dig the more we find bugs. Moreover, the second one might seems\nunsolvable and deserve a closer look. See:\n\n* FK broken after DETACHing referencing part\n https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n* Issue attaching a table to a partitioned table with an auto-referenced\n foreign key\n https://www.postgresql.org/message-id/20230707175859.17c91538%40karst\n\n\n\n",
"msg_date": "Thu, 10 Aug 2023 17:03:45 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hi\n Is there any conclusion to this issue?\n\nJehan-Guillaume de Rorthais <[email protected]> 于2023年8月10日周四 23:03写道:\n\n> On Thu, 3 Aug 2023 11:02:43 +0200\n> Alvaro Herrera <[email protected]> wrote:\n>\n> > On 2023-Aug-03, tender wang wrote:\n> >\n> > > I think old \"sub-FK\" should not be dropped, that will be violates\n> foreign\n> > > key constraint.\n> >\n> > Yeah, I've been playing more with the patch and it is definitely not\n> > doing the right things. Just eyeballing the contents of pg_trigger and\n> > pg_constraint for partitions added by ALTER...ATTACH shows that the\n> > catalog contents are inconsistent with those added by CREATE TABLE\n> > PARTITION OF.\n>\n> Well, as stated in my orignal message, at the patch helps understanding the\n> problem and sketch a possible solution. It definitely is not complete.\n>\n> After DETACHing the table, we surely needs to check everything again and\n> recreating what is needed to keep the FK consistent.\n>\n> But should we keep the FK after DETACH? Did you check the two other\n> discussions\n> related to FK, self-FK & partition? Unfortunately, as Tender experienced,\n> the\n> more we dig the more we find bugs. Moreover, the second one might seems\n> unsolvable and deserve a closer look. See:\n>\n> * FK broken after DETACHing referencing part\n> https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n> * Issue attaching a table to a partitioned table with an auto-referenced\n> foreign key\n> https://www.postgresql.org/message-id/20230707175859.17c91538%40karst\n>\n>\n\nHi Is there any conclusion to this issue?Jehan-Guillaume de Rorthais <[email protected]> 于2023年8月10日周四 23:03写道:On Thu, 3 Aug 2023 11:02:43 +0200\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2023-Aug-03, tender wang wrote:\n> \n> > I think old \"sub-FK\" should not be dropped, that will be violates foreign\n> > key constraint. \n> \n> Yeah, I've been playing more with the patch and it is definitely not\n> doing the right things. Just eyeballing the contents of pg_trigger and\n> pg_constraint for partitions added by ALTER...ATTACH shows that the\n> catalog contents are inconsistent with those added by CREATE TABLE\n> PARTITION OF.\n\nWell, as stated in my orignal message, at the patch helps understanding the\nproblem and sketch a possible solution. It definitely is not complete.\n\nAfter DETACHing the table, we surely needs to check everything again and\nrecreating what is needed to keep the FK consistent.\n\nBut should we keep the FK after DETACH? Did you check the two other discussions\nrelated to FK, self-FK & partition? Unfortunately, as Tender experienced, the\nmore we dig the more we find bugs. Moreover, the second one might seems\nunsolvable and deserve a closer look. See:\n\n* FK broken after DETACHing referencing part\n https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n* Issue attaching a table to a partitioned table with an auto-referenced\n foreign key\n https://www.postgresql.org/message-id/20230707175859.17c91538%40karst",
"msg_date": "Wed, 25 Oct 2023 19:51:45 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2023-Oct-25, tender wang wrote:\n\n> Hi\n> Is there any conclusion to this issue?\n\nNone yet. I intend to work on this at some point, hopefully soon.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 25 Oct 2023 14:12:53 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hi Alvaro,\nI re-analyzed this issue, and here is my analysis process.\nstep 1: CREATE TABLE p ( id bigint PRIMARY KEY )\n PARTITION BY list (id);\n step2: CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n step3: CREATE TABLE r_1 (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n );\nAfter above step 3 operations, we have below catalog tuples:\npostgres=# select oid, relname from pg_class where relname = 'p';\noid | relname\n-------+---------\n16384 | p\n(1 row)\npostgres=# select oid, relname from pg_class where relname = 'p_1';\noid | relname\n-------+---------\n16389 | p_1\n(1 row)\npostgres=# select oid, relname from pg_class where relname = 'r_1';\noid | relname\n-------+---------\n16394 | r_1\n(1 row)\npostgres=# select oid, conname,conrelid,conparentid,confrelid from\npg_constraint where conrelid = 16394;\noid | conname | conrelid | conparentid | confrelid\n-------+-------------------+----------+-------------+-----------\n16397 | r_1_p_id_not_null | 16394 | 0 | 0\n16399 | r_1_pkey | 16394 | 0 | 0\n16400 | r_1_p_id_fkey | 16394 | 0 | 16384\n16403 | r_1_p_id_fkey1 | 16394 | 16400 | 16389\n(4 rows)\npostgres=# select oid, tgrelid, tgparentid,\ntgconstrrelid,tgconstrindid,tgconstraint from pg_trigger where tgconstraint\n= 16403;\noid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint\n-------+---------+------------+---------------+---------------+--------------\n16404 | 16389 | 16401 | 16394 | 16392 | 16403\n16405 | 16389 | 16402 | 16394 | 16392 | 16403\n(2 rows)\npostgres=# select oid, tgrelid, tgparentid,\ntgconstrrelid,tgconstrindid,tgconstraint from pg_trigger where tgconstraint\n= 16400;\noid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint\n-------+---------+------------+---------------+---------------+--------------\n16401 | 16384 | 0 | 16394 | 16387 | 16400\n16402 | 16384 | 0 | 16394 | 16387 | 16400\n16406 | 16394 | 0 | 16384 | 16387 | 16400\n16407 | 16394 | 0 | 16384 | 16387 | 16400\n(4 rows)\nBecause table p is partitioned table and it has one child table p_1. So\nwhen r_1 add foreign key constraint, according to addFkRecurseReferenced(),\neach partition should have one pg_constraint row(e.g. r_1_p_id_fkey1).\nAfter called addFkRecurseReferenced() in ATAddForeignKeyConstraint(),\naddFkRecurseReferencing() will be called, in which\nit will add INSERT check trigger and UPDATE check trigger for r_1_p_id_fkey\nbut not for r_1_p_id_fkey1.\nSo when detach r_1 from r, according to DetachPartitionFinalize(), the\ninherited fks should unlink relationship from parent.\nThe created INSERT and UPDATE check triggers should unlink relationship\nlink fks. 
But just like I said above, the r_1_p_id_fkey1\nactually doesn't have INSERT check trigger.\n\nI slightly modified the previous patch,but I didn't add test case, because\nI found another issue.\nAfter done ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\nI run the oidjoins.sql and has warnings as belwo:\npsql:/tender/postgres/src/test/regress/sql/oidjoins.sql:49: WARNING: FK\nVIOLATION IN pg_trigger({tgparentid}): (\"(0,3)\",16401)\npsql:/tender/postgres/src/test/regress/sql/oidjoins.sql:49: WARNING: FK\nVIOLATION IN pg_trigger({tgparentid}): (\"(0,4)\",16402)\npostgres=# select oid, tgrelid, tgparentid,\ntgconstrrelid,tgconstrindid,tgconstraint from pg_trigger where oid >= 16384;\n oid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid |\ntgconstraint\n-------+---------+------------+---------------+---------------+--------------\n 16404 | 16389 | 16401 | 16394 | 16392 | 16403\n 16405 | 16389 | 16402 | 16394 | 16392 | 16403\n 16415 | 16384 | 0 | 16408 | 16387 | 16414\n 16416 | 16384 | 0 | 16408 | 16387 | 16414\n 16418 | 16389 | 16415 | 16408 | 16392 | 16417\n 16419 | 16389 | 16416 | 16408 | 16392 | 16417\n 16420 | 16408 | 0 | 16384 | 16387 | 16414\n 16421 | 16408 | 0 | 16384 | 16387 | 16414\n 16406 | 16394 | 16420 | 16384 | 16387 | 16400\n 16407 | 16394 | 16421 | 16384 | 16387 | 16400\n(10 rows)\noid = 16401 and oid = 16402 has been deleted.\nThe two trigger tuples are deleted in tryAttachPartitionForeignKey called\nby CloneFkReferencing.\n/*\n* Looks good! Attach this constraint. The action triggers in the new\n* partition become redundant -- the parent table already has equivalent\n* ones, and those will be able to reach the partition. Remove the ones\n* in the partition. We identify them because they have our constraint\n* OID, as well as being on the referenced rel.\n*/\nThe attached patch can't fix above issue. I'm not sure about the impact of\nthis issue. Maybe redundant triggers no need removed.\n\nBut it surely make oidjoings.sql fail if I add test case into v2 patch, so\nI don't add test case in v2 patch.\nNo test case is not good patch. I just share my idea about this issue. Hope\nto get your reply.\n\n\n\n\nAlvaro Herrera <[email protected]> 于2023年10月25日周三 20:13写道:\n\n> On 2023-Oct-25, tender wang wrote:\n>\n> > Hi\n> > Is there any conclusion to this issue?\n>\n> None yet. I intend to work on this at some point, hopefully soon.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>",
"msg_date": "Fri, 27 Oct 2023 17:05:49 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hi Alvaro,\n\nRecently, Alexander reported the same issue on [1]. And before that,\nanother same issue was reported on [2].\nSo I try to re-work those issues. In my last email on this thread, I said\nthat\n\"\nI slightly modified the previous patch,but I didn't add test case, because\nI found another issue.\nAfter done ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\nI run the oidjoins.sql and has warnings as belwo:\npsql:/tender/postgres/src/test/regress/sql/oidjoins.sql:49: WARNING: FK\nVIOLATION IN pg_trigger({tgparentid}): (\"(0,3)\",16401)\npsql:/tender/postgres/src/test/regress/sql/oidjoins.sql:49: WARNING: FK\nVIOLATION IN pg_trigger({tgparentid}): (\"(0,4)\",16402)\n\"\n\nAnd I gave the explanation:\n\"\nThe two trigger tuples are deleted in tryAttachPartitionForeignKey called\nby CloneFkReferencing.\n/*\n* Looks good! Attach this constraint. The action triggers in the new\n* partition become redundant -- the parent table already has equivalent\n* ones, and those will be able to reach the partition. Remove the ones\n* in the partition. We identify them because they have our constraint\n* OID, as well as being on the referenced rel.\n*/\n\"\nI try to fix above fk violation. I have two ideas.\ni. Do not remove redundant, but when detaching parittion, the action\ntrigger on referenced side will be create again.\nI have consider about this situation.\n\nii. We still remove redundant, and the remove the child action trigger,\ntoo. If we do this way.\nShould we create action trigger recursively on referced side when detaching\npartition.\n\nI can't decide which one is better. And I'm not sure that keep this FK\nVIOLATION will cause some problem.\nI rebase and send v3 patch, which only fix NOT FOUND INSERT CHECK TRIGGER.\n\n\n[1]\nhttps://www.postgresql.org/message-id/18541-628a61bc267cd2d3%40postgresql.org\n[2]\nhttps://www.postgresql.org/message-id/GVAP278MB02787E7134FD691861635A8BC9032%40GVAP278MB0278.CHEP278.PROD.OUTLOOK.COM\n\n-- \nTender Wang",
"msg_date": "Thu, 18 Jul 2024 14:34:43 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hello,\n\nI think the fix for the check triggers should be as the attached. Very\nclose to what you did, but you were skipping some operations that needed\nto be kept. AFAICS this patch works correctly for the posted cases.\n\nI haven't looked at the action triggers yet; I think we need to create\none trigger for each partition of the referenced side, so we need to\nloop instead of doing a single one.\n\n\n\nI find this pair of queries useful; they show which constraints exist\nand which triggers belong to each. We need to make the constraints and\ntriggers match after a detach right as things would be if the\njust-detached partition were an individual table having the same foreign\nkey.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)",
"msg_date": "Fri, 19 Jul 2024 15:18:28 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:\n\n> Hello,\n>\n> I think the fix for the check triggers should be as the attached. Very\n> close to what you did, but you were skipping some operations that needed\n> to be kept. AFAICS this patch works correctly for the posted cases.\n>\n\nAfter applying the attached, the r_1_p_id_fkey1 will have redundant action\ntriggers, as below:\npostgres=# select oid, conname, contype, conrelid, conindid,conparentid,\nconfrelid,conislocal,coninhcount, connoinherit from pg_constraint where oid\n= 16402;\n oid | conname | contype | conrelid | conindid | conparentid |\nconfrelid | conislocal | coninhcount | connoinherit\n-------+----------------+---------+----------+----------+-------------+-----------+------------+-------------+--------------\n 16402 | r_1_p_id_fkey1 | f | 16394 | 16392 | 0 |\n16389 | t | 0 | f\n(1 row)\n\npostgres=# select oid, tgrelid, tgparentid, tgconstrrelid, tgconstrindid,\ntgconstraint from pg_trigger where tgconstraint = 16402;\n oid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint\n-------+---------+------------+---------------+---------------+--------------\n 16403 | 16389 | 16400 | 16394 | 16392 | 16402\n 16404 | 16389 | 16401 | 16394 | 16392 | 16402\n 16422 | 16389 | 0 | 16394 | 16392 | 16402\n 16423 | 16389 | 0 | 16394 | 16392 | 16402\n(4 rows)\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:Hello,\n\nI think the fix for the check triggers should be as the attached. Very\nclose to what you did, but you were skipping some operations that needed\nto be kept. AFAICS this patch works correctly for the posted cases.After applying the attached, the r_1_p_id_fkey1 will have redundant actiontriggers, as below:postgres=# select oid, conname, contype, conrelid, conindid,conparentid, confrelid,conislocal,coninhcount, connoinherit from pg_constraint where oid = 16402; oid | conname | contype | conrelid | conindid | conparentid | confrelid | conislocal | coninhcount | connoinherit-------+----------------+---------+----------+----------+-------------+-----------+------------+-------------+-------------- 16402 | r_1_p_id_fkey1 | f | 16394 | 16392 | 0 | 16389 | t | 0 | f(1 row)postgres=# select oid, tgrelid, tgparentid, tgconstrrelid, tgconstrindid, tgconstraint from pg_trigger where tgconstraint = 16402; oid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint-------+---------+------------+---------------+---------------+-------------- 16403 | 16389 | 16400 | 16394 | 16392 | 16402 16404 | 16389 | 16401 | 16394 | 16392 | 16402 16422 | 16389 | 0 | 16394 | 16392 | 16402 16423 | 16389 | 0 | 16394 | 16392 | 16402(4 rows)-- Tender Wang",
"msg_date": "Mon, 22 Jul 2024 13:52:19 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:\n\n>\n> I find this pair of queries useful; they show which constraints exist\n> and which triggers belong to each. We need to make the constraints and\n> triggers match after a detach right as things would be if the\n> just-detached partition were an individual table having the same foreign\n> key.\n>\n\nI don't find the useful queries in your last email. Can you provide them.\nThanks.\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:\n\nI find this pair of queries useful; they show which constraints exist\nand which triggers belong to each. We need to make the constraints and\ntriggers match after a detach right as things would be if the\njust-detached partition were an individual table having the same foreign\nkey.I don't find the useful queries in your last email. Can you provide them.Thanks. -- Tender Wang",
"msg_date": "Tue, 23 Jul 2024 10:15:47 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 1:52 PM Tender Wang <[email protected]> wrote:\n>\n>\n>\n> Alvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:\n>>\n>> Hello,\n>>\n>> I think the fix for the check triggers should be as the attached. Very\n>> close to what you did, but you were skipping some operations that needed\n>> to be kept. AFAICS this patch works correctly for the posted cases.\n>\n>\n> After applying the attached, the r_1_p_id_fkey1 will have redundant action\n> triggers, as below:\n> postgres=# select oid, conname, contype, conrelid, conindid,conparentid, confrelid,conislocal,coninhcount, connoinherit from pg_constraint where oid = 16402;\n> oid | conname | contype | conrelid | conindid | conparentid | confrelid | conislocal | coninhcount | connoinherit\n> -------+----------------+---------+----------+----------+-------------+-----------+------------+-------------+--------------\n> 16402 | r_1_p_id_fkey1 | f | 16394 | 16392 | 0 | 16389 | t | 0 | f\n> (1 row)\n>\n> postgres=# select oid, tgrelid, tgparentid, tgconstrrelid, tgconstrindid, tgconstraint from pg_trigger where tgconstraint = 16402;\n> oid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint\n> -------+---------+------------+---------------+---------------+--------------\n> 16403 | 16389 | 16400 | 16394 | 16392 | 16402\n> 16404 | 16389 | 16401 | 16394 | 16392 | 16402\n> 16422 | 16389 | 0 | 16394 | 16392 | 16402\n> 16423 | 16389 | 0 | 16394 | 16392 | 16402\n> (4 rows)\n>\n\nYes, seems Alvaro has mentioned that he hasn't looked at the\naction triggers, in the attached patch, I add some logic that\nfirst check if there exists action triggers, if yes, just update\ntheir Parent Trigger to InvalidOid.\n\n>\n> --\n> Tender Wang\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 26 Jul 2024 14:36:16 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 2:36 PM Junwang Zhao <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 1:52 PM Tender Wang <[email protected]> wrote:\n> >\n> >\n> >\n> > Alvaro Herrera <[email protected]> 于2024年7月19日周五 21:18写道:\n> >>\n> >> Hello,\n> >>\n> >> I think the fix for the check triggers should be as the attached. Very\n> >> close to what you did, but you were skipping some operations that needed\n> >> to be kept. AFAICS this patch works correctly for the posted cases.\n> >\n> >\n> > After applying the attached, the r_1_p_id_fkey1 will have redundant action\n> > triggers, as below:\n> > postgres=# select oid, conname, contype, conrelid, conindid,conparentid, confrelid,conislocal,coninhcount, connoinherit from pg_constraint where oid = 16402;\n> > oid | conname | contype | conrelid | conindid | conparentid | confrelid | conislocal | coninhcount | connoinherit\n> > -------+----------------+---------+----------+----------+-------------+-----------+------------+-------------+--------------\n> > 16402 | r_1_p_id_fkey1 | f | 16394 | 16392 | 0 | 16389 | t | 0 | f\n> > (1 row)\n> >\n> > postgres=# select oid, tgrelid, tgparentid, tgconstrrelid, tgconstrindid, tgconstraint from pg_trigger where tgconstraint = 16402;\n> > oid | tgrelid | tgparentid | tgconstrrelid | tgconstrindid | tgconstraint\n> > -------+---------+------------+---------------+---------------+--------------\n> > 16403 | 16389 | 16400 | 16394 | 16392 | 16402\n> > 16404 | 16389 | 16401 | 16394 | 16392 | 16402\n> > 16422 | 16389 | 0 | 16394 | 16392 | 16402\n> > 16423 | 16389 | 0 | 16394 | 16392 | 16402\n> > (4 rows)\n> >\n>\n> Yes, seems Alvaro has mentioned that he hasn't looked at the\n> action triggers, in the attached patch, I add some logic that\n> first check if there exists action triggers, if yes, just update\n> their Parent Trigger to InvalidOid.\n>\n> >\n> > --\n> > Tender Wang\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n\nThere is a bug report[0] Tender comments might be the same\nissue as this one, but I tried Alvaro's and mine patch, neither\ncould solve that problem, I did not tried Tender's earlier patch\nthought. I post the test script below in case you are interested.\n\nCREATE TABLE t1 (a int, PRIMARY KEY (a));\nCREATE TABLE t (a int, PRIMARY KEY (a), FOREIGN KEY (a) REFERENCES t1)\nPARTITION BY LIST (a);\nALTER TABLE t ATTACH PARTITION t1 FOR VALUES IN (1);\nALTER TABLE t DETACH PARTITION t1;\nALTER TABLE t ATTACH PARTITION t1 FOR VALUES IN (1);\n\n\n[0] https://www.postgresql.org/message-id/[email protected]\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 26 Jul 2024 14:56:51 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Jul-26, Junwang Zhao wrote:\n\n> There is a bug report[0] Tender comments might be the same\n> issue as this one, but I tried Alvaro's and mine patch, neither\n> could solve that problem, I did not tried Tender's earlier patch\n> thought. I post the test script below in case you are interested.\n\nYeah, I've been looking at this whole debacle this week and after\nlooking at it more closely, I realized that the overall problem requires\na much more invasive solution -- namely, that on DETACH, if the\nreferenced table is partitioned, we need to create additional\npg_constraint entries from the now-standalone table (was partition) to\neach of the partitions of the referenced table; and also add action\ntriggers to each of those. Without that, the constraint is incomplete\nand doesn't work (as reported multiple times already).\n\nOne thing I have not yet tried is what if the partition being detach is\nalso partitioned. I mean, do we need to handle each sub-partition\nexplicitly in some way? I think the answer is no, but it needs tests.\n\nI have written the patch to do this on detach, and AFAICS it works well,\nthough it changes the behavior of some existing tests (IIRC related to\nself-referencing FKs). Also, the next problem is making sure that\nATTACH deals with it correctly. I'm on this bit today.\n\nSelf-referencing FKs seem to have additional problems :-(\n\nThe queries I was talking about are these\n\n\\set tables ''''prim.*''',''forign.*''',''''lone''''\n\nselect oid, conparentid, contype, conname, conrelid::regclass, confrelid::regclass, conkey, confkey, conindid::regclass from pg_constraint where contype = 'f' and (conrelid::regclass::text ~ any (array[:tables]) or confrelid::regclass::text ~ any (array[:tables])) order by contype, conrelid, confrelid; select tgconstraint, oid, tgrelid::regclass, tgconstrrelid::regclass, tgname, tgparentid, tgconstrindid::regclass, tgfoid::regproc from pg_trigger where tgconstraint in (select oid from pg_constraint where conrelid::regclass::text ~ any (array[:tables]) or confrelid::regclass::text ~ any (array[:tables])) order by tgconstraint, tgrelid::regclass::text, tgfoid;\n\nWritten as a single line in psql they let you quickly see all the\nconstraints and their associated triggers, so for instance you can see\nwhether this sequence\n\ncreate table prim (a int primary key) partition by list (a);\ncreate table prim1 partition of prim for values in (1);\ncreate table prim2 partition of prim for values in (2);\ncreate table forign (a int references prim) partition by list (a);\ncreate table forign1 partition of forign for values in (1);\ncreate table forign2 partition of forign for values in (2);\nalter table forign detach partition forign1;\n\nproduces the same set of constraints and triggers as this other sequence\n\ncreate table prim (a int primary key) partition by list (a);\ncreate table prim1 partition of prim for values in (1);\ncreate table prim2 partition of prim for values in (2);\ncreate table forign (a int references prim) partition by list (a);\ncreate table forign2 partition of forign for values in (2);\ncreate table forign1 (a int references prim);\n\n\nThe patch is more or less like the attached, far from ready.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.",
"msg_date": "Fri, 26 Jul 2024 10:36:08 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n\n>\n> There is a bug report[0] Tender comments might be the same\n> issue as this one, but I tried Alvaro's and mine patch, neither\n> could solve that problem, I did not tried Tender's earlier patch\n> thought. I post the test script below in case you are interested.\n>\n\nMy earlier patch should handle Alexander reported case. But I did not do\nmore\ntest. I'm not sure that wether or not has dismatching between pg_constraint\nand pg_trigger.\n\nI aggred with Alvaro said that \"requires a much more invasive solution\".\n\n-- \nTender Wang\n\nJunwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\nThere is a bug report[0] Tender comments might be the same\nissue as this one, but I tried Alvaro's and mine patch, neither\ncould solve that problem, I did not tried Tender's earlier patch\nthought. I post the test script below in case you are interested.\nMy earlier patch should handle Alexander reported case. But I did not do moretest. I'm not sure that wether or not has dismatching between pg_constraint and pg_trigger.I aggred with Alvaro said that \"requires a much more invasive solution\".-- Tender Wang",
"msg_date": "Fri, 26 Jul 2024 17:08:34 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Jul-26, Tender Wang wrote:\n\n> Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> \n> > There is a bug report[0] Tender comments might be the same issue as\n> > this one, but I tried Alvaro's and mine patch, neither could solve\n> > that problem, I did not tried Tender's earlier patch thought. I post\n> > the test script below in case you are interested.\n> \n> My earlier patch should handle Alexander reported case. But I did not\n> do more test. I'm not sure that wether or not has dismatching between\n> pg_constraint and pg_trigger.\n> \n> I aggred with Alvaro said that \"requires a much more invasive\n> solution\".\n\nHere's the patch which, as far as I can tell, fixes all the reported\nproblems (other than the one in bug 18541, for which I proposed an\nunrelated fix in that thread[1]). If you can double-check, I would very\nmuch appreciate that. Also, I think the test cases the patch adds\nreflect the provided examples sufficiently, but if we're still failing\nto cover some, please let me know.\n\n\nAs I understand, this fix needs to be applied all the way back to 12,\nbecause the overall functionality is that old. However, in branches 14\nand back, the patch doesn't apply cleanly, because of the changes we\nmade in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n16, 17, master now and potentially backpatch later, to avoid dragging\nthings along further. It's taken long enough already.\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)",
"msg_date": "Wed, 7 Aug 2024 18:50:10 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:\n\n> On 2024-Jul-26, Tender Wang wrote:\n>\n> > Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> >\n> > > There is a bug report[0] Tender comments might be the same issue as\n> > > this one, but I tried Alvaro's and mine patch, neither could solve\n> > > that problem, I did not tried Tender's earlier patch thought. I post\n> > > the test script below in case you are interested.\n> >\n> > My earlier patch should handle Alexander reported case. But I did not\n> > do more test. I'm not sure that wether or not has dismatching between\n> > pg_constraint and pg_trigger.\n> >\n> > I aggred with Alvaro said that \"requires a much more invasive\n> > solution\".\n>\n> Here's the patch which, as far as I can tell, fixes all the reported\n> problems (other than the one in bug 18541, for which I proposed an\n> unrelated fix in that thread[1]). If you can double-check, I would very\n> much appreciate that. Also, I think the test cases the patch adds\n> reflect the provided examples sufficiently, but if we're still failing\n> to cover some, please let me know.\n>\n\nThanks for your work. I will take some time to look it in detail.\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:On 2024-Jul-26, Tender Wang wrote:\n\n> Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> \n> > There is a bug report[0] Tender comments might be the same issue as\n> > this one, but I tried Alvaro's and mine patch, neither could solve\n> > that problem, I did not tried Tender's earlier patch thought. I post\n> > the test script below in case you are interested.\n> \n> My earlier patch should handle Alexander reported case. But I did not\n> do more test. I'm not sure that wether or not has dismatching between\n> pg_constraint and pg_trigger.\n> \n> I aggred with Alvaro said that \"requires a much more invasive\n> solution\".\n\nHere's the patch which, as far as I can tell, fixes all the reported\nproblems (other than the one in bug 18541, for which I proposed an\nunrelated fix in that thread[1]). If you can double-check, I would very\nmuch appreciate that. Also, I think the test cases the patch adds\nreflect the provided examples sufficiently, but if we're still failing\nto cover some, please let me know.Thanks for your work. I will take some time to look it in detail. -- Tender Wang",
"msg_date": "Thu, 8 Aug 2024 10:16:40 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:\n\n> On 2024-Jul-26, Tender Wang wrote:\n>\n> > Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> >\n> > > There is a bug report[0] Tender comments might be the same issue as\n> > > this one, but I tried Alvaro's and mine patch, neither could solve\n> > > that problem, I did not tried Tender's earlier patch thought. I post\n> > > the test script below in case you are interested.\n> >\n> > My earlier patch should handle Alexander reported case. But I did not\n> > do more test. I'm not sure that wether or not has dismatching between\n> > pg_constraint and pg_trigger.\n> >\n> > I aggred with Alvaro said that \"requires a much more invasive\n> > solution\".\n>\n> Here's the patch which, as far as I can tell, fixes all the reported\n> problems (other than the one in bug 18541, for which I proposed an\n> unrelated fix in that thread[1]). If you can double-check, I would very\n> much appreciate that. Also, I think the test cases the patch adds\n> reflect the provided examples sufficiently, but if we're still failing\n> to cover some, please let me know.\n>\n\nI did a lot of tests, and did not report error and did not find any\nwarnings using oidjoins.sql.\n+1\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:On 2024-Jul-26, Tender Wang wrote:\n\n> Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> \n> > There is a bug report[0] Tender comments might be the same issue as\n> > this one, but I tried Alvaro's and mine patch, neither could solve\n> > that problem, I did not tried Tender's earlier patch thought. I post\n> > the test script below in case you are interested.\n> \n> My earlier patch should handle Alexander reported case. But I did not\n> do more test. I'm not sure that wether or not has dismatching between\n> pg_constraint and pg_trigger.\n> \n> I aggred with Alvaro said that \"requires a much more invasive\n> solution\".\n\nHere's the patch which, as far as I can tell, fixes all the reported\nproblems (other than the one in bug 18541, for which I proposed an\nunrelated fix in that thread[1]). If you can double-check, I would very\nmuch appreciate that. Also, I think the test cases the patch adds\nreflect the provided examples sufficiently, but if we're still failing\nto cover some, please let me know.I did a lot of tests, and did not report error and did not find any warnings using oidjoins.sql.+1 -- Tender Wang",
"msg_date": "Thu, 8 Aug 2024 22:25:35 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:\n\n> On 2024-Jul-26, Tender Wang wrote:\n>\n> > Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> >\n> > > There is a bug report[0] Tender comments might be the same issue as\n> > > this one, but I tried Alvaro's and mine patch, neither could solve\n> > > that problem, I did not tried Tender's earlier patch thought. I post\n> > > the test script below in case you are interested.\n> >\n> > My earlier patch should handle Alexander reported case. But I did not\n> > do more test. I'm not sure that wether or not has dismatching between\n> > pg_constraint and pg_trigger.\n> >\n> > I aggred with Alvaro said that \"requires a much more invasive\n> > solution\".\n>\n> Here's the patch which, as far as I can tell, fixes all the reported\n> problems (other than the one in bug 18541, for which I proposed an\n> unrelated fix in that thread[1]). If you can double-check, I would very\n> much appreciate that. Also, I think the test cases the patch adds\n> reflect the provided examples sufficiently, but if we're still failing\n> to cover some, please let me know.\n>\n>\n> As I understand, this fix needs to be applied all the way back to 12,\n> because the overall functionality is that old. However, in branches 14\n> and back, the patch doesn't apply cleanly, because of the changes we\n> made in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n> 16, 17, master now and potentially backpatch later, to avoid dragging\n> things along further. It's taken long enough already.\n>\n\nI haven't seen this patch on master. Is there something that we fotget to\nhandle?\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:On 2024-Jul-26, Tender Wang wrote:\n\n> Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> \n> > There is a bug report[0] Tender comments might be the same issue as\n> > this one, but I tried Alvaro's and mine patch, neither could solve\n> > that problem, I did not tried Tender's earlier patch thought. I post\n> > the test script below in case you are interested.\n> \n> My earlier patch should handle Alexander reported case. But I did not\n> do more test. I'm not sure that wether or not has dismatching between\n> pg_constraint and pg_trigger.\n> \n> I aggred with Alvaro said that \"requires a much more invasive\n> solution\".\n\nHere's the patch which, as far as I can tell, fixes all the reported\nproblems (other than the one in bug 18541, for which I proposed an\nunrelated fix in that thread[1]). If you can double-check, I would very\nmuch appreciate that. Also, I think the test cases the patch adds\nreflect the provided examples sufficiently, but if we're still failing\nto cover some, please let me know.\n\n\nAs I understand, this fix needs to be applied all the way back to 12,\nbecause the overall functionality is that old. However, in branches 14\nand back, the patch doesn't apply cleanly, because of the changes we\nmade in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n16, 17, master now and potentially backpatch later, to avoid dragging\nthings along further. It's taken long enough already.I haven't seen this patch on master. Is there something that we fotget to handle?-- Tender Wang",
"msg_date": "Tue, 20 Aug 2024 09:51:16 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Aug-20, Tender Wang wrote:\n\n> > As I understand, this fix needs to be applied all the way back to 12,\n> > because the overall functionality is that old. However, in branches 14\n> > and back, the patch doesn't apply cleanly, because of the changes we\n> > made in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n> > 16, 17, master now and potentially backpatch later, to avoid dragging\n> > things along further. It's taken long enough already.\n> \n> I haven't seen this patch on master. Is there something that we fotget to\n> handle?\n\nI haven't pushed it yet, mostly because of being unsure about not doing\nanything for the oldest branches (14 and back).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n\n",
"msg_date": "Mon, 19 Aug 2024 22:25:13 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月20日周二 10:25写道:\n\n> On 2024-Aug-20, Tender Wang wrote:\n>\n> > > As I understand, this fix needs to be applied all the way back to 12,\n> > > because the overall functionality is that old. However, in branches 14\n> > > and back, the patch doesn't apply cleanly, because of the changes we\n> > > made in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n> > > 16, 17, master now and potentially backpatch later, to avoid dragging\n> > > things along further. It's taken long enough already.\n> >\n> > I haven't seen this patch on master. Is there something that we fotget\n> to\n> > handle?\n>\n> I haven't pushed it yet, mostly because of being unsure about not doing\n> anything for the oldest branches (14 and back).\n>\n\nI only did codes and tests on master. I'm not sure how much complicated it\nwould be\nto fix this issue on branches 14 and back.\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月20日周二 10:25写道:On 2024-Aug-20, Tender Wang wrote:\n\n> > As I understand, this fix needs to be applied all the way back to 12,\n> > because the overall functionality is that old. However, in branches 14\n> > and back, the patch doesn't apply cleanly, because of the changes we\n> > made in commit f4566345cf40 :-( I'm tempted to fix it in branches 15,\n> > 16, 17, master now and potentially backpatch later, to avoid dragging\n> > things along further. It's taken long enough already.\n> \n> I haven't seen this patch on master. Is there something that we fotget to\n> handle?\n\nI haven't pushed it yet, mostly because of being unsure about not doing\nanything for the oldest branches (14 and back).I only did codes and tests on master. I'm not sure how much complicated it would beto fix this issue on branches 14 and back. -- Tender Wang",
"msg_date": "Tue, 20 Aug 2024 10:45:47 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Wed, 7 Aug 2024 18:50:10 -0400\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2024-Jul-26, Tender Wang wrote:\n> \n> > Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> > \n> > > There is a bug report[0] Tender comments might be the same issue as\n> > > this one, but I tried Alvaro's and mine patch, neither could solve\n> > > that problem, I did not tried Tender's earlier patch thought. I post\n> > > the test script below in case you are interested.\n> > \n> > My earlier patch should handle Alexander reported case. But I did not\n> > do more test. I'm not sure that wether or not has dismatching between\n> > pg_constraint and pg_trigger.\n> > \n> > I aggred with Alvaro said that \"requires a much more invasive\n> > solution\".\n> \n> Here's the patch which, as far as I can tell, fixes all the reported\n> problems (other than the one in bug 18541, for which I proposed an\n> unrelated fix in that thread[1]). If you can double-check, I would very\n> much appreciate that. Also, I think the test cases the patch adds\n> reflect the provided examples sufficiently, but if we're still failing\n> to cover some, please let me know.\n\nI'm back on this issue as well. I start poking at this patch to review it,\ntest it, challenge it and then report here.\n\nI'll try to check if some other issues might have lost/forgot on they way as\nwell.\n\nIn the meantime, thank you Alvaro, Tender and Junwang for your work, time,\nresearch and patches!\n\n\n",
"msg_date": "Tue, 20 Aug 2024 10:26:51 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Aug-20, Jehan-Guillaume de Rorthais wrote:\n\n> I'm back on this issue as well. I start poking at this patch to review it,\n> test it, challenge it and then report here.\n> \n> I'll try to check if some other issues might have lost/forgot on they way as\n> well.\n\nThanks, much appreciated, looking forward to your feedback.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Aug 2024 23:09:27 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Aug-19, Alvaro Herrera wrote:\n\n> I haven't pushed it yet, mostly because of being unsure about not doing\n> anything for the oldest branches (14 and back).\n\nLast night, after much mulling on this, it occurred to me that one easy\nway out of this problem for the old branches, without having to write\nmore code, is to simply remove the constraint from the partition when\nit's detached (but only if they reference a partitioned relation). It's\nnot a great solution, but at least we're no longer leaving bogus catalog\nentries around. That would be like the attached patch, which was cut\nfrom 14 and applies cleanly to 12 and 13. I'd throw in a couple of\ntests and call it a day.\n\n\n(TBH the idea of leaving the partition without a foreign key feels to me\nlike travelling in a car without a seat belt -- it feels instinctively\ndangerous. This is why I went such lengths to keep FKs on detach\ninitially. But I'm not inclined to spend more time on this issue.\nHowever ... what about fixing catalog content that's already broken\nafter past detach ops?)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 21 Aug 2024 18:00:46 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月22日周四 06:00写道:\n\n> On 2024-Aug-19, Alvaro Herrera wrote:\n>\n> > I haven't pushed it yet, mostly because of being unsure about not doing\n> > anything for the oldest branches (14 and back).\n>\n> Last night, after much mulling on this, it occurred to me that one easy\n> way out of this problem for the old branches, without having to write\n> more code, is to simply remove the constraint from the partition when\n> it's detached (but only if they reference a partitioned relation). It's\n> not a great solution, but at least we're no longer leaving bogus catalog\n> entries around. That would be like the attached patch, which was cut\n> from 14 and applies cleanly to 12 and 13. I'd throw in a couple of\n> tests and call it a day.\n>\n\nI apply the v14 patch on branch REL_14_STABLE. I run this thread issue and I\nfind below error.\npostgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);\nCREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\nCREATE TABLE r_1 (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n);\nCREATE TABLE r (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n) PARTITION BY list (id);\nALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\nALTER TABLE r DETACH PARTITION r_1;\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\nCREATE TABLE\nALTER TABLE\nERROR: cache lookup failed for constraint 16400\n\nI haven't look into details to find out where cause above error.\n\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月22日周四 06:00写道:On 2024-Aug-19, Alvaro Herrera wrote:\n\n> I haven't pushed it yet, mostly because of being unsure about not doing\n> anything for the oldest branches (14 and back).\n\nLast night, after much mulling on this, it occurred to me that one easy\nway out of this problem for the old branches, without having to write\nmore code, is to simply remove the constraint from the partition when\nit's detached (but only if they reference a partitioned relation). It's\nnot a great solution, but at least we're no longer leaving bogus catalog\nentries around. That would be like the attached patch, which was cut\nfrom 14 and applies cleanly to 12 and 13. I'd throw in a couple of\ntests and call it a day.I apply the v14 patch on branch REL_14_STABLE. I run this thread issue and Ifind below error.postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);CREATE TABLE r_1 ( id bigint PRIMARY KEY, p_id bigint NOT NULL, FOREIGN KEY (p_id) REFERENCES p (id));CREATE TABLE r ( id bigint PRIMARY KEY, p_id bigint NOT NULL, FOREIGN KEY (p_id) REFERENCES p (id)) PARTITION BY list (id);ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);ALTER TABLE r DETACH PARTITION r_1;CREATE TABLECREATE TABLECREATE TABLECREATE TABLEALTER TABLEERROR: cache lookup failed for constraint 16400I haven't look into details to find out where cause above error. -- Tender Wang",
"msg_date": "Thu, 22 Aug 2024 11:19:23 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Tender Wang <[email protected]> 于2024年8月22日周四 11:19写道:\n\n>\n>\n> Alvaro Herrera <[email protected]> 于2024年8月22日周四 06:00写道:\n>\n>> On 2024-Aug-19, Alvaro Herrera wrote:\n>>\n>> > I haven't pushed it yet, mostly because of being unsure about not doing\n>> > anything for the oldest branches (14 and back).\n>>\n>> Last night, after much mulling on this, it occurred to me that one easy\n>> way out of this problem for the old branches, without having to write\n>> more code, is to simply remove the constraint from the partition when\n>> it's detached (but only if they reference a partitioned relation). It's\n>> not a great solution, but at least we're no longer leaving bogus catalog\n>> entries around. That would be like the attached patch, which was cut\n>> from 14 and applies cleanly to 12 and 13. I'd throw in a couple of\n>> tests and call it a day.\n>>\n>\n> I apply the v14 patch on branch REL_14_STABLE. I run this thread issue and\n> I\n> find below error.\n> postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);\n> CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n> CREATE TABLE r_1 (\n> id bigint PRIMARY KEY,\n> p_id bigint NOT NULL,\n> FOREIGN KEY (p_id) REFERENCES p (id)\n> );\n> CREATE TABLE r (\n> id bigint PRIMARY KEY,\n> p_id bigint NOT NULL,\n> FOREIGN KEY (p_id) REFERENCES p (id)\n> ) PARTITION BY list (id);\n> ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);\n> ALTER TABLE r DETACH PARTITION r_1;\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TABLE\n> ALTER TABLE\n> ERROR: cache lookup failed for constraint 16400\n>\n\nI guess it is because cascade dropping, the oid=16400 has been deleted.\nAdding a list that remember dropped constraint, if we find the parent of\nconstraint\nis in the list, we skip.\n\nBy the way, I run above SQL sequences on REL_14_STABLE without your partch.\nI didn't find reporting error, and running oidjoins.sql didn't report\nwarnings.\nDo I miss something?\n\n\n-- \nTender Wang\n\nTender Wang <[email protected]> 于2024年8月22日周四 11:19写道:Alvaro Herrera <[email protected]> 于2024年8月22日周四 06:00写道:On 2024-Aug-19, Alvaro Herrera wrote:\n\n> I haven't pushed it yet, mostly because of being unsure about not doing\n> anything for the oldest branches (14 and back).\n\nLast night, after much mulling on this, it occurred to me that one easy\nway out of this problem for the old branches, without having to write\nmore code, is to simply remove the constraint from the partition when\nit's detached (but only if they reference a partitioned relation). It's\nnot a great solution, but at least we're no longer leaving bogus catalog\nentries around. That would be like the attached patch, which was cut\nfrom 14 and applies cleanly to 12 and 13. I'd throw in a couple of\ntests and call it a day.I apply the v14 patch on branch REL_14_STABLE. 
I run this thread issue and Ifind below error.postgres=# CREATE TABLE p ( id bigint PRIMARY KEY ) PARTITION BY list (id);CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);CREATE TABLE r_1 ( id bigint PRIMARY KEY, p_id bigint NOT NULL, FOREIGN KEY (p_id) REFERENCES p (id));CREATE TABLE r ( id bigint PRIMARY KEY, p_id bigint NOT NULL, FOREIGN KEY (p_id) REFERENCES p (id)) PARTITION BY list (id);ALTER TABLE r ATTACH PARTITION r_1 FOR VALUES IN (1);ALTER TABLE r DETACH PARTITION r_1;CREATE TABLECREATE TABLECREATE TABLECREATE TABLEALTER TABLEERROR: cache lookup failed for constraint 16400I guess it is because cascade dropping, the oid=16400 has been deleted.Adding a list that remember dropped constraint, if we find the parent of constraintis in the list, we skip.By the way, I run above SQL sequences on REL_14_STABLE without your partch.I didn't find reporting error, and running oidjoins.sql didn't report warnings.Do I miss something?-- Tender Wang",
"msg_date": "Thu, 22 Aug 2024 15:07:26 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On 2024-Aug-22, Tender Wang wrote:\n\n> I apply the v14 patch on branch REL_14_STABLE. I run this thread issue and I\n> find below error.\n> [...]\n> ERROR: cache lookup failed for constraint 16400\n> \n> I haven't look into details to find out where cause above error.\n\nRight, we try to drop the constraint twice. We can dodge this by\ncollecting all constraints to drop in the loop and process them in a\nsingle performMultipleDeletions, as in the attached v14-2.\n\nTBH I think it's a bit infuriating that we lose the constraint (which\nwas explicitly declared) because of ATTACH/DETACH. So the behavior of\nv15 and above is better.\n\n> By the way, I run above SQL sequences on REL_14_STABLE without your\n> partch. I didn't find reporting error, and running oidjoins.sql\n> didn't report warnings. Do I miss something?\n\nI think the action triggers are missing, so if you keep rows in the r_1\ntable after you've detached them, you can still delete them from the\nreferenced table p, instead of getting the error that you should get.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)",
"msg_date": "Thu, 22 Aug 2024 14:41:50 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月23日周五 02:41写道:\n\n> On 2024-Aug-22, Tender Wang wrote:\n>\n> > I apply the v14 patch on branch REL_14_STABLE. I run this thread issue\n> and I\n> > find below error.\n> > [...]\n> > ERROR: cache lookup failed for constraint 16400\n> >\n> > I haven't look into details to find out where cause above error.\n>\n> Right, we try to drop the constraint twice. We can dodge this by\n> collecting all constraints to drop in the loop and process them in a\n> single performMultipleDeletions, as in the attached v14-2.\n>\n\nCan we move the CommandCounterIncrement() in\nif (get_rel_relkind(fk->confrelid) == RELKIND_PARTITIONED_TABLE) block\nto be close to performMultipleDeletions().\n\nOthers look good to me.\n\nTBH I think it's a bit infuriating that we lose the constraint (which\n> was explicitly declared) because of ATTACH/DETACH.\n\n\nAgree.\nDo you think it is friendly to users if we add hints that inform them the\nconstraint was dropped?\n\n-- \nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月23日周五 02:41写道:On 2024-Aug-22, Tender Wang wrote:\n\n> I apply the v14 patch on branch REL_14_STABLE. I run this thread issue and I\n> find below error.\n> [...]\n> ERROR: cache lookup failed for constraint 16400\n> \n> I haven't look into details to find out where cause above error.\n\nRight, we try to drop the constraint twice. We can dodge this by\ncollecting all constraints to drop in the loop and process them in a\nsingle performMultipleDeletions, as in the attached v14-2.Can we move the CommandCounterIncrement() in if (get_rel_relkind(fk->confrelid) == RELKIND_PARTITIONED_TABLE) blockto be close to performMultipleDeletions().Others look good to me.\nTBH I think it's a bit infuriating that we lose the constraint (which\nwas explicitly declared) because of ATTACH/DETACH. Agree. Do you think it is friendly to users if we add hints that inform them the constraint was dropped?-- Tender Wang",
"msg_date": "Fri, 23 Aug 2024 10:44:19 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hi,\n\nOn Tue, 20 Aug 2024 23:09:27 -0400\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2024-Aug-20, Jehan-Guillaume de Rorthais wrote:\n> \n> > I'm back on this issue as well. I start poking at this patch to review it,\n> > test it, challenge it and then report here.\n> > \n> > I'll try to check if some other issues might have lost/forgot on they way as\n> > well.\n> \n> Thanks, much appreciated, looking forward to your feedback.\n\nSorry, it took me a while to come back to you on this topic. It has been hard to\nuntangle subjects, reproductions and patch…\n\nThere's three distinct issues/thread:\n\n* Constraint & trigger catalog cleanup [1] (this thread)\n* FK broken after DETACH [2]\n* Maintenance consideration about self referencing FK between partitions [3]\n\n0. Splitting in two commits\n\n Your patch addresses two bugs:\n\n * one for the constraint & trigger catalog cleanup;\n * one for the FK broken after DETACH.\n\n These issues are unrelated, therefore I am wondering if it would be better\n to split their resolution in two different patches.\n\n Last year, I reported them in two different threads [1][2]. The first with\n implementation consideration, the second with a demo/proposal/draft fix.\n\n Unfortunately, this discussion about the first bug slipped to the second one\n when Tender stumbled on this bug as well and reported it. But, both bugs can\n be triggered independently, and have distinct fixes.\n\n Finally, splitting the patch might help setting finer patch co-authoring. I\n know my patch for [2] was a draft and somewhat trivial, but I spend a fair\n amount of time to report, then produce a draft patch, so I was wondering if\n it would be candidate to a co-author flag on this (small, humble and\n refactored by you) patch?\n\n I'm definitely not involved (yet) in the second part though.\n\n1. Constraint & trigger catalog cleanup [1]\n\n I have been focusing on the current master branch and haven't taken into\n consideration backpatching related issues yet.\n\n When I first studied this bug and reported it, I held on writing a patch\n because it seemed it would duplicate some existing code. I wrote:\n\n > I poked around DetachPartitionFinalize() to try to find a way to fix this,\n > but it looks like it would duplicate a bunch of code from other code path\n > (eg. from CloneFkReferenced).\n\n My proposal was to clean everything related to the old FK and use some\n existing code path to create a fresh and cleaner one. This requires some\n refactoring in existing code, but we would win a common path of code between\n create/attach/detach, a cleaner catalog and easier code maintenance.\n\n I've finally been able to write a PoC that implement this by calling\n addFkRecurseReferenced() from DetachPartitionFinalize(). I can't join\n it here because it is currently an ugly draft and I still have some work\n to do. But I would really like to have a little more time (one or two days) to\n explore this avenue further before you commit yours, if you don't mind? Or\n maybe you already have considered this avenue and rejected it?\n\n\n2. FK broken after DETACH [2]\n\n Comparing your patch to my draft from [2], I just have a question about the\n refactoring.\n\n Fencing the constraint/trigger removal inside a conditional\n RELKIND_PARTITIONED_TABLE block of code was obvious. 
It avoids some useless\n catalog scan compared to my draft patch.\n\n Also, the \"contype == CONSTRAINT_FOREIGN\" I had sounds safe to remove.\n\n However, is it clean/light enough to add the \"conparentid == fk->conoid\" in\n the scan key as I did? I'm not sure it saves anything else but the small\n conditional block you inserted inside the loop, but I wonder if there's a\n serious concern about this anyway?\n\n Last, considering the tests, I think we should add some rows in the tables,\n to make sure the FK is correctly enforced after DETACH. Something like:\n\n CREATE SCHEMA fkpart12\n CREATE TABLE fk_p ( id bigint PRIMARY KEY ) PARTITION BY list (id)\n CREATE TABLE fk_p_1 PARTITION OF fk_p FOR VALUES IN (1)\n CREATE TABLE fk_p_2 PARTITION OF fk_p FOR VALUES IN (2)\n CREATE TABLE fk_r_1 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n CREATE TABLE fk_r_2 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n CREATE TABLE fk_r ( id bigint PRIMARY KEY, p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES fk_p (id)\n ) PARTITION BY list (id);\n SET search_path TO fkpart12;\n\n INSERT INTO fk_p VALUES (1);\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_2 FOR VALUES IN (2);\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n \\d fk_r_1\n\n INSERT INTO fk_r VALUES (1,1);\n\n ALTER TABLE fk_r DETACH PARTITION fk_r_1;\n \\d fk_r_1\n\n INSERT INTO c_1 VALUES (2,2); -- fails as EXPECTED\n DELETE FROM p; -- should fails but was buggy\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n \\d fk_r_1\n\n\n3. Self referencing FK between partitions [3]\n\n You added to your commit message:\n\n verify: 20230707175859.17c91538@karst\n\n I'm not sure what the \"verify\" flag means. Unfortunately, your patch doesn't\n help on this topic.\n\n This bug really needs more discussion and design consideration. I have\n thought about this problem and haven't found any solution that don't involve\n breaking the current core behavior. It really looks like an impossible bug to\n fix without dropping the constraint itself. On both side. Maybe the only sane\n behavior would be to forbid detaching the partition if it would break the\n constraint.\n\n But let's discuss this on the related thread, should we?\n\n\nThank you for reading me all the way down to the bottom!\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20230705233028.2f554f73%40karst\n[2] https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n[3] https://www.postgresql.org/message-id/20230707175859.17c91538%40karst\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 23:01:47 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
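As a rough illustration of the scan-key question raised in the review above (whether filtering pg_constraint on conparentid directly in the scan keys is cleaner than re-checking it inside the loop), the following C fragment sketches what such a two-key scan could look like. It is not taken from any of the patches in this thread; "conrel" (pg_constraint opened by the caller), "partRel" (the detached partition) and "fk" (a ForeignKeyCacheInfo) are assumptions, and the forced heap scan via InvalidOid is chosen only to keep the example simple.

    /*
     * Hypothetical sketch, not from the discussed patches: restrict the
     * pg_constraint scan to rows whose conparentid matches the parent FK,
     * instead of filtering inside the loop.  conrel, partRel and fk are
     * assumed to be set up by the caller.
     */
    ScanKeyData key[2];
    SysScanDesc scan;
    HeapTuple   tuple;

    ScanKeyInit(&key[0],
                Anum_pg_constraint_conrelid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(RelationGetRelid(partRel)));
    ScanKeyInit(&key[1],
                Anum_pg_constraint_conparentid,
                BTEqualStrategyNumber, F_OIDEQ,
                ObjectIdGetDatum(fk->conoid));

    scan = systable_beginscan(conrel, InvalidOid, false, NULL, 2, key);
    while (HeapTupleIsValid(tuple = systable_getnext(scan)))
    {
        /* every tuple returned already satisfies conparentid == fk->conoid */
    }
    systable_endscan(scan);

Whether this buys anything beyond dropping the small in-loop check is exactly the open question in the review above.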
{
"msg_contents": "Jehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 05:02写道:\n\n> Hi,\n>\n> On Tue, 20 Aug 2024 23:09:27 -0400\n> Alvaro Herrera <[email protected]> wrote:\n>\n> > On 2024-Aug-20, Jehan-Guillaume de Rorthais wrote:\n> >\n> > > I'm back on this issue as well. I start poking at this patch to review\n> it,\n> > > test it, challenge it and then report here.\n> > >\n> > > I'll try to check if some other issues might have lost/forgot on they\n> way as\n> > > well.\n> >\n> > Thanks, much appreciated, looking forward to your feedback.\n>\n> Sorry, it took me a while to come back to you on this topic. It has been\n> hard to\n> untangle subjects, reproductions and patch…\n>\n> There's three distinct issues/thread:\n>\n> * Constraint & trigger catalog cleanup [1] (this thread)\n> * FK broken after DETACH [2]\n> * Maintenance consideration about self referencing FK between partitions\n> [3]\n>\n\nThe third issue has been fixed, and codes have been pushed. Because of my\nmisunderstanding,\nIt should not be here.\n\n\n> 0. Splitting in two commits\n>\n> Your patch addresses two bugs:\n>\n> * one for the constraint & trigger catalog cleanup;\n> * one for the FK broken after DETACH.\n>\n> These issues are unrelated, therefore I am wondering if it would be\n> better\n> to split their resolution in two different patches.\n>\n> Last year, I reported them in two different threads [1][2]. The first\n> with\n> implementation consideration, the second with a demo/proposal/draft fix.\n>\n> Unfortunately, this discussion about the first bug slipped to the second\n> one\n> when Tender stumbled on this bug as well and reported it. But, both bugs\n> can\n> be triggered independently, and have distinct fixes.\n>\n\nIt's ok that these two issues are fixed together. It is because current\ncodes don't handle better\nwhen the referenced side is the partition table.\n\n\n> Finally, splitting the patch might help setting finer patch\n> co-authoring. I\n> know my patch for [2] was a draft and somewhat trivial, but I spend a\n> fair\n> amount of time to report, then produce a draft patch, so I was wondering\n> if\n> it would be candidate to a co-author flag on this (small, humble and\n> refactored by you) patch?\n>\n> I'm definitely not involved (yet) in the second part though.\n>\n> 1. Constraint & trigger catalog cleanup [1]\n>\n> I have been focusing on the current master branch and haven't taken into\n> consideration backpatching related issues yet.\n>\n> When I first studied this bug and reported it, I held on writing a patch\n> because it seemed it would duplicate some existing code. I wrote:\n>\n> > I poked around DetachPartitionFinalize() to try to find a way to fix\n> this,\n> > but it looks like it would duplicate a bunch of code from other code\n> path\n> > (eg. from CloneFkReferenced).\n>\n> My proposal was to clean everything related to the old FK and use some\n> existing code path to create a fresh and cleaner one. This requires some\n> refactoring in existing code, but we would win a common path of code\n> between\n> create/attach/detach, a cleaner catalog and easier code maintenance.\n>\n> I've finally been able to write a PoC that implement this by calling\n> addFkRecurseReferenced() from DetachPartitionFinalize(). I can't join\n> it here because it is currently an ugly draft and I still have some work\n> to do. 
But I would really like to have a little more time (one or two\n> days) to\n> explore this avenue further before you commit yours, if you don't mind?\n> Or\n> maybe you already have considered this avenue and rejected it?\n>\n>\n> 2. FK broken after DETACH [2]\n>\n> Comparing your patch to my draft from [2], I just have a question about\n> the\n> refactoring.\n>\n> Fencing the constraint/trigger removal inside a conditional\n> RELKIND_PARTITIONED_TABLE block of code was obvious. It avoids some\n> useless\n> catalog scan compared to my draft patch.\n>\n> Also, the \"contype == CONSTRAINT_FOREIGN\" I had sounds safe to remove.\n>\n> However, is it clean/light enough to add the \"conparentid == fk->conoid\"\n> in\n> the scan key as I did? I'm not sure it saves anything else but the small\n> conditional block you inserted inside the loop, but I wonder if there's a\n> serious concern about this anyway?\n>\n> Last, considering the tests, I think we should add some rows in the\n> tables,\n> to make sure the FK is correctly enforced after DETACH. Something like:\n>\n> CREATE SCHEMA fkpart12\n> CREATE TABLE fk_p ( id bigint PRIMARY KEY ) PARTITION BY list (id)\n> CREATE TABLE fk_p_1 PARTITION OF fk_p FOR VALUES IN (1)\n> CREATE TABLE fk_p_2 PARTITION OF fk_p FOR VALUES IN (2)\n> CREATE TABLE fk_r_1 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n> CREATE TABLE fk_r_2 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n> CREATE TABLE fk_r ( id bigint PRIMARY KEY, p_id bigint NOT NULL,\n> FOREIGN KEY (p_id) REFERENCES fk_p (id)\n> ) PARTITION BY list (id);\n> SET search_path TO fkpart12;\n>\n> INSERT INTO fk_p VALUES (1);\n>\n> ALTER TABLE fk_r ATTACH PARTITION fk_r_2 FOR VALUES IN (2);\n>\n> ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n> \\d fk_r_1\n>\n> INSERT INTO fk_r VALUES (1,1);\n>\n> ALTER TABLE fk_r DETACH PARTITION fk_r_1;\n> \\d fk_r_1\n>\n> INSERT INTO c_1 VALUES (2,2); -- fails as EXPECTED\n> DELETE FROM p; -- should fails but was buggy\n>\n> ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n> \\d fk_r_1\n>\n>\n> 3. Self referencing FK between partitions [3]\n>\n> You added to your commit message:\n>\n> verify: 20230707175859.17c91538@karst\n>\n> I'm not sure what the \"verify\" flag means. Unfortunately, your patch\n> doesn't\n> help on this topic.\n>\n> This bug really needs more discussion and design consideration. I have\n> thought about this problem and haven't found any solution that don't\n> involve\n> breaking the current core behavior. It really looks like an impossible\n> bug to\n> fix without dropping the constraint itself. On both side. Maybe the only\n> sane\n> behavior would be to forbid detaching the partition if it would break the\n> constraint.\n>\n> But let's discuss this on the related thread, should we?\n>\n>\n> Thank you for reading me all the way down to the bottom!\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/message-id/20230705233028.2f554f73%40karst\n> [2] https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n> [3] https://www.postgresql.org/message-id/20230707175859.17c91538%40karst\n>\n>\n>\n>\n\n-- \nTender Wang\n\nJehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 05:02写道:Hi,\n\nOn Tue, 20 Aug 2024 23:09:27 -0400\nAlvaro Herrera <[email protected]> wrote:\n\n> On 2024-Aug-20, Jehan-Guillaume de Rorthais wrote:\n> \n> > I'm back on this issue as well. 
I start poking at this patch to review it,\n> > test it, challenge it and then report here.\n> > \n> > I'll try to check if some other issues might have lost/forgot on they way as\n> > well.\n> \n> Thanks, much appreciated, looking forward to your feedback.\n\nSorry, it took me a while to come back to you on this topic. It has been hard to\nuntangle subjects, reproductions and patch…\n\nThere's three distinct issues/thread:\n\n* Constraint & trigger catalog cleanup [1] (this thread)\n* FK broken after DETACH [2]\n* Maintenance consideration about self referencing FK between partitions [3]The third issue has been fixed, and codes have been pushed. Because of my misunderstanding,It should not be here.\n\n0. Splitting in two commits\n\n Your patch addresses two bugs:\n\n * one for the constraint & trigger catalog cleanup;\n * one for the FK broken after DETACH.\n\n These issues are unrelated, therefore I am wondering if it would be better\n to split their resolution in two different patches.\n\n Last year, I reported them in two different threads [1][2]. The first with\n implementation consideration, the second with a demo/proposal/draft fix.\n\n Unfortunately, this discussion about the first bug slipped to the second one\n when Tender stumbled on this bug as well and reported it. But, both bugs can\n be triggered independently, and have distinct fixes.It's ok that these two issues are fixed together. It is because current codes don't handle better when the referenced side is the partition table.\n\n Finally, splitting the patch might help setting finer patch co-authoring. I\n know my patch for [2] was a draft and somewhat trivial, but I spend a fair\n amount of time to report, then produce a draft patch, so I was wondering if\n it would be candidate to a co-author flag on this (small, humble and\n refactored by you) patch?\n\n I'm definitely not involved (yet) in the second part though.\n\n1. Constraint & trigger catalog cleanup [1]\n\n I have been focusing on the current master branch and haven't taken into\n consideration backpatching related issues yet.\n\n When I first studied this bug and reported it, I held on writing a patch\n because it seemed it would duplicate some existing code. I wrote:\n\n > I poked around DetachPartitionFinalize() to try to find a way to fix this,\n > but it looks like it would duplicate a bunch of code from other code path\n > (eg. from CloneFkReferenced).\n\n My proposal was to clean everything related to the old FK and use some\n existing code path to create a fresh and cleaner one. This requires some\n refactoring in existing code, but we would win a common path of code between\n create/attach/detach, a cleaner catalog and easier code maintenance.\n\n I've finally been able to write a PoC that implement this by calling\n addFkRecurseReferenced() from DetachPartitionFinalize(). I can't join\n it here because it is currently an ugly draft and I still have some work\n to do. But I would really like to have a little more time (one or two days) to\n explore this avenue further before you commit yours, if you don't mind? Or\n maybe you already have considered this avenue and rejected it?\n\n\n2. FK broken after DETACH [2]\n\n Comparing your patch to my draft from [2], I just have a question about the\n refactoring.\n\n Fencing the constraint/trigger removal inside a conditional\n RELKIND_PARTITIONED_TABLE block of code was obvious. 
It avoids some useless\n catalog scan compared to my draft patch.\n\n Also, the \"contype == CONSTRAINT_FOREIGN\" I had sounds safe to remove.\n\n However, is it clean/light enough to add the \"conparentid == fk->conoid\" in\n the scan key as I did? I'm not sure it saves anything else but the small\n conditional block you inserted inside the loop, but I wonder if there's a\n serious concern about this anyway?\n\n Last, considering the tests, I think we should add some rows in the tables,\n to make sure the FK is correctly enforced after DETACH. Something like:\n\n CREATE SCHEMA fkpart12\n CREATE TABLE fk_p ( id bigint PRIMARY KEY ) PARTITION BY list (id)\n CREATE TABLE fk_p_1 PARTITION OF fk_p FOR VALUES IN (1)\n CREATE TABLE fk_p_2 PARTITION OF fk_p FOR VALUES IN (2)\n CREATE TABLE fk_r_1 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n CREATE TABLE fk_r_2 ( id bigint PRIMARY KEY, p_id bigint NOT NULL)\n CREATE TABLE fk_r ( id bigint PRIMARY KEY, p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES fk_p (id)\n ) PARTITION BY list (id);\n SET search_path TO fkpart12;\n\n INSERT INTO fk_p VALUES (1);\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_2 FOR VALUES IN (2);\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n \\d fk_r_1\n\n INSERT INTO fk_r VALUES (1,1);\n\n ALTER TABLE fk_r DETACH PARTITION fk_r_1;\n \\d fk_r_1\n\n INSERT INTO c_1 VALUES (2,2); -- fails as EXPECTED\n DELETE FROM p; -- should fails but was buggy\n\n ALTER TABLE fk_r ATTACH PARTITION fk_r_1 FOR VALUES IN (1);\n \\d fk_r_1\n\n\n3. Self referencing FK between partitions [3]\n\n You added to your commit message:\n\n verify: 20230707175859.17c91538@karst\n\n I'm not sure what the \"verify\" flag means. Unfortunately, your patch doesn't\n help on this topic.\n\n This bug really needs more discussion and design consideration. I have\n thought about this problem and haven't found any solution that don't involve\n breaking the current core behavior. It really looks like an impossible bug to\n fix without dropping the constraint itself. On both side. Maybe the only sane\n behavior would be to forbid detaching the partition if it would break the\n constraint.\n\n But let's discuss this on the related thread, should we?\n\n\nThank you for reading me all the way down to the bottom!\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20230705233028.2f554f73%40karst\n[2] https://www.postgresql.org/message-id/20230420144344.40744130%40karst\n[3] https://www.postgresql.org/message-id/20230707175859.17c91538%40karst\n\n\n\n-- Tender Wang",
"msg_date": "Tue, 3 Sep 2024 10:16:44 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Hi Tender,\n\nOn Tue, 3 Sep 2024 10:16:44 +0800\nTender Wang <[email protected]> wrote:\n\n> Jehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 05:02写道:\n[…]\n> > * Constraint & trigger catalog cleanup [1] (this thread)\n> > * FK broken after DETACH [2]\n> > * Maintenance consideration about self referencing FK between partitions\n> > [3]\n> > \n> \n> The third issue has been fixed, and codes have been pushed. Because of my\n> misunderstanding,\n> It should not be here.\n\nI just retried the SQL scenario Guillaume gave on both master and master with\nAlvaro's patch. See:\n\nhttps://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.com\n\nIt doesn't seem fixed at all. Maybe you are mixing up with another thread/issue?\n\n> > 0. Splitting in two commits\n> >\n> > […]\n> >\n> > Unfortunately, this discussion about the first bug slipped to the second\n> > one when Tender stumbled on this bug as well and reported it. But, both\n> > bugs can be triggered independently, and have distinct fixes.\n> \n> It's ok that these two issues are fixed together. It is because current\n> codes don't handle better when the referenced side is the partition table.\n\nI don't feel the same. Mixing two discussions and fixes together in the same\nthread and commit makes life harder.\n\nLast year, when you found the other bug, I tried to point you to the\nright thread to avoid mixing subjects:\n\nhttps://www.postgresql.org/message-id/20230810170345.26e41b05%40karst\n\nIf I wrote about the third (non fixed) issue yesterday, it's just because\nAlvaro included a reference to it in his commit message. But I think we should\nreally keep up with this issue on its own, dedicated discussion:\n\nhttps://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.com\n\nRegards\n\n\n",
"msg_date": "Tue, 3 Sep 2024 11:26:37 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Jehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 17:26写道:\n\n> Hi Tender,\n>\n> On Tue, 3 Sep 2024 10:16:44 +0800\n> Tender Wang <[email protected]> wrote:\n>\n> > Jehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 05:02写道:\n> […]\n> > > * Constraint & trigger catalog cleanup [1] (this thread)\n> > > * FK broken after DETACH [2]\n> > > * Maintenance consideration about self referencing FK between\n> partitions\n> > > [3]\n> > >\n> >\n> > The third issue has been fixed, and codes have been pushed. Because of\n> my\n> > misunderstanding,\n> > It should not be here.\n>\n> I just retried the SQL scenario Guillaume gave on both master and master\n> with\n> Alvaro's patch. See:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.com\n>\n> It doesn't seem fixed at all. Maybe you are mixing up with another\n> thread/issue?\n>\n\nSorry, I mixed up the third issue with the Alexander reported issue.\nPlease ignore the above noise.\n\n>\n> > > 0. Splitting in two commits\n> > >\n> > > […]\n> > >\n> > > Unfortunately, this discussion about the first bug slipped to the\n> second\n> > > one when Tender stumbled on this bug as well and reported it. But,\n> both\n> > > bugs can be triggered independently, and have distinct fixes.\n> >\n> > It's ok that these two issues are fixed together. It is because current\n> > codes don't handle better when the referenced side is the partition\n> table.\n>\n> I don't feel the same. Mixing two discussions and fixes together in the\n> same\n> thread and commit makes life harder.\n>\n\nHmm, these two issues have a close relationship. Anyway, I think it's ok to\nfix the two issues together.\n\n\n> Last year, when you found the other bug, I tried to point you to the\n> right thread to avoid mixing subjects:\n>\n> https://www.postgresql.org/message-id/20230810170345.26e41b05%40karst\n>\n> If I wrote about the third (non fixed) issue yesterday, it's just because\n> Alvaro included a reference to it in his commit message. But I think we\n> should\n> really keep up with this issue on its own, dedicated discussion:\n>\n>\n> https://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.com\n\n\nThanks for the reminder. I didn't take the time to look into the third\nissue. Please give me some to analyze it.\n\n\n-- \nThanks,\nTender Wang\n\nJehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 17:26写道:Hi Tender,\n\nOn Tue, 3 Sep 2024 10:16:44 +0800\nTender Wang <[email protected]> wrote:\n\n> Jehan-Guillaume de Rorthais <[email protected]> 于2024年9月3日周二 05:02写道:\n[…]\n> > * Constraint & trigger catalog cleanup [1] (this thread)\n> > * FK broken after DETACH [2]\n> > * Maintenance consideration about self referencing FK between partitions\n> > [3]\n> > \n> \n> The third issue has been fixed, and codes have been pushed. Because of my\n> misunderstanding,\n> It should not be here.\n\nI just retried the SQL scenario Guillaume gave on both master and master with\nAlvaro's patch. See:\n\nhttps://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.com\n\nIt doesn't seem fixed at all. Maybe you are mixing up with another thread/issue?Sorry, I mixed up the third issue with the Alexander reported issue. Please ignore the above noise. \n\n> > 0. 
Splitting in two commits\n> >\n> > […]\n> >\n> > Unfortunately, this discussion about the first bug slipped to the second\n> > one when Tender stumbled on this bug as well and reported it. But, both\n> > bugs can be triggered independently, and have distinct fixes.\n> \n> It's ok that these two issues are fixed together. It is because current\n> codes don't handle better when the referenced side is the partition table.\n\nI don't feel the same. Mixing two discussions and fixes together in the same\nthread and commit makes life harder.Hmm, these two issues have a close relationship. Anyway, I think it's ok to fix the two issues together.\n\nLast year, when you found the other bug, I tried to point you to the\nright thread to avoid mixing subjects:\n\nhttps://www.postgresql.org/message-id/20230810170345.26e41b05%40karst\n\nIf I wrote about the third (non fixed) issue yesterday, it's just because\nAlvaro included a reference to it in his commit message. But I think we should\nreally keep up with this issue on its own, dedicated discussion:\n\nhttps://www.postgresql.org/message-id/flat/CAECtzeWHCA%2B6tTcm2Oh2%2Bg7fURUJpLZb-%3DpRXgeWJ-Pi%2BVU%3D_w%40mail.gmail.comThanks for the reminder. I didn't take the time to look into the third issue. Please give me some to analyze it.-- Thanks,Tender Wang",
"msg_date": "Tue, 3 Sep 2024 17:55:12 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Mon, 2 Sep 2024 23:01:47 +0200\nJehan-Guillaume de Rorthais <[email protected]> wrote:\n\n[…]\n\n> My proposal was to clean everything related to the old FK and use some\n> existing code path to create a fresh and cleaner one. This requires some\n> refactoring in existing code, but we would win a common path of code between\n> create/attach/detach, a cleaner catalog and easier code maintenance.\n> \n> I've finally been able to write a PoC that implement this by calling\n> addFkRecurseReferenced() from DetachPartitionFinalize(). I can't join\n> it here because it is currently an ugly draft and I still have some work\n> to do. But I would really like to have a little more time (one or two days)\n> to explore this avenue further before you commit yours, if you don't mind?\n> Or maybe you already have considered this avenue and rejected it?\n\nPlease, find in attachment a patch implementing this idea.\n\nRegards,",
"msg_date": "Thu, 5 Sep 2024 00:57:28 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:\n\n> On 2024-Jul-26, Tender Wang wrote:\n>\n> > Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> >\n> > > There is a bug report[0] Tender comments might be the same issue as\n> > > this one, but I tried Alvaro's and mine patch, neither could solve\n> > > that problem, I did not tried Tender's earlier patch thought. I post\n> > > the test script below in case you are interested.\n> >\n> > My earlier patch should handle Alexander reported case. But I did not\n> > do more test. I'm not sure that wether or not has dismatching between\n> > pg_constraint and pg_trigger.\n> >\n> > I aggred with Alvaro said that \"requires a much more invasive\n> > solution\".\n>\n> Here's the patch which, as far as I can tell, fixes all the reported\n> problems (other than the one in bug 18541, for which I proposed an\n> unrelated fix in that thread[1]). If you can double-check, I would very\n> much appreciate that. Also, I think the test cases the patch adds\n> reflect the provided examples sufficiently, but if we're still failing\n> to cover some, please let me know.\n>\n\nWhen I review Jehan-Guillaume v2 patch, I found the below codes that need\na little tweak. In DetachPartitionFinalize()\n/*\n* If the referenced side is partitioned (which we know because our\n* parent's constraint points to a different relation than ours) then\n* we must, in addition to the above, create pg_constraint rows that\n* point to each partition, each with its own action triggers.\n*/\nif (parentConForm->conrelid != conform->conrelid)\n\nI found that the above IF was always true, regardless of whether the\nreferenced side is partitioned.\nAlthough find_all_inheritors() can return an empty children list when the\nreferenced side is not partitioned,\nwe can avoid much useless work.\nHow about this way:\nif (get_rel_relkind(conform->confrelid) == RELKIND_PARTITIONED_TABLE)\n\n\n--\nThanks,\nTender Wang\n\nAlvaro Herrera <[email protected]> 于2024年8月8日周四 06:50写道:On 2024-Jul-26, Tender Wang wrote:\n\n> Junwang Zhao <[email protected]> 于2024年7月26日周五 14:57写道:\n> \n> > There is a bug report[0] Tender comments might be the same issue as\n> > this one, but I tried Alvaro's and mine patch, neither could solve\n> > that problem, I did not tried Tender's earlier patch thought. I post\n> > the test script below in case you are interested.\n> \n> My earlier patch should handle Alexander reported case. But I did not\n> do more test. I'm not sure that wether or not has dismatching between\n> pg_constraint and pg_trigger.\n> \n> I aggred with Alvaro said that \"requires a much more invasive\n> solution\".\n\nHere's the patch which, as far as I can tell, fixes all the reported\nproblems (other than the one in bug 18541, for which I proposed an\nunrelated fix in that thread[1]). If you can double-check, I would very\nmuch appreciate that. Also, I think the test cases the patch adds\nreflect the provided examples sufficiently, but if we're still failing\nto cover some, please let me know.When I review Jehan-Guillaume v2 patch, I found the below codes that needa little tweak. 
In DetachPartitionFinalize()/*\t\t * If the referenced side is partitioned (which we know because our\t\t * parent's constraint points to a different relation than ours) then\t\t * we must, in addition to the above, create pg_constraint rows that\t\t * point to each partition, each with its own action triggers.\t\t */\t\tif (parentConForm->conrelid != conform->conrelid)I found that the above IF was always true, regardless of whether the referenced side is partitioned.Although find_all_inheritors() can return an empty children list when the referenced side is not partitioned, we can avoid much useless work.How about this way:if (get_rel_relkind(conform->confrelid) == RELKIND_PARTITIONED_TABLE)--Thanks,Tender Wang",
"msg_date": "Thu, 5 Sep 2024 12:56:39 +0800",
"msg_from": "Tender Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
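A minimal sketch of the check suggested in the message above, written as if it lived near the relevant code in DetachPartitionFinalize(); this is illustrative only and does not reproduce the wording of any posted patch.

    #include "postgres.h"
    #include "catalog/pg_class.h"
    #include "utils/lsyscache.h"

    /*
     * Sketch: only when the relation referenced by the FK is itself a
     * partitioned table do we need to create per-partition constraint
     * rows and action triggers.  "confrelid" would come from the
     * pg_constraint tuple being examined.
     */
    static bool
    referenced_side_is_partitioned(Oid confrelid)
    {
        return get_rel_relkind(confrelid) == RELKIND_PARTITIONED_TABLE;
    }

With such a test, find_all_inheritors() would simply never be called for a plain referenced table, which is the saving Tender Wang points out.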
{
"msg_contents": "On Thu, 5 Sep 2024 00:57:28 +0200\nJehan-Guillaume de Rorthais <[email protected]> wrote:\n\n> On Mon, 2 Sep 2024 23:01:47 +0200\n> Jehan-Guillaume de Rorthais <[email protected]> wrote:\n> \n> […]\n> \n> > My proposal was to clean everything related to the old FK and use some\n> > existing code path to create a fresh and cleaner one. This requires some\n> > refactoring in existing code, but we would win a common path of code\n> > between create/attach/detach, a cleaner catalog and easier code maintenance.\n> > \n> > I've finally been able to write a PoC that implement this by calling\n> > addFkRecurseReferenced() from DetachPartitionFinalize(). I can't join\n> > it here because it is currently an ugly draft and I still have some work\n> > to do. But I would really like to have a little more time (one or two\n> > days) to explore this avenue further before you commit yours, if you don't\n> > mind? Or maybe you already have considered this avenue and rejected it? \n> \n> Please, find in attachment a patch implementing this idea.\n\nPlease, find in attachment a set of patch based on the previous one.\n\nv3-0001-Add-tests-about-FK-between-partitionned-tables.patch:\n\n This patch implement tests triggering the bugs discussed. Based on Michael\n advice, I added one level sub-partitioning to stress test the recursive code\n and some queries checking on the catalog objects.\n\nv3-0002-Rework-foreign-key-mangling-during-ATTACH-DETACH.patch:\n\n The main patch, similar to v2 in my previous patch with more comments\n added/restored. I added some more explanations in the commit message about\n the refactoring itself, making addFkRecurseReferencing() and\n addFkRecurseReferenced() having the same logic.\n\nv3-0003-Use-addFkConstraint-in-addFkRecurseReferencing.patch\n\n A new patch refactoring the constraint creation in addFkRecurseReferencing()\n to use the new addFkConstraint() function.\n\nv3-0004-Use-addFkConstraint-in-CloneFkReferencing.patch\n\n A new patch refactoring the constraint creation in CloneFkReferencing()\n to use the new addFkConstraint() function.\n\nTODO:\n\n* I hadn't time to study last Tender Wang comment here:\n https://postgr.es/m/CAHewXNkuU2V7GfgFkwd265RJ99%2BBfJueNEZhrHSk39o3thqxNA%40mail.gmail.com\n* I still think we should split v3-0002 in two different patch…\n* backporting…\n\nRegards,",
"msg_date": "Wed, 25 Sep 2024 14:42:40 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Wed, 25 Sep 2024 14:42:40 +0200\nJehan-Guillaume de Rorthais <[email protected]> wrote:\n\n> On Thu, 5 Sep 2024 00:57:28 +0200\n> Jehan-Guillaume de Rorthais <[email protected]> wrote:\n[…]\n> > \n> > Please, find in attachment a patch implementing this idea. \n> \n> Please, find in attachment a set of patch based on the previous one.\n\nPlease, find in attachment the same set of patch for REL_17_STABLE.\n\nRegards,",
"msg_date": "Wed, 25 Sep 2024 16:14:07 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
},
{
"msg_contents": "On Wed, 25 Sep 2024 16:14:07 +0200\nJehan-Guillaume de Rorthais <[email protected]> wrote:\n\n> On Wed, 25 Sep 2024 14:42:40 +0200\n> Jehan-Guillaume de Rorthais <[email protected]> wrote:\n> \n> > On Thu, 5 Sep 2024 00:57:28 +0200\n> > Jehan-Guillaume de Rorthais <[email protected]> wrote: \n> […]\n> > > \n> > > Please, find in attachment a patch implementing this idea. \n> > \n> > Please, find in attachment a set of patch based on the previous one. \n> \n> Please, find in attachment the same set of patch for REL_17_STABLE.\n\nThe set of patch for REL_17_STABLE apply on REL_16_STABLE with no effort.\n\nI've been able to backpatch on REL_15_STABLE with minimal effort. See\nattachments.\n\nREL_14_STABLE backport doesn't seem trivial, so I'll wait for some feedback,\nreview & decision before going further down in backpatching.\n\nRegards,",
"msg_date": "Thu, 26 Sep 2024 10:48:35 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Fix DETACH with FK pointing to a partitioned table fails"
}
] |
[
{
"msg_contents": "Hi, all. I want to report a bug about recovery of 2pc data, in current implementation of crash recovery, there are two ways to recover 2pc data:\n1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid < ShmemVariableCache->nextXid, which is initialized from checkPoint.nextXid;\n2、during redo, func xact_redo() will add 2pc from wal;\nThe following scenario may cause the same 2pc to be added repeatedly:\n1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;\n2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced as 101;\n3、checkPoint_1.nextXid is set as 101;\n4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to disk because its prepare_end_lsn > checkpoint_1.redo;\n5、checkPoint_1 is finished, after checkpoint_timeout, start creating checkpoint_2;\n6、during checkpoint_2, data of 2pc_100 will be copied to disk;\n7、before UpdateControlFile() of checkpoint_2, crash happened;\n8、during crash recovery, redo will start from checkpoint_1, and 2pc_100 will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid, which is 101; \n9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will be added again by xact_redo() during wal replay, resulting in the same 2pc data being added twice;\n10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the same 2pc will cause panic.\nIs the above scenario reasonable, and do you have any good ideas for fixing this bug?\nThanks & Best Regard\n\nHi, all. I want to report a bug about recovery of 2pc data, in current implementation of crash recovery, there are two ways to recover 2pc data:1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid < ShmemVariableCache->nextXid, which is initialized from checkPoint.nextXid;2、during redo, func xact_redo() will add 2pc from wal;The following scenario may cause the same 2pc to be added repeatedly:1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced as 101;3、checkPoint_1.nextXid is set as 101;4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to disk because its prepare_end_lsn > checkpoint_1.redo;5、checkPoint_1 is finished, after checkpoint_timeout, start creating checkpoint_2;6、during checkpoint_2, data of 2pc_100 will be copied to disk;7、before UpdateControlFile() of checkpoint_2, crash happened;8、during crash recovery, redo will start from checkpoint_1, and 2pc_100 will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid, which is 101; 9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will be added again by xact_redo() during wal replay, resulting in the same 2pc data being added twice;10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the same 2pc will cause panic.Is the above scenario reasonable, and do you have any good ideas for fixing this bug?Thanks & Best Regard",
"msg_date": "Thu, 06 Jul 2023 10:02:15 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?VGhlIHNhbWUgMlBDIGRhdGEgbWF5YmUgcmVjb3ZlcmVkIHR3aWNl?="
},
{
"msg_contents": "Hi, all\nI add a patch for pg11 to fix this bug, hope you can check it.\nThanks & Best Regard\n------------------------------------------------------------------\n发件人:蔡梦娟(玊于) <[email protected]>\n发送时间:2023年7月6日(星期四) 10:02\n收件人:pgsql-hackers <[email protected]>\n抄 送:pgsql-bugs <[email protected]>\n主 题:The same 2PC data maybe recovered twice\nHi, all. I want to report a bug about recovery of 2pc data, in current implementation of crash recovery, there are two ways to recover 2pc data:\n1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid < ShmemVariableCache->nextXid, which is initialized from checkPoint.nextXid;\n2、during redo, func xact_redo() will add 2pc from wal;\nThe following scenario may cause the same 2pc to be added repeatedly:\n1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;\n2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced as 101;\n3、checkPoint_1.nextXid is set as 101;\n4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to disk because its prepare_end_lsn > checkpoint_1.redo;\n5、checkPoint_1 is finished, after checkpoint_timeout, start creating checkpoint_2;\n6、during checkpoint_2, data of 2pc_100 will be copied to disk;\n7、before UpdateControlFile() of checkpoint_2, crash happened;\n8、during crash recovery, redo will start from checkpoint_1, and 2pc_100 will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid, which is 101; \n9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will be added again by xact_redo() during wal replay, resulting in the same 2pc data being added twice;\n10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the same 2pc will cause panic.\nIs the above scenario reasonable, and do you have any good ideas for fixing this bug?\nThanks & Best Regard",
"msg_date": "Fri, 07 Jul 2023 17:48:39 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaVGhlIHNhbWUgMlBDIGRhdGEgbWF5YmUgcmVjb3ZlcmVkIHR3aWNl?="
},
{
"msg_contents": "Hi:\n\n\n\nOn Sat, Jul 8, 2023 at 2:53 AM 蔡梦娟(玊于) <[email protected]> wrote:\n\n> Hi, all\n> I add a patch for pg11 to fix this bug, hope you can check it.\n>\n>\nThanks for the bug report and patch! Usually we talk about bugs\nagainst the master branch, no people want to check out a history\nbranch and do the discussion there:) This bug is reproducible on\nthe master IIUC.\n\nI dislike the patch here because it uses more CPU cycles to detect\nduplication for every 2pc record. How many CPU cycles we use\ndepends on how many 2pc are used. How about detecting such\nduplication only at restoreTwoPhaseData stage? Instead of\n\nProcessTwoPhaseBuffer:\nif (TransactionIdFollowsOrEquals(xid, origNextXid))\n{\n ...\nereport(WARNING,\n(errmsg(\"removing future two-phase state file for transaction %u\",\nxid)));\nRemoveTwoPhaseFile(xid, true);\n ...\n}\n\nwe use:\n\nif (TwoPhaseFileHeader.startup_lsn > checkpoint.redo)\n{\nereport(WARNING,\n(errmsg(\"removing future two-phase state file for transaction %u\",\nxid)));\n}\n\nWe have several advantages with this approach. a). We only care\nabout the restoreTwoPhaseData, not for every WAL record recovery.\nb). We use constant comparison rather than an-array-for-loop. c).\nIt is better design since we avoid the issue at the first place rather\nthan allowing it at the first stage and fix that at the following stage.\n\nThe only blocker I know is currently we don't write startup_lsn into\nthe 2pc checkpoint file and if we do that, the decode on the old 2pc\nfile will fail. We also have several choices here.\n\na). Notify users to complete all the pending 2pc before upgrading\nwithin manual. b). Use a different MAGIC NUMBER in the 2pc\ncheckpoint file to distinguish the 2 versions. Basically I prefer\nthe method a).\n\nAny suggestion is welcome.\n\n\n>\n> ------------------------------------------------------------------\n> 发件人:蔡梦娟(玊于) <[email protected]>\n> 发送时间:2023年7月6日(星期四) 10:02\n> 收件人:pgsql-hackers <[email protected]>\n> 抄 送:pgsql-bugs <[email protected]>\n> 主 题:The same 2PC data maybe recovered twice\n>\n> Hi, all. 
I want to report a bug about recovery of 2pc data, in current\n> implementation of crash recovery, there are two ways to recover 2pc data:\n> 1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid\n> < ShmemVariableCache->nextXid, which is initialized from\n> checkPoint.nextXid;\n> 2、during redo, func xact_redo() will add 2pc from wal;\n>\n> The following scenario may cause the same 2pc to be added repeatedly:\n> 1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;\n> 2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid\n> of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced\n> as 101;\n> 3、checkPoint_1.nextXid is set as 101;\n> 4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to\n> disk because its prepare_end_lsn > checkpoint_1.redo;\n> 5、checkPoint_1 is finished, after checkpoint_timeout, start creating\n> checkpoint_2;\n> 6、during checkpoint_2, data of 2pc_100 will be copied to disk;\n> 7、before UpdateControlFile() of checkpoint_2, crash happened;\n> 8、during crash recovery, redo will start from checkpoint_1, and 2pc_100\n> will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid,\n> which is 101;\n> 9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will\n> be added again by xact_redo() during wal replay, resulting in the same\n> 2pc data being added twice;\n> 10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the\n> same 2pc will cause panic.\n>\n> Is the above scenario reasonable, and do you have any good ideas for\n> fixing this bug?\n>\n> Thanks & Best Regard\n>\n>\n\n-- \nBest Regards\nAndy Fan\n\nHi:On Sat, Jul 8, 2023 at 2:53 AM 蔡梦娟(玊于) <[email protected]> wrote:Hi, allI add a patch for pg11 to fix this bug, hope you can check it. Thanks for the bug report and patch! Usually we talk about bugsagainst the master branch, no people want to check out a history branch and do the discussion there:) This bug is reproducible onthe master IIUC. I dislike the patch here because it uses more CPU cycles to detectduplication for every 2pc record. How many CPU cycles we use depends on how many 2pc are used. How about detecting such duplication only at restoreTwoPhaseData stage? Instead of ProcessTwoPhaseBuffer:if (TransactionIdFollowsOrEquals(xid, origNextXid)){ ...\tereport(WARNING,\t\t\t(errmsg(\"removing future two-phase state file for transaction %u\",\t\t\t\t\txid)));\tRemoveTwoPhaseFile(xid, true); ...}we use:if (TwoPhaseFileHeader.startup_lsn > checkpoint.redo){\tereport(WARNING,\t\t\t(errmsg(\"removing future two-phase state file for transaction %u\",\t\t\t\txid)));}We have several advantages with this approach. a). We only careabout the restoreTwoPhaseData, not for every WAL record recovery.b). We use constant comparison rather than an-array-for-loop. c).It is better design since we avoid the issue at the first place ratherthan allowing it at the first stage and fix that at the following stage. The only blocker I know is currently we don't write startup_lsn into the 2pc checkpoint file and if we do that, the decode on the old 2pcfile will fail. We also have several choices here. a). Notify users to complete all the pending 2pc before upgrading within manual. b). Use a different MAGIC NUMBER in the 2pc checkpoint file to distinguish the 2 versions. Basically I prefer the method a). Any suggestion is welcome. 
------------------------------------------------------------------发件人:蔡梦娟(玊于) <[email protected]>发送时间:2023年7月6日(星期四) 10:02收件人:pgsql-hackers <[email protected]>抄 送:pgsql-bugs <[email protected]>主 题:The same 2PC data maybe recovered twiceHi, all. I want to report a bug about recovery of 2pc data, in current implementation of crash recovery, there are two ways to recover 2pc data:1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid < ShmemVariableCache->nextXid, which is initialized from checkPoint.nextXid;2、during redo, func xact_redo() will add 2pc from wal;The following scenario may cause the same 2pc to be added repeatedly:1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced as 101;3、checkPoint_1.nextXid is set as 101;4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to disk because its prepare_end_lsn > checkpoint_1.redo;5、checkPoint_1 is finished, after checkpoint_timeout, start creating checkpoint_2;6、during checkpoint_2, data of 2pc_100 will be copied to disk;7、before UpdateControlFile() of checkpoint_2, crash happened;8、during crash recovery, redo will start from checkpoint_1, and 2pc_100 will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid, which is 101; 9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will be added again by xact_redo() during wal replay, resulting in the same 2pc data being added twice;10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the same 2pc will cause panic.Is the above scenario reasonable, and do you have any good ideas for fixing this bug?Thanks & Best Regard-- Best RegardsAndy Fan",
"msg_date": "Wed, 12 Jul 2023 10:57:44 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The same 2PC data maybe recovered twice"
},
{
"msg_contents": "Yes, this bug can also be reproduced on the master branch, and the corresponding reproduction patch is attached.\nI also considered comparing the 2pc.prepare_start_lsn and checkpoint.redo in ProcessTwoPhaseBuffer before, but this method requires modifying the format of the 2pc checkpoint file, which will bring compatibility issues. Especially for released branches, assuming that a node has encountered this bug, it will not be able to start successfully due to FATAL during crash recovery, and therefore cannot manually commit previous two-phase transactions. Using magic number to distinguish 2pc checkpoint file versions can't solve the problem in the above scenario either.\nFor unreleased branches, writing 2pc.prepare_start_lsn into the checkpoint file will be a good solution, but for released branches, I personally think using WAL record to overwrite checkpoint data would be a more reasonable approach, What do you think?\nBest Regards\nsuyu.cmj",
"msg_date": "Wed, 12 Jul 2023 15:20:57 +0800",
"msg_from": "\"suyu.cmj\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmU6IFRoZSBzYW1lIDJQQyBkYXRhIG1heWJlIHJlY292ZXJlZCB0d2ljZQ==?="
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 03:20:57PM +0800, suyu.cmj wrote:\n> Yes, this bug can also be reproduced on the master branch, and the\n> corresponding reproduction patch is attached.\n\nThat's an interesting reproducer with injection points. It looks like\nyou've spent a lot of time investigating that. So, basically, a\ncheckpoint fails after writing a 2PC file to disk, but before the redo\nLSN has been updated.\n\n> I also considered comparing the 2pc.prepare_start_lsn and\n> checkpoint.redo in ProcessTwoPhaseBuffer before, but this method\n> requires modifying the format of the 2pc checkpoint file, which will\n> bring compatibility issues. Especially for released branches,\n> assuming that a node has encountered this bug, it will not be able\n> to start successfully due to FATAL during crash recovery, and\n> therefore cannot manually commit previous two-phase\n> transactions. Using magic number to distinguish 2pc checkpoint file\n> versions can't solve the problem in the above scenario either. \n> For unreleased branches, writing 2pc.prepare_start_lsn into the\n> checkpoint file will be a good solution, but for released branches,\n\nYes, changing anything in this format is a no-go. Now, things could\nbe written so as the recovery code is able to handle multiple formats,\nmeaning that it would be able to feed from the a new format that\nincludes a LSN or something else for the comparison, but that would\nnot save from the case where 2PC files with the old format are still\naround and a 2PC WAL record is replayed.\n\n> I personally think using WAL record to overwrite checkpoint data\n> would be a more reasonable approach, What do you think? \n\nThe O(2) loop added in PrepareRedoAdd() to scan the set of 2PC\ntransactions stored in TwoPhaseState for the purpose of checking for a\nduplicate sucks from a performance point of view, particularly for \ndeployments with many 2PC transactions allowed. It could delay\nrecovery a lot. And actually, this is not completely correct, no?\nIt is OK to bypass the recovery of the same transaction if the server\nhas not reached a consistent state, but getting a duplicate when\nconsistency has been reached should lead to a hard failure.\n\nOne approach to avoid this O(2) would be to use a hash table to store\nthe 2PC entries, for example, rather than an array. That would be\nsimple enough but such refactoring is rather scary from the point of\nview of recovery.\n\nAnd, actually, we could do something much more simpler than what's\nbeen proposed on this thread.. PrepareRedoAdd() would be called when\nscanning pg_twophase at the beginning of recovery, or when replaying a\nPREPARE record, both aiming at adding an entry in shmem for the 2PC\ntransaction tracked. Here is a simpler idea: why don't we just check\nin PrepareRedoAdd() if the 2PC file of the transaction being recovery\nis in pg_twophase/ when adding an entry from a WAL record? If a\nconsistent point has *not* been reached by recovery and we find a file\non disk, then do nothing because we *know* thanks to\nrestoreTwoPhaseData() done at the beginning of recover that there is\nan entry for this file. If a consistent point has been reached in\nrecovery and we find a file on disk while replaying a WAL record for\nthe same 2PC file, then fail. 
If there is no file in pg_twophase for\nthe record replayed, then add it to the array TwoPhaseState.\n\nAdding a O(2) loop that checks for duplicates may be a good idea as a\ncross-check if replaying a record, but I'd rather put that under an\nUSE_ASSERT_CHECKING so as there is no impact on production systems,\nstill we'd have some sanity checks for test setups.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 15:58:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The same 2PC data maybe recovered twice"
},
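To make the proposal above a bit more concrete, here is a hedged C sketch of the file-existence test it describes, written as if it sat in twophase.c near PrepareRedoAdd(). TwoPhaseFilePath() and reachedConsistency do exist in the backend, but the helper name, its exact placement and the error wording below are assumptions, not the committed fix.

    /*
     * Sketch only: when replaying a PREPARE record (not when loading
     * entries from disk), check whether a state file for this xid is
     * already present in pg_twophase/.  If so, it was restored by
     * restoreTwoPhaseData() at the start of recovery.
     */
    static bool
    prepare_already_restored(TransactionId xid)
    {
        char    path[MAXPGPATH];

        TwoPhaseFilePath(path, xid);
        if (access(path, F_OK) == 0)
        {
            if (!reachedConsistency)
                return true;    /* skip the record, entry already in shmem */

            /* a duplicate after consistency is reached points at corruption */
            ereport(ERROR,
                    (errmsg("two-phase state file for transaction %u already exists", xid)));
        }
        return false;
    }

The patch discussed later in the thread ends up relying on access() in much the same way.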
{
"msg_contents": "Yes, the method you proposed is simpler and more efficient. Following your idea, I have modified the corresponding patch, hope you can review it when you have time.\nBest Regards\nsuyu.cmj",
"msg_date": "Mon, 17 Jul 2023 14:26:56 +0800",
"msg_from": "\"suyu.cmj\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmU6IFRoZSBzYW1lIDJQQyBkYXRhIG1heWJlIHJlY292ZXJlZCB0d2ljZQ==?="
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 02:26:56PM +0800, suyu.cmj wrote:\n> Yes, the method you proposed is simpler and more\n> efficient. Following your idea, I have modified the corresponding\n> patch, hope you can review it when you have time.\n\nI'll double-check that tomorrow, but yes, that's basically what I had\nin mind. Thanks for the patch!\n\n+ char path[MAXPGPATH];\n+ struct stat stat_buf;\nThese two variables can be declared in the code block added by the\npatch where start_lsn is valid.\n\n+ ereport(FATAL,\n+ (errmsg(\"found unexpected duplicate two-phase\ntransaction:%u in pg_twophase, check for data correctness.\",\n+ hdr->xid)));\n\nThe last part of this sentence has no need to be IMO, because it is\nmisleading when building without assertions. How about a single\nFATAL/WARNING like that:\n- errmsg: \"could not recover two-phase state file for transaction %u\"\n- errdetail: \"Two-phase state file has been found in WAL record %X/%X\nbut this transaction has already been restored from disk.\"\n\nThen a WARNING simply means that we've skipped the record entirely.\n--\nMichael",
"msg_date": "Mon, 17 Jul 2023 16:59:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The same 2PC data maybe recovered twice"
},
{
"msg_contents": "Thanks for the feedback! I have updated the patch, hope you can check it.\nBest Regards\nsuyu.cmj",
"msg_date": "Mon, 17 Jul 2023 17:20:00 +0800",
"msg_from": "\"suyu.cmj\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmU6IFRoZSBzYW1lIDJQQyBkYXRhIG1heWJlIHJlY292ZXJlZCB0d2ljZQ==?="
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 05:20:00PM +0800, suyu.cmj wrote:\n> Thanks for the feedback! I have updated the patch, hope you can check it.\n\nI have looked at v3, and noticed that the stat() call is actually a\nbit incorrect in its error handling because it would miss any errors\nthat happen when checking for the existence of the file. The only\nerror that we should safely expect is ENOENT, for a missing entry.\nAll the other had better fail like the other code paths restoring 2PC\nentries from the shared state. At the end, I have made the choice of\nrelying only on access() to check the existence of the file as this is\nan internal place, simplified a bit the comment. Finally, I have made\nthe choice to remove the assert-only check. After sleeping on it, it\nwould have value in very limited cases while a bunch of recovery cases\nwould take a hit, including developers with large 2PC setups, so there\nare a lot of downsides with limited upsides.\n--\nMichael",
"msg_date": "Tue, 18 Jul 2023 14:13:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The same 2PC data maybe recovered twice"
}
] |
[
{
"msg_contents": "Hi.\n\nIn:\nhttps://git.postgresql.org/cgit/postgresql.git/tree/contrib/pg_freespacemap/pg_freespacemap.c\n\nrel = relation_open(relid, AccessShareLock);\n\nif (blkno < 0 || blkno > MaxBlockNumber)\nereport(ERROR,\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\nerrmsg(\"invalid block number\")));\n\n--------------------\nshould it first check input arguments, then relation_open?\nDoes ereport automatically unlock the relation?\n\nHi.In: https://git.postgresql.org/cgit/postgresql.git/tree/contrib/pg_freespacemap/pg_freespacemap.crel = relation_open(relid, AccessShareLock);if (blkno < 0 || blkno > MaxBlockNumber)ereport(ERROR,(errcode(ERRCODE_INVALID_PARAMETER_VALUE),errmsg(\"invalid block number\")));--------------------should it first check input arguments, then relation_open? Does ereport automatically unlock the relation?",
"msg_date": "Thu, 6 Jul 2023 10:14:46 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "contrib/pg_freespacemap first check input argument,\n then relation_open."
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 06, 2023 at 10:14:46AM +0800, jian he wrote:\n>\n> In:\n> https://git.postgresql.org/cgit/postgresql.git/tree/contrib/pg_freespacemap/pg_freespacemap.c\n>\n> rel = relation_open(relid, AccessShareLock);\n>\n> if (blkno < 0 || blkno > MaxBlockNumber)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"invalid block number\")));\n>\n> --------------------\n> should it first check input arguments, then relation_open?\n\nIt would probably be a slightly better approach but wouldn't really change much\nin practice so I'm not sure it's worth changing now.\n\n> Does ereport automatically unlock the relation?\n\nYes, locks, lwlocks, memory contexts and everything else is properly cleaned /\nreleased in case of error.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:09:33 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: contrib/pg_freespacemap first check input argument, then\n relation_open."
}
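For what it is worth, the reordering being asked about would look roughly like the sketch below. This is not a proposed patch: the function name pg_freespace_sketch and the surrounding boilerplate are made up for illustration, and as the reply notes, ereport() releases the lock anyway, so moving the check only avoids opening the relation needlessly.

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/relation.h"
    #include "storage/freespace.h"

    PG_FUNCTION_INFO_V1(pg_freespace_sketch);

    Datum
    pg_freespace_sketch(PG_FUNCTION_ARGS)
    {
        Oid         relid = PG_GETARG_OID(0);
        int64       blkno = PG_GETARG_INT64(1);
        Relation    rel;
        int16       freespace;

        /* validate the user-supplied block number before taking any lock */
        if (blkno < 0 || blkno > MaxBlockNumber)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("invalid block number")));

        rel = relation_open(relid, AccessShareLock);
        freespace = GetRecordedFreeSpace(rel, (BlockNumber) blkno);
        relation_close(rel, AccessShareLock);

        PG_RETURN_INT16(freespace);
    }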
] |
[
{
"msg_contents": "Hi hackers,\n\nEkaterina Sokolova and I have found a bug in PG 15. Since 88103567cb\nMarkGUCPrefixReserved() is supposed not only reporting GUCs that haven't\nbeen\ndefined by the extension, but also removing them. However, it removes them\nin\na wrong way, so that a GUC that goes after the removed GUC is never checked.\n\nTo reproduce the bug add the following to the postgresql.conf\n\nshared_preload_libraries = 'pg_stat_statements'\npg_stat_statements.nonexisting_option_1 = on\npg_stat_statements.nonexisting_option_2 = on\npg_stat_statements.nonexisting_option_3 = on\npg_stat_statements.nonexisting_option_4 = on\n\nand start the server. In the logfile you'll see only first and third options\nreported invalid and removed.\n\nIn master MarkGUCPrefixReserved() iterates a hash table, not an array as in\nPG 15. I'm not sure whether it is safe to remove an entry from this hash\ntable\nwhile iterating it, but at least I can't reproduce the bug on master.\n\nI attached a bugfix for PG 15.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Thu, 6 Jul 2023 12:17:44 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "MarkGUCPrefixReserved() doesn't check all options"
},
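To illustrate the failure mode described above without quoting the actual PG 15 guc.c code, here is a schematic C sketch of how removing an array element while walking the array forward skips the element that slides into the freed slot, and the usual one-line correction. guc_variables and num_guc_variables exist in guc.c; matches_reserved_prefix(), is_extension_defined() and remove_guc_variable_at() are placeholders invented for the example, and the message text is paraphrased.

    for (int i = 0; i < num_guc_variables; i++)
    {
        struct config_generic *var = guc_variables[i];

        if (matches_reserved_prefix(var) && !is_extension_defined(var))
        {
            ereport(WARNING,
                    (errmsg("invalid configuration parameter name \"%s\", removing it",
                            var->name)));
            remove_guc_variable_at(i);  /* shifts later entries one slot left */
            i--;                        /* re-check the slot that was just refilled */
        }
    }

Without the i-- (or an equivalent backward walk), every entry immediately following a removed one is skipped, which matches the "first and third options only" symptom in the report.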
{
"msg_contents": "On 06/07/2023 12:17, Karina Litskevich wrote:\n> Hi hackers,\n> \n> Ekaterina Sokolova and I have found a bug in PG 15. Since 88103567cb \n> MarkGUCPrefixReserved() is supposed not only reporting GUCs that\n> haven't been defined by the extension, but also removing them.\n> However, it removes them in a wrong way, so that a GUC that goes\n> after the removed GUC is never checked.\n> \n> To reproduce the bug add the following to the postgresql.conf\n> \n> shared_preload_libraries = 'pg_stat_statements'\n> pg_stat_statements.nonexisting_option_1 = on\n> pg_stat_statements.nonexisting_option_2 = on\n> pg_stat_statements.nonexisting_option_3 = on\n> pg_stat_statements.nonexisting_option_4 = on\n> \n> and start the server. In the logfile you'll see only first and third\n> options reported invalid and removed.\n\nGood catch!\n\n> In master MarkGUCPrefixReserved() iterates a hash table, not an array\n> as in PG 15. I'm not sure whether it is safe to remove an entry from\n> this hash table while iterating it, but at least I can't reproduce\n> the bug on master.\nYes, it's safe to remove the current element, while scanning a hash \ntable with hash_seq_init/search. See comment on hash_seq_init.\n\n> I attached a bugfix for PG 15.\n\nApplied, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 13:07:02 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MarkGUCPrefixReserved() doesn't check all options"
}
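As a side note on the hash_seq_init()/hash_seq_search() point made above, the pattern master relies on looks roughly like the sketch below. Only the dynahash calls themselves are real API; the table name, entry type and removal condition are placeholders, and the way the key is passed to hash_search() is schematic.

    HASH_SEQ_STATUS status;
    GUCPlaceholderEntry *entry;     /* placeholder struct name */

    hash_seq_init(&status, hashtab);    /* hashtab stands for the GUC hash table */
    while ((entry = (GUCPlaceholderEntry *) hash_seq_search(&status)) != NULL)
    {
        if (should_remove(entry))   /* placeholder condition */
        {
            /* removing the entry just returned is explicitly allowed */
            hash_search(hashtab, &entry->name, HASH_REMOVE, NULL);
        }
    }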
] |
[
{
"msg_contents": "Hi,\n\nWith PG16's 19d8e230, we got rid of BRIN's blocking of HOT updates,\nbut I just realized that we failed to update the README.HOT document\nwith this new exception for summarizing indexes.\n\nAttached a patch that updates that document, detailing the related rationale.\n\nI'm not sure if such internal documentation is relevant for\nbackpatching, but I also don't think it woudl hurt to have this\nincluded in the REL_16_STABLE branch.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)",
"msg_date": "Thu, 6 Jul 2023 13:40:31 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "HOT readme missing documentation on summarizing index handling"
},
{
"msg_contents": "Yeah, README.HOT should have been updated, and I see no reason not to\nbackpatch this to v16. Barring objections, I'll do that tomorrow.\n\nI have two suggesting regarding the README.HOT changes:\n\n1) I'm not entirely sure it's very clear what \"referential integrity of\nindexes across tuple updates\" actually means. I'm afraid \"referential\nintegrity\" may lead readers to think about foreign keys. Maybe it'd be\nbetter to explain this is about having index pointers to the new tuple\nversion, etc.\n\n2) Wouldn't it be good to make it a bit more explicit we now have three\n\"levels\" of HOT:\n\n (a) no indexes need update\n (b) update only summarizing indexes\n (c) update all indexes\n\nThe original text was really about on/off, and I'm not quite sure the\npart about \"exception\" makes this very clear.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jul 2023 00:14:42 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HOT readme missing documentation on summarizing index handling"
},
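As a small, hedged illustration of the three levels listed in the message above (object names are invented for this example; whether an update actually takes the HOT path also depends on page-level free space):

CREATE TABLE hot_demo (id int, payload text, ts timestamptz);
CREATE INDEX hot_demo_id_btree ON hot_demo USING btree (id);
CREATE INDEX hot_demo_ts_brin ON hot_demo USING brin (ts);
INSERT INTO hot_demo VALUES (1, 'x', now());

UPDATE hot_demo SET payload = 'y';  -- (a) no indexed column changes: no index needs an update
UPDATE hot_demo SET ts = now();     -- (b) only the summarizing (BRIN) index is affected:
                                    --     still HOT-eligible since PG 16 (19d8e230)
UPDATE hot_demo SET id = 2;         -- (c) a btree column changes: all indexes updated, not HOT

pg_stat_user_tables.n_tup_hot_upd can be used to observe which of these updates were in fact HOT.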
{
"msg_contents": "On Fri, 7 Jul 2023 at 00:14, Tomas Vondra <[email protected]> wrote:\n>\n> Yeah, README.HOT should have been updated, and I see no reason not to\n> backpatch this to v16. Barring objections, I'll do that tomorrow.\n>\n> I have two suggesting regarding the README.HOT changes:\n>\n> 1) I'm not entirely sure it's very clear what \"referential integrity of\n> indexes across tuple updates\" actually means. I'm afraid \"referential\n> integrity\" may lead readers to think about foreign keys. Maybe it'd be\n> better to explain this is about having index pointers to the new tuple\n> version, etc.\n>\n> 2) Wouldn't it be good to make it a bit more explicit we now have three\n> \"levels\" of HOT:\n>\n> (a) no indexes need update\n> (b) update only summarizing indexes\n> (c) update all indexes\n>\n> The original text was really about on/off, and I'm not quite sure the\n> part about \"exception\" makes this very clear.\n\nAgreed on both points. Attached an updated version which incorporates\nyour points.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Fri, 7 Jul 2023 18:34:09 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HOT readme missing documentation on summarizing index handling"
},
{
"msg_contents": "On 7/7/23 18:34, Matthias van de Meent wrote:\n> On Fri, 7 Jul 2023 at 00:14, Tomas Vondra <[email protected]> wrote:\n>>\n>> Yeah, README.HOT should have been updated, and I see no reason not to\n>> backpatch this to v16. Barring objections, I'll do that tomorrow.\n>>\n>> I have two suggesting regarding the README.HOT changes:\n>>\n>> 1) I'm not entirely sure it's very clear what \"referential integrity of\n>> indexes across tuple updates\" actually means. I'm afraid \"referential\n>> integrity\" may lead readers to think about foreign keys. Maybe it'd be\n>> better to explain this is about having index pointers to the new tuple\n>> version, etc.\n>>\n>> 2) Wouldn't it be good to make it a bit more explicit we now have three\n>> \"levels\" of HOT:\n>>\n>> (a) no indexes need update\n>> (b) update only summarizing indexes\n>> (c) update all indexes\n>>\n>> The original text was really about on/off, and I'm not quite sure the\n>> part about \"exception\" makes this very clear.\n> \n> Agreed on both points. Attached an updated version which incorporates\n> your points.\n> \n\nThanks, pushed after correcting a couple typos.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jul 2023 19:06:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HOT readme missing documentation on summarizing index handling"
},
{
"msg_contents": "On Fri, 7 Jul 2023 at 19:06, Tomas Vondra <[email protected]> wrote:\n>\n> On 7/7/23 18:34, Matthias van de Meent wrote:\n> > On Fri, 7 Jul 2023 at 00:14, Tomas Vondra <[email protected]> wrote:\n> >> The original text was really about on/off, and I'm not quite sure the\n> >> part about \"exception\" makes this very clear.\n> >\n> > Agreed on both points. Attached an updated version which incorporates\n> > your points.\n> >\n>\n> Thanks, pushed after correcting a couple typos.\n\nThanks!\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 7 Jul 2023 19:12:31 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HOT readme missing documentation on summarizing index handling"
},
{
"msg_contents": "Hi,\n\n> > Thanks, pushed after correcting a couple typos.\n>\n> Thanks!\n\nI noticed that ec99d6e9c87a introduced a slight typo:\n\ns/if there is not room/if there is no room\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:33:27 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HOT readme missing documentation on summarizing index handling"
}
] |
[
{
"msg_contents": "We are now a few days in on Commitfest 2023-07, so it seems about time to send\nthe (by now) customary statistics email on how we are doing, and where we\nideally want go.\n\nThere are 350 patches registered in this commitfest, with 150 of those having\nbeen moved from the past commitfest. If it's not the record, then it's at\nleast in the top-5 of all times. Currently the breakdown looks like this:\n\n Needs review: 181\n Waiting on Author: 48\n Ready for Committer: 38\n Committed: 64\n Moved to next CF: 3\n Withdrawn: 5\n Returned with Feedback: 11\n\nLooking at the closed statuses, that means we've already closed 23.7% of all\npatches. Now, that is heavily influenced by bug-fixes being registered in this\nCF being closed ahead of the CF as part of the v16 cycle, but it's still very\ngood. Let's focus on reducing the number of patches in Needs Review in order\nto get them closer to being committable!\n\nI will shortly do a patch triage and send to the list to try and solicit work\non patches which has promise, but have gone stale or slowed down for whatever\nreason.\n\nThe success of PostgreSQL relies on all of us working together. If you haven't\nyet signed up for reviewing a patch, please consider doing so. If you are\nsigned up and actively reviewing, thank you very much!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:23:33 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-07 has started"
}
] |
[
{
"msg_contents": "Hi,\n\nSimpleHash.\n\nThe function SH_START_ITERATE can trigger some overflow.\n\nSee:\ntypedef struct SH_ITERATOR\n{\nuint32 cur; /* current element */\nuint32 end;\nbool done; /* iterator exhausted? */\n} SH_ITERATOR;\n\nThe cur field is uint32 size and currently can be stored a uint64,\nwhich obviously does not fit.\n\nAlso, the current index is int, which is possibly insufficient\nsince items can be up to uint32.\n\nAttached a fix.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 6 Jul 2023 11:28:18 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid overflow with simplehash"
},
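A standalone demonstration of the second point raised above (this is not simplehash code; it only shows why a signed 32-bit index is too narrow when a table can hold up to UINT32_MAX + 1 slots):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t nslots = (uint64_t) UINT32_MAX + 1;    /* same value as SH_MAX_SIZE */

    printf("slots: %llu, largest int index: %d\n",
           (unsigned long long) nslots, INT_MAX);
    /* a loop such as "for (int i = 0; i < nslots; i++)" would overflow i
     * (signed overflow is undefined behaviour) before reaching the last slot */
    return 0;
}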
{
"msg_contents": "> On 6 Jul 2023, at 16:28, Ranier Vilela <[email protected]> wrote:\n\n> The function SH_START_ITERATE can trigger some overflow.\n> \n> See:\n> typedef struct SH_ITERATOR\n> {\n> uint32 cur; /* current element */\n> uint32 end;\n> bool done; /* iterator exhausted? */\n> } SH_ITERATOR;\n> \n> The cur field is uint32 size and currently can be stored a uint64,\n> which obviously does not fit.\n\n-\tAssert(startelem < SH_MAX_SIZE);\n+\tAssert(startelem < PG_UINT32_MAX);\n\nI mighe be missing something, but from skimming the current code, SH_MAX_SIZE\nis currently defined as:\n\n#define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)\n\nCan you show a reproducer example where you are able to overflow?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 16:37:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 11:37, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 6 Jul 2023, at 16:28, Ranier Vilela <[email protected]> wrote:\n>\n> > The function SH_START_ITERATE can trigger some overflow.\n> >\n> > See:\n> > typedef struct SH_ITERATOR\n> > {\n> > uint32 cur; /* current element */\n> > uint32 end;\n> > bool done; /* iterator exhausted? */\n> > } SH_ITERATOR;\n> >\n> > The cur field is uint32 size and currently can be stored a uint64,\n> > which obviously does not fit.\n>\n> - Assert(startelem < SH_MAX_SIZE);\n> + Assert(startelem < PG_UINT32_MAX);\n>\n> I mighe be missing something, but from skimming the current code,\n> SH_MAX_SIZE\n> is currently defined as:\n>\n> #define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)\n>\nThis is Assert, that is, in production this test is not done.\n\nSee the comments:\n\"Search for the first empty element.\"\n\nIf the empty element is not found, startelem has PG_UINT64_MAX value,\nwhich do not fit in uint32.\n\nCan you see this?\n\nregards,\nRanier Vilela\n\nEm qui., 6 de jul. de 2023 às 11:37, Daniel Gustafsson <[email protected]> escreveu:> On 6 Jul 2023, at 16:28, Ranier Vilela <[email protected]> wrote:\n\n> The function SH_START_ITERATE can trigger some overflow.\n> \n> See:\n> typedef struct SH_ITERATOR\n> {\n> uint32 cur; /* current element */\n> uint32 end;\n> bool done; /* iterator exhausted? */\n> } SH_ITERATOR;\n> \n> The cur field is uint32 size and currently can be stored a uint64,\n> which obviously does not fit.\n\n- Assert(startelem < SH_MAX_SIZE);\n+ Assert(startelem < PG_UINT32_MAX);\n\nI mighe be missing something, but from skimming the current code, SH_MAX_SIZE\nis currently defined as:\n\n#define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)This is Assert, that is, in production this test is not done.See the comments:\"Search for the first empty element.\"If the empty element is not found, startelem has PG_UINT64_MAX value, which do not fit in uint32.Can you see this?regards,Ranier Vilela",
"msg_date": "Thu, 6 Jul 2023 11:42:17 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "> On 6 Jul 2023, at 16:42, Ranier Vilela <[email protected]> wrote:\n> Em qui., 6 de jul. de 2023 às 11:37, Daniel Gustafsson <[email protected] <mailto:[email protected]>> escreveu:\n\n> #define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)\n> This is Assert, that is, in production this test is not done.\n\nCorrect, which implies that it's a test for something which is deemed highly\nunlikely to happen in production.\n\n> If the empty element is not found, startelem has PG_UINT64_MAX value, \n> which do not fit in uint32.\n\nCan you show an example where the hash isn't grown automatically to accomodate\nthis such that the assertion is tripped?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 17:00:09 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 12:00, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 6 Jul 2023, at 16:42, Ranier Vilela <[email protected]> wrote:\n> > Em qui., 6 de jul. de 2023 às 11:37, Daniel Gustafsson <[email protected]\n> <mailto:[email protected]>> escreveu:\n>\n> > #define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)\n> > This is Assert, that is, in production this test is not done.\n>\n> Correct, which implies that it's a test for something which is deemed\n> highly\n> unlikely to happen in production.\n>\n Highly improbable does not mean impossible, or that it will never happen.\n\n\n> > If the empty element is not found, startelem has PG_UINT64_MAX value,\n> > which do not fit in uint32.\n>\n> Can you show an example where the hash isn't grown automatically to\n> accomodate\n> this such that the assertion is tripped?\n>\nA demo won't change the fact that the function can fail, even if it isn't\ncurrently failing.\nAs a precaution to avoid future bugs, I think it's necessary to apply the\npatch to increase the robustness of the function.\n\nregards,\nRanier Vilela\n\nEm qui., 6 de jul. de 2023 às 12:00, Daniel Gustafsson <[email protected]> escreveu:> On 6 Jul 2023, at 16:42, Ranier Vilela <[email protected]> wrote:\n> Em qui., 6 de jul. de 2023 às 11:37, Daniel Gustafsson <[email protected] <mailto:[email protected]>> escreveu:\n\n> #define SH_MAX_SIZE (((uint64) PG_UINT32_MAX) + 1)\n> This is Assert, that is, in production this test is not done.\n\nCorrect, which implies that it's a test for something which is deemed highly\nunlikely to happen in production. Highly improbable does not mean impossible, or that it will never happen.\n\n> If the empty element is not found, startelem has PG_UINT64_MAX value, \n> which do not fit in uint32.\n\nCan you show an example where the hash isn't grown automatically to accomodate\nthis such that the assertion is tripped?A demo won't change the fact that the function can fail, even if it isn't currently failing.As a precaution to avoid future bugs, I think it's necessary to apply the patch to increase the robustness of the function.regards,Ranier Vilela",
"msg_date": "Thu, 6 Jul 2023 12:05:01 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Ranier Vilela <[email protected]> writes:\n> See the comments:\n> \"Search for the first empty element.\"\n> If the empty element is not found, startelem has PG_UINT64_MAX value,\n> which do not fit in uint32.\n\nI think the point of that assertion is exactly that we're required to\nhave an empty element (because max fillfactor is less than 1),\nso the search should have succeeded.\n\nIt does seem like we could do\n\n\tuint64\t\tstartelem = SH_MAX_SIZE;\n\n\t...\n\n\tAssert(startelem < SH_MAX_SIZE);\n\nwhich'd make it a little clearer that the expectation is for\nstartelem to have changed value. And I agree that declaring \"i\"\nas int is wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jul 2023 11:16:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-06 11:16:26 -0400, Tom Lane wrote:\n> Ranier Vilela <[email protected]> writes:\n> > See the comments:\n> > \"Search for the first empty element.\"\n> > If the empty element is not found, startelem has PG_UINT64_MAX value,\n> > which do not fit in uint32.\n> \n> I think the point of that assertion is exactly that we're required to\n> have an empty element (because max fillfactor is less than 1),\n> so the search should have succeeded.\n\nRight, that part of the proposed change seems bogus to me.\n\n\n> It does seem like we could do\n> \n> \tuint64\t\tstartelem = SH_MAX_SIZE;\n> \n> \t...\n> \n> \tAssert(startelem < SH_MAX_SIZE);\n> \n> which'd make it a little clearer that the expectation is for\n> startelem to have changed value.\n\nI guess? I find it easier to understand all-bits-set in a coredump as\ntoo-large than SH_MAX_SIZE, but ...\n\n\n> And I agree that declaring \"i\" as int is wrong.\n\nYea, that's definitely not right, not sure how I ended up with that. Will push\na fix. I guess it should be backpatched...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 08:27:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-07-06 11:16:26 -0400, Tom Lane wrote:\n>> It does seem like we could do\n>> \tuint64\t\tstartelem = SH_MAX_SIZE;\n>> \t...\n>> \tAssert(startelem < SH_MAX_SIZE);\n>> which'd make it a little clearer that the expectation is for\n>> startelem to have changed value.\n\n> I guess? I find it easier to understand all-bits-set in a coredump as\n> too-large than SH_MAX_SIZE, but ...\n\nWhat'd help even more is a comment:\n\n\t/* We should have found an empty element */\n\tAssert(startelem < SH_MAX_SIZE);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jul 2023 11:46:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 12:16, Tom Lane <[email protected]> escreveu:\n\n> Ranier Vilela <[email protected]> writes:\n> > See the comments:\n> > \"Search for the first empty element.\"\n> > If the empty element is not found, startelem has PG_UINT64_MAX value,\n> > which do not fit in uint32.\n>\n> Hi Tom,\n\n> I think the point of that assertion is exactly that we're required to\n> have an empty element (because max fillfactor is less than 1),\n> so the search should have succeeded.\n>\n> It does seem like we could do\n>\n> uint64 startelem = SH_MAX_SIZE;\n>\n> ...\n>\n> Assert(startelem < SH_MAX_SIZE);\n>\n> which'd make it a little clearer that the expectation is for\n> startelem to have changed value.\n\nI still have doubts about this.\n\nsee:\n#include <iostream>\n#include <string>\n#include <limits.h>\n\n#define SH_MAX_SIZE1 (((unsigned long long) 0xFFFFFFFFU) + 1)\n#define SH_MAX_SIZE2 (((unsigned long long) 0xFFFFFFFFU) - 1)\n\nint main()\n{\n unsigned long long max_size1 = SH_MAX_SIZE1;\n unsigned long long max_size2 = SH_MAX_SIZE2;\n unsigned int cur1 = SH_MAX_SIZE1;\n unsigned int cur2 = SH_MAX_SIZE2;\n\n printf(\"SH_MAX_SIZE1=%llu\\n\", max_size1);\n printf(\"SH_MAX_SIZE2=%llu\\n\", max_size2);\n printf(\"cur1=%u\\n\", cur1);\n printf(\"cur2=%u\\n\", cur2);\n}\nwarning: implicit conversion from 'unsigned long long' to 'unsigned int'\nchanges value from 4294967296 to 0 [-Wconstant-conversion]\n\noutputs:\nSH_MAX_SIZE1=4294967296\nSH_MAX_SIZE2=4294967294\ncur1=0\ncur2=4294967294\n\nAnd in the comments we have:\n\"Iterate backwards, that allows the current element to be deleted, even\n* if there are backward shifts\"\n\nSo if an empty element is not found and the *cur* field is set to 0\n(SH_MAX_SIZE -> uint32),\nthen will it iterate forwards?\n\n And I agree that declaring \"i\"\n> as int is wrong.\n>\nThanks for the confirmation.\n\nregards,\nRanier Vilela\n\nEm qui., 6 de jul. 
de 2023 às 12:16, Tom Lane <[email protected]> escreveu:Ranier Vilela <[email protected]> writes:\n> See the comments:\n> \"Search for the first empty element.\"\n> If the empty element is not found, startelem has PG_UINT64_MAX value,\n> which do not fit in uint32.\nHi Tom, \nI think the point of that assertion is exactly that we're required to\nhave an empty element (because max fillfactor is less than 1),\nso the search should have succeeded.\n\nIt does seem like we could do\n\n uint64 startelem = SH_MAX_SIZE;\n\n ...\n\n Assert(startelem < SH_MAX_SIZE);\n\nwhich'd make it a little clearer that the expectation is for\nstartelem to have changed value.I still have doubts about this.see:#include <iostream>#include <string>#include <limits.h>#define SH_MAX_SIZE1 (((unsigned long long) 0xFFFFFFFFU) + 1)#define SH_MAX_SIZE2 (((unsigned long long) 0xFFFFFFFFU) - 1)int main(){ unsigned long long max_size1 = SH_MAX_SIZE1; unsigned long long max_size2 = SH_MAX_SIZE2; unsigned int cur1 = SH_MAX_SIZE1; unsigned int cur2 = SH_MAX_SIZE2; printf(\"SH_MAX_SIZE1=%llu\\n\", max_size1); printf(\"SH_MAX_SIZE2=%llu\\n\", max_size2); printf(\"cur1=%u\\n\", cur1); printf(\"cur2=%u\\n\", cur2);}\nwarning: implicit conversion from 'unsigned long long' to 'unsigned int'\n changes value from 4294967296 to 0 [-Wconstant-conversion] outputs:SH_MAX_SIZE1=4294967296SH_MAX_SIZE2=4294967294cur1=0cur2=4294967294And in the comments we have:\"Iterate backwards, that allows the current element to be deleted, even\t * if there are backward shifts\"So if an empty element is not found and the *cur* field is set to 0 (SH_MAX_SIZE -> uint32),then will it iterate forwards? And I agree that declaring \"i\"\nas int is wrong.Thanks for the confirmation.regards,Ranier Vilela",
"msg_date": "Thu, 6 Jul 2023 13:40:00 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-06 13:40:00 -0300, Ranier Vilela wrote:\n> I still have doubts about this.\n> \n> see:\n> #include <iostream>\n> #include <string>\n> #include <limits.h>\n> \n> #define SH_MAX_SIZE1 (((unsigned long long) 0xFFFFFFFFU) + 1)\n> #define SH_MAX_SIZE2 (((unsigned long long) 0xFFFFFFFFU) - 1)\n> \n> int main()\n> {\n> unsigned long long max_size1 = SH_MAX_SIZE1;\n> unsigned long long max_size2 = SH_MAX_SIZE2;\n> unsigned int cur1 = SH_MAX_SIZE1;\n> unsigned int cur2 = SH_MAX_SIZE2;\n> \n> printf(\"SH_MAX_SIZE1=%llu\\n\", max_size1);\n> printf(\"SH_MAX_SIZE2=%llu\\n\", max_size2);\n> printf(\"cur1=%u\\n\", cur1);\n> printf(\"cur2=%u\\n\", cur2);\n> }\n> warning: implicit conversion from 'unsigned long long' to 'unsigned int'\n> changes value from 4294967296 to 0 [-Wconstant-conversion]\n\nI don't think we at the moment try to not have implicit-conversion warnings\n(nor do we enable them), this would be far from the only place. If we wanted\nto here, we'd just need an explicit cast.\n\n\n> outputs:\n> SH_MAX_SIZE1=4294967296\n> SH_MAX_SIZE2=4294967294\n> cur1=0\n> cur2=4294967294\n> \n> And in the comments we have:\n> \"Iterate backwards, that allows the current element to be deleted, even\n> * if there are backward shifts\"\n> \n> So if an empty element is not found and the *cur* field is set to 0\n> (SH_MAX_SIZE -> uint32),\n\nThat should never be reachable - which the assert tries to ensure.\n\n\n> then will it iterate forwards?\n\nNo, it'd still iterate backwards, but starting from the wrong place - but\nthere is no correct place to start iterating from if there is no unused\nelement.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 09:51:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 13:52, Andres Freund <[email protected]>\nescreveu:\n\n> Hi,\n>\n> On 2023-07-06 13:40:00 -0300, Ranier Vilela wrote:\n> > I still have doubts about this.\n> >\n> > see:\n> > #include <iostream>\n> > #include <string>\n> > #include <limits.h>\n> >\n> > #define SH_MAX_SIZE1 (((unsigned long long) 0xFFFFFFFFU) + 1)\n> > #define SH_MAX_SIZE2 (((unsigned long long) 0xFFFFFFFFU) - 1)\n> >\n> > int main()\n> > {\n> > unsigned long long max_size1 = SH_MAX_SIZE1;\n> > unsigned long long max_size2 = SH_MAX_SIZE2;\n> > unsigned int cur1 = SH_MAX_SIZE1;\n> > unsigned int cur2 = SH_MAX_SIZE2;\n> >\n> > printf(\"SH_MAX_SIZE1=%llu\\n\", max_size1);\n> > printf(\"SH_MAX_SIZE2=%llu\\n\", max_size2);\n> > printf(\"cur1=%u\\n\", cur1);\n> > printf(\"cur2=%u\\n\", cur2);\n> > }\n> > warning: implicit conversion from 'unsigned long long' to 'unsigned int'\n> > changes value from 4294967296 to 0 [-Wconstant-conversion]\n>\n> I don't think we at the moment try to not have implicit-conversion warnings\n> (nor do we enable them), this would be far from the only place. If we\n> wanted\n> to here, we'd just need an explicit cast.\n>\nIt was just for show.\n\n\n>\n>\n> > outputs:\n> > SH_MAX_SIZE1=4294967296\n> > SH_MAX_SIZE2=4294967294\n> > cur1=0\n> > cur2=4294967294\n> >\n> > And in the comments we have:\n> > \"Iterate backwards, that allows the current element to be deleted, even\n> > * if there are backward shifts\"\n> >\n> > So if an empty element is not found and the *cur* field is set to 0\n> > (SH_MAX_SIZE -> uint32),\n>\n> That should never be reachable - which the assert tries to ensure.\n>\nRight.\n\n\n>\n>\n> > then will it iterate forwards?\n>\n> No, it'd still iterate backwards, but starting from the wrong place - but\n> there is no correct place to start iterating from if there is no unused\n> element.\n>\nThanks for the confirmation.\n\nSo I suppose we could have this in v1, attached.\nWith comments added by Tom.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 6 Jul 2023 14:01:55 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Hi,\n\nI pushed changing i to uint32 and adding Tom's comment to 11-HEAD.\n\n\nOn 2023-07-06 14:01:55 -0300, Ranier Vilela wrote:\n> > > then will it iterate forwards?\n> >\n> > No, it'd still iterate backwards, but starting from the wrong place - but\n> > there is no correct place to start iterating from if there is no unused\n> > element.\n> >\n> Thanks for the confirmation.\n> \n> So I suppose we could have this in v1, attached.\n> With comments added by Tom.\n\n\n> diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h\n> index 48db837ec8..4fe627a921 100644\n> --- a/src/include/lib/simplehash.h\n> +++ b/src/include/lib/simplehash.h\n> @@ -964,8 +964,8 @@ SH_DELETE_ITEM(SH_TYPE * tb, SH_ELEMENT_TYPE * entry)\n> SH_SCOPE void\n> SH_START_ITERATE(SH_TYPE * tb, SH_ITERATOR * iter)\n> {\n> -\tint\t\t\ti;\n> -\tuint64\t\tstartelem = PG_UINT64_MAX;\n> +\tuint32\t\ti;\n> +\tuint32\t\tstartelem = PG_UINT32_MAX;\n\nThe startelem type change doesn't strike me as a good idea. Currently\nPG_UINT32_MAX is a valid element.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:33:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflow with simplehash"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 14:33, Andres Freund <[email protected]>\nescreveu:\n\n> Hi,\n>\n> I pushed changing i to uint32 and adding Tom's comment to 11-HEAD.\n>\nThank you.\n\nregards,\nRanier Vilela\n\nEm qui., 6 de jul. de 2023 às 14:33, Andres Freund <[email protected]> escreveu:Hi,\n\nI pushed changing i to uint32 and adding Tom's comment to 11-HEAD.Thank you. regards,Ranier Vilela",
"msg_date": "Thu, 6 Jul 2023 15:08:41 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflow with simplehash"
}
] |
[
{
"msg_contents": "Thanks a lot Mark,\n\nI will take a look at this and get back to you if I find anything unclear\n\n---\nHannu\n\nOn Tue, Jul 4, 2023 at 10:14 PM Mark Dilger\n<[email protected]> wrote:\n>\n> Hackers,\n>\n> Over in [1], Hannu Krosing asked me to create and post several Table Access Methods for testing/example purposes. I am fairly happy to do so, but each one is large, and should be considered separately for inclusion/rejection in contrib/, or in src/test/modules as Michael Paquier suggests. As such, I am starting this new email thread for the first such TAM. I've named it \"pile\", which is an English synonym of \"heap\", and which is also four characters in length, making for easier side-by-side diffs with the heap code. The pile code is a deep copy of the heap code, meaning that pile functions do not call heap code, nor run the in-core regression tests, but rather pile's own modified copy of the heap code, the regression tests, and even the test data. Rather than creating a bare-bones skeleton which needs to be populated with an implementation and regression tests, this patch instead offers a fully fleshed out TAM which can be pruned down to something reasonably compact once the user changes it into whatever they want it to be. To reiterate, the patch is highly duplicative of in-core files.\n>\n> Hannu, I'm happy to post something like this three times again, for the named TAMs you request, but could you first review this patch and maybe try turning it into something else, such as the in memory temp tables, overlay tables, or python based tables that you mentioned in [1]? Anything that needs to be changed to make similar TAMs suitable for the community should be discussed prior to spamming -hackers with more TAMs. Thanks.\n>\n>\n> [1] https://www.postgresql.org/message-id/CAMT0RQQXtq8tgVPdFb0mk4v%2BcVuGvPWk1Oz9LDr0EgBfrV6e6w%40mail.gmail.com\n>\n>\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n\n",
"msg_date": "Thu, 6 Jul 2023 16:54:12 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Example Table AM implementation"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18016\nLogged by: Richard Vesely\nEmail address: [email protected]\nPostgreSQL version: 15.3\nOperating system: Windows 10 Enterprise 22H2\nDescription: \n\nHi,\r\n\r\nGiven a table with a TOASTed variable length attribute, REINDEX TABLE fails\nto rebuild indexes when you truncate (or otherwise corrupt) relation files\nfor both TOAST table index and a custom index on the varlena.\r\n\r\nHere's an error from server log with log_error_verbosity set to verbose:\r\n\r\nERROR: XX001: could not read block 0 in file \"base/[datoid]/[relfilenode]\":\nread only 0 of 8192 bytes\r\nLOCATION: mdread, md.c:724\r\nSTATEMENT: reindex table t1\r\n\r\nHowever, when you perform a manual reindex in the correct order - REINDEX\nINDEX pg_toast.pg_toast_oid_index and then REINDEX INDEX t1_column1_idx it\nworks as expected. REINDEX TABLE should ensure that the TOAST index is\nrebuilt first before rebuilding an index on (potentially) TOASTed values. In\nthis particular example when you REINDEX TOAST index first and then run the\nfull REINDEX TABLE you can see that it always rebuilds the custom index\nfirst based on relation file nodes.\r\n\r\nBest regards,\r\nRichard Veselý\r\n\r\nHere's a minimal repro dump:\r\n\r\n--\r\n-- PostgreSQL database dump\r\n--\r\n\r\n-- Dumped from database version 15.3\r\n-- Dumped by pg_dump version 15.3\r\n\r\nSET statement_timeout = 0;\r\nSET lock_timeout = 0;\r\nSET idle_in_transaction_session_timeout = 0;\r\nSET client_encoding = 'UTF8';\r\nSET standard_conforming_strings = on;\r\nSELECT pg_catalog.set_config('search_path', '', false);\r\nSET check_function_bodies = false;\r\nSET xmloption = content;\r\nSET client_min_messages = warning;\r\nSET row_security = off;\r\n\r\n--\r\n-- Name: bug_report; Type: DATABASE; Schema: -; Owner: postgres\r\n--\r\n\r\nCREATE DATABASE bug_report WITH TEMPLATE = template0 ENCODING = 'UTF8'\nLOCALE_PROVIDER = libc LOCALE = 'en_US.UTF-8';\r\n\r\n\r\nALTER DATABASE bug_report OWNER TO postgres;\r\n\r\n\\connect bug_report\r\n\r\nSET statement_timeout = 0;\r\nSET lock_timeout = 0;\r\nSET idle_in_transaction_session_timeout = 0;\r\nSET client_encoding = 'UTF8';\r\nSET standard_conforming_strings = on;\r\nSELECT pg_catalog.set_config('search_path', '', false);\r\nSET check_function_bodies = false;\r\nSET xmloption = content;\r\nSET client_min_messages = warning;\r\nSET row_security = off;\r\n\r\n--\r\n-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres\r\n--\r\n\r\n-- *not* creating schema, since initdb creates it\r\n\r\n\r\nALTER SCHEMA public OWNER TO postgres;\r\n\r\nSET default_tablespace = '';\r\n\r\nSET default_table_access_method = heap;\r\n\r\n--\r\n-- Name: t1; Type: TABLE; Schema: public; Owner: postgres\r\n--\r\n\r\nCREATE TABLE public.t1 (\r\n column1 text\r\n);\r\n\r\n\r\nALTER TABLE public.t1 OWNER TO postgres;\r\n\r\n--\r\n-- Data for Name: t1; Type: TABLE DATA; Schema: public; Owner: postgres\r\n--\r\n\r\nCOPY public.t1 (column1) FROM 
stdin;\r\nvkifpbzxdplzkizpaugzhlejhqmvgwmlhqlgofbvoaiowqohnmxaldkyoawdrpttppkxfratkgeyxogzdvihkssbpyvgbnbhgaezhhgyehqcduakvrahnauymfuqznthijohfbbuzitrpifmqkezjbujngzsijsquskztqypdkienyhytyergfbibasksgntabxgzgrmhtzrukjuykaqfrksqcswwbsmlmdfrpovbdlvcaofztwasbfzwyoeklbnacgtdrwjfvdpdccnyetkohmtgwdkzlnofyccxgrbojcjnruvwlbwbpxyzubwqjmfnzvzkjsdgozewauqlbmckpxztuidtdfpvbhizlbrezvkndjcodbjabxggywtqpsofdtsfyspjscrmghbbpxhuvqvxpgwfdvhhcvekncudhzbtotqxxzixoqnybzpnhvgnhdlcbctyitiqdilwuensusfcfelojvzhgrefyrqohdqiaewddpharcwipjyyijudozpkomgsstqbarykbuoxgnmjwcvkufidiozxccwtfzatxyztjmeihlzyafdafqbkkqqekasgfllfcdaelwsecayspnspvofkelkxfytrwfccuynwjlafelgnuggvejoiketoeqpxtofivpxeqahxnhdkhfwdbytqlfulogxdpjbbtioelkuxywcdvknjbllmyvuckduywllkljfpoxiwgunwjwoiokenfygsduokepxjetyjjzbnxqbvsdbrpefdlghluynoqsxkfrttsibjkdtforzhmhazyzoaanvstmqafsuynrvmknivmcvcqlwxmdgjnhuivxzwjefszyrkzmvleskghrknohfyntnsovqiquojnrzsusyvjfcogtdgrlbyemggllpyvqxclqqcmwcvrvtejmiinlmqfcznszledlavaqwnugijgevehlrydlrlluqmepaqyqlhpyxeuryqwauyfaoifsxsxxxemgidmzxzjpoecapyubvprnzlgvrlidotzluaodlwrrphgxfpcsskkaxguwajcytusnpbudvuvdjqzujgdlqnoksainpdwcfdwizvpgnhysunadzaizywtzgydpgumfedoqbhdlqynufivmqyihkfqnvavofgojzjrzpfhmqqgxqmmhkyvsloegljgjglkywqfjqcwawigxhlbmztzytlqlheghhhykttjvbqkdnuuiajqvpihyrwjnlihglgxebhalthpizkrccgnxkwfxjsjrpcsitmdounnbxoeoomstbykypoflitwvirpwdrdvrtwkqwbqlsqxkvogdsdkwffvvzalibtgtkbcmqjcpvlwpubdhykqsrqwzmaqbwndmvribafoyizgbpbavvvtivkcofijaubtpmzfgauvrgfqjlsksdtfaaimfnurstbfikildbcdfzbwzqicjwewrxzppneyrlhsrdaprgmaofulgcffstvikvwvkmprddflkudytkrlccrkivvzwvmsyeigowqoqkidzcetlnfaxlpyalzennzgexiaqduzffijgsbhshyaiephqviluzzjdfgjjgkphdkamlwzppqpvpjbgnjnmvmgyrqubvsgpivstqbydtbpakripvsvnuqwwgngwdoeeichpljrnqstcdeobubjcudjizrgxjfmcvghrlhvjseinrfkmeqhrcullxildvkcjcbozpsowddwdqusclysmaasmcgruosqqjcjurtqhnnigvpviuhwroydcxhasvqwcgeauiawnqyreaoikhbaymizkanzjyrbtftiddryylqxfhmzomlqkcqkgrapqgiiylahganeibkzahxitcwswgpqmvnlgyuxywoaqqlbqdpfexlpzpzlpucwgqxfraqwqmvwhuojbmpngdhenplmkomgwmnplwnfnlgmejgyoapkjmyvsolpiqlebfumcywfxvbgshaakujitbbgrvtqxvsfvapuejebqoknhaefyeebmlqvoifjvlnosxkvk\r\n\\.\r\n\r\n\r\n--\r\n-- Name: t1_column1_idx; Type: INDEX; Schema: public; Owner: postgres\r\n--\r\n\r\nCREATE INDEX t1_column1_idx ON public.t1 USING btree (column1);\r\n\r\n\r\n--\r\n-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: postgres\r\n--\r\n\r\nREVOKE USAGE ON SCHEMA public FROM PUBLIC;\r\nGRANT ALL ON SCHEMA public TO PUBLIC;\r\n\r\n\r\n--\r\n-- PostgreSQL database dump complete\r\n--",
"msg_date": "Thu, 06 Jul 2023 20:29:19 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #18016: REINDEX TABLE failure"
},
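The manual ordering mentioned in the report can be spelled out as follows (object names are taken from the repro above; the TOAST index name embeds the table's OID, shown here as a placeholder):

-- rebuild the TOAST table's index first, so detoasting works again
REINDEX INDEX pg_toast.pg_toast_<table oid>_index;
-- then the index that has to detoast column1 values
REINDEX INDEX t1_column1_idx;
-- or, once the TOAST index is valid again, simply:
REINDEX TABLE t1;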
{
"msg_contents": "On Thu, Jul 06, 2023 at 08:29:19PM +0000, PG Bug reporting form wrote:\n> Given a table with a TOASTed variable length attribute, REINDEX TABLE fails\n> to rebuild indexes when you truncate (or otherwise corrupt) relation files\n> for both TOAST table index and a custom index on the varlena.\n\nCould you clarify what you have done here? Did you manipulate the\nphysical files in the data folder before running the REINDEX commands\nyou expected should work? There are many things that can go wrong if\nyou do anything like that.\n--\nMichael",
"msg_date": "Sat, 8 Jul 2023 09:17:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Jul 06, 2023 at 08:29:19PM +0000, PG Bug reporting form wrote:\n>> Given a table with a TOASTed variable length attribute, REINDEX TABLE fails\n>> to rebuild indexes when you truncate (or otherwise corrupt) relation files\n>> for both TOAST table index and a custom index on the varlena.\n\n> Could you clarify what you have done here? Did you manipulate the\n> physical files in the data folder before running the REINDEX commands\n> you expected should work? There are many things that can go wrong if\n> you do anything like that.\n\nI think the point of that was just to have a way to reproduce the problem\non-demand. I follow the argument, which is that if there's actual\ncorruption in the TOAST index (for whatever reason) that might interfere\nwith rebuilding the table's other indexes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Jul 2023 20:20:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 5:20 PM Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > On Thu, Jul 06, 2023 at 08:29:19PM +0000, PG Bug reporting form wrote:\n> >> Given a table with a TOASTed variable length attribute, REINDEX TABLE fails\n> >> to rebuild indexes when you truncate (or otherwise corrupt) relation files\n> >> for both TOAST table index and a custom index on the varlena.\n>\n> > Could you clarify what you have done here? Did you manipulate the\n> > physical files in the data folder before running the REINDEX commands\n> > you expected should work? There are many things that can go wrong if\n> > you do anything like that.\n>\n> I think the point of that was just to have a way to reproduce the problem\n> on-demand. I follow the argument, which is that if there's actual\n> corruption in the TOAST index (for whatever reason) that might interfere\n> with rebuilding the table's other indexes.\n\nThat's my understanding, as well.\n\nThis shouldn't be treated as a bug, but as a desirable improvement in\nREINDEX TABLE's behaviour. Stated another way, we want REINDEX TABLE\nto reindex toast tables' indexes before attempting to reindex the\ntable's index.\n\nBelow [1] are the commands to create the test case and reproduce the error.\n\nI am taking a look at this; I'd like to avoid duplicate work if\nsomeone else is looking at it, too.\n\nPreliminary reading of the code indicates that a simple rearrangement\nof the code in reindex_relation() would be sufficient to get the\ndesired behaviour. The code towards the bottom in that function,\nprotected by `if ((flags & REINDEX_REL_PROCESS_TOAST ...)` needs to be\nmoved to just before the `foreach(indexId, indexIds)` loop.\n\nThe only downside I see so far with the proposed change is that the\ntoast tables are currently reindexed after table_close() call, but\nafter the proposed change they'll be reindexed before that call to\nclose_table(). But since close_table() does not release the ShareLock\non the table that is taken at the beginning of reindex_relation(), I\ndon't think we'll losing anything by the proposed rearrangement of\ncode.\n\n[1]:\ninitdb ./db/data\npg_ctl -D ./db/data -l db/server.log start\npsql -d postgres\n\ncreate table t1(column1 text);\ncreate index on t1 (column1);\ninsert into t1 select repeat('fsdfaf', 30000);\n\nselect oid, relfilenode, relname from pg_class\n where oid >= (select oid from pg_class where relname = 't1');\n\n// Generate command to corrupt toast table's index\nselect 'echo > db/data/base/'\n|| (select oid from pg_database where datname = current_database())\n|| '/'\n|| (select relfilenode from pg_class\n where relname = ('pg_toast_'\n || (select oid from pg_class where relname = 't1'))\n || '_index');\n\n# Stop the database before inducing corruption; else the reindex command may\n# use cached copies of toast index blocks and succeed\npg_ctl -D ./db/data stop\necho > db/data/base/5/16388\npg_ctl -D ./db/data -l db/server.log start\npsql -d postgres\n\nreindex table t1;\nERROR: could not read block 0 in file \"base/5/16388\": read only 1 of 8192 bytes\n\nreindex index pg_toast.pg_toast_16384_index ;\n//REINDEX\nreindex table t1;\n//REINDEX\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sun, 9 Jul 2023 00:01:03 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Sun, Jul 09, 2023 at 12:01:03AM -0700, Gurjeet Singh wrote:\n> Preliminary reading of the code indicates that a simple rearrangement\n> of the code in reindex_relation() would be sufficient to get the\n> desired behaviour. The code towards the bottom in that function,\n> protected by `if ((flags & REINDEX_REL_PROCESS_TOAST ...)` needs to be\n> moved to just before the `foreach(indexId, indexIds)` loop.\n\nI guess that it should be OK to do that from the point where\nreltoastrelid is known, when extracted the parent relation locked with\nthis ShareLock.\n\n> The only downside I see so far with the proposed change is that the\n> toast tables are currently reindexed after table_close() call, but\n> after the proposed change they'll be reindexed before that call to\n> close_table(). But since close_table() does not release the ShareLock\n> on the table that is taken at the beginning of reindex_relation(), I\n> don't think we'll losing anything by the proposed rearrangement of\n> code.\n\nThat should be OK, I assume. However, if this is improved and\nsomething we want to support in the long-run I guess that a TAP test\nmay be appropriate.\n--\nMichael",
"msg_date": "Sun, 9 Jul 2023 16:55:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "Hi everyone,\n\nSorry for the delay, I was away from computer for a couple of days.\n\nTom is exactly right. I was just giving you a minimum number of steps to reproduce the issue. That being said, it is also a good idea to give you a bit of a background context and maybe start a broader discussion. However, I don't want to pollute a bug report with an unrelated topic so someone might suggest a more appropriate venue.\n\nIn my particular case, I didn't encounter some hardware failure that caused corruption of both TOAST table index and other dependent indexes, but instead I didn't have either of them in the first place (hence my suggestion to truncate them to accurately reproduce my exact setup). So in that sense Michael is also asking a legitimate question of how we got to where we are.\n\nI was dissatisfied with storage layer performance, especially during the initial database population, so I rewrote it for my use case. I'm done with the heap, but for the moment I still rely on PostgreSQL to build indexes, specifically by using the REINDEX TABLE command for its convenience and that's how I discovered this problem with a couple of tables that had the required combination of indexes and data to trigger the original issue.\n\nI won't derail this discussion any further, because some people downthread are already working on fixing/improving this scenario, but there's no shortage of people that suffer from sluggish pg_dump/pg_restore cycle and I imagine there are any number of people that would be interested in improving bulk ingestion which is often a bottleneck for analytical workloads as you are well aware. What's the best place to discuss this topic further - pgsql-performance or someplace else?\n\nBest,\nRichard\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]> \nSent: Saturday, July 8, 2023 2:20 AM\nTo: Michael Paquier <[email protected]>\nCc: Richard Veselý <[email protected]>; [email protected]\nSubject: Re: BUG #18016: REINDEX TABLE failure\n\nMichael Paquier <[email protected]> writes:\n> On Thu, Jul 06, 2023 at 08:29:19PM +0000, PG Bug reporting form wrote:\n>> Given a table with a TOASTed variable length attribute, REINDEX TABLE \n>> fails to rebuild indexes when you truncate (or otherwise corrupt) \n>> relation files for both TOAST table index and a custom index on the varlena.\n\n> Could you clarify what you have done here? Did you manipulate the \n> physical files in the data folder before running the REINDEX commands \n> you expected should work? There are many things that can go wrong if \n> you do anything like that.\n\nI think the point of that was just to have a way to reproduce the problem on-demand. I follow the argument, which is that if there's actual corruption in the TOAST index (for whatever reason) that might interfere with rebuilding the table's other indexes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 9 Jul 2023 13:54:35 +0000",
"msg_from": "=?iso-8859-2?Q?Richard_Vesel=FD?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> That should be OK, I assume. However, if this is improved and\n> something we want to support in the long-run I guess that a TAP test\n> may be appropriate.\n\nI do not see the point of a TAP test. It's not like the code isn't\ncovered perfectly well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jul 2023 10:18:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 7:18 AM Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > That should be OK, I assume. However, if this is improved and\n> > something we want to support in the long-run I guess that a TAP test\n> > may be appropriate.\n>\n> I do not see the point of a TAP test. It's not like the code isn't\n> covered perfectly well.\n\nPlease find attached the patch that makes REINDEX TABLE perform\nreindex on toast table before reindexing the main table's indexes.\n\nThe code block movement involved slightly more thought and care than I\nhad previously imagined. As explained in comments in the patch, the\nenumeration and suppression of indexes on the main table must happen\nbefore any CommandCounterIncrement() call, hence the\nreindex-the-toast-table-if-any code had to be placed after that\nenumeration.\n\nIn support of the argument above, the patch does not include any TAP\ntests. Reliably reproducing the original error message involves\nrestarting the database, and since that can't be done via SQL\ncommands, no sql tests are included, either.\n\nThe patch also includes minor wordsmithing, and benign whitespace\nchanges in neighboring code.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 10 Jul 2023 09:35:05 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
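For readers following the ordering argument in this thread, a simplified pseudocode outline of what the patch is aiming for (not the actual code and not the real function signatures; only the relative order of the steps matters):

/* reindex_relation(relid, flags) -- simplified sketch */
rel      = table_open(relid, ShareLock);
indexIds = RelationGetIndexList(rel);        /* collect the list before any CCI */
if (flags & REINDEX_REL_SUPPRESS_INDEX_USE)
    SetReindexPending(indexIds);             /* CCI happens after the list is collected */
if ((flags & REINDEX_REL_PROCESS_TOAST) && relation has a TOAST table)
    reindex_relation(toastOid, ...);         /* recursive call; performs its own CCI */
foreach indexId in indexIds:
    reindex_index(indexId, ...);             /* TOAST index is usable again for detoasting */
table_close(rel, ...);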
{
"msg_contents": "On Sun, Jul 9, 2023 at 7:21 AM Richard Veselý <[email protected]> wrote:\n>\n> ... there's no shortage of people that suffer from sluggish pg_dump/pg_restore cycle and I imagine there are any number of people that would be interested in improving bulk ingestion which is often a bottleneck for analytical workloads as you are well aware. What's the best place to discuss this topic further - pgsql-performance or someplace else?\n\n(moved conversation to -hackers, and moved -bugs to BCC)\n\n> I was dissatisfied with storage layer performance, especially during the initial database population, so I rewrote it for my use case. I'm done with the heap, but for the moment I still rely on PostgreSQL to build indexes,\n\nIt sounds like you've developed a method to speed up loading of\ntables, and might have ideas/suggestions for speeding up CREATE\nINDEX/REINDEX. The -hackers list feels like a place to discuss such\nchanges.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Mon, 10 Jul 2023 09:43:49 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "Hi Gurjeet,\r\n\r\nThank you for the follow-up. I was worried my message got buried in the middle of the thread. I also appreciate your work on the patch to fix/improve the REINDEX TABLE behavior even though most people would never encounter it in the wild.\r\n\r\nAs a preface I would first like to say that I can appreciate the emphasis on general maintainability of the codebase, trying to avoid having some overly clever hacks that might impede understanding, having ideally one way of doing things like having a common database page structure, etc. The more one keeps to this \"happy\" path the better the general state of the project end up by keeping it accessible to the rest of the community and attracting more contributions in turn.\r\n\r\nThat being said, PostgreSQL can be extremely conservative in scenarios where it might not be warranted while giving a limited opportunity to influence said behavior. This often leads to a very low hardware resource utilization. You can easily find many instances across StackOverflow, dba.stackexchange.com, /r/postgres and pgsql-performance where people run into ingress/egress bottlenecks even though their hardware can trivially support much larger workload.\r\n\r\nIn my experience, you can be very hard-pressed in many cases to saturate even a modest enterprise HDD while observing the official guidelines (https://www.postgresql.org/docs/current/populate.html), e.g. minimal WAL and host of other configuration optimizations, having no indexes and constraints and creating table and filling it with binary COPY within the same transaction. And the situation with pg_dump/pg_restore is often much worse.\r\n\r\nIs there an interest in improving the current state of affairs? I will be rewriting the indexing first to get the whole picture, but I can already tell you that there is a -lot- of performance left on the table even considering the effort to improve COPY performance in PostgreSQL 16. Given sufficient hardware, you should always be heavily IO-bound without exception and saturate any reasonable number of NVMe SSDs.\r\n\r\nBest regards,\r\nRichard\r\n\r\n-----Original Message-----\r\nFrom: Gurjeet Singh <[email protected]> \r\nSent: Monday, July 10, 2023 6:44 PM\r\nTo: Richard Veselý <[email protected]>; Postgres Hackers <[email protected]>\r\nCc: Tom Lane <[email protected]>; Michael Paquier <[email protected]>\r\nSubject: Re: BUG #18016: REINDEX TABLE failure\r\n\r\nOn Sun, Jul 9, 2023 at 7:21 AM Richard Veselý <[email protected]> wrote:\r\n>\r\n> ... there's no shortage of people that suffer from sluggish pg_dump/pg_restore cycle and I imagine there are any number of people that would be interested in improving bulk ingestion which is often a bottleneck for analytical workloads as you are well aware. What's the best place to discuss this topic further - pgsql-performance or someplace else?\r\n\r\n(moved conversation to -hackers, and moved -bugs to BCC)\r\n\r\n> I was dissatisfied with storage layer performance, especially during \r\n> the initial database population, so I rewrote it for my use case. I'm \r\n> done with the heap, but for the moment I still rely on PostgreSQL to \r\n> build indexes,\r\n\r\nIt sounds like you've developed a method to speed up loading of tables, and might have ideas/suggestions for speeding up CREATE INDEX/REINDEX. The -hackers list feels like a place to discuss such changes.\r\n\r\nBest regards,\r\nGurjeet\r\nhttp://Gurje.et\r\n",
"msg_date": "Tue, 11 Jul 2023 09:31:06 +0000",
"msg_from": "=?utf-8?B?UmljaGFyZCBWZXNlbMO9?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 09:35:05AM -0700, Gurjeet Singh wrote:\n> The code block movement involved slightly more thought and care than I\n> had previously imagined. As explained in comments in the patch, the\n> enumeration and suppression of indexes on the main table must happen\n> before any CommandCounterIncrement() call, hence the\n> reindex-the-toast-table-if-any code had to be placed after that\n> enumeration.\n\nDo we need to add another CCI after reindexing the TOAST table? It looks\nlike we presently do so between reindexing each relation, including the\nTOAST table.\n\n+\t * This should be done after the suppression of the use of indexes (above),\n+\t * because the recursive call to reindex_relation() below will invoke\n+\t * CommandCounterIncrement(), which may prevent enumeration of the indexes\n+\t * on the table.\n\nI'm not following this. We've already obtained the list of index OIDs\nbefore this point. Does this create problems when we try to open and lock\nthe relations? And if so, how?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Jul 2023 13:16:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "(Re-sending with -hackers list removed, to avoid message being held\nfor moderation)\n\n---------- Forwarded message ---------\nFrom: Gurjeet Singh <[email protected]>\nDate: Wed, Jul 26, 2023 at 2:53 PM\n\n\nOn Wed, Jul 26, 2023 at 10:50 AM Nathan Bossart\n<[email protected]> wrote:\n>\n> On Mon, Jul 10, 2023 at 09:35:05AM -0700, Gurjeet Singh wrote:\n> > The code block movement involved slightly more thought and care than I\n> > had previously imagined. As explained in comments in the patch, the\n> > enumeration and suppression of indexes on the main table must happen\n> > before any CommandCounterIncrement() call, hence the\n> > reindex-the-toast-table-if-any code had to be placed after that\n> > enumeration.\n>\n> Do we need to add another CCI after reindexing the TOAST table? It looks\n> like we presently do so between reindexing each relation, including the\n> TOAST table.\n\nYes, we do need to do a CCI after reindex the relation's toast table.\nBut note that it is done by the recursive call to reindex_relation(),\nright after it calls reindex_index().\n\n> + * This should be done after the suppression of the use of indexes (above),\n> + * because the recursive call to reindex_relation() below will invoke\n> + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n> + * on the table.\n>\n> I'm not following this. We've already obtained the list of index OIDs\n> before this point. Does this create problems when we try to open and lock\n> the relations? And if so, how?\n\nThis comment is calling out the fact that there's a recursive call to\nreindex_relation() inside the 'if' code block, and that that\nreindex_relation() calls CCI. Hence this 'if' code block should _not_\nbe placed before the the calls to RelationGetIndexList() and\nSetReindexPending(). Because if we do, then the CCI done by\nreindex_relation() will impact what RelationGetIndexList() sees.\n\nThis is to match the expectation set for the\nREINDEX_REL_SUPPRESS_INDEX_USE flag.\n\n * REINDEX_REL_SUPPRESS_INDEX_USE: if true, the relation was just completely\n...\n * ... The caller is required to call us *without*\n * having made the rebuilt table visible by doing CommandCounterIncrement;\n * we'll do CCI after having collected the index list. (This way we can still\n * use catalog indexes while collecting the list.)\n\nI hope that makes sense.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 26 Jul 2023 18:42:14 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "(Re-sending with -hackers list removed, to avoid message getting held\nfor moderation)\n\n---------- Forwarded message ---------\nFrom: Gurjeet Singh <[email protected]>\nDate: Wed, Jul 26, 2023 at 4:01 PM\n\nOn Wed, Jul 26, 2023 at 2:53 PM Gurjeet Singh <[email protected]> wrote:\n>\n> On Wed, Jul 26, 2023 at 10:50 AM Nathan Bossart\n> <[email protected]> wrote:\n> >\n> > On Mon, Jul 10, 2023 at 09:35:05AM -0700, Gurjeet Singh wrote:\n\n>\n> > + * This should be done after the suppression of the use of indexes (above),\n> > + * because the recursive call to reindex_relation() below will invoke\n> > + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n> > + * on the table.\n> >\n> > I'm not following this. We've already obtained the list of index OIDs\n> > before this point. Does this create problems when we try to open and lock\n> > the relations? And if so, how?\n>\n> This comment is calling out the fact that there's a recursive call to\n> reindex_relation() inside the 'if' code block, and that that\n> reindex_relation() calls CCI. Hence this 'if' code block should _not_\n> be placed before the the calls to RelationGetIndexList() and\n> SetReindexPending(). Because if we do, then the CCI done by\n> reindex_relation() will impact what RelationGetIndexList() sees.\n>\n> This is to match the expectation set for the\n> REINDEX_REL_SUPPRESS_INDEX_USE flag.\n\nGiven that the issue is already explained by the flag's comments above\nthe function, this comment paragraph in the patch may be considered\nextraneous. Feel free to remove it, if you think so.\n\nI felt the need for that paragraph, because it doesn't feel obvious to\nme as to why we can't simply reindex the toast table as the first\nthing in this function; the toast table reindex will trigger CCI, and\nthat'd be bad if done before RelationGetIndexList().\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 26 Jul 2023 18:43:18 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 06:42:14PM -0700, Gurjeet Singh wrote:\n> On Wed, Jul 26, 2023 at 10:50 AM Nathan Bossart\n> <[email protected]> wrote:\n>> On Mon, Jul 10, 2023 at 09:35:05AM -0700, Gurjeet Singh wrote:\n>> > The code block movement involved slightly more thought and care than I\n>> > had previously imagined. As explained in comments in the patch, the\n>> > enumeration and suppression of indexes on the main table must happen\n>> > before any CommandCounterIncrement() call, hence the\n>> > reindex-the-toast-table-if-any code had to be placed after that\n>> > enumeration.\n>>\n>> Do we need to add another CCI after reindexing the TOAST table? It looks\n>> like we presently do so between reindexing each relation, including the\n>> TOAST table.\n> \n> Yes, we do need to do a CCI after reindex the relation's toast table.\n> But note that it is done by the recursive call to reindex_relation(),\n> right after it calls reindex_index().\n\n*facepalm*\n\nAh, I see it now.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Jul 2023 16:10:59 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n> On Wed, Jul 26, 2023 at 2:53 PM Gurjeet Singh <[email protected]> wrote:\n>> On Wed, Jul 26, 2023 at 10:50 AM Nathan Bossart\n>> <[email protected]> wrote:\n>> > On Mon, Jul 10, 2023 at 09:35:05AM -0700, Gurjeet Singh wrote:\n>> > + * This should be done after the suppression of the use of indexes (above),\n>> > + * because the recursive call to reindex_relation() below will invoke\n>> > + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n>> > + * on the table.\n>> >\n>> > I'm not following this. We've already obtained the list of index OIDs\n>> > before this point. Does this create problems when we try to open and lock\n>> > the relations? And if so, how?\n>>\n>> This comment is calling out the fact that there's a recursive call to\n>> reindex_relation() inside the 'if' code block, and that that\n>> reindex_relation() calls CCI. Hence this 'if' code block should _not_\n>> be placed before the the calls to RelationGetIndexList() and\n>> SetReindexPending(). Because if we do, then the CCI done by\n>> reindex_relation() will impact what RelationGetIndexList() sees.\n>>\n>> This is to match the expectation set for the\n>> REINDEX_REL_SUPPRESS_INDEX_USE flag.\n> \n> Given that the issue is already explained by the flag's comments above\n> the function, this comment paragraph in the patch may be considered\n> extraneous. Feel free to remove it, if you think so.\n> \n> I felt the need for that paragraph, because it doesn't feel obvious to\n> me as to why we can't simply reindex the toast table as the first\n> thing in this function; the toast table reindex will trigger CCI, and\n> that'd be bad if done before RelationGetIndexList().\n\nI see. I'd suggest referencing the comment above the function, but in\ngeneral I do think having a comment about this is appropriate.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Jul 2023 16:14:41 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Thu, Jul 27, 2023 at 04:14:41PM -0700, Nathan Bossart wrote:\n> On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n>> I felt the need for that paragraph, because it doesn't feel obvious to\n>> me as to why we can't simply reindex the toast table as the first\n>> thing in this function; the toast table reindex will trigger CCI, and\n>> that'd be bad if done before RelationGetIndexList().\n> \n> I see. I'd suggest referencing the comment above the function, but in\n> general I do think having a comment about this is appropriate.\n\n+ * This should be done after the suppression of the use of indexes (above),\n+ * because the recursive call to reindex_relation() below will invoke\n+ * CommandCounterIncrement(), which may prevent enumeration of the indexes\n+ * on the table.\n\nThis does not explain the reason why this would prevent the creation\nof a consistent index list fetched from the parent table, does it?\nWould some indexes be missing from what should be reindexed? Or some\nadded unnecessarily? Would that be that an incorrect list?\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 10:50:50 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 10:50:50AM +0900, Michael Paquier wrote:\n> On Thu, Jul 27, 2023 at 04:14:41PM -0700, Nathan Bossart wrote:\n>> On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n>>> I felt the need for that paragraph, because it doesn't feel obvious to\n>>> me as to why we can't simply reindex the toast table as the first\n>>> thing in this function; the toast table reindex will trigger CCI, and\n>>> that'd be bad if done before RelationGetIndexList().\n>> \n>> I see. I'd suggest referencing the comment above the function, but in\n>> general I do think having a comment about this is appropriate.\n> \n> + * This should be done after the suppression of the use of indexes (above),\n> + * because the recursive call to reindex_relation() below will invoke\n> + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n> + * on the table.\n> \n> This does not explain the reason why this would prevent the creation\n> of a consistent index list fetched from the parent table, does it?\n> Would some indexes be missing from what should be reindexed? Or some\n> added unnecessarily? Would that be that an incorrect list?\n\nIIUC the issue is that something (e.g., VACUUM FULL, CLUSTER) might've just\nrebuilt the heap, so if we CCI'd before gathering the list of indexes, the\nnew heap contents would become visible, and the indexes would be\ninconsistent with the heap. This is a problem when the relation in\nquestion is a system catalog that needs to be consulted to gather the list\nof indexes. To handle this, we avoid the CCI until after gathering the\nindexes so that the old heap contents appear valid and can be used as\nneeded. Once that is done, we mark the indexes as pending-rebuild and do a\nCCI, at which point the indexes become inconsistent with the heap. This\nbehavior appears to have been added by commit b9b8831.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 11:00:56 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
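A heavily simplified sketch of the ordering constraint argued over in the message above. This is not the actual reindex_relation() code from src/backend/catalog/index.c; the function and flag names (RelationGetIndexList(), SetReindexPending(), CommandCounterIncrement(), reindex_relation(), REINDEX_REL_SUPPRESS_INDEX_USE) come from the thread, while reindex_relation_sketch(), its parameter list, and the params variable are invented here purely for illustration.

```c
/*
 * Sketch only -- not the real implementation.  Shows why the toast-table
 * recursion must come after the index list has been collected when
 * REINDEX_REL_SUPPRESS_INDEX_USE is set.
 */
static void
reindex_relation_sketch(Relation rel, Oid toast_relid, int flags,
						ReindexParams *params)
{
	List	   *indexIds;

	/*
	 * Collect the index list first.  The caller may have just rewritten the
	 * heap (e.g. VACUUM FULL or CLUSTER) without making the rewrite visible,
	 * so catalog lookups here must still see the old, consistent contents.
	 */
	indexIds = RelationGetIndexList(rel);

	if (flags & REINDEX_REL_SUPPRESS_INDEX_USE)
	{
		/* Mark the indexes pending-rebuild, then make the new heap visible. */
		SetReindexPending(indexIds);
		CommandCounterIncrement();
	}

	/*
	 * Only now is it safe to recurse into the toast table: the recursive
	 * call performs its own CommandCounterIncrement() after each index,
	 * which would have broken the catalog lookups above had it run earlier.
	 */
	if (OidIsValid(toast_relid))
		reindex_relation(toast_relid, flags, params);

	/* ... then rebuild the main table's own indexes ... */
}
```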
{
"msg_contents": "On Fri, Jul 28, 2023 at 11:00:56AM -0700, Nathan Bossart wrote:\n> On Fri, Jul 28, 2023 at 10:50:50AM +0900, Michael Paquier wrote:\n>> On Thu, Jul 27, 2023 at 04:14:41PM -0700, Nathan Bossart wrote:\n>>> On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n>>>> I felt the need for that paragraph, because it doesn't feel obvious to\n>>>> me as to why we can't simply reindex the toast table as the first\n>>>> thing in this function; the toast table reindex will trigger CCI, and\n>>>> that'd be bad if done before RelationGetIndexList().\n>>> \n>>> I see. I'd suggest referencing the comment above the function, but in\n>>> general I do think having a comment about this is appropriate.\n>> \n>> + * This should be done after the suppression of the use of indexes (above),\n>> + * because the recursive call to reindex_relation() below will invoke\n>> + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n>> + * on the table.\n>> \n>> This does not explain the reason why this would prevent the creation\n>> of a consistent index list fetched from the parent table, does it?\n>> Would some indexes be missing from what should be reindexed? Or some\n>> added unnecessarily? Would that be that an incorrect list?\n> \n> IIUC the issue is that something (e.g., VACUUM FULL, CLUSTER) might've just\n> rebuilt the heap, so if we CCI'd before gathering the list of indexes, the\n> new heap contents would become visible, and the indexes would be\n> inconsistent with the heap. This is a problem when the relation in\n> question is a system catalog that needs to be consulted to gather the list\n> of indexes. To handle this, we avoid the CCI until after gathering the\n> indexes so that the old heap contents appear valid and can be used as\n> needed. Once that is done, we mark the indexes as pending-rebuild and do a\n> CCI, at which point the indexes become inconsistent with the heap. This\n> behavior appears to have been added by commit b9b8831.\n\nHow do we move this one forward? Michael and I provided some feedback\nabout the comment, but AFAICT this patch is in good shape otherwise.\nGurjeet, would you mind putting together a new version of the patch so that\nwe can close on this one?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 09:55:35 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 9:55 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Fri, Jul 28, 2023 at 11:00:56AM -0700, Nathan Bossart wrote:\n> > On Fri, Jul 28, 2023 at 10:50:50AM +0900, Michael Paquier wrote:\n> >> On Thu, Jul 27, 2023 at 04:14:41PM -0700, Nathan Bossart wrote:\n> >>> On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n> >>>> I felt the need for that paragraph, because it doesn't feel obvious to\n> >>>> me as to why we can't simply reindex the toast table as the first\n> >>>> thing in this function; the toast table reindex will trigger CCI, and\n> >>>> that'd be bad if done before RelationGetIndexList().\n> >>>\n> >>> I see. I'd suggest referencing the comment above the function, but in\n> >>> general I do think having a comment about this is appropriate.\n> >>\n> >> + * This should be done after the suppression of the use of indexes (above),\n> >> + * because the recursive call to reindex_relation() below will invoke\n> >> + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n> >> + * on the table.\n> >>\n> >> This does not explain the reason why this would prevent the creation\n> >> of a consistent index list fetched from the parent table, does it?\n> >> Would some indexes be missing from what should be reindexed? Or some\n> >> added unnecessarily? Would that be that an incorrect list?\n> >\n> > IIUC the issue is that something (e.g., VACUUM FULL, CLUSTER) might've just\n> > rebuilt the heap, so if we CCI'd before gathering the list of indexes, the\n> > new heap contents would become visible, and the indexes would be\n> > inconsistent with the heap. This is a problem when the relation in\n> > question is a system catalog that needs to be consulted to gather the list\n> > of indexes. To handle this, we avoid the CCI until after gathering the\n> > indexes so that the old heap contents appear valid and can be used as\n> > needed. Once that is done, we mark the indexes as pending-rebuild and do a\n> > CCI, at which point the indexes become inconsistent with the heap. This\n> > behavior appears to have been added by commit b9b8831.\n>\n> How do we move this one forward? Michael and I provided some feedback\n> about the comment, but AFAICT this patch is in good shape otherwise.\n> Gurjeet, would you mind putting together a new version of the patch so that\n> we can close on this one?\n\nPlease see attached v2 of the patch; no code changes since v1, just\ncomments are changed to describe the reason for behaviour and the\nplacement of code.\n\nI have tried to make the comment describe in more detail the condition\nit's trying to avoid. I've also referenced the function comments, as\nyou suggested, so that the reader can get more context about why the\ncode is placed _after_ certain other code.\n\nHopefully the comments are sufficiently descriptive. If you or another\ncommitter feels the need to change the comments any further, please\nfeel free to edit them as necessary.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 29 Nov 2023 13:43:46 -0800",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Thu, 30 Nov 2023 at 03:14, Gurjeet Singh <[email protected]> wrote:\n>\n> On Fri, Sep 1, 2023 at 9:55 AM Nathan Bossart <[email protected]> wrote:\n> >\n> > On Fri, Jul 28, 2023 at 11:00:56AM -0700, Nathan Bossart wrote:\n> > > On Fri, Jul 28, 2023 at 10:50:50AM +0900, Michael Paquier wrote:\n> > >> On Thu, Jul 27, 2023 at 04:14:41PM -0700, Nathan Bossart wrote:\n> > >>> On Wed, Jul 26, 2023 at 06:43:18PM -0700, Gurjeet Singh wrote:\n> > >>>> I felt the need for that paragraph, because it doesn't feel obvious to\n> > >>>> me as to why we can't simply reindex the toast table as the first\n> > >>>> thing in this function; the toast table reindex will trigger CCI, and\n> > >>>> that'd be bad if done before RelationGetIndexList().\n> > >>>\n> > >>> I see. I'd suggest referencing the comment above the function, but in\n> > >>> general I do think having a comment about this is appropriate.\n> > >>\n> > >> + * This should be done after the suppression of the use of indexes (above),\n> > >> + * because the recursive call to reindex_relation() below will invoke\n> > >> + * CommandCounterIncrement(), which may prevent enumeration of the indexes\n> > >> + * on the table.\n> > >>\n> > >> This does not explain the reason why this would prevent the creation\n> > >> of a consistent index list fetched from the parent table, does it?\n> > >> Would some indexes be missing from what should be reindexed? Or some\n> > >> added unnecessarily? Would that be that an incorrect list?\n> > >\n> > > IIUC the issue is that something (e.g., VACUUM FULL, CLUSTER) might've just\n> > > rebuilt the heap, so if we CCI'd before gathering the list of indexes, the\n> > > new heap contents would become visible, and the indexes would be\n> > > inconsistent with the heap. This is a problem when the relation in\n> > > question is a system catalog that needs to be consulted to gather the list\n> > > of indexes. To handle this, we avoid the CCI until after gathering the\n> > > indexes so that the old heap contents appear valid and can be used as\n> > > needed. Once that is done, we mark the indexes as pending-rebuild and do a\n> > > CCI, at which point the indexes become inconsistent with the heap. This\n> > > behavior appears to have been added by commit b9b8831.\n> >\n> > How do we move this one forward? Michael and I provided some feedback\n> > about the comment, but AFAICT this patch is in good shape otherwise.\n> > Gurjeet, would you mind putting together a new version of the patch so that\n> > we can close on this one?\n>\n> Please see attached v2 of the patch; no code changes since v1, just\n> comments are changed to describe the reason for behaviour and the\n> placement of code.\n>\n> I have tried to make the comment describe in more detail the condition\n> it's trying to avoid. I've also referenced the function comments, as\n> you suggested, so that the reader can get more context about why the\n> code is placed _after_ certain other code.\n>\n> Hopefully the comments are sufficiently descriptive. 
If you or another\n> committer feels the need to change the comments any further, please\n> feel free to edit them as necessary.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n06a66d87dbc7e06581af6765131ea250063fb4ac ===\n=== applying patch\n./v2-0001-Reindex-the-toast-table-if-any-before-the-main-ta.patch\npatching file src/backend/catalog/index.c\n...\nHunk #5 FAILED at 4001.\n1 out of 5 hunks FAILED -- saving rejects to file\nsrc/backend/catalog/index.c.rej\n\nPlease have a look and post an updated version.\n\n[1] - http://cfbot.cputube.org/patch_46_4443.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 08:22:49 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 08:22:49AM +0530, vignesh C wrote:\n> Please have a look and post an updated version.\n> \n> [1] - http://cfbot.cputube.org/patch_46_4443.log\n\nIt happens that I am at the origin of both the conflicts when applying\nthe patch and the delay in handling it in the CF app as I was\nregistered as a committer for it while the entry was marked as RfC, so\nthanks for the reminder. I have looked at it today with a fresh pair\nof eyes, and reworked a bit the comments before applying it on HEAD.\n--\nMichael",
"msg_date": "Fri, 26 Jan 2024 18:22:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: BUG #18016: REINDEX TABLE failure"
}
] |
[
{
"msg_contents": "Windows has support for some signals[0], like SIGTERM and SIGINT. SIGINT\nmust be handled with care on Windows since it is handled in a separate\nthread. SIGTERM however can be handled in a similar way to UNIX-like\nsystems. I audited a few pqsignal calls that were blocked by WIN32 to\nsee if they could become used, and made some adjustments. Definitely\nhoping for someone with more Windows knowledge to audit this.\n\nIn addition, I found that signal_cleanup() in pg_test_fsync.c was not\nusing signal-safe functions, so I went ahead and fixed those to use\ntheir signal-safe equivalents.\n\n[0]: https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/signal\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Thu, 06 Jul 2023 15:43:32 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up some signal usage mainly related to Windows"
},
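A small standalone illustration of what "signal-safe equivalents" means in the message above: the handler restricts itself to functions on the POSIX async-signal-safe list (write(), _exit()) instead of puts()/printf()/exit(). This is a generic example, not code from pg_test_fsync.c; the names cleanup_handler and the SIGINT wiring are assumptions made only for the demo.

```c
#include <signal.h>
#include <unistd.h>

/*
 * Only async-signal-safe functions may be called from a signal handler.
 * write() and _exit() are on the POSIX async-signal-safe list;
 * puts(), printf() and exit() are not.
 */
static void
cleanup_handler(int signo)
{
	ssize_t		rc;

	(void) signo;
	rc = write(STDOUT_FILENO, "\n", 1);	/* finish an incomplete output line */
	(void) rc;							/* silence -Wunused-result */
	_exit(1);
}

int
main(void)
{
	signal(SIGINT, cleanup_handler);
	for (;;)
		pause();						/* wait until a signal arrives */
}
```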
{
"msg_contents": "On 06.07.23 22:43, Tristan Partin wrote:\n> \t/* Finish incomplete line on stdout */\n> -\tputs(\"\");\n> -\texit(1);\n> +\twrite(STDOUT_FILENO, \"\", 1);\n> +\t_exit(1);\n\nputs() writes a newline, so it should probably be something like\n\n write(STDOUT_FILENO, \"\\n\", 1);\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 10:56:13 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed Jul 12, 2023 at 3:56 AM CDT, Peter Eisentraut wrote:\n> On 06.07.23 22:43, Tristan Partin wrote:\n> > \t/* Finish incomplete line on stdout */\n> > -\tputs(\"\");\n> > -\texit(1);\n> > +\twrite(STDOUT_FILENO, \"\", 1);\n> > +\t_exit(1);\n>\n> puts() writes a newline, so it should probably be something like\n>\n> write(STDOUT_FILENO, \"\\n\", 1);\n\nSilly mistake. Thanks. v2 attached.\n\nIt has come to my attention that STDOUT_FILENO might not be portable and\nfileno(3) isn't marked as signal-safe, so I have just used the raw 1 for\nstdout, which as far as I know is portable.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 12 Jul 2023 09:23:06 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On 12.07.23 16:23, Tristan Partin wrote:\n> It has come to my attention that STDOUT_FILENO might not be portable and\n> fileno(3) isn't marked as signal-safe, so I have just used the raw 1 for\n> stdout, which as far as I know is portable.\n\nWe do use STDOUT_FILENO elsewhere in the code, and there are even \nworkaround definitions for Windows, so it appears it is meant to be used.\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:31:23 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed Jul 12, 2023 at 9:31 AM CDT, Peter Eisentraut wrote:\n> On 12.07.23 16:23, Tristan Partin wrote:\n> > It has come to my attention that STDOUT_FILENO might not be portable and\n> > fileno(3) isn't marked as signal-safe, so I have just used the raw 1 for\n> > stdout, which as far as I know is portable.\n>\n> We do use STDOUT_FILENO elsewhere in the code, and there are even \n> workaround definitions for Windows, so it appears it is meant to be used.\n\nv3 is back to the original patch with newline being printed. Thanks.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 12 Jul 2023 09:35:58 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed Jul 12, 2023 at 9:35 AM CDT, Tristan Partin wrote:\n> On Wed Jul 12, 2023 at 9:31 AM CDT, Peter Eisentraut wrote:\n> > On 12.07.23 16:23, Tristan Partin wrote:\n> > > It has come to my attention that STDOUT_FILENO might not be portable and\n> > > fileno(3) isn't marked as signal-safe, so I have just used the raw 1 for\n> > > stdout, which as far as I know is portable.\n> >\n> > We do use STDOUT_FILENO elsewhere in the code, and there are even \n> > workaround definitions for Windows, so it appears it is meant to be used.\n>\n> v3 is back to the original patch with newline being printed. Thanks.\n\nPeter, did you have anything more to say about patch 1 in this series? \nThinking about patch 2 more, not sure it should be considered until \nI setup a Windows VM to do some testing, or unless some benevolent \nWindows user wants to look at it and test it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 01 Dec 2023 16:10:26 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On 01.12.23 23:10, Tristan Partin wrote:\n> On Wed Jul 12, 2023 at 9:35 AM CDT, Tristan Partin wrote:\n>> On Wed Jul 12, 2023 at 9:31 AM CDT, Peter Eisentraut wrote:\n>> > On 12.07.23 16:23, Tristan Partin wrote:\n>> > > It has come to my attention that STDOUT_FILENO might not be \n>> portable and\n>> > > fileno(3) isn't marked as signal-safe, so I have just used the raw \n>> 1 for\n>> > > stdout, which as far as I know is portable.\n>> >\n>> > We do use STDOUT_FILENO elsewhere in the code, and there are even > \n>> workaround definitions for Windows, so it appears it is meant to be used.\n>>\n>> v3 is back to the original patch with newline being printed. Thanks.\n> \n> Peter, did you have anything more to say about patch 1 in this series?\n\nI think that patch is correct. However, I wonder whether we even need \nthat signal handler. We could just delete the file immediately after \nopening it; then we don't need to worry about deleting it later. On \nWindows, we could use O_TEMPORARY instead.\n\n> Thinking about patch 2 more, not sure it should be considered until I \n> setup a Windows VM to do some testing, or unless some benevolent Windows \n> user wants to look at it and test it.\n\nYeah, that should probably be tested interactively by someone.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 16:22:31 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Mon Dec 4, 2023 at 9:22 AM CST, Peter Eisentraut wrote:\n> On 01.12.23 23:10, Tristan Partin wrote:\n> > On Wed Jul 12, 2023 at 9:35 AM CDT, Tristan Partin wrote:\n> >> On Wed Jul 12, 2023 at 9:31 AM CDT, Peter Eisentraut wrote:\n> >> > On 12.07.23 16:23, Tristan Partin wrote:\n> >> > > It has come to my attention that STDOUT_FILENO might not be \n> >> portable and\n> >> > > fileno(3) isn't marked as signal-safe, so I have just used the raw \n> >> 1 for\n> >> > > stdout, which as far as I know is portable.\n> >> >\n> >> > We do use STDOUT_FILENO elsewhere in the code, and there are even > \n> >> workaround definitions for Windows, so it appears it is meant to be used.\n> >>\n> >> v3 is back to the original patch with newline being printed. Thanks.\n> > \n> > Peter, did you have anything more to say about patch 1 in this series?\n>\n> I think that patch is correct. However, I wonder whether we even need \n> that signal handler. We could just delete the file immediately after \n> opening it; then we don't need to worry about deleting it later. On \n> Windows, we could use O_TEMPORARY instead.\n\nI don't think that would work because the same file is opened and closed \nmultiple times throughout the course of the program. We first open the \nfile in test_open() which sets needs_unlink to true, so for the rest of \nthe program we need to unlink the file, but also continue to be able to \nopen it. Here is the unlink(2) description for context:\n\n> unlink() deletes a name from the filesystem. If that name was the \n> last link to a file and no processes have the file open, the file is \n> deleted and the space it was using is made available for reuse.\n\nGiven what you've suggested, we could never reopen the file after the \nunlink(2) call.\n\nThis is my first time reading this particular code, so maybe you have \ncome to a different conclusion.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 04 Dec 2023 11:20:56 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On 04.12.23 18:20, Tristan Partin wrote:\n> On Mon Dec 4, 2023 at 9:22 AM CST, Peter Eisentraut wrote:\n>> On 01.12.23 23:10, Tristan Partin wrote:\n>> > On Wed Jul 12, 2023 at 9:35 AM CDT, Tristan Partin wrote:\n>> >> On Wed Jul 12, 2023 at 9:31 AM CDT, Peter Eisentraut wrote:\n>> >> > On 12.07.23 16:23, Tristan Partin wrote:\n>> >> > > It has come to my attention that STDOUT_FILENO might not be >> \n>> portable and\n>> >> > > fileno(3) isn't marked as signal-safe, so I have just used the \n>> raw >> 1 for\n>> >> > > stdout, which as far as I know is portable.\n>> >> >\n>> >> > We do use STDOUT_FILENO elsewhere in the code, and there are even \n>> > >> workaround definitions for Windows, so it appears it is meant to \n>> be used.\n>> >>\n>> >> v3 is back to the original patch with newline being printed. Thanks.\n>> > > Peter, did you have anything more to say about patch 1 in this \n>> series?\n>>\n>> I think that patch is correct. However, I wonder whether we even need \n>> that signal handler. We could just delete the file immediately after \n>> opening it; then we don't need to worry about deleting it later. On \n>> Windows, we could use O_TEMPORARY instead.\n> \n> I don't think that would work because the same file is opened and closed \n> multiple times throughout the course of the program.\n\nOk, I have committed your 0001 patch.\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 10:23:52 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 10:23:52AM +0100, Peter Eisentraut wrote:\n> Ok, I have committed your 0001 patch.\n\nMy compiler is unhappy about this one:\n\n../postgresql/src/bin/pg_test_fsync/pg_test_fsync.c:605:2: error: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Werror=unused-result]\n 605 | write(STDOUT_FILENO, \"\\n\", 1);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nI think we need to do something like the following, which is similar to\nwhat was done in aa90e148ca7, 27314d32a88, and 6c72a28e5ce.\n\ndiff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c\nindex f109aa5717..0684f4bc54 100644\n--- a/src/bin/pg_test_fsync/pg_test_fsync.c\n+++ b/src/bin/pg_test_fsync/pg_test_fsync.c\n@@ -598,11 +598,14 @@ test_non_sync(void)\n static void\n signal_cleanup(SIGNAL_ARGS)\n {\n+ int rc;\n+\n /* Delete the file if it exists. Ignore errors */\n if (needs_unlink)\n unlink(filename);\n /* Finish incomplete line on stdout */\n- write(STDOUT_FILENO, \"\\n\", 1);\n+ rc = write(STDOUT_FILENO, \"\\n\", 1);\n+ (void) rc;\n _exit(1);\n }\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 10:18:39 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed Dec 6, 2023 at 10:18 AM CST, Nathan Bossart wrote:\n> On Wed, Dec 06, 2023 at 10:23:52AM +0100, Peter Eisentraut wrote:\n> > Ok, I have committed your 0001 patch.\n>\n> My compiler is unhappy about this one:\n>\n> ../postgresql/src/bin/pg_test_fsync/pg_test_fsync.c:605:2: error: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Werror=unused-result]\n> 605 | write(STDOUT_FILENO, \"\\n\", 1);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSome glibc source:\n\n> /* If fortification mode, we warn about unused results of certain\n> function calls which can lead to problems. */\n> #if __GNUC_PREREQ (3,4) || __glibc_has_attribute (__warn_unused_result__)\n> # define __attribute_warn_unused_result__ \\\n> __attribute__ ((__warn_unused_result__))\n> # if defined __USE_FORTIFY_LEVEL && __USE_FORTIFY_LEVEL > 0\n> # define __wur __attribute_warn_unused_result__\n> # endif\n> #else\n> # define __attribute_warn_unused_result__ /* empty */\n> #endif\n> #ifndef __wur\n> # define __wur /* Ignore */\n> #endif\n\n> extern ssize_t write (int __fd, const void *__buf, size_t __n) __wur\n> __attr_access ((__read_only__, 2, 3));\n\nAccording to my setup, I am hitting the /* Ignore */ variant of __wur. \nI am guessing that Fedora doesn't add fortification to the default \nCFLAGS. What distro are you using? But yes, something like what you \nproposed sounds good to me. Sorry for leaving this out!\n\nMakes me wonder if setting -D_FORTIFY_SOURCE=2 in debug builds at least \nwould make sense, if not all builds. According to the OpenSSF[0], level \n2 is only supposed to impact runtime performance by 0.1%.\n\n[0]: https://best.openssf.org/Compiler-Hardening-Guides/Compiler-Options-Hardening-Guide-for-C-and-C++.html#performance-implications\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 06 Dec 2023 10:28:49 -0600",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 10:28:49AM -0600, Tristan Partin wrote:\n> According to my setup, I am hitting the /* Ignore */ variant of __wur. I am\n> guessing that Fedora doesn't add fortification to the default CFLAGS. What\n> distro are you using? But yes, something like what you proposed sounds good\n> to me. Sorry for leaving this out!\n\nThis was on an Ubuntu LTS. I always build with -Werror during development,\ntoo.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 10:37:10 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On 06.12.23 17:18, Nathan Bossart wrote:\n> On Wed, Dec 06, 2023 at 10:23:52AM +0100, Peter Eisentraut wrote:\n>> Ok, I have committed your 0001 patch.\n> \n> My compiler is unhappy about this one:\n> \n> ../postgresql/src/bin/pg_test_fsync/pg_test_fsync.c:605:2: error: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Werror=unused-result]\n> 605 | write(STDOUT_FILENO, \"\\n\", 1);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> I think we need to do something like the following, which is similar to\n> what was done in aa90e148ca7, 27314d32a88, and 6c72a28e5ce.\n> \n> diff --git a/src/bin/pg_test_fsync/pg_test_fsync.c b/src/bin/pg_test_fsync/pg_test_fsync.c\n> index f109aa5717..0684f4bc54 100644\n> --- a/src/bin/pg_test_fsync/pg_test_fsync.c\n> +++ b/src/bin/pg_test_fsync/pg_test_fsync.c\n> @@ -598,11 +598,14 @@ test_non_sync(void)\n> static void\n> signal_cleanup(SIGNAL_ARGS)\n> {\n> + int rc;\n> +\n> /* Delete the file if it exists. Ignore errors */\n> if (needs_unlink)\n> unlink(filename);\n> /* Finish incomplete line on stdout */\n> - write(STDOUT_FILENO, \"\\n\", 1);\n> + rc = write(STDOUT_FILENO, \"\\n\", 1);\n> + (void) rc;\n> _exit(1);\n> }\n\nMakes sense. Can you commit that?\n\n\n\n",
"msg_date": "Wed, 6 Dec 2023 18:27:04 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 06:27:04PM +0100, Peter Eisentraut wrote:\n> Makes sense. Can you commit that?\n\nYes, I will do so shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 11:30:02 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Wed, Dec 06, 2023 at 11:30:02AM -0600, Nathan Bossart wrote:\n> On Wed, Dec 06, 2023 at 06:27:04PM +0100, Peter Eisentraut wrote:\n>> Makes sense. Can you commit that?\n> \n> Yes, I will do so shortly.\n\nCommitted. Apologies for the delay.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 17:20:18 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
},
{
"msg_contents": "On Thu, 7 Dec 2023 at 04:50, Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Dec 06, 2023 at 11:30:02AM -0600, Nathan Bossart wrote:\n> > On Wed, Dec 06, 2023 at 06:27:04PM +0100, Peter Eisentraut wrote:\n> >> Makes sense. Can you commit that?\n> >\n> > Yes, I will do so shortly.\n>\n> Committed. Apologies for the delay.\n\nI have marked the commitfest entry as committed as the patch has been committed.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 11 Jan 2024 16:37:48 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some signal usage mainly related to Windows"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nWhen I try to change log_destination using ALTER SYSTEM with the wrong value,\nit complains of the \"Unrecognized key word\" without available values. This\npatch tries to add a hint message that provides available values for\nlog_destination. Any thoughts?\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Fri, 07 Jul 2023 13:06:30 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add hint message for check_log_destination()"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 1:06 PM Japin Li <[email protected]> wrote:\n>\n>\n> Hi, hackers\n>\n> When I try to change log_destination using ALTER SYSTEM with the wrong value,\n> it complains of the \"Unrecognized key word\" without available values. This\n> patch tries to add a hint message that provides available values for\n> log_destination. Any thoughts?\n>\n> --\n> Regrads,\n> Japin Li.\n>\n\nselect * from pg_settings where name ~* 'log.*destin*' \\gx\n\nshort_desc | Sets the destination for server log output.\nextra_desc | Valid values are combinations of \"stderr\", \"syslog\",\n\"csvlog\", \"jsonlog\", and \"eventlog\", depending on the platform.\n\nyou can just reuse extra_desc in the pg_settings (view) column ?\n\n\n",
"msg_date": "Fri, 7 Jul 2023 14:46:48 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Fri, 07 Jul 2023 at 14:46, jian he <[email protected]> wrote:\n> On Fri, Jul 7, 2023 at 1:06 PM Japin Li <[email protected]> wrote:\n>>\n>>\n>> Hi, hackers\n>>\n>> When I try to change log_destination using ALTER SYSTEM with the wrong value,\n>> it complains of the \"Unrecognized key word\" without available values. This\n>> patch tries to add a hint message that provides available values for\n>> log_destination. Any thoughts?\n>>\n>\n> select * from pg_settings where name ~* 'log.*destin*' \\gx\n>\n> short_desc | Sets the destination for server log output.\n> extra_desc | Valid values are combinations of \"stderr\", \"syslog\",\n> \"csvlog\", \"jsonlog\", and \"eventlog\", depending on the platform.\n>\n> you can just reuse extra_desc in the pg_settings (view) column ?\n\nThanks for your review!\n\nYeah, the description of extra_desc is more accurate. Updated.\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Fri, 07 Jul 2023 15:52:51 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 4:53 PM Japin Li <[email protected]> wrote:\n>\n>\n> On Fri, 07 Jul 2023 at 14:46, jian he <[email protected]> wrote:\n> > On Fri, Jul 7, 2023 at 1:06 PM Japin Li <[email protected]> wrote:\n> >>\n> >>\n> >> Hi, hackers\n> >>\n> >> When I try to change log_destination using ALTER SYSTEM with the wrong value,\n> >> it complains of the \"Unrecognized key word\" without available values. This\n> >> patch tries to add a hint message that provides available values for\n> >> log_destination. Any thoughts?\n\n+1\n\n+ appendStringInfo(&errhint, \"\\\"stderr\\\"\");\n+#ifdef HAVE_SYSLOG\n+ appendStringInfo(&errhint, \", \\\"syslog\\\"\");\n+#endif\n+#ifdef WIN32\n+ appendStringInfo(&errhint, \", \\\"eventlog\\\"\");\n+#endif\n+ appendStringInfo(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n\nI think using appendStringInfoString() is a bit more natural and faster.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 17:21:04 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Fri, 07 Jul 2023 at 16:21, Masahiko Sawada <[email protected]> wrote:\n> On Fri, Jul 7, 2023 at 4:53 PM Japin Li <[email protected]> wrote:\n>>\n>>\n>> On Fri, 07 Jul 2023 at 14:46, jian he <[email protected]> wrote:\n>> > On Fri, Jul 7, 2023 at 1:06 PM Japin Li <[email protected]> wrote:\n>> >>\n>> >>\n>> >> Hi, hackers\n>> >>\n>> >> When I try to change log_destination using ALTER SYSTEM with the wrong value,\n>> >> it complains of the \"Unrecognized key word\" without available values. This\n>> >> patch tries to add a hint message that provides available values for\n>> >> log_destination. Any thoughts?\n>\n> +1\n>\n> + appendStringInfo(&errhint, \"\\\"stderr\\\"\");\n> +#ifdef HAVE_SYSLOG\n> + appendStringInfo(&errhint, \", \\\"syslog\\\"\");\n> +#endif\n> +#ifdef WIN32\n> + appendStringInfo(&errhint, \", \\\"eventlog\\\"\");\n> +#endif\n> + appendStringInfo(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n>\n> I think using appendStringInfoString() is a bit more natural and faster.\n>\n\nThanks for your review! Fixed as per your suggession.\n\n\n\n\n-- \nRegrads,\nJapin Li.",
"msg_date": "Fri, 07 Jul 2023 19:23:47 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n> +\t\t\tappendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n> +#ifdef HAVE_SYSLOG\n> +\t\t\tappendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n> +#endif\n> +#ifdef WIN32\n> +\t\t\tappendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n> +#endif\n> +\t\t\tappendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n\nHmm. Is that OK as a translatable string?\n--\nMichael",
"msg_date": "Sat, 8 Jul 2023 13:48:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "\nOn Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n> On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n>> +\t\t\tappendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n>> +#ifdef HAVE_SYSLOG\n>> +\t\t\tappendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n>> +#endif\n>> +#ifdef WIN32\n>> +\t\t\tappendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n>> +#endif\n>> +\t\t\tappendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n>\n> Hmm. Is that OK as a translatable string?\n\n\nSorry for the late reply! I'm not sure. How can I know whether it is translatable?\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Mon, 10 Jul 2023 09:04:42 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in \n> \n> On Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n> > On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n> >> +\t\t\tappendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n> >> +#ifdef HAVE_SYSLOG\n> >> +\t\t\tappendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n> >> +#endif\n> >> +#ifdef WIN32\n> >> +\t\t\tappendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n> >> +#endif\n> >> +\t\t\tappendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n> >\n> > Hmm. Is that OK as a translatable string?\n> \n> \n> Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n\nAt the very least, we can't generate comma-separated lists\nprogramatically because punctuation marks vary across languages.\n\nOne potential approach could involve defining the message for every\npotential combination, in full length.\n\nHonestly, I'm not sold on the idea that we need to exhaust ourselves\nproviding an exhaustive list of usable keywords for users here. I\nbelieve that it is unlikely that these keywords will be used in\ndifferent combinations each time without looking at the\ndocumentation. On top of that, consider \"csvlog\" as an example, -- it\ndoesn't work as expected if logging_collector is off. Although this is\ndocumented, we don't give any warnings at startup. This seems like a\nbigger issue than the unusable keywords. (I don't mean to suggest to\nfix this, as usual.)\n\nIn short, I think a simple message like '\"xxx\" cannot be used in this\nbuild' should suffice for keywords defined but unusable, and we should\nstick with \"unknown\" for the undefined ones.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 10 Jul 2023 14:07:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 02:07:09PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in \n>> Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n\nPer the documentation:\nhttps://www.postgresql.org/docs/devel/nls-programmer.html#NLS-GUIDELINES\n\nNow, if you want to look at the shape of the messages, you could also\nrun something like a `make init-po` and look at the messages generated\nin a .pot file.\n\n> Honestly, I'm not sold on the idea that we need to exhaust ourselves\n> providing an exhaustive list of usable keywords for users here. I\n> believe that it is unlikely that these keywords will be used in\n> different combinations each time without looking at the\n> documentation. On top of that, consider \"csvlog\" as an example, -- it\n> doesn't work as expected if logging_collector is off. Although this is\n> documented, we don't give any warnings at startup. This seems like a\n> bigger issue than the unusable keywords. (I don't mean to suggest to\n> fix this, as usual.)\n> \n> In short, I think a simple message like '\"xxx\" cannot be used in this\n> build' should suffice for keywords defined but unusable, and we should\n> stick with \"unknown\" for the undefined ones.\n\nWhich is roughly what the existing GUC_check_errdetail() does as well,\nbut you indeed lose a bit of context because the option wanted is not\nbuilt. I am not convinced that there is something to change here.\n--\nMichael",
"msg_date": "Mon, 10 Jul 2023 14:17:39 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 2:07 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in\n> >\n> > On Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n> > > On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n> > >> + appendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n> > >> +#ifdef HAVE_SYSLOG\n> > >> + appendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n> > >> +#endif\n> > >> +#ifdef WIN32\n> > >> + appendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n> > >> +#endif\n> > >> + appendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n> > >\n> > > Hmm. Is that OK as a translatable string?\n\nIt seems okay to me but needs to be checked.\n\n> >\n> >\n> > Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n>\n> At the very least, we can't generate comma-separated lists\n> programatically because punctuation marks vary across languages.\n>\n> One potential approach could involve defining the message for every\n> potential combination, in full length.\n\nDon't we generate a comma-separated list for an error hint of an enum\nparameter? For example, to generate the following error hint:\n\n=# alter system set client_min_messages = 'aaa';\nERROR: invalid value for parameter \"client_min_messages\": \"aaa\"\nHINT: Available values: debug5, debug4, debug3, debug2, debug1, log,\nnotice, warning, error.\n\nwe use the comma-separated generated by config_enum_get_options() and\ndo ereport() like:\n\n ereport(elevel,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n name, value),\n hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n\nIMO log_destination is a string GUC parameter but its value is the\nlist of enums. So it makes sense to me to add a hint message like what\nwe do for enum parameters in case where the user mistypes a wrong\nvalue. I'm not sure why the proposed patch needs to quote the usable\nvalues, though. A similar type of GUC parameter is debug_io_direct.\nBut I'm not sure we need a hint message for it too as it's a developer\noption.\n\n> On top of that, consider \"csvlog\" as an example, -- it\n> doesn't work as expected if logging_collector is off. Although this is\n> documented, we don't give any warnings at startup. This seems like a\n> bigger issue than the unusable keywords. (I don't mean to suggest to\n> fix this, as usual.)\n\nYes, but I think it's a separate problem.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:23:27 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "\nOn Mon, 10 Jul 2023 at 14:23, Masahiko Sawada <[email protected]> wrote:\n> On Mon, Jul 10, 2023 at 2:07 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n>>\n>> At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in\n>> >\n>> > On Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n>> > > On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n>> > >> + appendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n>> > >> +#ifdef HAVE_SYSLOG\n>> > >> + appendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n>> > >> +#endif\n>> > >> +#ifdef WIN32\n>> > >> + appendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n>> > >> +#endif\n>> > >> + appendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n>> > >\n>> > > Hmm. Is that OK as a translatable string?\n>\n> It seems okay to me but needs to be checked.\n>\n>> >\n>> >\n>> > Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n>>\n>> At the very least, we can't generate comma-separated lists\n>> programatically because punctuation marks vary across languages.\n>>\n>> One potential approach could involve defining the message for every\n>> potential combination, in full length.\n>\n> Don't we generate a comma-separated list for an error hint of an enum\n> parameter? For example, to generate the following error hint:\n>\n> =# alter system set client_min_messages = 'aaa';\n> ERROR: invalid value for parameter \"client_min_messages\": \"aaa\"\n> HINT: Available values: debug5, debug4, debug3, debug2, debug1, log,\n> notice, warning, error.\n>\n> we use the comma-separated generated by config_enum_get_options() and\n> do ereport() like:\n>\n> ereport(elevel,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n> name, value),\n> hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n\n> IMO log_destination is a string GUC parameter but its value is the\n> list of enums. So it makes sense to me to add a hint message like what\n> we do for enum parameters in case where the user mistypes a wrong\n> value. I'm not sure why the proposed patch needs to quote the usable\n> values, though.\n\nI borrowed the description from pg_settings extra_desc. In my first patch,\nI used the hint message line enum parameter, however, since it might be a\ncombination of multiple log destinations, so, I update the patch using\nextra_desc.\n\n-- \nRegrads,\nJapin Li.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:24:35 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 10:24 AM Japin Li <[email protected]> wrote:\n>\n>\n> On Mon, 10 Jul 2023 at 14:23, Masahiko Sawada <[email protected]> wrote:\n> > On Mon, Jul 10, 2023 at 2:07 PM Kyotaro Horiguchi\n> > <[email protected]> wrote:\n> >>\n> >> At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in\n> >> >\n> >> > On Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n> >> > > On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n> >> > >> + appendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n> >> > >> +#ifdef HAVE_SYSLOG\n> >> > >> + appendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n> >> > >> +#endif\n> >> > >> +#ifdef WIN32\n> >> > >> + appendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n> >> > >> +#endif\n> >> > >> + appendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n> >> > >\n> >> > > Hmm. Is that OK as a translatable string?\n> >\n> > It seems okay to me but needs to be checked.\n> >\n> >> >\n> >> >\n> >> > Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n> >>\n> >> At the very least, we can't generate comma-separated lists\n> >> programatically because punctuation marks vary across languages.\n> >>\n> >> One potential approach could involve defining the message for every\n> >> potential combination, in full length.\n> >\n> > Don't we generate a comma-separated list for an error hint of an enum\n> > parameter? For example, to generate the following error hint:\n> >\n> > =# alter system set client_min_messages = 'aaa';\n> > ERROR: invalid value for parameter \"client_min_messages\": \"aaa\"\n> > HINT: Available values: debug5, debug4, debug3, debug2, debug1, log,\n> > notice, warning, error.\n> >\n> > we use the comma-separated generated by config_enum_get_options() and\n> > do ereport() like:\n> >\n> > ereport(elevel,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n> > name, value),\n> > hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n>\n> > IMO log_destination is a string GUC parameter but its value is the\n> > list of enums. So it makes sense to me to add a hint message like what\n> > we do for enum parameters in case where the user mistypes a wrong\n> > value. I'm not sure why the proposed patch needs to quote the usable\n> > values, though.\n>\n> I borrowed the description from pg_settings extra_desc. In my first patch,\n> I used the hint message line enum parameter, however, since it might be a\n> combination of multiple log destinations, so, I update the patch using\n> extra_desc.\n\nI agree to use description from pg_settings extra_desc, but it seems\nto be better not to quote each available value like we do for enum\nparameter cases. That is, the hint message would be like:\n\n=# alter system set log_destination to 'xxx';\nERROR: invalid value for parameter \"log_destination\": \"xxx\"\nDETAIL: Unrecognized key word: \"xxx\".\nHINT: Valid values are combinations of stderr, syslog, csvlog, and jsonlog.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 17:19:26 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add hint message for check_log_destination()"
},
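A rough sketch, for illustration only, of how the hint wording agreed on above could be attached from the GUC check hook. This is not the committed patch and the surrounding keyword-parsing logic of check_log_destination() is omitted; GUC_check_errdetail()/GUC_check_errhint() are the existing check-hook reporting macros, and tok here simply stands for the unrecognized keyword being reported.

```c
/* Sketch of the hint-building part only; the real patch is in this thread. */
StringInfoData errhint;

initStringInfo(&errhint);
appendStringInfoString(&errhint, "stderr");
#ifdef HAVE_SYSLOG
appendStringInfoString(&errhint, ", syslog");
#endif
#ifdef WIN32
appendStringInfoString(&errhint, ", eventlog");
#endif
appendStringInfoString(&errhint, ", csvlog, and jsonlog");

GUC_check_errdetail("Unrecognized key word: \"%s\".", tok);
GUC_check_errhint("Valid values are combinations of %s.", errhint.data);
pfree(errhint.data);
return false;
```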
{
"msg_contents": "On Thu, 13 Jul 2023 at 16:19, Masahiko Sawada <[email protected]> wrote:\n> On Tue, Jul 11, 2023 at 10:24 AM Japin Li <[email protected]> wrote:\n>>\n>>\n>> On Mon, 10 Jul 2023 at 14:23, Masahiko Sawada <[email protected]> wrote:\n>> > On Mon, Jul 10, 2023 at 2:07 PM Kyotaro Horiguchi\n>> > <[email protected]> wrote:\n>> >>\n>> >> At Mon, 10 Jul 2023 09:04:42 +0800, Japin Li <[email protected]> wrote in\n>> >> >\n>> >> > On Sat, 08 Jul 2023 at 12:48, Michael Paquier <[email protected]> wrote:\n>> >> > > On Fri, Jul 07, 2023 at 07:23:47PM +0800, Japin Li wrote:\n>> >> > >> + appendStringInfoString(&errhint, \"\\\"stderr\\\"\");\n>> >> > >> +#ifdef HAVE_SYSLOG\n>> >> > >> + appendStringInfoString(&errhint, \", \\\"syslog\\\"\");\n>> >> > >> +#endif\n>> >> > >> +#ifdef WIN32\n>> >> > >> + appendStringInfoString(&errhint, \", \\\"eventlog\\\"\");\n>> >> > >> +#endif\n>> >> > >> + appendStringInfoString(&errhint, \", \\\"csvlog\\\", and \\\"jsonlog\\\"\");\n>> >> > >\n>> >> > > Hmm. Is that OK as a translatable string?\n>> >\n>> > It seems okay to me but needs to be checked.\n>> >\n>> >> >\n>> >> >\n>> >> > Sorry for the late reply! I'm not sure. How can I know whether it is translatable?\n>> >>\n>> >> At the very least, we can't generate comma-separated lists\n>> >> programatically because punctuation marks vary across languages.\n>> >>\n>> >> One potential approach could involve defining the message for every\n>> >> potential combination, in full length.\n>> >\n>> > Don't we generate a comma-separated list for an error hint of an enum\n>> > parameter? For example, to generate the following error hint:\n>> >\n>> > =# alter system set client_min_messages = 'aaa';\n>> > ERROR: invalid value for parameter \"client_min_messages\": \"aaa\"\n>> > HINT: Available values: debug5, debug4, debug3, debug2, debug1, log,\n>> > notice, warning, error.\n>> >\n>> > we use the comma-separated generated by config_enum_get_options() and\n>> > do ereport() like:\n>> >\n>> > ereport(elevel,\n>> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> > errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n>> > name, value),\n>> > hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n>>\n>> > IMO log_destination is a string GUC parameter but its value is the\n>> > list of enums. So it makes sense to me to add a hint message like what\n>> > we do for enum parameters in case where the user mistypes a wrong\n>> > value. I'm not sure why the proposed patch needs to quote the usable\n>> > values, though.\n>>\n>> I borrowed the description from pg_settings extra_desc. In my first patch,\n>> I used the hint message line enum parameter, however, since it might be a\n>> combination of multiple log destinations, so, I update the patch using\n>> extra_desc.\n>\n> I agree to use description from pg_settings extra_desc, but it seems\n> to be better not to quote each available value like we do for enum\n> parameter cases. That is, the hint message would be like:\n>\n> =# alter system set log_destination to 'xxx';\n> ERROR: invalid value for parameter \"log_destination\": \"xxx\"\n> DETAIL: Unrecognized key word: \"xxx\".\n> HINT: Valid values are combinations of stderr, syslog, csvlog, and jsonlog.\n>\n\nAgreed. Fixed as per your suggestions.\n\n-- \nRegrads,\nJapin Li",
"msg_date": "Fri, 14 Jul 2023 09:29:42 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add hint message for check_log_destination()"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nWhile analyzing -Wclobbered warnings from gcc we found a true one in \nPostgresMain():\n\npostgres.c: In function ‘PostgresMain’:\npostgres.c:4118:25: warning: variable \n‘idle_in_transaction_timeout_enabled’ might be clobbered by ‘longjmp’ or \n‘vfork’ [-Wclobbered]\n 4118 | bool idle_in_transaction_timeout_enabled = \nfalse;\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\npostgres.c:4119:25: warning: variable ‘idle_session_timeout_enabled’ \nmight be clobbered by ‘longjmp’ or ‘vfork’ [-Wclobbered]\n 4119 | bool idle_session_timeout_enabled = false;\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThese variables must be declared volatile, because they are read after \nlongjmp(). send_ready_for_query declared there is volatile.\n\nWithout volatile, these variables are kept in registers and restored by \nlongjump(). I think, this is harmless because the error handling code \ncalls disable_all_timeouts() anyway. But strictly speaking the code is \ninvoking undefined behavior by reading those variables after longjmp(), \nso it's worth fixing. And for consistency with send_ready_for_query too. \nI believe, making them volatile doesn't affect performance in any way.\n\nI also moved firstchar's declaration inside the loop where it's used, to \nmake it clear that this variable needn't be volatile and is not \npreserved after longjmp().\n\nBest regards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/",
"msg_date": "Fri, 7 Jul 2023 15:13:14 +0300",
"msg_from": "Sergey Shinderuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "gcc -Wclobbered in PostgresMain"
},
{
"msg_contents": "Sergey Shinderuk <[email protected]> writes:\n> While analyzing -Wclobbered warnings from gcc we found a true one in \n> PostgresMain():\n> ...\n> These variables must be declared volatile, because they are read after \n> longjmp(). send_ready_for_query declared there is volatile.\n\nYeah, you're on to something there.\n\n> Without volatile, these variables are kept in registers and restored by \n> longjump(). I think, this is harmless because the error handling code \n> calls disable_all_timeouts() anyway.\n\nHmm. So what could happen (if these *aren't* in registers) is that we\nmight later uselessly call disable_timeout to get rid of timeouts that\nare long gone anyway. While that's not terribly expensive, it's not\ngreat either. What we ought to be doing is resetting these two flags\nafter the disable_all_timeouts call.\n\nHaving done that, it wouldn't really be necessary to mark these\nas volatile. I kept that marking anyway for consistency with \nsend_ready_for_query, but perhaps we shouldn't?\n\n> I also moved firstchar's declaration inside the loop where it's used, to \n> make it clear that this variable needn't be volatile and is not \n> preserved after longjmp().\n\nGood idea, but then why not the same for input_message? It's fully\nreinitialized each time through the loop, too.\n\nIn short, something like the attached, except I'm not totally sold\non changing the volatility of the timeout flags.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 08 Jul 2023 11:11:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc -Wclobbered in PostgresMain"
},
{
"msg_contents": "Hello, Tom,\n\n\nOn 08.07.2023 18:11, Tom Lane wrote:\n> What we ought to be doing is resetting these two flags\n> after the disable_all_timeouts call.\n\n\nOops, I missed that.\n\n\n> Having done that, it wouldn't really be necessary to mark these\n> as volatile. I kept that marking anyway for consistency with\n> send_ready_for_query, but perhaps we shouldn't?\n\n\nI don't know. Maybe marking them volatile is more future proof. Not sure.\n\n\n>> I also moved firstchar's declaration inside the loop where it's used, to\n>> make it clear that this variable needn't be volatile and is not\n>> preserved after longjmp().\n> \n> Good idea, but then why not the same for input_message? It's fully\n> reinitialized each time through the loop, too.\n\n\nYeah, that's better.\n\n\n> In short, something like the attached, except I'm not totally sold\n> on changing the volatility of the timeout flags.\n\nLooks good to me.\n\n\nThank you.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 12:39:52 +0300",
"msg_from": "Sergey Shinderuk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: gcc -Wclobbered in PostgresMain"
},
{
"msg_contents": "Sergey Shinderuk <[email protected]> writes:\n> On 08.07.2023 18:11, Tom Lane wrote:\n>> Having done that, it wouldn't really be necessary to mark these\n>> as volatile. I kept that marking anyway for consistency with\n>> send_ready_for_query, but perhaps we shouldn't?\n\n> I don't know. Maybe marking them volatile is more future proof. Not sure.\n\nYeah, after sleeping on it, it seems best to have a policy that all\nvariables declared in that place are volatile. Even if there's no bug\nnow, not having volatile creates a risk of surprising behavior after\nfuture changes. Pushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jul 2023 12:17:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gcc -Wclobbered in PostgresMain"
}
] |
[
{
"msg_contents": "Hi,\n\nThis has already been discussed in [1].\nBut I thought it best to start a new thread.\n\nThe commit 31966b1\n<https://github.com/postgres/postgres/commit/31966b151e6ab7a6284deab6e8fe5faddaf2ae4c>\nintroduced the infrastructure to extend\nbuffers.\nBut the patch mixed types with int and uint32.\nThe correct type of the variable counter is uint32.\n\nFix by standardizing the int type to uint32.\n\npatch attached.\n\nbest regards,\nRanier Vilela\n\n[1]\nhttps://www.postgresql.org/message-id/CAEudQAr_oWHpZk4uumZijYS362gp4KHAah-yUe08CQY4a4SsOQ%40mail.gmail.com",
"msg_date": "Fri, 7 Jul 2023 10:11:39 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Standardize type of variable when extending Buffers"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]> wrote:\n>\n> Hi,\n>\n> This has already been discussed in [1].\n> But I thought it best to start a new thread.\n>\n> The commit 31966b1 introduced the infrastructure to extend\n> buffers.\n> But the patch mixed types with int and uint32.\n> The correct type of the variable counter is uint32.\n>\n> Fix by standardizing the int type to uint32.\n>\n> patch attached.\n\nLGTM.\n\n+CC Kyotaro, as they were involved in the previous discussion.\n\n>\n> [1] https://www.postgresql.org/message-id/CAEudQAr_oWHpZk4uumZijYS362gp4KHAah-yUe08CQY4a4SsOQ%40mail.gmail.com\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 7 Jul 2023 11:29:16 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize type of variable when extending Buffers"
},
{
"msg_contents": "At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]> wrote in \r\n> On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > This has already been discussed in [1].\r\n> > But I thought it best to start a new thread.\r\n> >\r\n> > The commit 31966b1 introduced the infrastructure to extend\r\n> > buffers.\r\n> > But the patch mixed types with int and uint32.\r\n> > The correct type of the variable counter is uint32.\r\n> >\r\n> > Fix by standardizing the int type to uint32.\r\n> >\r\n> > patch attached.\r\n> \r\n> LGTM.\r\n\r\nLGTM, too.\r\n\r\nI don't think it will actually come to play, since I believe we won't\r\nbe expanding a relation by 16TB all at once. Nevertheless, I believe\r\nkeeping things tidy is a good habit to stick to.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 10 Jul 2023 15:27:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize type of variable when extending Buffers"
},
{
"msg_contents": "Em seg., 10 de jul. de 2023 às 03:27, Kyotaro Horiguchi <\[email protected]> escreveu:\n\n> At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]> wrote\n> in\n> > On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > This has already been discussed in [1].\n> > > But I thought it best to start a new thread.\n> > >\n> > > The commit 31966b1 introduced the infrastructure to extend\n> > > buffers.\n> > > But the patch mixed types with int and uint32.\n> > > The correct type of the variable counter is uint32.\n> > >\n> > > Fix by standardizing the int type to uint32.\n> > >\n> > > patch attached.\n> >\n> > LGTM.\n>\n> LGTM, too.\n>\nThanks Gurjeet and Kyotaro, for taking a look.\n\n\n> I don't think it will actually come to play, since I believe we won't\n> be expanding a relation by 16TB all at once. Nevertheless, I believe\n> keeping things tidy is a good habit to stick to.\n>\nYeah, mainly because of copy-and-paste.\nAlso, compiler has to promote int to uint32, anyway.\n\nregards,\nRanier Vilela\n\nEm seg., 10 de jul. de 2023 às 03:27, Kyotaro Horiguchi <[email protected]> escreveu:At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]> wrote in \n> On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > This has already been discussed in [1].\n> > But I thought it best to start a new thread.\n> >\n> > The commit 31966b1 introduced the infrastructure to extend\n> > buffers.\n> > But the patch mixed types with int and uint32.\n> > The correct type of the variable counter is uint32.\n> >\n> > Fix by standardizing the int type to uint32.\n> >\n> > patch attached.\n> \n> LGTM.\n\nLGTM, too.Thanks Gurjeet and Kyotaro, for taking a look.\n\nI don't think it will actually come to play, since I believe we won't\nbe expanding a relation by 16TB all at once. Nevertheless, I believe\nkeeping things tidy is a good habit to stick to.Yeah, mainly because of copy-and-paste.Also, compiler has to promote int to uint32, anyway.regards,Ranier Vilela",
"msg_date": "Mon, 10 Jul 2023 08:08:36 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Standardize type of variable when extending Buffers"
},
{
"msg_contents": "On 10.07.23 13:08, Ranier Vilela wrote:\n> \n> Em seg., 10 de jul. de 2023 às 03:27, Kyotaro Horiguchi \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]\n> <mailto:[email protected]>> wrote in\n> > On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]\n> <mailto:[email protected]>> wrote:\n> > >\n> > > Hi,\n> > >\n> > > This has already been discussed in [1].\n> > > But I thought it best to start a new thread.\n> > >\n> > > The commit 31966b1 introduced the infrastructure to extend\n> > > buffers.\n> > > But the patch mixed types with int and uint32.\n> > > The correct type of the variable counter is uint32.\n> > >\n> > > Fix by standardizing the int type to uint32.\n> > >\n> > > patch attached.\n> >\n> > LGTM.\n> \n> LGTM, too.\n> \n> Thanks Gurjeet and Kyotaro, for taking a look.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 10:07:49 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Standardize type of variable when extending Buffers"
},
{
"msg_contents": "Em ter., 19 de set. de 2023 às 05:07, Peter Eisentraut <[email protected]>\nescreveu:\n\n> On 10.07.23 13:08, Ranier Vilela wrote:\n> >\n> > Em seg., 10 de jul. de 2023 às 03:27, Kyotaro Horiguchi\n> > <[email protected] <mailto:[email protected]>> escreveu:\n> >\n> > At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]\n> > <mailto:[email protected]>> wrote in\n> > > On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]\n> > <mailto:[email protected]>> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > This has already been discussed in [1].\n> > > > But I thought it best to start a new thread.\n> > > >\n> > > > The commit 31966b1 introduced the infrastructure to extend\n> > > > buffers.\n> > > > But the patch mixed types with int and uint32.\n> > > > The correct type of the variable counter is uint32.\n> > > >\n> > > > Fix by standardizing the int type to uint32.\n> > > >\n> > > > patch attached.\n> > >\n> > > LGTM.\n> >\n> > LGTM, too.\n> >\n> > Thanks Gurjeet and Kyotaro, for taking a look.\n>\n> committed\n>\nThank you Peter.\n\nbest regards,\nRanier Vilela\n\nEm ter., 19 de set. de 2023 às 05:07, Peter Eisentraut <[email protected]> escreveu:On 10.07.23 13:08, Ranier Vilela wrote:\n> \n> Em seg., 10 de jul. de 2023 às 03:27, Kyotaro Horiguchi \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> At Fri, 7 Jul 2023 11:29:16 -0700, Gurjeet Singh <[email protected]\n> <mailto:[email protected]>> wrote in\n> > On Fri, Jul 7, 2023 at 6:12 AM Ranier Vilela <[email protected]\n> <mailto:[email protected]>> wrote:\n> > >\n> > > Hi,\n> > >\n> > > This has already been discussed in [1].\n> > > But I thought it best to start a new thread.\n> > >\n> > > The commit 31966b1 introduced the infrastructure to extend\n> > > buffers.\n> > > But the patch mixed types with int and uint32.\n> > > The correct type of the variable counter is uint32.\n> > >\n> > > Fix by standardizing the int type to uint32.\n> > >\n> > > patch attached.\n> >\n> > LGTM.\n> \n> LGTM, too.\n> \n> Thanks Gurjeet and Kyotaro, for taking a look.\n\ncommittedThank you Peter.best regards,Ranier Vilela",
"msg_date": "Tue, 19 Sep 2023 08:29:51 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Standardize type of variable when extending Buffers"
}
] |
[
{
"msg_contents": "Dear hackers,\r\n\r\nThis is a fork thread from [1]. While analyzing codes I noticed that UPDATE and\r\nDELETE cannot be replicated when REPLICA IDENTITY is FULL and the table has datatype\r\nwhich does not have the operator class of Btree. I thnk this restriction is not\r\ndocumented but should be. PSA the patch to add that. Thought?\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586687A51AB511E5A7F7D3E6F526A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 10 Jul 2023 03:33:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 1:33 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> This is a fork thread from [1]. While analyzing codes I noticed that UPDATE and\n> DELETE cannot be replicated when REPLICA IDENTITY is FULL and the table has datatype\n> which does not have the operator class of Btree. I thnk this restriction is not\n> documented but should be. PSA the patch to add that. Thought?\n>\n> [1]: https://www.postgresql.org/message-id/TYAPR01MB586687A51AB511E5A7F7D3E6F526A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n>\n\nHi.\n\n+1 for the patch.\n\nHere are some minor review comments:\n\n======\n\n1.\nSUGGESTION (minor reword)\nIf the published table specifies <literal>REPLICA IDENTITY\nFULL</literal> but the table includes an attribute whose datatype is\nnot an operator class of Btree, then <literal>UPDATE</literal> and\n<literal>DELETE</literal> operations cannot be replicated. To make it\nwork, a primary key should be defined on the subscriber table, or a\ndifferent appropriate replica identity must be specified.\n\n2.\nMaybe \"REPLICA IDENTITY FULL\" should have a link, like from this [1] page.\n\n------\n[1] 31.1 Publication =\nhttps://www.postgresql.org/docs/current/logical-replication-publication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 10 Jul 2023 17:44:39 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for checking! PSA new version.\r\n\r\n> 1.\r\n> SUGGESTION (minor reword)\r\n> If the published table specifies <literal>REPLICA IDENTITY\r\n> FULL</literal> but the table includes an attribute whose datatype is\r\n> not an operator class of Btree, then <literal>UPDATE</literal> and\r\n> <literal>DELETE</literal> operations cannot be replicated. To make it\r\n> work, a primary key should be defined on the subscriber table, or a\r\n> different appropriate replica identity must be specified.\r\n\r\nSeems better, fixed.\r\n\r\n> 2.\r\n> Maybe \"REPLICA IDENTITY FULL\" should have a link, like from this [1] page.\r\n\r\nAdded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 10 Jul 2023 09:03:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 2:33 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\n If the published table specifies\n+ <link linkend=\"sql-altertable-replica-identity-full\"><literal>REPLICA\nIDENTITY FULL</literal></link>\n+ but the table includes an attribute whose datatype is not an operator\n+ class of Btree,\n\nIsn't the same true for the hash operator class family as well? Can we\nslightly change the line as: \"... the table includes an attribute\nwhose datatype doesn't have an equality operator defined for it..\".\nAlso, I find the proposed wording a bit odd, can we swap the sentence\nto say something like: \"The UPDATE and DELETE operations cannot be\nreplicated for the published tables that specifies REPLICA IDENTITY\nFULL but the table includes an attribute whose datatype doesn't have\nan equality operator defined for it on the subscriber.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:46:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hello\n\nIs this restriction only for the subscriber?\n\nIf we have not changed the replica identity and there is no primary key, then we forbid update and delete on the publication side (a fairly common usage error at the beginning of using publications).\nIf we have replica identity FULL (the table has such a column), then on the subscription side, update and delete will be performed. But we will not be able to apply them on a subscription. Right?\n\nThis is an important difference for real use, when the subscriber is not necessarily postgresql - for example, debezium.\n\nregards, Sergei\n\n\n",
"msg_date": "Mon, 10 Jul 2023 14:03:19 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 4:33 PM Sergei Kornilov <[email protected]> wrote:\n>\n> Is this restriction only for the subscriber?\n>\n> If we have not changed the replica identity and there is no primary key, then we forbid update and delete on the publication side (a fairly common usage error at the beginning of using publications).\n> If we have replica identity FULL (the table has such a column), then on the subscription side, update and delete will be performed.\n>\n\nIn the above sentence, do you mean the publisher side?\n\n>\n But we will not be able to apply them on a subscription. Right?\n>\n\nIf your previous sentence talks about the publisher and this sentence\nabout the subscriber then what you are saying is correct. You can see\nthe example in the email [1].\n\n> This is an important difference for real use, when the subscriber is not necessarily postgresql - for example, debezium.\n>\n\nCan you explain the difference and problem you are seeing? As per my\nunderstanding, this is the behavior from the time logical replication\nhas been introduced.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB5866C7B6086EB74918910F74F527A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jul 2023 18:16:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": ">> Is this restriction only for the subscriber?\n>>\n>> If we have not changed the replica identity and there is no primary key, then we forbid update and delete on the publication side (a fairly common usage error at the beginning of using publications).\n>> If we have replica identity FULL (the table has such a column), then on the subscription side, update and delete will be performed.\n> \n> In the above sentence, do you mean the publisher side?\n\nYep, sorry.\n\n> But we will not be able to apply them on a subscription. Right?\n> \n> If your previous sentence talks about the publisher and this sentence\n> about the subscriber then what you are saying is correct. You can see\n> the example in the email [1].\n\nThank you\n\n>> This is an important difference for real use, when the subscriber is not necessarily postgresql - for example, debezium.\n> \n> Can you explain the difference and problem you are seeing? As per my\n> understanding, this is the behavior from the time logical replication\n> has been introduced.\n\nThe difference is that if it's a subscriber-only restriction, then it won't automatically apply to anyone with a non-postgresql subscriber.\nBut if suddenly this would be a limitation of the publisher - then it will automatically apply to everyone, regardless of which subscriber is used.\n(and it's a completely different problem if the restriction affects the update/delete themselves, not only their replication. Like as default replica identity on table without primary key, not in this case)\n\nSo, I suggest to mention subscriber explicitly:\n\n+ class of Btree, then <literal>UPDATE</literal> and <literal>DELETE</literal>\n- operations cannot be replicated.\n+ operations cannot be applied on subscriber.\n\nAnother example of difference:\nDebezium users sometimes ask to set identity to FULL to get access to old values: https://stackoverflow.com/a/59820210/10983392\nHowever, identity FULL is described in the documentation as: https://www.postgresql.org/docs/current/logical-replication-publication.html\n\n> If the table does not have any suitable key, then it can be set to replica identity “full”, which means the entire row becomes the key. This, however, is very inefficient and should only be used as a fallback if no other solution is possible.\n\nBut not mentioned, this would only be \"very inefficient\" for the subscriber, or would have an huge impact on the publisher too (besides writing more WAL).\n\nregards, Sergei\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:56:14 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 7:26 PM Sergei Kornilov <[email protected]> wrote:\n>\n> >> Is this restriction only for the subscriber?\n> >>\n> >> If we have not changed the replica identity and there is no primary key, then we forbid update and delete on the publication side (a fairly common usage error at the beginning of using publications).\n> >> If we have replica identity FULL (the table has such a column), then on the subscription side, update and delete will be performed.\n> >\n> > In the above sentence, do you mean the publisher side?\n>\n> Yep, sorry.\n>\n> > But we will not be able to apply them on a subscription. Right?\n> >\n> > If your previous sentence talks about the publisher and this sentence\n> > about the subscriber then what you are saying is correct. You can see\n> > the example in the email [1].\n>\n> Thank you\n>\n> >> This is an important difference for real use, when the subscriber is not necessarily postgresql - for example, debezium.\n> >\n> > Can you explain the difference and problem you are seeing? As per my\n> > understanding, this is the behavior from the time logical replication\n> > has been introduced.\n>\n> The difference is that if it's a subscriber-only restriction, then it won't automatically apply to anyone with a non-postgresql subscriber.\n> But if suddenly this would be a limitation of the publisher - then it will automatically apply to everyone, regardless of which subscriber is used.\n> (and it's a completely different problem if the restriction affects the update/delete themselves, not only their replication. Like as default replica identity on table without primary key, not in this case)\n>\n> So, I suggest to mention subscriber explicitly:\n>\n> + class of Btree, then <literal>UPDATE</literal> and <literal>DELETE</literal>\n> - operations cannot be replicated.\n> + operations cannot be applied on subscriber.\n>\n> Another example of difference:\n> Debezium users sometimes ask to set identity to FULL to get access to old values: https://stackoverflow.com/a/59820210/10983392\n> However, identity FULL is described in the documentation as: https://www.postgresql.org/docs/current/logical-replication-publication.html\n>\n\nAfter seeing this, I am thinking about whether we add this restriction\non the Subscription page [1] or Restrictions page [2] as proposed. Do\nyou others have any preference?\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n[2] - https://www.postgresql.org/docs/devel/logical-replication-restrictions.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 10:00:21 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Isn't the same true for the hash operator class family as well?\r\n\r\nTrue. I didn't write it on purpose because I didn't know the operator which is \r\noperator class for BTree but not for Hash. But I agreed to clarify it.\r\n\r\n> Can we\r\n> slightly change the line as: \"... the table includes an attribute\r\n> whose datatype doesn't have an equality operator defined for it..\".\r\n\r\nHmm, this suggestion is dubious for me. Regarding the point datatype, it has the\r\n\"same as\" operator [1]. E.g., following SQL returns true.\r\n\r\n```\r\npostgres=# select point '(1, 1)' ~= point '(1, 1)';\r\n ?column? \r\n----------\r\n t\r\n(1 row)\r\n```\r\n\r\nThe reason why they cannot be supported by tuples_equal() is that lookup_type_cache()\r\nonly checks the operator classes for Btree and Hash. ~= does not defined as the class.\r\n\r\n> Also, I find the proposed wording a bit odd, can we swap the sentence\r\n> to say something like: \"The UPDATE and DELETE operations cannot be\r\n> replicated for the published tables that specifies REPLICA IDENTITY\r\n> FULL but the table includes an attribute whose datatype doesn't have\r\n> an equality operator defined for it on the subscriber.\"?\r\n\r\nSwapped. But based on above reply, I did not completely use your suggestion.\r\n\r\n[1]: https://www.postgresql.org/docs/devel/functions-geometry.html\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 11 Jul 2023 07:00:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Sergei,\r\n\r\nThank you for giving comment!\r\n\r\nThe restriction is only for subscriber: the publisher can publish the changes\r\nto downstream under the condition, but the subscriber cannot apply that.\r\n\r\n> So, I suggest to mention subscriber explicitly:\r\n> \r\n> + class of Btree, then <literal>UPDATE</literal> and\r\n> <literal>DELETE</literal>\r\n> - operations cannot be replicated.\r\n> + operations cannot be applied on subscriber.\r\n\r\nI accepted the comment. Please see [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58664DB6ECA653A6922B3FE3F531A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 11 Jul 2023 07:04:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> After seeing this, I am thinking about whether we add this restriction\r\n> on the Subscription page [1] or Restrictions page [2] as proposed. Do\r\n> you others have any preference?\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/docs/devel/logical-replication-subscription.html\r\n> [2] -\r\n> https://www.postgresql.org/docs/devel/logical-replication-restrictions.html\r\n\r\nThanks for giving suggestion. But I still think it should be at \"Restrictions\" page\r\nbecause all the limitation has been listed that page.\r\nMoreover, the condition of this limitation is not closed to subscriber - the setup\r\non publisher is also related. I think such descriptions it may cause readers\r\nto be confused.\r\n\r\n\r\nBut anyway, I have never been in mind such a point of view.\r\nMaybe I should hear Sergei's opinion. Thought?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 11 Jul 2023 07:17:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 12:30 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> > Isn't the same true for the hash operator class family as well?\n>\n> True. I didn't write it on purpose because I didn't know the operator which is\n> operator class for BTree but not for Hash. But I agreed to clarify it.\n>\n> > Can we\n> > slightly change the line as: \"... the table includes an attribute\n> > whose datatype doesn't have an equality operator defined for it..\".\n>\n> Hmm, this suggestion is dubious for me. Regarding the point datatype, it has the\n> \"same as\" operator [1]. E.g., following SQL returns true.\n>\n> ```\n> postgres=# select point '(1, 1)' ~= point '(1, 1)';\n> ?column?\n> ----------\n> t\n> (1 row)\n> ```\n>\n> The reason why they cannot be supported by tuples_equal() is that lookup_type_cache()\n> only checks the operator classes for Btree and Hash. ~= does not defined as the class.\n>\n\nFair enough, but the part of the line:\".. whose datatype is not an\noperator class of Btree or Hash.\" doesn't appear very clear to me.\nBecause it sounds like we are checking whether datatype has any\noperator class for btree or hash access methods but we are actually\nchecking if there is an equality operator (function) defined in the\ndefault op class for those access methods. Am, I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 13:07:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hello\n\nI think it's appropriate to add on the restrictions page. (But mentioning that this restriction is only for subscriber)\n\nIf the list were larger, then the restrictions page could be divided into publisher and subscriber restrictions. But not for one very specific restriction.\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 11 Jul 2023 11:47:16 +0300",
"msg_from": "Sergei Kornilov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 2:17 PM Sergei Kornilov <[email protected]> wrote:\n>\n> I think it's appropriate to add on the restrictions page. (But mentioning that this restriction is only for subscriber)\n>\n> If the list were larger, then the restrictions page could be divided into publisher and subscriber restrictions. But not for one very specific restriction.\n>\n\nOkay, how about something like: \"The UPDATE and DELETE operations\ncannot be applied on the subscriber for the published tables that\nspecify REPLICA IDENTITY FULL when the table has attributes with\ndatatypes (e.g point or box) that don't have a default operator class\nfor Btree or Hash. This won't be a problem if the table has a primary\nkey or replica identity defined for it.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Jul 2023 18:36:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Dear Amit, Sergei,\r\n\r\n> > I think it's appropriate to add on the restrictions page. (But mentioning that this\r\n> restriction is only for subscriber)\r\n> >\r\n> > If the list were larger, then the restrictions page could be divided into publisher\r\n> and subscriber restrictions. But not for one very specific restriction.\r\n> >\r\n> \r\n> Okay, how about something like: \"The UPDATE and DELETE operations\r\n> cannot be applied on the subscriber for the published tables that\r\n> specify REPLICA IDENTITY FULL when the table has attributes with\r\n> datatypes (e.g point or box) that don't have a default operator class\r\n> for Btree or Hash. This won't be a problem if the table has a primary\r\n> key or replica identity defined for it.\"?\r\n\r\nThanks for discussing and giving suggestions. But it seems that the first\r\nsentence is difficult to read for me. How about attached?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 14 Jul 2023 08:45:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 2:15 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > I think it's appropriate to add on the restrictions page. (But mentioning that this\n> > restriction is only for subscriber)\n> > >\n> > > If the list were larger, then the restrictions page could be divided into publisher\n> > and subscriber restrictions. But not for one very specific restriction.\n> > >\n> >\n> > Okay, how about something like: \"The UPDATE and DELETE operations\n> > cannot be applied on the subscriber for the published tables that\n> > specify REPLICA IDENTITY FULL when the table has attributes with\n> > datatypes (e.g point or box) that don't have a default operator class\n> > for Btree or Hash. This won't be a problem if the table has a primary\n> > key or replica identity defined for it.\"?\n>\n> Thanks for discussing and giving suggestions. But it seems that the first\n> sentence is difficult to read for me. How about attached?\n>\n\nThe last line seems repetitive to me. So, I have removed it. Apart\nfrom that patch looks good to me. Sergie, Peter, and others, any\nthoughts?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 15 Jul 2023 09:40:26 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 2:10 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jul 14, 2023 at 2:15 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > > I think it's appropriate to add on the restrictions page. (But mentioning that this\n> > > restriction is only for subscriber)\n> > > >\n> > > > If the list were larger, then the restrictions page could be divided into publisher\n> > > and subscriber restrictions. But not for one very specific restriction.\n> > > >\n> > >\n> > > Okay, how about something like: \"The UPDATE and DELETE operations\n> > > cannot be applied on the subscriber for the published tables that\n> > > specify REPLICA IDENTITY FULL when the table has attributes with\n> > > datatypes (e.g point or box) that don't have a default operator class\n> > > for Btree or Hash. This won't be a problem if the table has a primary\n> > > key or replica identity defined for it.\"?\n> >\n> > Thanks for discussing and giving suggestions. But it seems that the first\n> > sentence is difficult to read for me. How about attached?\n> >\n>\n> The last line seems repetitive to me. So, I have removed it. Apart\n> from that patch looks good to me. Sergie, Peter, and others, any\n> thoughts?\n\nThe v5 patch LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 17 Jul 2023 13:17:40 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "Hi,\n\n>\n> > The last line seems repetitive to me. So, I have removed it. Apart\n> > from that patch looks good to me. Sergie, Peter, and others, any\n> > thoughts?\n>\n> The v5 patch LGTM.\n>\n>\nOverall looks good to me as well. Please consider the following as an\noptional improvement.\n\nMy only minor concern here is the use of the term \"default operator class\".\nIt is accurate to use it. However, as far as I know, not many users can\nfollow that easily. I think the \"pkey/repl full\" suggestion gives some tip,\nbut I wonder if we add something like the following to the text such that\nusers can understand more:\n\n do not have a default operator class for B-tree or Hash.\n\n+ If there is no default operator class, usually the type does not have an\n> equality operator.\n\nHowever, this limitation ..\n\n\nThanks,\nOnder\n\nHi,\n>\n> The last line seems repetitive to me. So, I have removed it. Apart\n> from that patch looks good to me. Sergie, Peter, and others, any\n> thoughts?\n\nThe v5 patch LGTM. Overall looks good to me as well. Please consider the following as an optional improvement.My only minor concern here is the use of the term \"default operator class\". It is accurate to use it. However, as far as I know, not many users can follow that easily. I think the \"pkey/repl full\" suggestion gives some tip, but I wonder if we add something like the following to the text such that users can understand more: do not have a default operator class for B-tree or Hash.+ If there is no default operator class, usually the type does not have an equality operator.However, this limitation ..Thanks,Onder",
"msg_date": "Mon, 17 Jul 2023 09:21:12 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 11:51 AM Önder Kalacı <[email protected]> wrote:\n>\n>> >\n>> > The last line seems repetitive to me. So, I have removed it. Apart\n>> > from that patch looks good to me. Sergie, Peter, and others, any\n>> > thoughts?\n>>\n>> The v5 patch LGTM.\n>>\n>\n> Overall looks good to me as well. Please consider the following as an optional improvement.\n>\n> My only minor concern here is the use of the term \"default operator class\". It is accurate to use it. However, as far as I know, not many users can follow that easily. I think the \"pkey/repl full\" suggestion gives some tip, but I wonder if we add something like the following to the text such that users can understand more:\n>\n>> do not have a default operator class for B-tree or Hash.\n>>\n>> + If there is no default operator class, usually the type does not have an equality operator.\n>>\n\nThis sounds a bit generic to me. If required, we can give an example\nso that it is easier to understand. But OTOH, I see that we use\n\"default operator class\" in the docs and error messages, so this\nshould be probably okay.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 15:36:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 11:51 AM Önder Kalacı <[email protected]> wrote:\n>\n>> >\n>> > The last line seems repetitive to me. So, I have removed it. Apart\n>> > from that patch looks good to me. Sergie, Peter, and others, any\n>> > thoughts?\n>>\n>> The v5 patch LGTM.\n>>\n>\n> Overall looks good to me as well. Please consider the following as an optional improvement.\n>\n\nPushed. Thanks for looking into this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:54:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify the limitation for logical replication when REPILICA\n IDENTITY is FULL"
}
] |
[
{
"msg_contents": "Greetings, everyone!\n\nWhile working on an extension, I've found myself using valgrind on a \n32-bit OS (Debian 11)\nand after executing any query (even 'select 1;') under valgrind I've \nbeen shown the same error\neverytime:\n\n==00:00:00:18.109 2528== VALGRINDERROR-BEGIN\n==00:00:00:18.109 2528== Use of uninitialised value of size 4\n==00:00:00:18.109 2528== at 0x645B52: pg_comp_crc32c_sb8 \n(pg_crc32c_sb8.c:79)\n==00:00:00:18.109 2528== by 0x24295F: XLogRecordAssemble \n(xloginsert.c:780)\n==00:00:00:18.109 2528== by 0x24295F: XLogInsert (xloginsert.c:459)\n==00:00:00:18.109 2528== by 0x4B693B: LogCurrentRunningXacts \n(standby.c:1099)\n==00:00:00:18.109 2528== by 0x4B693B: LogStandbySnapshot \n(standby.c:1055)\n==00:00:00:18.109 2528== by 0x43BECF: BackgroundWriterMain \n(bgwriter.c:336)\n==00:00:00:18.109 2528== by 0x249D8A: AuxiliaryProcessMain \n(bootstrap.c:446)\n==00:00:00:18.109 2528== by 0x448B11: StartChildProcess \n(postmaster.c:5445)\n==00:00:00:18.109 2528== by 0x449F57: reaper (postmaster.c:2907)\n==00:00:00:18.109 2528== by 0x486D587: ??? (in \n/usr/lib/i386-linux-gnu/libpthread-2.31.so)\n==00:00:00:18.109 2528== Uninitialised value was created by a stack \nallocation\n==00:00:00:18.109 2528== at 0x4B682F: LogStandbySnapshot \n(standby.c:1018)\n\nI've been able to reproduce this on branches from REL_11_STABLE up to \ncurrent master.\nValgrind version is 3.21.0.\n\nI was wondering, why does this error occur only on 32-bit OS and not \n64-bit?\n\nI found three threads:\nAbout issues with valgrind and padding -\nhttps://www.postgresql.org/message-id/flat/1082573393.7010.27.camel%40tokyo\nAbout sanitizers (and valgrind) in general -\nhttps://www.postgresql.org/message-id/flat/20160321130850.6ed6f598%40fujitsu\nAbout valgrind suppressions -\nhttps://www.postgresql.org/message-id/flat/4dfabec2-a3ad-0546-2d62-f816c97edd0c%402ndQuadrant.com\nand after reading those decided to look for existing valgrind \nsuppressions.\n\nAnd I've found just what I was looking for:\n\n{\n\tpadding_XLogRecData_CRC\n\tMemcheck:Value8\n\n\tfun:pg_comp_crc32c*\n\tfun:XLogRecordAssemble\n}\n\nSupression for this (group of) function(s), but for 8-byte chunks of \nmemory.\n\nSo the suggestion is to add a suppression for 4-byte values in this \nfunction.\n\nThe patch is attached.\n\nOleg Tselebrovskiy, Postgres Pro",
"msg_date": "Mon, 10 Jul 2023 14:51:29 +0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Valgrind errors on 32-bit OS"
}
] |
[
{
"msg_contents": "hello, all.\nRecently, I find one very strange situation to lose data of primary node which the\ndetails can be find at the first patch: 0001-Add-test-case-data-lost-after-restart.patch.\n\nThe first patch shows us that data could be lost after truncating physical file by\nsomeone else before starting up primary node. However, then the primary node\nstill starts up normally without any alarm, even that it find any invalid page\nduring crash recovery.\n\nAnd then I find another situation about abort transaction which details can be find\nat the second patch: 0002-Add-test-case-for-abort-transaction-across-checkpoin.patch.\n\nThe second patch shows us that abort transaction across checkpoint could also cause\ninvalid pages, and leave some undeleted relation files forever during crash recovery.\nAnd then the primary node still starts up normally without any alarm, just like the\nfirst situation.\n\nBy the way, the above experiments are both running after setting the following\nparameters:\n$node_primary->append_conf('postgresql.conf', 'synchronous_commit=on');\n$node_primary->append_conf('postgresql.conf', 'full_page_writes=off');\n$node_primary->append_conf('postgresql.conf', 'log_min_messages=debug2');\n\nAs my opinion, the primary node should alarm some invalid pages found during\ncrash recovery, as same as what the standby node does after reached consistency\nrecovery state. So I put the third bug fix patch which is\n 0003-Check-invalid-pages-at-the-end-of-recovery.patch to do the following two things:\n(1) Primary node checks invalid pages at the end of recovery;\n(2) Flush the abort WAL before truncating or deleting any relation files.\n\nBest wishes,\nrogers.ww.",
"msg_date": "Mon, 10 Jul 2023 15:53:13 +0800",
"msg_from": "\"=?UTF-8?B?546L5LyfKOWtpuW8iCk=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?Q2hlY2sgaW52YWxpZCBwYWdlcyBhdCB0aGUgZW5kIG9mIHJlY292ZXJ5IHRvIGFsYXJtIGxv?=\n =?UTF-8?B?c3QgZGF0YQ==?="
}
] |
[
{
"msg_contents": "Hi hackers,\r\nI am learning the MemoryContext subsystem, but I really don't know where to find it's document (The PostgreSQL Document just provide some spi function).\r\nCan someone provide me some solutions?\r\nThanks in advance!\r\n\r\n\r\nYours,\r\nWen Yi\nHi hackers,I am learning the MemoryContext subsystem, but I really don't know where to find it's document (The PostgreSQL Document just provide some spi function).Can someone provide me some solutions?Thanks in advance!Yours,Wen Yi",
"msg_date": "Mon, 10 Jul 2023 17:20:23 +0800",
"msg_from": "\"=?ISO-8859-1?B?V2VuIFlp?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Question] Can someone provide some links related to the\n MemoryContext?"
},
{
"msg_contents": "On Tue, 11 Jul 2023 at 15:11, Wen Yi <[email protected]> wrote:\n>\n> Hi hackers,\n> I am learning the MemoryContext subsystem, but I really don't know where to find it's document (The PostgreSQL Document just provide some spi function).\n> Can someone provide me some solutions?\n> Thanks in advance!\n\nYou should take a look at the README in the mmgr directory; at\nsrc/backend/utils/mmgr/README. I think that this README provides a lot\nof the requested information.\n\n-- \nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:47:48 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Question] Can someone provide some links related to the\n MemoryContext?"
}
] |
[
{
"msg_contents": "Hi,\n\nJeff pointed out that one of the pg_stat_io tests has failed a few times\nover the past months (here on morepork [1] and more recently here on\nfrancolin [2]).\n\nFailing test diff for those who prefer not to scroll:\n\n+++ /home/bf/bf-build/francolin/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out\n 2023-07-07 18:48:25.976313231 +0000\n@@ -1415,7 +1415,7 @@\n :io_sum_vac_strategy_after_reuses > :io_sum_vac_strategy_before_reuses;\n ?column? | ?column?\n ----------+----------\n- t | t\n+ t | f\n\nMy theory about the test failure is that, when there is enough demand\nfor shared buffers, the flapping test fails because it expects buffer\naccess strategy *reuses* and concurrent queries already flushed those\nbuffers before they could be reused. Attached is a patch which I think\nwill fix the test while keeping some code coverage. If we count\nevictions and reuses together, those should have increased.\n\n- Melanie\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2023-06-16%2018%3A30%3A32\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=francolin&dt=2023-07-07%2018%3A43%3A57&stg=recovery-check",
"msg_date": "Mon, 10 Jul 2023 14:35:11 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "stats test intermittent failure"
},
{
"msg_contents": "Hi Melanie,\n\n10.07.2023 21:35, Melanie Plageman wrote:\n> Hi,\n>\n> Jeff pointed out that one of the pg_stat_io tests has failed a few times\n> over the past months (here on morepork [1] and more recently here on\n> francolin [2]).\n>\n> Failing test diff for those who prefer not to scroll:\n>\n> +++ /home/bf/bf-build/francolin/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out\n> 2023-07-07 18:48:25.976313231 +0000\n> @@ -1415,7 +1415,7 @@\n> :io_sum_vac_strategy_after_reuses > :io_sum_vac_strategy_before_reuses;\n> ?column? | ?column?\n> ----------+----------\n> - t | t\n> + t | f\n>\n> My theory about the test failure is that, when there is enough demand\n> for shared buffers, the flapping test fails because it expects buffer\n> access strategy *reuses* and concurrent queries already flushed those\n> buffers before they could be reused. Attached is a patch which I think\n> will fix the test while keeping some code coverage. If we count\n> evictions and reuses together, those should have increased.\n\nI managed to reproduce that failure with the attached patch applied\n(on master) and with the following script (that effectively multiplies\nprobability of the failure by 360):\nCPPFLAGS=\"-O0\" ./configure -q --enable-debug --enable-cassert --enable-tap-tests && make -s -j`nproc` && make -s check \n-C src/test/recovery\nmkdir -p src/test/recovery00/t\ncp src/test/recovery/t/027_stream_regress.pl src/test/recovery00/t/\ncp src/test/recovery/Makefile src/test/recovery00/\nfor ((i=1;i<=9;i++)); do cp -r src/test/recovery00/ src/test/recovery$i; done\n\nfor ((i=1;i<=10;i++)); do echo \"iteration $i\"; NO_TEMP_INSTALL=1 parallel --halt now,fail=1 -j9 --linebuffer --tag make \n-s check -C src/test/{} ::: recovery1 recovery2 recovery3 recovery4 recovery5 recovery6 recovery7 recovery8 recovery9 || \nbreak; done\n\nWithout your patch, I get:\niteration 2\n...\nrecovery5 # Failed test 'regression tests pass'\nrecovery5 # at t/027_stream_regress.pl line 92.\nrecovery5 # got: '256'\nrecovery5 # expected: '0'\n...\nsrc/test/recovery5/tmp_check/log/regress_log_027_stream_regress contains:\n--- .../src/test/regress/expected/stats.out 2023-07-11 20:05:10.536059706 +0300\n+++ .../src/test/recovery5/tmp_check/results/stats.out 2023-07-11 20:30:46.790551305 +0300\n@@ -1418,7 +1418,7 @@\n :io_sum_vac_strategy_after_reuses > :io_sum_vac_strategy_before_reuses;\n ?column? | ?column?\n ----------+----------\n- t | t\n+ t | f\n (1 row)\n\nWith your patch applied, 10 iterations performed successfully for me.\nSo it looks like your theory and your fix are correct.\n\nBest regards,\nAlexander",
"msg_date": "Tue, 11 Jul 2023 21:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats test intermittent failure"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 11, 2023 at 3:35 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Jeff pointed out that one of the pg_stat_io tests has failed a few times\n> over the past months (here on morepork [1] and more recently here on\n> francolin [2]).\n>\n> Failing test diff for those who prefer not to scroll:\n>\n> +++ /home/bf/bf-build/francolin/HEAD/pgsql.build/testrun/recovery/027_stream_regress/data/results/stats.out\n> 2023-07-07 18:48:25.976313231 +0000\n> @@ -1415,7 +1415,7 @@\n> :io_sum_vac_strategy_after_reuses > :io_sum_vac_strategy_before_reuses;\n> ?column? | ?column?\n> ----------+----------\n> - t | t\n> + t | f\n>\n> My theory about the test failure is that, when there is enough demand\n> for shared buffers, the flapping test fails because it expects buffer\n> access strategy *reuses* and concurrent queries already flushed those\n> buffers before they could be reused. Attached is a patch which I think\n> will fix the test while keeping some code coverage. If we count\n> evictions and reuses together, those should have increased.\n>\n\nYeah, I've not reproduced this issue but it's possible. IIUC if we get\nthe buffer from the ring, we count an I/O as \"reuse\" even if the\nbuffer has already been flushed/replaced. However, if the buffer in\nthe ring is pinned by other backends, we end up evicting a buffer from\noutside of the ring and adding it to the buffer, which is counted as\n\"eviction\".\n\nRegarding the patch, I have a comment:\n\n -- Test that reuse of strategy buffers and reads of blocks into these reused\n--- buffers while VACUUMing are tracked in pg_stat_io.\n+-- buffers while VACUUMing are tracked in pg_stat_io. If there is sufficient\n+-- demand for shared buffers from concurrent queries, some blocks may be\n+-- evicted from the strategy ring before they can be reused. In such cases\n+-- this, the backend will evict a block from a shared buffer outside of the\n+-- ring and add it to the ring. This is considered an eviction and not a reuse.\n\nThe new comment seems not to be accurate if my understanding is\ncorrect. How about the following?\n\nTest that reuse of strategy buffers and reads of blocks into these\nreused buffers while VACUUMing are tracked in pg_stat_io. If there is\nsufficient demand for shared buffers from concurrent queries, some\nbuffers may be pinned by other backends before they can be reused. In\nsuch cases, the backend will evict a buffer from a shared buffer\noutside of the ring and add it to the ring. This is considered an\neviction and not a reuse.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 21:03:07 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats test intermittent failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-31 21:03:07 +0900, Masahiko Sawada wrote:\n> Regarding the patch, I have a comment:\n> \n> -- Test that reuse of strategy buffers and reads of blocks into these reused\n> --- buffers while VACUUMing are tracked in pg_stat_io.\n> +-- buffers while VACUUMing are tracked in pg_stat_io. If there is sufficient\n> +-- demand for shared buffers from concurrent queries, some blocks may be\n> +-- evicted from the strategy ring before they can be reused. In such cases\n> +-- this, the backend will evict a block from a shared buffer outside of the\n> +-- ring and add it to the ring. This is considered an eviction and not a reuse.\n> \n> The new comment seems not to be accurate if my understanding is correct. How\n> about the following?\n> \n> Test that reuse of strategy buffers and reads of blocks into these\n> reused buffers while VACUUMing are tracked in pg_stat_io. If there is\n> sufficient demand for shared buffers from concurrent queries, some\n> buffers may be pinned by other backends before they can be reused. In\n> such cases, the backend will evict a buffer from a shared buffer\n> outside of the ring and add it to the ring. This is considered an\n> eviction and not a reuse.\n\nI integrated the suggested change of the comment and tweaked it a bit\nmore. And finally pushed the fix.\n\nSorry that it took so long.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Aug 2023 14:19:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats test intermittent failure"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I integrated the suggested change of the comment and tweaked it a bit\n> more. And finally pushed the fix.\n\nThis failure was originally seen on v16 (that is, pre-fork). Shouldn't\nthe fix be back-patched?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Aug 2023 18:28:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stats test intermittent failure"
}
] |
[
{
"msg_contents": "An instance compiled locally, without assertions, failed like this:\n\n< 2023-07-09 22:04:46.470 UTC >LOG: process 30002 detected deadlock while waiting for ShareLock on transaction 813219577 after 333.228 ms\n< 2023-07-09 22:04:46.470 UTC >DETAIL: Process holding the lock: 2103. Wait queue: 30001.\n< 2023-07-09 22:04:46.470 UTC >CONTEXT: while checking uniqueness of tuple (549,4) in relation \"pg_statistic_ext_data\"\n< 2023-07-09 22:04:46.470 UTC >STATEMENT: REINDEX INDEX pg_statistic_ext_data_stxoid_inh_index\n< 2023-07-09 22:04:46.474 UTC >ERROR: deadlock detected\n< 2023-07-09 22:04:46.474 UTC >DETAIL: Process 30001 waits for ShareLock on transaction 813219577; blocked by process 2103.\n Process 2103 waits for RowExclusiveLock on relation 3429 of database 16588; blocked by process 30001.\n Process 30001: REINDEX INDEX pg_statistic_ext_data_stxoid_inh_index\n Process 2103: autovacuum: ANALYZE child.ericsson_sgsn_rac_202307\n< 2023-07-09 22:04:46.474 UTC >HINT: See server log for query details.\n< 2023-07-09 22:04:46.474 UTC >CONTEXT: while checking uniqueness of tuple (549,4) in relation \"pg_statistic_ext_data\"\n< 2023-07-09 22:04:46.474 UTC >STATEMENT: REINDEX INDEX pg_statistic_ext_data_stxoid_inh_index\n< 2023-07-09 22:04:46.483 UTC >LOG: background worker \"parallel worker\" (PID 30002) exited with exit code 1\n< 2023-07-09 22:04:46.487 UTC postgres >ERROR: deadlock detected\n< 2023-07-09 22:04:46.487 UTC postgres >DETAIL: Process 30001 waits for ShareLock on transaction 813219577; blocked by process 2103.\n Process 2103 waits for RowExclusiveLock on relation 3429 of database 16588; blocked by process 30001.\n< 2023-07-09 22:04:46.487 UTC postgres >HINT: See server log for query details.\n< 2023-07-09 22:04:46.487 UTC postgres >CONTEXT: while checking uniqueness of tuple (549,4) in relation \"pg_statistic_ext_data\"\n parallel worker\n< 2023-07-09 22:04:46.487 UTC postgres >STATEMENT: REINDEX INDEX pg_statistic_ext_data_stxoid_inh_index\n< 2023-07-09 22:04:46.848 UTC >LOG: server process (PID 30001) was terminated by signal 11: Segmentation fault\n< 2023-07-09 22:04:46.848 UTC >DETAIL: Failed process was running: REINDEX INDEX pg_statistic_ext_data_stxoid_inh_index\n\n=> REINDEX was running, with parallel workers, but deadlocked with\nANALYZE, and then crashed.\n\nIt looks like parallel workers are needed to hit this issue.\nI'm not sure if the issue is specific to extended stats - probably not.\n\nI reproduced the crash with manual REINDEX+ANALYZE, and with assertions (which\nwere not hit), and on a more recent commit (1124cb2cf). The crash is hit about\n30% of the time when running a loop around REINDEX and then also running\nANALYZE.\n\nI hope someone has a hunch where to look; so far, I wasn't able to create a\nminimal reproducer. 
\n\nCore was generated by `postgres: pryzbyj ts [local] REINDEX '.\nProgram terminated with signal 11, Segmentation fault.\n#0 RemoveFromWaitQueue (proc=0x2aaabc1289e0, hashcode=2627626119) at ../src/backend/storage/lmgr/lock.c:1898\n1898 LOCKMETHODID lockmethodid = LOCK_LOCKMETHOD(*waitLock);\n(gdb) bt\n#0 RemoveFromWaitQueue (proc=0x2aaabc1289e0, hashcode=2627626119) at ../src/backend/storage/lmgr/lock.c:1898\n#1 0x00000000007ab56b in LockErrorCleanup () at ../src/backend/storage/lmgr/proc.c:735\n#2 0x0000000000548a7e in AbortTransaction () at ../src/backend/access/transam/xact.c:2735\n#3 0x0000000000549405 in AbortCurrentTransaction () at ../src/backend/access/transam/xact.c:3414\n#4 0x00000000007b6414 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at ../src/backend/tcop/postgres.c:4352\n#5 0x0000000000730e9a in BackendRun (port=<optimized out>, port=<optimized out>) at ../src/backend/postmaster/postmaster.c:4461\n#6 BackendStartup (port=0x12a8a50) at ../src/backend/postmaster/postmaster.c:4189\n#7 ServerLoop () at ../src/backend/postmaster/postmaster.c:1779\n#8 0x000000000073207d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x127a230) at ../src/backend/postmaster/postmaster.c:1463\n#9 0x00000000004b5535 in main (argc=3, argv=0x127a230) at ../src/backend/main/main.c:198\n\n(gdb) l\n1893 RemoveFromWaitQueue(PGPROC *proc, uint32 hashcode)\n1894 {\n1895 LOCK *waitLock = proc->waitLock;\n1896 PROCLOCK *proclock = proc->waitProcLock;\n1897 LOCKMODE lockmode = proc->waitLockMode;\n1898 LOCKMETHODID lockmethodid = LOCK_LOCKMETHOD(*waitLock);\n1899\n1900 /* Make sure proc is waiting */\n1901 Assert(proc->waitStatus == PROC_WAIT_STATUS_WAITING);\n1902 Assert(proc->links.next != NULL);\n\n(gdb) p waitLock\n$1 = (LOCK *) 0x0\n\nAnother variant on this crash:\n\nJul 11 00:55:19 telsa kernel: postgres[25415]: segfault at f ip 000000000081111a sp 00007ffdbc01ea90 error 4 in postgres[400000+8df000]\n\nCore was generated by `postgres: parallel worker for PID 27096 waiting '.\n\n(gdb) bt\n#0 RemoveFromWaitQueue (proc=0x2aaabc154040, hashcode=2029421528) at ../src/backend/storage/lmgr/lock.c:1874\n#1 0x000000000081de2f in LockErrorCleanup () at ../src/backend/storage/lmgr/proc.c:735\n#2 0x0000000000826990 in ProcessInterrupts () at ../src/backend/tcop/postgres.c:3207\n#3 0x000000000081e355 in ProcSleep (locallock=locallock@entry=0x252a9d0, lockMethodTable=lockMethodTable@entry=0xee1260 <default_lockmethod>) at ../src/backend/storage/lmgr/proc.c:1295\n#4 0x000000000080eff1 in WaitOnLock (locallock=locallock@entry=0x252a9d0, owner=owner@entry=0x253b548) at ../src/backend/storage/lmgr/lock.c:1818\n#5 0x00000000008107ce in LockAcquireExtended (locktag=locktag@entry=0x7ffdbc01ee10, lockmode=lockmode@entry=5, sessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false,\n reportMemoryError=reportMemoryError@entry=true, locallockp=locallockp@entry=0x0) at ../src/backend/storage/lmgr/lock.c:1082\n#6 0x00000000008110a4 in LockAcquire (locktag=locktag@entry=0x7ffdbc01ee10, lockmode=lockmode@entry=5, sessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false) at ../src/backend/storage/lmgr/lock.c:740\n#7 0x000000000080e316 in XactLockTableWait (xid=xid@entry=816478533, rel=rel@entry=0x7f7332090468, ctid=ctid@entry=0x2596374, oper=oper@entry=XLTW_InsertIndexUnique) at ../src/backend/storage/lmgr/lmgr.c:702\n#8 0x00000000005190bb in heapam_index_build_range_scan (heapRelation=0x7f7332090468, indexRelation=0x7f7332099008, indexInfo=0x2596888, allow_sync=<optimized out>, 
anyvisible=false, progress=false, start_blockno=0,\n numblocks=4294967295, callback=0x53c8c0 <_bt_build_callback>, callback_state=0x7ffdbc01f310, scan=0x2596318) at ../src/backend/access/heap/heapam_handler.c:1496\n#9 0x000000000053ca77 in table_index_build_scan (scan=<optimized out>, callback_state=0x7ffdbc01f310, callback=0x53c8c0 <_bt_build_callback>, progress=false, allow_sync=true, index_info=0x2596888, index_rel=<optimized out>,\n table_rel=<optimized out>) at ../src/include/access/tableam.h:1781\n#10 _bt_parallel_scan_and_sort (btspool=btspool@entry=0x2596d08, btspool2=btspool2@entry=0x2596d38, btshared=btshared@entry=0x2aaaaad423a0, sharedsort=sharedsort@entry=0x2aaaaad42340,\n sharedsort2=sharedsort2@entry=0x2aaaaad422e0, sortmem=<optimized out>, progress=progress@entry=false) at ../src/backend/access/nbtree/nbtsort.c:1985\n#11 0x000000000053ef6f in _bt_parallel_build_main (seg=<optimized out>, toc=0x2aaaaad42080) at ../src/backend/access/nbtree/nbtsort.c:1888\n#12 0x0000000000564ec8 in ParallelWorkerMain (main_arg=<optimized out>) at ../src/backend/access/transam/parallel.c:1520\n#13 0x00000000007892f8 in StartBackgroundWorker () at ../src/backend/postmaster/bgworker.c:861\n#14 0x0000000000493269 in do_start_bgworker (rw=0x2531fc0) at ../src/backend/postmaster/postmaster.c:5762\n#15 maybe_start_bgworkers () at ../src/backend/postmaster/postmaster.c:5986\n#16 0x000000000078dc5a in process_pm_pmsignal () at ../src/backend/postmaster/postmaster.c:5149\n#17 ServerLoop () at ../src/backend/postmaster/postmaster.c:1770\n#18 0x0000000000790635 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x24fe230) at ../src/backend/postmaster/postmaster.c:1463\n#19 0x00000000004b80c5 in main (argc=3, argv=0x24fe230) at ../src/backend/main/main.c:198\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 10 Jul 2023 21:01:37 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg16b2: REINDEX segv on null pointer in RemoveFromWaitQueue"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 09:01:37PM -0500, Justin Pryzby wrote:\n> An instance compiled locally, without assertions, failed like this:\n> \n...\n> \n> => REINDEX was running, with parallel workers, but deadlocked with\n> ANALYZE, and then crashed.\n> \n> It looks like parallel workers are needed to hit this issue.\n> I'm not sure if the issue is specific to extended stats - probably not.\n> \n> I reproduced the crash with manual REINDEX+ANALYZE, and with assertions (which\n> were not hit), and on a more recent commit (1124cb2cf). The crash is hit about\n> 30% of the time when running a loop around REINDEX and then also running\n> ANALYZE.\n> \n> I hope someone has a hunch where to look; so far, I wasn't able to create a\n> minimal reproducer. \n\nI was able to reproduce this in isolation by reloading data into a test\ninstance, ANALYZEing the DB to populate pg_statistic_ext_data (so it's\nover 3MB in size), and then REINDEXing the stats_ext index in a loop\nwhile ANALYZEing a table with extended stats.\n\nI still don't have a minimal reproducer, but on a hunch I found that\nthis fails at 5764f611e but not its parent.\n\ncommit 5764f611e10b126e09e37fdffbe884c44643a6ce\nAuthor: Andres Freund <[email protected]>\nDate: Wed Jan 18 11:41:14 2023 -0800\n\n Use dlist/dclist instead of PROC_QUEUE / SHM_QUEUE for heavyweight locks\n\nI tried compiling with -DILIST_DEBUG, but that shows nothing beyond\nsegfaulting, which seems to show that the lists themselves are fine.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 12 Jul 2023 06:52:16 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg16b2: REINDEX segv on null pointer in RemoveFromWaitQueue"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 8:52 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Mon, Jul 10, 2023 at 09:01:37PM -0500, Justin Pryzby wrote:\n> > An instance compiled locally, without assertions, failed like this:\n> >\n> ...\n> >\n> > => REINDEX was running, with parallel workers, but deadlocked with\n> > ANALYZE, and then crashed.\n> >\n> > It looks like parallel workers are needed to hit this issue.\n> > I'm not sure if the issue is specific to extended stats - probably not.\n> >\n> > I reproduced the crash with manual REINDEX+ANALYZE, and with assertions (which\n> > were not hit), and on a more recent commit (1124cb2cf). The crash is hit about\n> > 30% of the time when running a loop around REINDEX and then also running\n> > ANALYZE.\n> >\n> > I hope someone has a hunch where to look; so far, I wasn't able to create a\n> > minimal reproducer.\n>\n> I was able to reproduce this in isolation by reloading data into a test\n> instance, ANALYZEing the DB to populate pg_statistic_ext_data (so it's\n> over 3MB in size), and then REINDEXing the stats_ext index in a loop\n> while ANALYZEing a table with extended stats.\n>\n> I still don't have a minimal reproducer, but on a hunch I found that\n> this fails at 5764f611e but not its parent.\n>\n> commit 5764f611e10b126e09e37fdffbe884c44643a6ce\n> Author: Andres Freund <[email protected]>\n> Date: Wed Jan 18 11:41:14 2023 -0800\n>\n> Use dlist/dclist instead of PROC_QUEUE / SHM_QUEUE for heavyweight locks\n>\n\nGood catch. I didn't realize this email but while investigating the\nsame issue that has been reported recently[1], I reached the same\ncommit. I've sent my analysis and a patch to fix this issue there.\nAndres, since this issue seems to be relevant with your commit\n5764f611e, could you please look at this issue and my patch?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDs7vzK7NErse7xTruqY-FXmM%2B3K00SdBtMcQhiRNkoeQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 24 Jul 2023 10:50:13 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg16b2: REINDEX segv on null pointer in RemoveFromWaitQueue"
}
] |
[
{
"msg_contents": "Hi, all.\nI want to report a bug about the recovery of two-phase transaction, in current implementation of crash recovery, there are two ways to recover 2pc data:\n1、before redo, func restoreTwoPhaseData() will restore 2pc data those xid < ShmemVariableCache->nextXid, which is initialized from checkPoint.nextXid;\n2、during redo, func xact_redo() will add 2pc from wal;\nThe following scenario may cause the same 2pc transaction to be added repeatedly, I have attached a patch for pg11 that reproduces the error:\n1、start creating checkpoint_1, checkpoint_1.redo is set as curInsert;\n2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose the xid of this 2pc is 100, and then ShmemVariableCache->nextXid will be advanced as 101;\n3、checkPoint_1.nextXid is set as 101;\n4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied to disk because its prepare_end_lsn > checkpoint_1.redo;\n5、checkPoint_1 is finished, after checkpoint_timeout, start creating checkpoint_2;\n6、during checkpoint_2, data of 2pc_100 will be copied to disk;\n7、before UpdateControlFile() of checkpoint_2, crash happened;\n8、during crash recovery, redo will start from checkpoint_1, and 2pc_100 will be restored first by restoreTwoPhaseData() because xid_100 < checkPoint_1.nextXid, which is 101; \n9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100 will be added again by xact_redo() during wal replay, resulting in the same 2pc data being added twice;\n10、In RecoverPreparedTransactions() -> lock_twophase_recover(), lock the same 2pc will cause FATAL.\nAfter running the patch that reproduced the error, you will get the following error during crash recovery:\n2023-07-10 13:04:30.670 UTC [11169] LOG: recovering prepared transaction 569 from shared memory\n2023-07-10 13:04:30.670 UTC [11169] LOG: recovering prepared transaction 569 from shared memory\n2023-07-10 13:04:30.670 UTC [11169] FATAL: lock ExclusiveLock on object 569/0/0 is already held\n2023-07-10 13:04:30.670 UTC [11168] LOG: startup process (PID 11169) exited with exit code 1\n2023-07-10 13:04:30.670 UTC [11168] LOG: aborting startup due to startup process failure\nI also added a patch for pg11 to fix this problem, hope you can check it when you have time.\nThanks & Best Regard",
"msg_date": "Tue, 11 Jul 2023 10:35:15 +0800",
"msg_from": "\"suyu.cmj\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?R290IEZBVEFMIGluIGxvY2tfdHdvcGhhc2VfcmVjb3ZlcigpIGR1cmluZyByZWNvdmVyeQ==?="
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 10:35:15AM +0800, suyu.cmj wrote:\n> I want to report a bug about the recovery of two-phase transaction,\n> in current implementation of crash recovery, there are two ways to\n> recover 2pc data: \n> 1、before redo, func restoreTwoPhaseData() will restore 2pc data\n> those xid < ShmemVariableCache->nextXid, which is initialized from\n> checkPoint.nextXid; \n> 2、during redo, func xact_redo() will add 2pc from wal;\n> The following scenario may cause the same 2pc transaction to be\n> added repeatedly, I have attached a patch for pg11 that reproduces\n> the error: \n> 1、start creating checkpoint_1, checkpoint_1.redo is set as\n> curInsert; \n> 2、before set checkPoint_1.nextXid, a new 2pc is prepared, suppose\n> the xid of this 2pc is 100, and then ShmemVariableCache->nextXid\n> will be advanced as 101; \n> 3、checkPoint_1.nextXid is set as 101;\n> 4、in CheckPointTwoPhase() of checkpoint_1, 2pc_100 won't be copied\n> to disk because its prepare_end_lsn > checkpoint_1.redo; \n> 5、checkPoint_1 is finished, after checkpoint_timeout, start\n> creating checkpoint_2; \n> 6、during checkpoint_2, data of 2pc_100 will be copied to disk;\n> 7、before UpdateControlFile() of checkpoint_2, crash happened;\n> 8、during crash recovery, redo will start from checkpoint_1, and\n> 2pc_100 will be restored first by restoreTwoPhaseData() because\n> xid_100 < checkPoint_1.nextXid, which is 101; \n> 9、because prepare_start_lsn of 2pc_100 > checkpoint_1.redo, 2pc_100\n> will be added again by xact_redo() during wal replay, resulting in\n> the same 2pc data being added twice; \n\nIt looks like you have something here. I'll try to look at it. This\nis a bug, so I have removed pgsql-hackers from the CC list keeping\nonly pgsql-bugs as cross-list posts are not encouraged.\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 08:41:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Got FATAL in lock_twophase_recover() during recovery"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile debugging one issue, we have added the below line in\npostgresGetForeignPlan() to see the foreignrel details.\n\n@@ -1238,6 +1238,8 @@ postgresGetForeignPlan(PlannerInfo *root,\n bool has_limit = false;\n ListCell *lc;\n\n+ elog(INFO, \"foreignrel: %s\", nodeToString(foreignrel));\n+\n\nAnd with this change, we ran the postgres_fdw regression (executing\nsql/postgres_fdw.sql) suite. We observed the below warnings that seem\nstrange.\n\n+WARNING: could not dump unrecognized node type: 0\n+WARNING: could not dump unrecognized node type: 26072088\n+WARNING: could not dump unrecognized node type: 26438448\n+WARNING: could not dump unrecognized node type: 368\n\nOf course, with multiple runs, we see some random node types listed above.\nThanks to my colleague Suraj Kharage for this and for working parallel with\nme.\n\nDoes anybody have any idea about these?\n\nAfter debugging one random query from the above-failed case, what we have\nobserved is (we might be wrong, but worth noting here):\n\n1. This warning ended up while displaying RelOptInfo->pathlist.\n2. In create_ordered_paths(), input_rel has two paths, and it loops over\nboth paths to get the best-sorted path.\n3. First path was unsorted, and thus we add a sort node on top of it, and\nadds that to the ordered_rel.\n4. However, 2nd path was already sorted and passed as is to the add_path().\n5. add_path() decides to reject this new path on some metrics. However, in\nthe end, it pfree() this passed in path. It seems wrong as its references\ndo present elsewhere. For example, in the first path's parent rels path\nlist.\n6. So, while displaying the parent's path, we end up with these warnings.\n\nI tried to get a fix for this but no luck so far.\nOne approach was to copy the path before passing it to the add_path().\nHowever, there is no easy way to copy a path due to its circular references.\n\nTo see whether this warning goes or not, I have commented code in add_path()\nthat does pfree() on the new_path. And with that, I don't see any warnings.\nBut removing that code doesn't seem to be the correct fix.\n\nSuggestions?\n\nThanks\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nHello,While debugging one issue, we have added the below line in postgresGetForeignPlan() to see the foreignrel details.@@ -1238,6 +1238,8 @@ postgresGetForeignPlan(PlannerInfo *root, bool has_limit = false; ListCell *lc; + elog(INFO, \"foreignrel: %s\", nodeToString(foreignrel));+And with this change, we ran the postgres_fdw regression (executing sql/postgres_fdw.sql) suite. We observed the below warnings that seem strange.+WARNING: could not dump unrecognized node type: 0+WARNING: could not dump unrecognized node type: 26072088+WARNING: could not dump unrecognized node type: 26438448+WARNING: could not dump unrecognized node type: 368Of course, with multiple runs, we see some random node types listed above. Thanks to my colleague Suraj Kharage for this and for working parallel with me.Does anybody have any idea about these?After debugging one random query from the above-failed case, what we have observed is (we might be wrong, but worth noting here):1. This warning ended up while displaying RelOptInfo->pathlist.2. In create_ordered_paths(), input_rel has two paths, and it loops over both paths to get the best-sorted path.3. First path was unsorted, and thus we add a sort node on top of it, and adds that to the ordered_rel.4. 
However, 2nd path was already sorted and passed as is to the add_path().5. add_path() decides to reject this new path on some metrics. However, in the end, it pfree() this passed in path. It seems wrong as its references do present elsewhere. For example, in the first path's parent rels path list.6. So, while displaying the parent's path, we end up with these warnings.I tried to get a fix for this but no luck so far.One approach was to copy the path before passing it to the add_path(). However, there is no easy way to copy a path due to its circular references.To see whether this warning goes or not, I have commented code in add_path() that does pfree() on the new_path. And with that, I don't see any warnings. But removing that code doesn't seem to be the correct fix.Suggestions?Thanks-- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Tue, 11 Jul 2023 11:01:51 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On 2023-Jul-11, Jeevan Chalke wrote:\n\n> 4. However, 2nd path was already sorted and passed as is to the add_path().\n> 5. add_path() decides to reject this new path on some metrics. However, in\n> the end, it pfree() this passed in path. It seems wrong as its references\n> do present elsewhere. For example, in the first path's parent rels path\n> list.\n> 6. So, while displaying the parent's path, we end up with these warnings.\n\nIn other words, this is use-after-free, with add_path freeing the\npassed-in Path pointer, but one particular case in which this Path is\nstill used afterwards.\n\n> I tried to get a fix for this but no luck so far.\n\nI proposed to add an add_path_extended() function that adds 'bool\nfree_input_path' argument, and pass it false in that one place in\ncreate_ordered_paths.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:49:11 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 1:19 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Jul-11, Jeevan Chalke wrote:\n>\n> > 4. However, 2nd path was already sorted and passed as is to the\n> add_path().\n> > 5. add_path() decides to reject this new path on some metrics. However,\n> in\n> > the end, it pfree() this passed in path. It seems wrong as its references\n> > do present elsewhere. For example, in the first path's parent rels path\n> > list.\n> > 6. So, while displaying the parent's path, we end up with these warnings.\n>\n> In other words, this is use-after-free, with add_path freeing the\n> passed-in Path pointer, but one particular case in which this Path is\n> still used afterwards.\n>\n> > I tried to get a fix for this but no luck so far.\n>\n> I proposed to add an add_path_extended() function that adds 'bool\n> free_input_path' argument, and pass it false in that one place in\n> create_ordered_paths.\n>\n\nYeah, this can be a way.\n\nHowever, I am thinking the other way around now. What if we first added the\nunmodified input path as it is to the ordered_rel first?\n\nIf we do so, then while adding the next path, add_path() may decide to\nremove the older one as the newer path is the best one. The remove_old\nlogic in add_path() will free the path (unsorted one), and we end up with\nthe same error.\n\nAnd if we conditionally remove that path (remove_old logic one), then we\nneed to pass false in every add_path() call in create_ordered_paths().\n\nAm I missing something?\n\nThanks\n\n\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n>\n\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nOn Tue, Jul 11, 2023 at 1:19 PM Alvaro Herrera <[email protected]> wrote:On 2023-Jul-11, Jeevan Chalke wrote:\n\n> 4. However, 2nd path was already sorted and passed as is to the add_path().\n> 5. add_path() decides to reject this new path on some metrics. However, in\n> the end, it pfree() this passed in path. It seems wrong as its references\n> do present elsewhere. For example, in the first path's parent rels path\n> list.\n> 6. So, while displaying the parent's path, we end up with these warnings.\n\nIn other words, this is use-after-free, with add_path freeing the\npassed-in Path pointer, but one particular case in which this Path is\nstill used afterwards.\n\n> I tried to get a fix for this but no luck so far.\n\nI proposed to add an add_path_extended() function that adds 'bool\nfree_input_path' argument, and pass it false in that one place in\ncreate_ordered_paths.Yeah, this can be a way.However, I am thinking the other way around now. What if we first added the unmodified input path as it is to the ordered_rel first?If we do so, then while adding the next path, add_path() may decide to remove the older one as the newer path is the best one. The remove_old logic in add_path() will free the path (unsorted one), and we end up with the same error.And if we conditionally remove that path (remove_old logic one), then we need to pass false in every add_path() call in create_ordered_paths().Am I missing something?Thanks \n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n-- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Tue, 11 Jul 2023 14:58:51 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 2:58 PM Jeevan Chalke <\[email protected]> wrote:\n\n>\n>\n> On Tue, Jul 11, 2023 at 1:19 PM Alvaro Herrera <[email protected]>\n> wrote:\n>\n>> On 2023-Jul-11, Jeevan Chalke wrote:\n>>\n>> > 4. However, 2nd path was already sorted and passed as is to the\n>> add_path().\n>> > 5. add_path() decides to reject this new path on some metrics. However,\n>> in\n>> > the end, it pfree() this passed in path. It seems wrong as its\n>> references\n>> > do present elsewhere. For example, in the first path's parent rels path\n>> > list.\n>> > 6. So, while displaying the parent's path, we end up with these\n>> warnings.\n>>\n>> In other words, this is use-after-free, with add_path freeing the\n>> passed-in Path pointer, but one particular case in which this Path is\n>> still used afterwards.\n>>\n>> > I tried to get a fix for this but no luck so far.\n>>\n>> I proposed to add an add_path_extended() function that adds 'bool\n>> free_input_path' argument, and pass it false in that one place in\n>> create_ordered_paths.\n>>\n>\n> Yeah, this can be a way.\n>\n> However, I am thinking the other way around now. What if we first added\n> the unmodified input path as it is to the ordered_rel first?\n>\n> If we do so, then while adding the next path, add_path() may decide to\n> remove the older one as the newer path is the best one. The remove_old\n> logic in add_path() will free the path (unsorted one), and we end up with\n> the same error.\n>\n> And if we conditionally remove that path (remove_old logic one), then we\n> need to pass false in every add_path() call in create_ordered_paths().\n>\n\nAttached patch.\n\n\n>\n> Am I missing something?\n>\n> Thanks\n>\n>\n>>\n>> --\n>> Álvaro Herrera 48°01'N 7°57'E —\n>> https://www.EnterpriseDB.com/\n>>\n>\n>\n> --\n> Jeevan Chalke\n>\n> *Senior Staff SDE, Database Architect, and ManagerProduct Development*\n>\n>\n>\n> edbpostgres.com\n>\n\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com",
"msg_date": "Tue, 11 Jul 2023 15:18:22 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "Jeevan Chalke <[email protected]> writes:\n> Attached patch.\n\nI would be astonished if this fixes anything. The code still doesn't\nknow which paths are referenced by which other ones, and so the place\nwhere we free a previously-added path can't know what to do.\n\nI've speculated about adding some form of reference counting to paths\n(maybe just a \"pin\" flag rather than a full refcount) so that we could\nbe smarter about this. The existing kluge for \"don't free IndexPaths\"\ncould be replaced by setting the pin mark on any IndexPath that we\nmake a bitmap path from. Up to now it hasn't seemed necessary to\ngeneralize that hack, but maybe it's time. Can you show a concrete\ncase where we are freeing a still-referenced path?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 07:00:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "Hi Tom,\n\nOn Tue, Jul 11, 2023 at 4:30 PM Tom Lane <[email protected]> wrote:\n\n> Jeevan Chalke <[email protected]> writes:\n> > Attached patch.\n>\n> I would be astonished if this fixes anything. The code still doesn't\n> know which paths are referenced by which other ones, and so the place\n> where we free a previously-added path can't know what to do.\n>\n> I've speculated about adding some form of reference counting to paths\n> (maybe just a \"pin\" flag rather than a full refcount) so that we could\n> be smarter about this. The existing kluge for \"don't free IndexPaths\"\n> could be replaced by setting the pin mark on any IndexPath that we\n> make a bitmap path from. Up to now it hasn't seemed necessary to\n> generalize that hack, but maybe it's time. Can you show a concrete\n> case where we are freeing a still-referenced path?\n>\n\nAs mentioned earlier, while debugging some issues, we have put an elog\ndisplaying the foreignrel contents using nodeToString(). Like below:\n\n@@ -1238,6 +1238,8 @@ postgresGetForeignPlan(PlannerInfo *root,\n bool has_limit = false;\n ListCell *lc;\n\n+ elog(INFO, \"foreignrel: %s\", nodeToString(foreignrel));\n+\n\nAnd ran the postgres_fdw regression and observed many warnings saying \"could\nnot dump unrecognized node type\". Here are the queries retrieved and\nadjusted from postgres_fdw.sql\n\nCREATE EXTENSION postgres_fdw;\nCREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname\n'postgres', port '5432');\nCREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\nCREATE TABLE t1 (c1 int NOT NULL, c2 int NOT NULL, CONSTRAINT t1_pkey\nPRIMARY KEY (c1));\nINSERT INTO t1 SELECT id, id % 10 FROM generate_series(1, 1000) id;\nANALYZE t1;\nCREATE FOREIGN TABLE ft2 (c1 int NOT NULL, c2 int NOT NULL) SERVER loopback\nOPTIONS (schema_name 'public', table_name 't1');\n\nexplain (verbose, costs off)\nselect c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) <\n49800 order by c2;\n\nWith the above elog() in place, we can see the warning. And the pathlist\nhas a second path as empty ({}). Which got freed but has a reference in\nthis foreignrel.\n\nThanks\n\n\n>\n> regards, tom lane\n>\n\n\n-- \nJeevan Chalke\n\n*Senior Staff SDE, Database Architect, and ManagerProduct Development*\n\n\n\nedbpostgres.com\n\nHi Tom,On Tue, Jul 11, 2023 at 4:30 PM Tom Lane <[email protected]> wrote:Jeevan Chalke <[email protected]> writes:\n> Attached patch.\n\nI would be astonished if this fixes anything. The code still doesn't\nknow which paths are referenced by which other ones, and so the place\nwhere we free a previously-added path can't know what to do.\n\nI've speculated about adding some form of reference counting to paths\n(maybe just a \"pin\" flag rather than a full refcount) so that we could\nbe smarter about this. The existing kluge for \"don't free IndexPaths\"\ncould be replaced by setting the pin mark on any IndexPath that we\nmake a bitmap path from. Up to now it hasn't seemed necessary to\ngeneralize that hack, but maybe it's time. Can you show a concrete\ncase where we are freeing a still-referenced path?As mentioned earlier, while debugging some issues, we have put an elog displaying the foreignrel contents using nodeToString(). Like below:@@ -1238,6 +1238,8 @@ postgresGetForeignPlan(PlannerInfo *root, bool has_limit = false; ListCell *lc; + elog(INFO, \"foreignrel: %s\", nodeToString(foreignrel));+And ran the postgres_fdw regression and observed many warnings saying \"could not dump unrecognized node type\". 
Here are the queries retrieved and adjusted from postgres_fdw.sqlCREATE EXTENSION postgres_fdw;CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres', port '5432');CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;CREATE TABLE t1 (c1 int NOT NULL, c2 int NOT NULL, CONSTRAINT t1_pkey PRIMARY KEY (c1));INSERT INTO t1 SELECT id, id % 10 FROM generate_series(1, 1000) id;ANALYZE t1;CREATE FOREIGN TABLE ft2 (c1 int NOT NULL, c2 int NOT NULL) SERVER loopback OPTIONS (schema_name 'public', table_name 't1');explain (verbose, costs off)select c2, sum(c1) from ft2 group by c2 having avg(c1) < 500 and sum(c1) < 49800 order by c2;With the above elog() in place, we can see the warning. And the pathlist has a second path as empty ({}). Which got freed but has a reference in this foreignrel.Thanks \n\n regards, tom lane\n-- Jeevan ChalkeSenior Staff SDE, Database Architect, and ManagerProduct Developmentedbpostgres.com",
"msg_date": "Tue, 11 Jul 2023 17:30:55 +0530",
"msg_from": "Jeevan Chalke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "So what's going on here is that create_ordered_paths() does this:\n\n foreach(lc, input_rel->pathlist)\n {\n Path *input_path = (Path *) lfirst(lc);\n\n if (/* input path is suitably sorted already */)\n sorted_path = input_path;\n else\n /* build a sorted path atop input_path */\n\n /* Add projection step if needed */\n if (sorted_path->pathtarget != target)\n sorted_path = apply_projection_to_path(root, ordered_rel,\n sorted_path, target);\n\n add_path(ordered_rel, sorted_path);\n }\n\nThus, if the input RelOptInfo has a path that already has the correct\nordering and output target, we'll try to add that path directly to\nthe output RelOptInfo. This is cheating in at least two ways:\n\n1. The path's parent link isn't right: it still points to the input rel.\n\n2. As per Jeevan's report, we now potentially have two different links\nto the path. add_path could reject and free the path immediately,\nor it could do so later while comparing it to some path offered later\nfor the output RelOptInfo, and either case leads to a dangling pointer\nin the input RelOptInfo's pathlist.\n\nNow, the reason we get away with #2 is that nothing looks at the lower\nRelOptInfo's pathlist anymore after create_ordered_paths: we will only\nbe interested in paths that contribute to a surviving Path in the\noutput RelOptInfo, and those will be linked directly from the upper\nPath. However, that's clearly kind of fragile, plus it's a bit\nsurprising that nobody has complained about #1.\n\nWe could probably fix this by creating a rule that you *must*\nwrap a Path for a lower RelOptInfo into some sort of wrapper\nPath before offering it as a candidate for an upper RelOptInfo.\n(This could be cross-checked by having add_path Assert that\nnew_path->parent == parent_rel. The wrapper could be a do-nothing\nProjectionPath, perhaps.) But I think there are multiple places\ntaking similar shortcuts, so I'm a bit worried about how much overhead\nwe'll add for what seems likely to be only a debugging annoyance.\n\nA low-cost fix perhaps could be to unlink the lower rel's whole\npath list (set input_rel->pathlist = NIL, also zero the related\nfields such as cheapest_path) once we've finished selecting the\npaths we want for the upper rel. That's not great for debuggability\neither, but maybe it's the most appropriate compromise.\n\nI don't recall how clearly I understood this while writing the\nupper-planner-pathification patch years ago. I think I did\nrealize the code was cheating, but if so I failed to document\nit, so far as I can see.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 16:45:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 08:46, Tom Lane <[email protected]> wrote:\n> A low-cost fix perhaps could be to unlink the lower rel's whole\n> path list (set input_rel->pathlist = NIL, also zero the related\n> fields such as cheapest_path) once we've finished selecting the\n> paths we want for the upper rel. That's not great for debuggability\n> either, but maybe it's the most appropriate compromise.\n\nI've not taken the time to fully understand this, but from reading the\nthread, I'm not immediately understanding why we can't just shallow\ncopy the Path from the other RelOptInfo and replace the parent before\nusing it in the upper RelOptInfo. Can you explain?\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Jul 2023 12:16:18 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I've not taken the time to fully understand this, but from reading the\n> thread, I'm not immediately understanding why we can't just shallow\n> copy the Path from the other RelOptInfo and replace the parent before\n> using it in the upper RelOptInfo. Can you explain?\n\nI did think about that, but \"shallow copy a Path\" seems nontrivial\nbecause the Path structs are all different sizes. Maybe it is worth\nbuilding some infrastructure to support that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 22:23:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 14:23, Tom Lane <[email protected]> wrote:\n> I did think about that, but \"shallow copy a Path\" seems nontrivial\n> because the Path structs are all different sizes. Maybe it is worth\n> building some infrastructure to support that?\n\nIt seems a reasonable thing to have to do. It seems the minimum thing\nwe could do to ensure each Path is only mentioned in at most 1\nRelOptInfo.\n\nI see GetExistingLocalJoinPath() in foreign.c might be related to this\nproblem, per:\n\n> * If the inner or outer subpath of the chosen path is a ForeignScan, we\n> * replace it with its outer subpath. For this reason, and also because the\n> * planner might free the original path later, the path returned by this\n> * function is a shallow copy of the original. There's no need to copy\n> * the substructure, so we don't.\n\nso that function could probably disappear if we had this.\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Jul 2023 14:50:15 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 14:50, David Rowley <[email protected]> wrote:\n>\n> On Wed, 12 Jul 2023 at 14:23, Tom Lane <[email protected]> wrote:\n> > I did think about that, but \"shallow copy a Path\" seems nontrivial\n> > because the Path structs are all different sizes. Maybe it is worth\n> > building some infrastructure to support that?\n>\n> It seems a reasonable thing to have to do. It seems the minimum thing\n> we could do to ensure each Path is only mentioned in at most 1\n> RelOptInfo.\n\nI've attached a draft patch which adds copyObjectFlat() and supports\nall Node types asides from the ones mentioned in @extra_tags in\ngen_node_support.pl. This did require including all the node header\nfiles in copyfuncs.c, which that file seems to have avoided until now.\n\nI also didn't do anything about ExtensibleNode types. I assume just\ncopying the ExtensibleNode isn't good enough. To flat copy the actual\nnode I think would require adding a new function to\nExtensibleNodeMethods.\n\nI was also unsure what we should do when shallow copying a List. The\nproblem there is if we just do a shallow copy, a repalloc() on the\nelements array would end up pfreeing memory that might be used by a\nshallow copied clone. Perhaps List is not unique in that regard?\nMaybe the solution there is to add a special case and list_copy()\nLists like what is done in copyObjectImpl().\n\nI'm hoping the attached patch will at least assist in moving the\ndiscussion along.\n\nDavid",
"msg_date": "Mon, 17 Jul 2023 15:18:30 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 12 Jul 2023 at 14:50, David Rowley <[email protected]> wrote:\n>> On Wed, 12 Jul 2023 at 14:23, Tom Lane <[email protected]> wrote:\n>>> I did think about that, but \"shallow copy a Path\" seems nontrivial\n>>> because the Path structs are all different sizes. Maybe it is worth\n>>> building some infrastructure to support that?\n\n>> It seems a reasonable thing to have to do. It seems the minimum thing\n>> we could do to ensure each Path is only mentioned in at most 1\n>> RelOptInfo.\n\n> ...\n> I also didn't do anything about ExtensibleNode types. I assume just\n> copying the ExtensibleNode isn't good enough. To flat copy the actual\n> node I think would require adding a new function to\n> ExtensibleNodeMethods.\n\nYeah, the problem I've got with this approach is that flat-copying\nFDW and Custom paths would require extending the respective APIs.\nWhile that's a perfectly reasonable ask if we only need to do this\nin HEAD, it would be a nonstarter for released branches. Is it\nokay to only fix this issue in HEAD?\n\n> I was also unsure what we should do when shallow copying a List.\n\nThe proposal is to shallow-copy a Path node. List is not a kind\nof Path, so how does List get into it? (Lists below Paths would\nnot get copied, by definition.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jul 2023 23:31:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Mon, 17 Jul 2023 at 15:31, Tom Lane <[email protected]> wrote:\n> > I also didn't do anything about ExtensibleNode types. I assume just\n> > copying the ExtensibleNode isn't good enough. To flat copy the actual\n> > node I think would require adding a new function to\n> > ExtensibleNodeMethods.\n>\n> Yeah, the problem I've got with this approach is that flat-copying\n> FDW and Custom paths would require extending the respective APIs.\n> While that's a perfectly reasonable ask if we only need to do this\n> in HEAD, it would be a nonstarter for released branches. Is it\n> okay to only fix this issue in HEAD?\n\nCustomPaths, I didn't think about those. That certainly makes it more\ncomplex. I also now see the header comment for struct CustomPath\nmentioning that we don't copy Paths:\n\n * Core code must avoid assuming that the CustomPath is only as large as\n * the structure declared here; providers are allowed to make it the first\n * element in a larger structure. (Since the planner never copies Paths,\n * this doesn't add any complication.) However, for consistency with the\n * FDW case, we provide a \"custom_private\" field in CustomPath; providers\n * may prefer to use that rather than define another struct type.\n\nAre there any legitimate reasons to look at the input_rel's pathlist\nagain aside from debugging? I can't think of any. Perhaps back\nbranches can be fixed by just emptying the path lists and NULLifying\nthe cheapest paths as you mentioned last week.\n\n> > I was also unsure what we should do when shallow copying a List.\n>\n> The proposal is to shallow-copy a Path node. List is not a kind\n> of Path, so how does List get into it? (Lists below Paths would\n> not get copied, by definition.)\n\nThe patch contained infrastructure to copy any Node type. Not just\nPaths. Perhaps that's more than what's needed, but it seemed more\neffort to limit it just to Path types than to make it \"work\" for all\nNode types.\n\nDavid.\n\n\n",
"msg_date": "Tue, 18 Jul 2023 11:34:44 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 5:05 AM David Rowley <[email protected]> wrote:\n>\n> On Mon, 17 Jul 2023 at 15:31, Tom Lane <[email protected]> wrote:\n> > > I also didn't do anything about ExtensibleNode types. I assume just\n> > > copying the ExtensibleNode isn't good enough. To flat copy the actual\n> > > node I think would require adding a new function to\n> > > ExtensibleNodeMethods.\n> >\n> > Yeah, the problem I've got with this approach is that flat-copying\n> > FDW and Custom paths would require extending the respective APIs.\n> > While that's a perfectly reasonable ask if we only need to do this\n> > in HEAD, it would be a nonstarter for released branches. Is it\n> > okay to only fix this issue in HEAD?\n>\n> CustomPaths, I didn't think about those. That certainly makes it more\n> complex. I also now see the header comment for struct CustomPath\n> mentioning that we don't copy Paths:\n\nSomewhere upthread Tom suggested using a dummy projection path. Add a\nprojection path on top of input path and add the projection path to\noutput rel's list. That will work right?\n\nThere's some shallow copying code in reparameterize_path_by_childrel()\nbut that's very specific to the purpose there and doesn't consider\nCustom or Foreign paths.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:09:30 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 4:30 PM Tom Lane <[email protected]> wrote:\n>\n> Jeevan Chalke <[email protected]> writes:\n> > Attached patch.\n>\n> I would be astonished if this fixes anything. The code still doesn't\n> know which paths are referenced by which other ones, and so the place\n> where we free a previously-added path can't know what to do.\n>\n> I've speculated about adding some form of reference counting to paths\n> (maybe just a \"pin\" flag rather than a full refcount) so that we could\n> be smarter about this. The existing kluge for \"don't free IndexPaths\"\n> could be replaced by setting the pin mark on any IndexPath that we\n> make a bitmap path from. Up to now it hasn't seemed necessary to\n> generalize that hack, but maybe it's time. Can you show a concrete\n> case where we are freeing a still-referenced path?\n\nSet of patches in [1] add infrastructure to reference, link and unlink\npaths.The patches are raw and have some TODOs there. But I think that\ninfrastructure will solve this problem as a side effect. Please take a\nlook and let me know if this is as per your speculation. It's more\nthan just pinning though.\n\nThe patch set uses references to free memory consumed by paths which\nremain unused. The memory consumed is substantial when partitionwise\njoin is used and there are thousands of partitions.\n\n[1] https://www.postgresql.org/message-id/CAExHW5tUcVsBkq9qT%3DL5vYz4e-cwQNw%3DKAGJrtSyzOp3F%3DXacA%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 28 Jul 2023 12:12:42 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unrecognized node type while displaying a Path due to dangling\n pointer"
}
] |
[
{
"msg_contents": "Hi community,\r\nwhen I learn the source of PostgreSQL, I think it's better to add a tip to the postgres \"check mode\", this can help the postgres's user when they check the postgres's data directory.\r\n\r\n\r\n\r\nsrc/backend/bootstrap/bootstrap.c\r\n\r\n\r\n\r\nif (check_only)\r\n {\r\n SetProcessingMode(NormalProcessing);\r\n CheckerModeMain();\r\n abort();\r\n }\r\n\r\n\r\nInstead of\r\n\r\n\r\nif (check_only)\r\n {\r\n SetProcessingMode(NormalProcessing);\r\n CheckerModeMain();\r\n printf(\"PostgreSQL check success, there's no problem\\n\");\r\n\r\n abort();\r\n }\r\n\r\n\r\nYours,\r\nWen Yi",
"msg_date": "Tue, 11 Jul 2023 15:45:47 +0800",
"msg_from": "\"=?ISO-8859-1?B?V2VuIFlp?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH]Add a tip to the check mode"
},
{
"msg_contents": "On Tue, 11 Jul 2023 at 15:11, Wen Yi <[email protected]> wrote:\n>\n> Hi community,\n> when I learn the source of PostgreSQL, I think it's better to add a tip to the postgres \"check mode\", this can help the postgres's user when they check the postgres's data directory.\n>\n> src/backend/bootstrap/bootstrap.c\n>\n> if (check_only)\n> {\n> SetProcessingMode(NormalProcessing);\n> CheckerModeMain();\n> abort();\n> }\n>\n> Instead of\n>\n> if (check_only)\n> {\n> SetProcessingMode(NormalProcessing);\n> CheckerModeMain();\n> printf(\"PostgreSQL check success, there's no problem\\n\");\n> abort();\n> }\n\nI'm afraid I don't understand the point of your suggestion.\nCheckerModeMain doesn't return (it unconditionally calls proc_exit(),\nwhich doesn't return) - it shouldn't hit the abort() clause. If it did\nhit the abort() clause, that'd probably be a problem on its own,\nright?\n\n-- \nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:44:39 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Add a tip to the check mode"
},
{
"msg_contents": "I'm so sorry for my careless, you're right.\r\nBut I still think there should add a tip to our user when there's check ok, because when I use the check mode, it didn't give me any message (If there's no error happend) and just exit, like this:\r\n\r\n\r\n[beginnerc@bogon devel]$ postgres --check -D /home/beginnerc/pgsql/data\r\n[beginnerc@bogon devel]$ \r\n\r\n[beginnerc@bogon devel]$ echo $?\r\n0\r\n\r\n\r\nThat's confused me, until I print the return value.\r\nSo I think we should add this tip.\r\n\r\n\r\nI fix and recommit the patch, thanks very much for your reply.\r\n\r\n\r\nYours,\r\nWen Yi\r\n\r\n\r\n\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Matthias van de Meent\" <[email protected]>;\r\nDate: Tue, Jul 11, 2023 09:44 PM\r\nTo: \"Wen Yi\"<[email protected]>;\r\nCc: \"pgsql-hackers\"<[email protected]>;\r\nSubject: Re: [PATCH]Add a tip to the check mode\r\n\r\n\r\n\r\nOn Tue, 11 Jul 2023 at 15:11, Wen Yi <[email protected]> wrote:\r\n>\r\n> Hi community,\r\n> when I learn the source of PostgreSQL, I think it's better to add a tip to the postgres \"check mode\", this can help the postgres's user when they check the postgres's data directory.\r\n>\r\n> src/backend/bootstrap/bootstrap.c\r\n>\r\n> if (check_only)\r\n> {\r\n> SetProcessingMode(NormalProcessing);\r\n> CheckerModeMain();\r\n> abort();\r\n> }\r\n>\r\n> Instead of\r\n>\r\n> if (check_only)\r\n> {\r\n> SetProcessingMode(NormalProcessing);\r\n> CheckerModeMain();\r\n> printf(\"PostgreSQL check success, there's no problem\\n\");\r\n> abort();\r\n> }\r\n\r\nI'm afraid I don't understand the point of your suggestion.\r\nCheckerModeMain doesn't return (it unconditionally calls proc_exit(),\r\nwhich doesn't return) - it shouldn't hit the abort() clause. If it did\r\nhit the abort() clause, that'd probably be a problem on its own,\r\nright?\r\n\r\n-- \r\nKind regards,\r\n\r\nMatthias van de Meent\r\nNeon (https://neon.tech)",
"msg_date": "Wed, 12 Jul 2023 15:02:21 +0800",
"msg_from": "\"=?ISO-8859-1?B?V2VuIFlp?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH]Add a tip to the check mode"
}
] |
[
{
"msg_contents": "Is $subject possible?\n\n I feel like maybe the answer is no, but then I can also see some backend\ncode for similar things in copy.h.\n\nPerhaps it’s possible via a function call not sending the SQL?\n\n- James\n\nIs $subject possible? I feel like maybe the answer is no, but then I can also see some backend code for similar things in copy.h.Perhaps it’s possible via a function call not sending the SQL?- James",
"msg_date": "Tue, 11 Jul 2023 22:25:23 +1200",
"msg_from": "James Sewell <[email protected]>",
"msg_from_op": true,
"msg_subject": "COPY table FROM STDIN via SPI"
},
{
"msg_contents": "James Sewell <[email protected]> writes:\n> Is $subject possible?\n\nNo. It'd be a wire protocol break, and even if it weren't I would not\nexpect many clients to be able to deal with it. They're in the middle\nof a query cycle (for the SELECT or CALL that got you into SPI), and\nsuddenly the backend asks for COPY data? What are they supposed to\nsend, or where are they supposed to put it for the COPY-out case?\nThere's just not provision for nesting protocol operations like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 06:46:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY table FROM STDIN via SPI"
},
{
"msg_contents": ">\n> No. It'd be a wire protocol break, and even if it weren't I would not\n> expect many clients to be able to deal with it. They're in the middle\n> of a query cycle (for the SELECT or CALL that got you into SPI), and\n> suddenly the backend asks for COPY data? What are they supposed to\n> send, or where are they supposed to put it for the COPY-out case?\n> There's just not provision for nesting protocol operations like that.\n>\n\nWhat about running a COPY directly from C - is that possible?\n\n\nNo. It'd be a wire protocol break, and even if it weren't I would not\nexpect many clients to be able to deal with it. They're in the middle\nof a query cycle (for the SELECT or CALL that got you into SPI), and\nsuddenly the backend asks for COPY data? What are they supposed to\nsend, or where are they supposed to put it for the COPY-out case?\nThere's just not provision for nesting protocol operations like that.What about running a COPY directly from C - is that possible?",
"msg_date": "Wed, 12 Jul 2023 14:52:37 +1200",
"msg_from": "James Sewell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY table FROM STDIN via SPI"
},
{
"msg_contents": "On 7/11/23 22:52, James Sewell wrote:\n> \n> No. It'd be a wire protocol break, and even if it weren't I would not\n> expect many clients to be able to deal with it. They're in the middle\n> of a query cycle (for the SELECT or CALL that got you into SPI), and\n> suddenly the backend asks for COPY data? What are they supposed to\n> send, or where are they supposed to put it for the COPY-out case?\n> There's just not provision for nesting protocol operations like that.\n> \n> \n> What about running a COPY directly from C - is that possible?\n\n\nhttps://www.postgresql.org/docs/current/libpq-copy.html\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 14:18:41 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY table FROM STDIN via SPI"
},
{
"msg_contents": "On 2023-07-12 14:18, Joe Conway wrote:\n> On 7/11/23 22:52, James Sewell wrote:\n>> What about running a COPY directly from C - is that possible?\n> \n> https://www.postgresql.org/docs/current/libpq-copy.html\n\nOr is the question about a COPY kicked off from server-side\nC code (following up a question about SPI)?\n\nIf the idea is to kick off a COPY that reads from the connected\nclient's STDIN, the wire protocol doesn't really have a way to\nwork that out with the client, as Tom pointed out.\n\nOr is the goal for some server-side code to quickly populate\na table from some file that's readable on the server and has\nthe same format that COPY FROM expects?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 12 Jul 2023 14:43:21 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: COPY table FROM STDIN via SPI"
},
{
"msg_contents": "On 7/12/23 14:43, [email protected] wrote:\n> On 2023-07-12 14:18, Joe Conway wrote:\n>> On 7/11/23 22:52, James Sewell wrote:\n>>> What about running a COPY directly from C - is that possible?\n>> \n>> https://www.postgresql.org/docs/current/libpq-copy.html\n> \n> Or is the question about a COPY kicked off from server-side\n> C code (following up a question about SPI)?\n> \n> If the idea is to kick off a COPY that reads from the connected\n> client's STDIN, the wire protocol doesn't really have a way to\n> work that out with the client, as Tom pointed out.\n> \n> Or is the goal for some server-side code to quickly populate\n> a table from some file that's readable on the server and has\n> the same format that COPY FROM expects?\n\n\nYou can still use this in a server-side extension in the same way that \ndblink works. Perhaps ugly, but I have used it in the past and it worked \n*really* well for us.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:09:24 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY table FROM STDIN via SPI"
}
] |
[
{
"msg_contents": "This has been a long-standing annoyance of mine. Who hasn't done something\nlike this?:\n\npsql> SET random_page_cost = 2.5;\n(do some stuff, realize that rpc was too high)\n\nLet's put that inside of postgresql.conf:\n\n#------------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#------------------------------------------------------------------------------\n\n# Add settings for extensions here\n\nrandom_page_cost = 2.5;\n\n\nBoom! Server will not start. Surely, we can be a little more liberal in\nwhat we accept? Attached patch allows a single trailing semicolon to be\nsilently discarded. As this parsing happens before the logging collector\nstarts up, the error about the semicolon is often buried somewhere in a\nseparate logfile or journald - so let's just allow postgres to start up\nsince there is no ambiguity about what random_page_cost (or any other GUC)\nis meant to be set to.\n\nI also considered doing an additional ereport(LOG) when we find one, but\nseemed better on reflection to simply ignore it.\n\nCheers,\nGreg",
"msg_date": "Tue, 11 Jul 2023 10:42:19 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forgive trailing semicolons inside of config files"
},
{
"msg_contents": "On Tue, 11 Jul 2023 at 10:43, Greg Sabino Mullane <[email protected]>\nwrote:\n\n> This has been a long-standing annoyance of mine. Who hasn't done something\n> like this?:\n>\n> psql> SET random_page_cost = 2.5;\n> (do some stuff, realize that rpc was too high)\n>\n> Let's put that inside of postgresql.conf:\n>\n>\n> #------------------------------------------------------------------------------\n> # CUSTOMIZED OPTIONS\n>\n> #------------------------------------------------------------------------------\n>\n> # Add settings for extensions here\n>\n> random_page_cost = 2.5;\n>\n>\n> Boom! Server will not start. Surely, we can be a little more liberal in\n> what we accept? Attached patch allows a single trailing semicolon to be\n> silently discarded. As this parsing happens before the logging collector\n> starts up, the error about the semicolon is often buried somewhere in a\n> separate logfile or journald - so let's just allow postgres to start up\n> since there is no ambiguity about what random_page_cost (or any other GUC)\n> is meant to be set to.\n>\n\nPlease, no!\n\nThere is no end to accepting sloppy syntax. What next, allow \"SET\nrandom_page_cost = 2.5;\" (with or without semicolon) in config files?\n\nI'd be more interested in improvements in visibility of errors. For\nexample, maybe if I try to start the server and there is a config file\nproblem, I could somehow get a straightforward error message right in the\nterminal window complaining about the line of the configuration which is\nwrong.\n\nOr maybe there could be a \"check configuration\" subcommand which checks the\nconfiguration. If it's fine, say so and set a flag saying the server is\nclear to be started/restarted. If not, give useful error messages and don't\nset the flag. Then make the start/restart commands only do their thing if\nthe \"config OK\" flag is set. Make sure that editing the configuration\nclears the flag (or have 2 copies of the configuration, copied over by the\n\"check\" subcommand: one for editing, one for running with).\n\nThis might properly belong outside of Postgres itself, I don't know. But I\nthink it would be way more useful than a potentially never-ending series of\npatches to liberalize the config parser.\n\nOn Tue, 11 Jul 2023 at 10:43, Greg Sabino Mullane <[email protected]> wrote:This has been a long-standing annoyance of mine. Who hasn't done something like this?:psql> SET random_page_cost = 2.5;(do some stuff, realize that rpc was too high)Let's put that inside of postgresql.conf:#------------------------------------------------------------------------------# CUSTOMIZED OPTIONS#------------------------------------------------------------------------------# Add settings for extensions here random_page_cost = 2.5;Boom! Server will not start. Surely, we can be a little more liberal in what we accept? Attached patch allows a single trailing semicolon to be silently discarded. As this parsing happens before the logging collector starts up, the error about the semicolon is often buried somewhere in a separate logfile or journald - so let's just allow postgres to start up since there is no ambiguity about what random_page_cost (or any other GUC) is meant to be set to.Please, no!There is no end to accepting sloppy syntax. What next, allow \"SET random_page_cost = 2.5;\" (with or without semicolon) in config files?I'd be more interested in improvements in visibility of errors. 
For example, maybe if I try to start the server and there is a config file problem, I could somehow get a straightforward error message right in the terminal window complaining about the line of the configuration which is wrong.Or maybe there could be a \"check configuration\" subcommand which checks the configuration. If it's fine, say so and set a flag saying the server is clear to be started/restarted. If not, give useful error messages and don't set the flag. Then make the start/restart commands only do their thing if the \"config OK\" flag is set. Make sure that editing the configuration clears the flag (or have 2 copies of the configuration, copied over by the \"check\" subcommand: one for editing, one for running with).This might properly belong outside of Postgres itself, I don't know. But I think it would be way more useful than a potentially never-ending series of patches to liberalize the config parser.",
"msg_date": "Tue, 11 Jul 2023 11:04:44 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forgive trailing semicolons inside of config files"
},
{
"msg_contents": "Isaac Morland <[email protected]> writes:\n> On Tue, 11 Jul 2023 at 10:43, Greg Sabino Mullane <[email protected]>\n>> # Add settings for extensions here\n>> random_page_cost = 2.5;\n>>\n>> Boom! Server will not start. Surely, we can be a little more liberal in\n>> what we accept? Attached patch allows a single trailing semicolon to be\n>> silently discarded.\n\n> Please, no!\n\nI agree. Allowing this would create huge confusion about whether it's\nEOL or semicolon that ends a config file entry. If you can write a\nsemicolon, then why not spread an entry across lines, or write\nmultiple entries on one line?\n\nIt seems possible that someday we might want to convert over to\nsemicolon-is-end-of-entry precisely to allow such cases. But\nI think that if/when we do that, it should be a flag day where you\n*must* change to the new syntax. (We did exactly that in pgbench\nscripts some years ago, and people didn't complain too much.)\n\n> Or maybe there could be a \"check configuration\" subcommand which checks the\n> configuration.\n\nWe have such a thing, see the pg_file_settings view.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 11:34:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forgive trailing semicolons inside of config files"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 11:04 AM Isaac Morland <[email protected]>\nwrote:\n\n> Please, no!\n>\n> There is no end to accepting sloppy syntax. What next, allow \"SET\n> random_page_cost = 2.5;\" (with or without semicolon) in config files?\n>\n\nWell yes, there is an end. A single, trailing semicolon. Full stop. It's\nnot a slippery slope in which we end up asking the AI parser to interpret\nour haikus to derive the actual value. The postgresql.conf file is not some\nfinicky YAML/JSON beast - we already support some looseness in quoting or\nnot quoting values, optional whitespace, etc. Think of the trailing\nsemicolon as whitespace, if you like. You can see from the patch that this\ndoes not replace EOL/EOF.\n\n\n> I'd be more interested in improvements in visibility of errors. For\n> example, maybe if I try to start the server and there is a config file\n> problem, I could somehow get a straightforward error message right in the\n> terminal window complaining about the line of the configuration which is\n> wrong.\n>\n\nThat ship has long since sailed. We already send a detailed error message\nwith the line number, but in today's world of \"service start\", \"systemctl\nstart\", and higher level of control such as Patroni and Kubernetes, getting\nthings to show in a terminal window isn't happening. We can't work around\n2>&1.\n\n\n> Or maybe there could be a \"check configuration\" subcommand which checks\n> the configuration.\n>\n\nThere are things inside of Postgres once it has started, but yeah,\nsomething akin to visudo would be nice for editing config files.\n\n\n> But I think it would be way more useful than a potentially never-ending\n> series of patches to liberalize the config parser.\n>\n\nIt's a single semicolon, not a sign of the parser apocalypse. I've no plans\nfor future enhancements, but if they do no harm and make Postgres more user\nfriendly, I will support them.\n\nCheers,\nGreg\n\nOn Tue, Jul 11, 2023 at 11:04 AM Isaac Morland <[email protected]> wrote:Please, no!There is no end to accepting sloppy syntax. What next, allow \"SET random_page_cost = 2.5;\" (with or without semicolon) in config files?Well yes, there is an end. A single, trailing semicolon. Full stop. It's not a slippery slope in which we end up asking the AI parser to interpret our haikus to derive the actual value. The postgresql.conf file is not some finicky YAML/JSON beast - we already support some looseness in quoting or not quoting values, optional whitespace, etc. Think of the trailing semicolon as whitespace, if you like. You can see from the patch that this does not replace EOL/EOF. I'd be more interested in improvements in visibility of errors. For example, maybe if I try to start the server and there is a config file problem, I could somehow get a straightforward error message right in the terminal window complaining about the line of the configuration which is wrong.That ship has long since sailed. We already send a detailed error message with the line number, but in today's world of \"service start\", \"systemctl start\", and higher level of control such as Patroni and Kubernetes, getting things to show in a terminal window isn't happening. We can't work around 2>&1. Or maybe there could be a \"check configuration\" subcommand which checks the configuration.There are things inside of Postgres once it has started, but yeah, something akin to visudo would be nice for editing config files. 
But I think it would be way more useful than a potentially never-ending series of patches to liberalize the config parser.It's a single semicolon, not a sign of the parser apocalypse. I've no plans for future enhancements, but if they do no harm and make Postgres more user friendly, I will support them.Cheers,Greg",
"msg_date": "Tue, 11 Jul 2023 12:21:46 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forgive trailing semicolons inside of config files"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-11 12:21:46 -0400, Greg Sabino Mullane wrote:\n> On Tue, Jul 11, 2023 at 11:04 AM Isaac Morland <[email protected]>\n> > Or maybe there could be a \"check configuration\" subcommand which checks\n> > the configuration.\n> >\n>\n> There are things inside of Postgres once it has started, but yeah,\n> something akin to visudo would be nice for editing config files.\n\nYou can also do it kind-of-reasonably with the server binary, like:\nPGDATA=/srv/dev/pgdev-dev /path/to/postgres -C server_version; echo $?\n\n\n> > I'd be more interested in improvements in visibility of errors. For\n> > example, maybe if I try to start the server and there is a config file\n> > problem, I could somehow get a straightforward error message right in the\n> > terminal window complaining about the line of the configuration which is\n> > wrong.\n> >\n>\n> That ship has long since sailed. We already send a detailed error message\n> with the line number, but in today's world of \"service start\", \"systemctl\n> start\", and higher level of control such as Patroni and Kubernetes, getting\n> things to show in a terminal window isn't happening. We can't work around\n> 2>&1.\n\nAt least with debian's infrastructure, both systemctl start and reload show\nerrors reasonably well:\n\nstart with broken config:\nJul 11 19:13:40 awork3 systemd[1]: Starting [email protected] - PostgreSQL Cluster 15-test...\nJul 11 19:13:40 awork3 postgresql@15-test[3217452]: Error: invalid line 3 in /var/lib/postgresql/15/test/postgresql.auto.conf: dd\nJul 11 19:13:40 awork3 systemd[1]: [email protected]: Can't open PID file /run/postgresql/15-test.pid (yet?) after start: No such file or directory\n\n\nreload with broken config:\nJul 11 19:10:38 awork3 systemd[1]: Reloading [email protected] - PostgreSQL Cluster 15-test...\nJul 11 19:10:38 awork3 postgresql@15-test[3217175]: Error: invalid line 3 in /var/lib/postgresql/15/test/postgresql.auto.conf: dd\nJul 11 19:10:38 awork3 systemd[1]: [email protected]: Control process exited, code=exited, status=1/FAILURE\nJul 11 19:10:38 awork3 systemd[1]: Reload failed for [email protected] - PostgreSQL Cluster 15-test.\n\nHowever: It looks like that's all implemented in debian specific tooling,\nrather than PG itself. Oops.\n\n\nLooks like we could make this easier in core postgres by adding one more\nsd_notify() call, with something like\nSTATUS=reload failed due to syntax error in file \"/srv/dev/pgdev-dev/postgresql.conf\" line 821, near end of line\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jul 2023 19:28:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forgive trailing semicolons inside of config files"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Looks like we could make this easier in core postgres by adding one more\n> sd_notify() call, with something like\n> STATUS=reload failed due to syntax error in file \"/srv/dev/pgdev-dev/postgresql.conf\" line 821, near end of line\n\nSeems reasonable to investigate. The systemd camel's nose is already\ninside our tent, so we might as well consider more systemd-specific\nhacks to improve the user experience. But it seems like we'd need\nsome careful thought about just which messages to report.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jul 2023 22:37:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forgive trailing semicolons inside of config files"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nWe have been working on the pg_adviser\n<https://github.com/DrPostgres/pg_adviser> extension whose goal is to\nsuggest indexes by creating virtual/hypothetical indexes and see how it\naffects the query cost.\n\nThe hypothetical index shouldn't take any space on the disk (allocates 0\npages) so we give it the flag *INDEX_CREATE_SKIP_BUILD.*\nBut the problem comes from here when the function *get_relation_info *is\ncalled in planning stage, it tries to calculate the B-Tree height by\ncalling function *_bt_getrootheight*, but the B-Tree is not built at all,\nand its metadata page (which is block 0 in our case) doesn't exist, so this\nreturns error that it cannot read the page (since it doesn't exist).\n\nI tried to debug the code and found that this feature was introduced in\nversion 9.3 under this commit [1]. I think that in the code we need to\ncheck if it's a B-Tree index *AND *the index is built/have some pages, then\nwe can go and calculate it otherwise just put it to -1\n\nI mean instead of this\nif (info->relam == BTREE_AM_OID)\n{\n/* For btrees, get tree height while we have the index open */\ninfo->tree_height = _bt_getrootheight(indexRelation);\n}\nelse\n{\n/* For other index types, just set it to \"unknown\" for now */\ninfo->tree_height = -1;\n}\n\nThe first line should be\nif (info->relam == BTREE_AM_OID && info->pages > 0)\nor use the storage manager (smgr) to know if the first block exists.\n\nI would appreciate it if anyone can agree/approve or deny so that I know if\nanything I am missing :)\n\nThanks everyone :)\n\n[1]\nhttps://github.com/postgres/postgres/commit/31f38f28b00cbe2b9267205359e3cf7bafa1cb97\n\nHi everyone,We have been working on the pg_adviser extension whose goal is to suggest indexes by creating virtual/hypothetical indexes and see how it affects the query cost.The hypothetical index shouldn't take any space on the disk (allocates 0 pages) so we give it the flag INDEX_CREATE_SKIP_BUILD.But the problem comes from here when the function get_relation_info is called in planning stage, it tries to calculate the B-Tree height by calling function _bt_getrootheight,\n but the B-Tree is not built at all, and its metadata page (which is \nblock 0 in our case) doesn't exist, so this returns error that it cannot\n read the page (since it doesn't exist).I tried to \ndebug the code and found that this feature was introduced in version 9.3\n under this commit [1]. I think that in the code we need to check if \nit's a B-Tree index AND the index is built/have some pages, then we can go and calculate it otherwise just put it to -1I mean instead of thisif (info->relam == BTREE_AM_OID){ /* For btrees, get tree height while we have the index open */ info->tree_height = _bt_getrootheight(indexRelation);}else{ /* For other index types, just set it to \"unknown\" for now */ info->tree_height = -1;}The first line should beif (info->relam == BTREE_AM_OID && info->pages > 0)or use the storage manager (smgr) to know if the first block exists.I would appreciate it if anyone can agree/approve or deny so that I know if anything I am missing :)Thanks everyone :)[1] https://github.com/postgres/postgres/commit/31f38f28b00cbe2b9267205359e3cf7bafa1cb97",
"msg_date": "Tue, 11 Jul 2023 19:35:14 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue in _bt_getrootheight"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 9:35 AM Ahmed Ibrahim\n<[email protected]> wrote:\n>\n> We have been working on the pg_adviser extension whose goal is to suggest indexes by creating virtual/hypothetical indexes and see how it affects the query cost.\n>\n> The hypothetical index shouldn't take any space on the disk (allocates 0 pages) so we give it the flag INDEX_CREATE_SKIP_BUILD.\n> But the problem comes from here when the function get_relation_info is called in planning stage, it tries to calculate the B-Tree height by calling function _bt_getrootheight, but the B-Tree is not built at all, and its metadata page (which is block 0 in our case) doesn't exist, so this returns error that it cannot read the page (since it doesn't exist).\n>\n> I tried to debug the code and found that this feature was introduced in version 9.3 under this commit [1]. I think that in the code we need to check if it's a B-Tree index AND the index is built/have some pages, then we can go and calculate it otherwise just put it to -1\n\n> I mean instead of this\n> if (info->relam == BTREE_AM_OID)\n> {\n> /* For btrees, get tree height while we have the index open */\n> info->tree_height = _bt_getrootheight(indexRelation);\n> }\n> else\n> {\n> /* For other index types, just set it to \"unknown\" for now */\n> info->tree_height = -1;\n> }\n>\n> The first line should be\n> if (info->relam == BTREE_AM_OID && info->pages > 0)\n> or use the storage manager (smgr) to know if the first block exists.\n\nI think the better method would be to calculate the index height\n*after* get_relation_info_hook is called. That way, instead of the\nserver guessing whether or not an index is hypothetical it can rely on\nthe index adviser's notion of which index is hypothetical. The hook\nimplementer has the opportunity to not only mark the\nindexOptInfo->hypothetical = true, but also calculate the tree height,\nif they can.\n\nPlease see attached the patch that does this. Let me know if this patch helps.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 19 Jul 2023 20:45:41 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in _bt_getrootheight"
},
{
"msg_contents": "Gurjeet Singh <[email protected]> writes:\n> Please see attached the patch that does this. Let me know if this patch helps.\n\nI don't like this patch one bit, because it adds a lot of overhead\n(i.e., an extra index_open/close cycle for every btree index in every\nquery) to support a tiny minority use-case. How come we don't\nalready know whether the index is hypothetical at the point where\n_bt_getrootheight is called now?\n\nActually, looking at the existing comment at the call site:\n\n /*\n * Allow a plugin to editorialize on the info we obtained from the\n * catalogs. Actions might include altering the assumed relation size,\n * removing an index, or adding a hypothetical index to the indexlist.\n */\n if (get_relation_info_hook)\n (*get_relation_info_hook) (root, relationObjectId, inhparent, rel);\n\nreminds me that the design intention was that hypothetical indexes\nwould get added to the list by get_relation_info_hook itself.\nIf that's not how the index adviser is operating, maybe we need\nto have a discussion about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jul 2023 13:41:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in _bt_getrootheight"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 10:42 AM Tom Lane <[email protected]> wrote:\n>\n> Gurjeet Singh <[email protected]> writes:\n> > Please see attached the patch that does this. Let me know if this patch helps.\n>\n> I don't like this patch one bit, because it adds a lot of overhead\n> (i.e., an extra index_open/close cycle for every btree index in every\n> query) to support a tiny minority use-case.\n\nI anticipated the patch's performance impact may be a concern, but\nbefore addressing it I wanted to see if the patch actually helped\nIndex Adviser. Ahmed has confirmed that my proposed patch works for\nhim.\n\nI believe the additional index_open() would not affect the performance\nsignificantly, since the very same indexes were index_open()ed just\nbefore calling the get_relation_info_hook. All the relevant caches\nwould be quite fresh because of the index_open() in the same function\nabove. And since the locks taken on these indexes haven't been\nreleased, we don't have to work hard to take any new locks (hence the\nindex_open() with NoLock flag).\n\n> How come we don't\n> already know whether the index is hypothetical at the point where\n> _bt_getrootheight is called now?\n\nBecause the 'hypthetical' flag is not stored in catalogs, and that's\nokay; see below.\n\nAt that point, the only indication that an index may be a hypothetical\nindex is if RelationGetNumberOfBlocks() returns 0 for it, and that's\nwhat Ahmed's proposed patch relied on. But I think extrapolating that\ninfo->pages==0 implies it's a hypothetical index, is stretching that\nassumption too far.\n\n> Actually, looking at the existing comment at the call site:\n>\n> /*\n> * Allow a plugin to editorialize on the info we obtained from the\n> * catalogs. Actions might include altering the assumed relation size,\n> * removing an index, or adding a hypothetical index to the indexlist.\n> */\n> if (get_relation_info_hook)\n> (*get_relation_info_hook) (root, relationObjectId, inhparent, rel);\n>\n> reminds me that the design intention was that hypothetical indexes\n> would get added to the list by get_relation_info_hook itself.\n> If that's not how the index adviser is operating, maybe we need\n> to have a discussion about that.\n\nHistorically, to avoid having to hand-create the IndexOptInfo and risk\ngetting something wrong, the Index Adviser has used index_create() to\ncreate a full-blown btree index, (sans that actual build step, with\nskip_build = true), and saving the returned OID. This ensured that all\nthe catalog entries were in place before it called the\nstandard_planner(). This way Postgres would build IndexOptInfo from\nthe entries in the catalog, as usual. Then, inside the\nget_relation_info_hook() callback, Index Adviser identifies these\nvirtual indexes by their OID, and at that point marks them with\nhypothetical=true.\n\nAfter planning is complete, the Index Adviser scans the plan to find\nany IndexScan objects that have indexid matching the saved OIDs.\n\nIndex Adviser performs the whole thing in a subtransaction, which gets\nrolled back. So the hypothetical indexes are not visible to any other\ntransaction, ever.\n\nAssigning OID to a hypothetical index is necessary, and I believe\nindex_create() is the right way to do it. 
In fact, in the 9.1 cycle\nthere was a bug fixed (a2095f7fb5, where the hypothetical flag was\nalso invented), to solve precisely this problem; to allow the Index\nAdviser to use OIDs to identify hypothetical indexes that were\nused/chosen by the planner.\n\nBut now I believe this architecture of the Index Adviser needs to\nchange, primarily to alleviate the performance impact of creating\ncatalog entries, subtransaction overhead, and the catalog bloat caused\nby index_create() (and then rolling back the subtransaction). As part\nof this architecture change, the Index Adviser will have to cook up\nIndexOptInfo objects and append them to the relation. And that should\nline up with the design intention you mention.\n\nBut the immediate blocker is how to assign OIDs to the hypothetical\nindexes so that all hypothetical indexes chosen by the planner can be\nidentified by the Index Adviser. I'd like the Index Adviser to work on\nread-only /standby nodes as well, but that wouldn't be possible\nbecause calling GetNewObjectId() is not allowed during recovery. I see\nthat HypoPG uses a neat little hack [1]. Perhaps Index Adviser will\nalso have to resort to that trick.\n\n[1]: hypo_get_min_fake_oid() finds the usable oid range below\nFirstNormalObjectId\nhttps://github.com/HypoPG/hypopg/blob/57d832ce7a2937fe7d42b113c7e95dd1f129795b/hypopg.c#L458\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Mon, 24 Jul 2023 03:33:58 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in _bt_getrootheight"
}
] |
[
{
"msg_contents": "Hi,\nWhile testing some use cases, I encountered 'ERROR: attempted to update\ninvisible tuple' when a partitioned index is attached to a parent index\nwhich is also a replica identity index.\nBelow is the reproducible test case. The issue is seen only when the\ncommands are executed inside a transaction.\n\nBEGIN;\n\nCREATE TABLE foo (\n id INT NOT NULL,\n ts TIMESTAMP WITH TIME ZONE NOT NULL\n) PARTITION BY RANGE (ts);\n\nCREATE TABLE foo_2023 (\n id INT NOT NULL,\n ts TIMESTAMP WITH TIME ZONE NOT NULL\n);\n\nALTER TABLE ONLY foo\n ATTACH PARTITION foo_2023\n FOR VALUES FROM ('2023-01-01 00:00:00+09') TO ('2024-01-01 00:00:00+09');\n\nCREATE UNIQUE INDEX pk_foo\n ON ONLY foo USING btree (id, ts);\n\nALTER TABLE ONLY foo REPLICA IDENTITY USING INDEX pk_foo;\n\nCREATE UNIQUE INDEX foo_2023_id_ts_ix ON foo_2023 USING btree (id, ts);\n\nALTER INDEX pk_foo ATTACH PARTITION foo_2023_id_ts_ix;\n\n\nThe 'ALTER INDEX pk_foo ATTACH PARTITION foo_2023_id_ts_ix' returns\n\"*ERROR: attempted to update invisible tuple\"*\n\nBelow are few observations from debugging:\n\n[image: image.png]\n\n[image: image.png]\n\nThe error is seen in validatePartitionedIndex() while validating the partition.\n\nThis function marks the parent index as VALID if it found as many\ninherited indexes as the partitioned table has partitions.\n\nThe pg_index tuple is fetched from partedIdx->rd_indextuple.Iit looks\nlike the index tuple is not refreshed.\n\nThe 'indisreplident' is false, the ctid field value is old and it does\nnot reflect the ctid changes made by 'ALTER TABLE ONLY foo REPLICA\nIDENTITY USING INDEX pk_foo'.\n\nAny suggestions ?\n\n\nRegards,\nShruthi KC\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 11 Jul 2023 22:52:16 +0530",
"msg_from": "Shruthi Gowda <[email protected]>",
"msg_from_op": true,
"msg_subject": "'ERROR: attempted to update invisible tuple' from 'ALTER INDEX ...\n ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 10:52:16PM +0530, Shruthi Gowda wrote:\n> While testing some use cases, I encountered 'ERROR: attempted to update\n> invisible tuple' when a partitioned index is attached to a parent index\n> which is also a replica identity index.\n> Below is the reproducible test case. The issue is seen only when the\n> commands are executed inside a transaction.\n\nThanks for the report, reproduced here.\n\n> The 'ALTER INDEX pk_foo ATTACH PARTITION foo_2023_id_ts_ix' returns\n> \"*ERROR: attempted to update invisible tuple\"*\n\nWhile working recently on what has led to cfc43ae and fc55c7f, I\nreally got the feeling that there could be some command sequences that\nlacked some CCIs (or CommandCounterIncrement calls) to make sure that\nthe catalog updates are visible in any follow-up steps in the same\ntransaction.\n\n> The 'indisreplident' is false, the ctid field value is old and it does\n> not reflect the ctid changes made by 'ALTER TABLE ONLY foo REPLICA\n> IDENTITY USING INDEX pk_foo'.\n\nYour report is telling that we are missing a CCI somewhere in this\nsequence. I would have thought that relation_mark_replica_identity()\nis the correct place when the pg_index entry is dirtied, but that does\nnot seem correct. Hmm.\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 09:38:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 09:38:41AM +0900, Michael Paquier wrote:\n> While working recently on what has led to cfc43ae and fc55c7f, I\n> really got the feeling that there could be some command sequences that\n> lacked some CCIs (or CommandCounterIncrement calls) to make sure that\n> the catalog updates are visible in any follow-up steps in the same\n> transaction.\n\nWait a minute. The validation of a partitioned index uses a copy of\nthe pg_index tuple from the relcache, which be out of date:\n newtup = heap_copytuple(partedIdx->rd_indextuple);\n ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true;\n\nAnd it seems to me that we should do the catalog update based on a\ncopy of a tuple coming from the syscache, no? Attached is a patch\nthat fixes your issue with more advanced regression tests that use two\nlevels of partitioning, looping twice through an update of indisvalid\nwhen attaching the leaf index (the test reproduces the problem on\nHEAD, as well).\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 14:42:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 11:12 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 09:38:41AM +0900, Michael Paquier wrote:\n> > While working recently on what has led to cfc43ae and fc55c7f, I\n> > really got the feeling that there could be some command sequences that\n> > lacked some CCIs (or CommandCounterIncrement calls) to make sure that\n> > the catalog updates are visible in any follow-up steps in the same\n> > transaction.\n>\n> Wait a minute. The validation of a partitioned index uses a copy of\n> the pg_index tuple from the relcache, which be out of date:\n> newtup = heap_copytuple(partedIdx->rd_indextuple);\n> ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true;\n\nBut why the recache entry is outdated, does that mean that in the\nprevious command, we missed registering the invalidation for the\nrecache entry?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 11:38:05 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 11:38:05AM +0530, Dilip Kumar wrote:\n> On Wed, Jul 12, 2023 at 11:12 AM Michael Paquier <[email protected]> wrote:\n>>\n>> On Wed, Jul 12, 2023 at 09:38:41AM +0900, Michael Paquier wrote:\n>> > While working recently on what has led to cfc43ae and fc55c7f, I\n>> > really got the feeling that there could be some command sequences that\n>> > lacked some CCIs (or CommandCounterIncrement calls) to make sure that\n>> > the catalog updates are visible in any follow-up steps in the same\n>> > transaction.\n>>\n>> Wait a minute. The validation of a partitioned index uses a copy of\n>> the pg_index tuple from the relcache, which be out of date:\n>> newtup = heap_copytuple(partedIdx->rd_indextuple);\n>> ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true;\n> \n> But why the recache entry is outdated, does that mean that in the\n> previous command, we missed registering the invalidation for the\n> recache entry?\n\nYes, something's still a bit off here, even if switching a partitioned\nindex to become valid should use a fresh tuple copy from the syscache.\n\nTaking the test case of upthread, from what I can see, the ALTER TABLE\n.. REPLICA IDENTITY registers two relcache invalidations for pk_foo\n(via RegisterRelcacheInvalidation), which is the relcache entry whose\nstuff is messed up. I would have expected a refresh of the cache of\npk_foo to happen when doing the ALTER INDEX .. ATTACH PARTITION, but\nfor some reason it does not happen when running the whole in a\ntransaction block. I cannot put my finger on what's wrong for the\nmoment, but based on my current impressions the inval requests are\ncorrectly registered when switching the replica identity, but nothing\nabout pk_foo is updated when attaching a partition to it in the last\nstep where the invisible update happens :/ \n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 17:26:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:56 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 11:38:05AM +0530, Dilip Kumar wrote:\n> > On Wed, Jul 12, 2023 at 11:12 AM Michael Paquier <[email protected]> wrote:\n> >>\n> >> On Wed, Jul 12, 2023 at 09:38:41AM +0900, Michael Paquier wrote:\n> >> > While working recently on what has led to cfc43ae and fc55c7f, I\n> >> > really got the feeling that there could be some command sequences that\n> >> > lacked some CCIs (or CommandCounterIncrement calls) to make sure that\n> >> > the catalog updates are visible in any follow-up steps in the same\n> >> > transaction.\n> >>\n> >> Wait a minute. The validation of a partitioned index uses a copy of\n> >> the pg_index tuple from the relcache, which be out of date:\n> >> newtup = heap_copytuple(partedIdx->rd_indextuple);\n> >> ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true;\n> >\n> > But why the recache entry is outdated, does that mean that in the\n> > previous command, we missed registering the invalidation for the\n> > recache entry?\n>\n> Yes, something's still a bit off here, even if switching a partitioned\n> index to become valid should use a fresh tuple copy from the syscache.\n>\n> Taking the test case of upthread, from what I can see, the ALTER TABLE\n> .. REPLICA IDENTITY registers two relcache invalidations for pk_foo\n> (via RegisterRelcacheInvalidation), which is the relcache entry whose\n> stuff is messed up. I would have expected a refresh of the cache of\n> pk_foo to happen when doing the ALTER INDEX .. ATTACH PARTITION, but\n> for some reason it does not happen when running the whole in a\n> transaction block.\n\nI think there is something to do with this code here[1], basically, we\nare in a transaction block so while processing the invalidation we\nhave first cleared the entry for the pk_foo but then we have partially\nrecreated it using 'RelationReloadIndexInfo', in this function we\nhaven't build complete relation descriptor but marked\n'relation->rd_isvalid' as true and due to that next relation_open in\n(ALTER INDEX .. ATTACH PARTITION) will reuse this half backed entry.\nI am still not sure what is the purpose of just reloading the index\nand marking the entry as valid which is not completely valid.\n\nRelationClearRelation()\n{\n..\n/*\n* Even non-system indexes should not be blown away if they are open and\n* have valid index support information. This avoids problems with active\n* use of the index support information. As with nailed indexes, we\n* re-read the pg_class row to handle possible physical relocation of the\n* index, and we check for pg_index updates too.\n*/\nif ((relation->rd_rel->relkind == RELKIND_INDEX ||\nrelation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) &&\nrelation->rd_refcnt > 0 &&\nrelation->rd_indexcxt != NULL)\n{\nif (IsTransactionState())\nRelationReloadIndexInfo(relation);\nreturn;\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 17:46:00 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 1:22 PM Shruthi Gowda <[email protected]> wrote:\n\n> BEGIN;\n>\n> CREATE TABLE foo (\n> id INT NOT NULL,\n> ts TIMESTAMP WITH TIME ZONE NOT NULL\n> ) PARTITION BY RANGE (ts);\n>\n> CREATE TABLE foo_2023 (\n> id INT NOT NULL,\n> ts TIMESTAMP WITH TIME ZONE NOT NULL\n> );\n>\n> ALTER TABLE ONLY foo\n> ATTACH PARTITION foo_2023\n> FOR VALUES FROM ('2023-01-01 00:00:00+09') TO ('2024-01-01 00:00:00+09');\n>\n> CREATE UNIQUE INDEX pk_foo\n> ON ONLY foo USING btree (id, ts);\n>\n> ALTER TABLE ONLY foo REPLICA IDENTITY USING INDEX pk_foo;\n>\n> CREATE UNIQUE INDEX foo_2023_id_ts_ix ON foo_2023 USING btree (id, ts);\n>\n> ALTER INDEX pk_foo ATTACH PARTITION foo_2023_id_ts_ix;\n>\n>\nThis example confused me quite a bit when I first read it. I think that the\ndocumentation for CREATE INDEX .. ONLY is pretty inadequate. All it says is\n\"Indicates not to recurse creating indexes on partitions, if the table is\npartitioned. The default is to recurse.\" But that would just create a\npermanently empty index, which is of no use to anyone. I think we should\nsomehow explain the intent of this, namely that this creates an initially\ninvalid index which can be made valid by using ALTER INDEX ... ATTACH\nPARTITION once per partition.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\nOn Tue, Jul 11, 2023 at 1:22 PM Shruthi Gowda <[email protected]> wrote:BEGIN;CREATE TABLE foo (\n id INT NOT NULL,\n ts TIMESTAMP WITH TIME ZONE NOT NULL\n) PARTITION BY RANGE (ts);\n\nCREATE TABLE foo_2023 (\n id INT NOT NULL,\n ts TIMESTAMP WITH TIME ZONE NOT NULL\n);\n\nALTER TABLE ONLY foo\n ATTACH PARTITION foo_2023\n FOR VALUES FROM ('2023-01-01 00:00:00+09') TO ('2024-01-01 00:00:00+09');\n\nCREATE UNIQUE INDEX pk_foo\n ON ONLY foo USING btree (id, ts);\n\nALTER TABLE ONLY foo REPLICA IDENTITY USING INDEX pk_foo;\n\nCREATE UNIQUE INDEX foo_2023_id_ts_ix ON foo_2023 USING btree (id, ts);\n\nALTER INDEX pk_foo ATTACH PARTITION foo_2023_id_ts_ix;This example confused me quite a bit when I first read it. I think that the documentation for CREATE INDEX .. ONLY is pretty inadequate. All it says is \"Indicates not to recurse creating indexes on partitions, if the table is partitioned. The default is to recurse.\" But that would just create a permanently empty index, which is of no use to anyone. I think we should somehow explain the intent of this, namely that this creates an initially invalid index which can be made valid by using ALTER INDEX ... ATTACH PARTITION once per partition.-- Robert HaasEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 12 Jul 2023 08:36:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 4:26 AM Michael Paquier <[email protected]> wrote:\n> Taking the test case of upthread, from what I can see, the ALTER TABLE\n> .. REPLICA IDENTITY registers two relcache invalidations for pk_foo\n> (via RegisterRelcacheInvalidation), which is the relcache entry whose\n> stuff is messed up. I would have expected a refresh of the cache of\n> pk_foo to happen when doing the ALTER INDEX .. ATTACH PARTITION, but\n> for some reason it does not happen when running the whole in a\n> transaction block. I cannot put my finger on what's wrong for the\n> moment, but based on my current impressions the inval requests are\n> correctly registered when switching the replica identity, but nothing\n> about pk_foo is updated when attaching a partition to it in the last\n> step where the invisible update happens :/\n\nI'm not sure exactly what is happening here, but it looks to me like\nATExecReplicaIdentity() only takes ShareLock on the index and\nnevertheless feels entitled to update the pg_index tuple, which is\npretty strange. We normally require AccessExclusiveLock to perform DDL\non an object, and in the limited exceptions that we have to that rule\n- see AlterTableGetLockLevel - it's pretty much always a\nself-exclusive lock. Otherwise, two backends might try to do the same\nDDL operation at the same time, which would lead to low-level failures\ntrying to update the same tuple such as the one seen here.\n\nBut even if that doesn't happen or is prevented by some other\nmechanism, there's still a synchronization problem. Suppose backend B1\nmodifies some state via a DDL operation on table T and then afterward\nbackend B2 wants to perform a non-DDL operation that depends on that\nstate. Well, B1 takes some lock on the relation, and B2 takes a lock\nthat would conflict with it, and that guarantees that B2 starts after\nB1 commits. That means that B2 is guaranteed to see the invalidations\nthat were queued by B1, which means it will flush any state out of its\ncache that was made stale by the operation performed by B1. If the\nlocks didn't conflict, B2 might start before B1 committed and either\nfail to update its caches or update them but with a version of the\ntuples that's about to be made obsolete when B1 commits. So ShareLock\ndoesn't feel like a very safe choice here.\n\nBut I'm not quite sure exactly what's going wrong, either. Every\nupdate is going to call CacheInvalidateHeapTuple(), and updating\neither an index's pg_class tuple or its pg_index tuple should queue up\na relcache invalidation, and CommandEndInvalidationMessages() should\ncause that to be processed. If this were multiple transactions, the\nonly thing that would be different is that the invalidation messages\nwould be in the shared queue, so maybe there's something going on with\nthe timing of CommandEndInvalidationMessages() vs.\nAcceptInvalidationMessages() that accounts for the problem occurring\nin one case but not the other. But I do wonder whether the underlying\nproblem is that what ATExecReplicaIdentity() is doing is not really\nsafe.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 10:01:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:46 PM Dilip Kumar <[email protected]> wrote:\n\n> On Wed, Jul 12, 2023 at 1:56 PM Michael Paquier <[email protected]>\n> wrote:\n> >\n> > On Wed, Jul 12, 2023 at 11:38:05AM +0530, Dilip Kumar wrote:\n> > > On Wed, Jul 12, 2023 at 11:12 AM Michael Paquier <[email protected]>\n> wrote:\n> > >>\n> > >> On Wed, Jul 12, 2023 at 09:38:41AM +0900, Michael Paquier wrote:\n> > >> > While working recently on what has led to cfc43ae and fc55c7f, I\n> > >> > really got the feeling that there could be some command sequences\n> that\n> > >> > lacked some CCIs (or CommandCounterIncrement calls) to make sure\n> that\n> > >> > the catalog updates are visible in any follow-up steps in the same\n> > >> > transaction.\n> > >>\n> > >> Wait a minute. The validation of a partitioned index uses a copy of\n> > >> the pg_index tuple from the relcache, which be out of date:\n> > >> newtup = heap_copytuple(partedIdx->rd_indextuple);\n> > >> ((Form_pg_index) GETSTRUCT(newtup))->indisvalid = true;\n> > >\n> > > But why the recache entry is outdated, does that mean that in the\n> > > previous command, we missed registering the invalidation for the\n> > > recache entry?\n> >\n> > Yes, something's still a bit off here, even if switching a partitioned\n> > index to become valid should use a fresh tuple copy from the syscache.\n> >\n> > Taking the test case of upthread, from what I can see, the ALTER TABLE\n> > .. REPLICA IDENTITY registers two relcache invalidations for pk_foo\n> > (via RegisterRelcacheInvalidation), which is the relcache entry whose\n> > stuff is messed up. I would have expected a refresh of the cache of\n> > pk_foo to happen when doing the ALTER INDEX .. ATTACH PARTITION, but\n> > for some reason it does not happen when running the whole in a\n> > transaction block.\n>\n> I think there is something to do with this code here[1], basically, we\n> are in a transaction block so while processing the invalidation we\n> have first cleared the entry for the pk_foo but then we have partially\n> recreated it using 'RelationReloadIndexInfo', in this function we\n> haven't build complete relation descriptor but marked\n> 'relation->rd_isvalid' as true and due to that next relation_open in\n> (ALTER INDEX .. ATTACH PARTITION) will reuse this half backed entry.\n> I am still not sure what is the purpose of just reloading the index\n> and marking the entry as valid which is not completely valid.\n>\n> RelationClearRelation()\n> {\n> ..\n> /*\n> * Even non-system indexes should not be blown away if they are open and\n> * have valid index support information. This avoids problems with active\n> * use of the index support information. As with nailed indexes, we\n> * re-read the pg_class row to handle possible physical relocation of the\n> * index, and we check for pg_index updates too.\n> */\n> if ((relation->rd_rel->relkind == RELKIND_INDEX ||\n> relation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX) &&\n> relation->rd_refcnt > 0 &&\n> relation->rd_indexcxt != NULL)\n> {\n> if (IsTransactionState())\n> RelationReloadIndexInfo(relation);\n> return;\n> }\n>\n I reviewed the function RelationReloadIndexInfo() and observed that the\n'indisreplident' field and the SelfItemPointer 't_self' are not refreshed\nto the pg_index tuple of the index.\n Attached is the patch that fixes the above issue.",
"msg_date": "Wed, 12 Jul 2023 21:57:54 +0530",
"msg_from": "Shruthi Gowda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 12:28 PM Shruthi Gowda <[email protected]> wrote:\n> I reviewed the function RelationReloadIndexInfo() and observed that the 'indisreplident' field and the SelfItemPointer 't_self' are not refreshed to the pg_index tuple of the index.\n> Attached is the patch that fixes the above issue.\n\nOh, interesting. The fact that indisreplident isn't copied seems like\na pretty clear mistake, but I'm guessing that the fact that t_self\nwasn't refreshed was deliberate and that the author of this code\ndidn't really intend for callers to look at the t_self value. We could\nchange our mind about whether that ought to be allowed, though. But,\nlike, none of the other tuple header fields are copied either... xmax,\nxvac, infomask, etc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:02:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 04:02:23PM -0400, Robert Haas wrote:\n> Oh, interesting. The fact that indisreplident isn't copied seems like\n> a pretty clear mistake, but I'm guessing that the fact that t_self\n> wasn't refreshed was deliberate and that the author of this code\n> didn't really intend for callers to look at the t_self value. We could\n> change our mind about whether that ought to be allowed, though. But,\n> like, none of the other tuple header fields are copied either... xmax,\n> xvac, infomask, etc.\n\nSee 3c84046 and 8ec9438, mainly, from Tom. I didn't know that this is\nused as a shortcut to reload index information in the cache because it\nis much cheaper than a full index information rebuild. I agree that\nnot copying indisreplident in this code path is a mistake as this bug\nshows, because any follow-up command run in a transaction that changed\nthis field would get an incorrect information reference.\n\nNow, I have to admit that I am not completely sure what the\nconsequences of this addition are when it comes to concurrent index\noperations (CREATE/DROP INDEX, REINDEX):\n /* Copy xmin too, as that is needed to make sense of indcheckxmin */\n HeapTupleHeaderSetXmin(relation->rd_indextuple->t_data,\n HeapTupleHeaderGetXmin(tuple->t_data));\n+ ItemPointerCopy(&tuple->t_self, &relation->rd_indextuple->t_self);\n\nAnyway, as I have pointed upthread, I think that the craziness is also\nin validatePartitionedIndex() where this stuff thinks that it is OK to\nuse a copy the pg_index tuple coming from the relcache. As this\nreport proves, it is *not* safe, because we may miss a lot of\ninformation not updated by RelationReloadIndexInfo() that other\ncommands in the same transaction block may have updated, and the\ncode would insert into the catalog an inconsistent tuple for a\npartitioned index switched to become valid.\n\nI agree that updating indisreplident in this cheap index reload path\nis necessary, as well. Does my suggestion of using the syscache not\nmake sense for somebody here? Note that this is what all the other\ncode paths do for catalog updates of pg_index when retrieving a copy\nof its tuples.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 07:25:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 10:01:49AM -0400, Robert Haas wrote:\n> I'm not sure exactly what is happening here, but it looks to me like\n> ATExecReplicaIdentity() only takes ShareLock on the index and\n> nevertheless feels entitled to update the pg_index tuple, which is\n> pretty strange. We normally require AccessExclusiveLock to perform DDL\n> on an object, and in the limited exceptions that we have to that rule\n> - see AlterTableGetLockLevel - it's pretty much always a\n> self-exclusive lock. Otherwise, two backends might try to do the same\n> DDL operation at the same time, which would lead to low-level failures\n> trying to update the same tuple such as the one seen here.\n> \n> But even if that doesn't happen or is prevented by some other\n> mechanism, there's still a synchronization problem. Suppose backend B1\n> modifies some state via a DDL operation on table T and then afterward\n> backend B2 wants to perform a non-DDL operation that depends on that\n> state. Well, B1 takes some lock on the relation, and B2 takes a lock\n> that would conflict with it, and that guarantees that B2 starts after\n> B1 commits. That means that B2 is guaranteed to see the invalidations\n> that were queued by B1, which means it will flush any state out of its\n> cache that was made stale by the operation performed by B1. If the\n> locks didn't conflict, B2 might start before B1 committed and either\n> fail to update its caches or update them but with a version of the\n> tuples that's about to be made obsolete when B1 commits. So ShareLock\n> doesn't feel like a very safe choice here.\n\nYes, I also got to wonder whether it is OK to hold only a ShareLock\nfor the index being used as a replica identity. We hold an AEL on the\nparent table, and ShareLock is sufficient to prevent concurrent schema\noperations until the transaction that took the lock commit. But\nsurely, that feels inconsistent with the common practices in\ntablecmds.c.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 07:50:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 3:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jul 12, 2023 at 04:02:23PM -0400, Robert Haas wrote:\n> > Oh, interesting. The fact that indisreplident isn't copied seems like\n> > a pretty clear mistake, but I'm guessing that the fact that t_self\n> > wasn't refreshed was deliberate and that the author of this code\n> > didn't really intend for callers to look at the t_self value. We could\n> > change our mind about whether that ought to be allowed, though. But,\n> > like, none of the other tuple header fields are copied either... xmax,\n> > xvac, infomask, etc.\n>\n> See 3c84046 and 8ec9438, mainly, from Tom. I didn't know that this is\n> used as a shortcut to reload index information in the cache because it\n> is much cheaper than a full index information rebuild. I agree that\n> not copying indisreplident in this code path is a mistake as this bug\n> shows, because any follow-up command run in a transaction that changed\n> this field would get an incorrect information reference.\n>\n> Now, I have to admit that I am not completely sure what the\n> consequences of this addition are when it comes to concurrent index\n> operations (CREATE/DROP INDEX, REINDEX):\n> /* Copy xmin too, as that is needed to make sense of indcheckxmin */\n> HeapTupleHeaderSetXmin(relation->rd_indextuple->t_data,\n> HeapTupleHeaderGetXmin(tuple->t_data));\n> + ItemPointerCopy(&tuple->t_self, &relation->rd_indextuple->t_self);\n>\n> Anyway, as I have pointed upthread, I think that the craziness is also\n> in validatePartitionedIndex() where this stuff thinks that it is OK to\n> use a copy the pg_index tuple coming from the relcache. As this\n> report proves, it is *not* safe, because we may miss a lot of\n> information not updated by RelationReloadIndexInfo() that other\n> commands in the same transaction block may have updated, and the\n> code would insert into the catalog an inconsistent tuple for a\n> partitioned index switched to become valid.\n>\n> I agree that updating indisreplident in this cheap index reload path\n> is necessary, as well. Does my suggestion of using the syscache not\n> make sense for somebody here? Note that this is what all the other\n> code paths do for catalog updates of pg_index when retrieving a copy\n> of its tuples.\n\nYeah, It seems that using pg_index tuples from relcache is not safe,\nat least for updating the catalog tuples. However, are there known\nrules or do we need to add some comments saying that the pg_index\ntuple from the relcache cannot be used to update the catalog tuple?\nOr do we actually need to update all the tuple header information as\nwell in RelationReloadIndexInfo() in order to fix that invariant so\nthat it can be used for catalog tuple updates as well?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 09:35:17 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 09:35:17AM +0530, Dilip Kumar wrote:\n> Yeah, It seems that using pg_index tuples from relcache is not safe,\n> at least for updating the catalog tuples. However, are there known\n> rules or do we need to add some comments saying that the pg_index\n> tuple from the relcache cannot be used to update the catalog tuple?\n\nI don't recall an implied rule written in the tree about that, on top\nof my mind. Perhaps something about that could be done around the\ndeclaration of RelationData in rel.h, for instance.\n\n> Or do we actually need to update all the tuple header information as\n> well in RelationReloadIndexInfo() in order to fix that invariant so\n> that it can be used for catalog tuple updates as well?\n\nRelationReloadIndexInfo() is designed to be minimal, so I am not\nreally excited about extending it more than necessary without a case\nin favor of it. indisreplident is clearly on the list of things to\nupdate in this concept. The others would need a more careful\nevaluation, though we don't really have a case for doing more, IMO,\nparticularly in the score of stable branches.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 14:26:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 02:26:42PM +0900, Michael Paquier wrote:\n> On Thu, Jul 13, 2023 at 09:35:17AM +0530, Dilip Kumar wrote:\n>> Or do we actually need to update all the tuple header information as\n>> well in RelationReloadIndexInfo() in order to fix that invariant so\n>> that it can be used for catalog tuple updates as well?\n> \n> RelationReloadIndexInfo() is designed to be minimal, so I am not\n> really excited about extending it more than necessary without a case\n> in favor of it. indisreplident is clearly on the list of things to\n> update in this concept. The others would need a more careful\n> evaluation, though we don't really have a case for doing more, IMO,\n> particularly in the score of stable branches.\n\nFYI, I was planning to do something about this thread in the shape of\ntwo different patches: one for the indisreplident missing from the\nRelationReloadIndexInfo() and one for the syscache issue in the\npartitioned index validation. indisreplident use in the backend code\nis interesting, as, while double-checking the code, I did not find a\ncode path involving a command where indisreplident would be checked\nfrom the pg_index tuple in the relcache: all the values are from\ntuples retrieved from the syscache.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 17:10:37 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 1:40 PM Michael Paquier <[email protected]> wrote:\n\n> On Thu, Jul 13, 2023 at 02:26:42PM +0900, Michael Paquier wrote:\n> > On Thu, Jul 13, 2023 at 09:35:17AM +0530, Dilip Kumar wrote:\n> >> Or do we actually need to update all the tuple header information as\n> >> well in RelationReloadIndexInfo() in order to fix that invariant so\n> >> that it can be used for catalog tuple updates as well?\n> >\n> > RelationReloadIndexInfo() is designed to be minimal, so I am not\n> > really excited about extending it more than necessary without a case\n> > in favor of it. indisreplident is clearly on the list of things to\n> > update in this concept. The others would need a more careful\n> > evaluation, though we don't really have a case for doing more, IMO,\n> > particularly in the score of stable branches.\n>\n> FYI, I was planning to do something about this thread in the shape of\n> two different patches: one for the indisreplident missing from the\n> RelationReloadIndexInfo() and one for the syscache issue in the\n> partitioned index validation. indisreplident use in the backend code\n> is interesting, as, while double-checking the code, I did not find a\n> code path involving a command where indisreplident would be checked\n> from the pg_index tuple in the relcache: all the values are from\n> tuples retrieved from the syscache.\n>\n\nAgree with the idea of splitting the patch.\nWhile analyzing the issue I did notice that validatePartitionedIndex() is\nthe only place where the index tuple was copied from rel->rd_indextuple\nhowever was not clear about the motive behind it.\n\nRegards,\nShruthi KC\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Jul 13, 2023 at 1:40 PM Michael Paquier <[email protected]> wrote:On Thu, Jul 13, 2023 at 02:26:42PM +0900, Michael Paquier wrote:\n> On Thu, Jul 13, 2023 at 09:35:17AM +0530, Dilip Kumar wrote:\n>> Or do we actually need to update all the tuple header information as\n>> well in RelationReloadIndexInfo() in order to fix that invariant so\n>> that it can be used for catalog tuple updates as well?\n> \n> RelationReloadIndexInfo() is designed to be minimal, so I am not\n> really excited about extending it more than necessary without a case\n> in favor of it. indisreplident is clearly on the list of things to\n> update in this concept. The others would need a more careful\n> evaluation, though we don't really have a case for doing more, IMO,\n> particularly in the score of stable branches.\n\nFYI, I was planning to do something about this thread in the shape of\ntwo different patches: one for the indisreplident missing from the\nRelationReloadIndexInfo() and one for the syscache issue in the\npartitioned index validation. indisreplident use in the backend code\nis interesting, as, while double-checking the code, I did not find a\ncode path involving a command where indisreplident would be checked\nfrom the pg_index tuple in the relcache: all the values are from\ntuples retrieved from the syscache.Agree with the idea of splitting the patch. While analyzing the issue I did notice that validatePartitionedIndex() is the only place where the index tuple was copied from rel->rd_indextuple however was not clear about the motive behind it. Regards,Shruthi KCEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 13 Jul 2023 14:01:49 +0530",
"msg_from": "Shruthi Gowda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 02:01:49PM +0530, Shruthi Gowda wrote:\n> While analyzing the issue I did notice that validatePartitionedIndex() is\n> the only place where the index tuple was copied from rel->rd_indextuple\n> however was not clear about the motive behind it.\n\nNo idea either. Anyway, I've split this stuff into two parts and\napplied the whole across the whole set of stable branches. Thanks!\n--\nMichael",
"msg_date": "Fri, 14 Jul 2023 11:18:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'ERROR: attempted to update invisible tuple' from 'ALTER INDEX\n ... ATTACH PARTITION' on parent index"
}
] |
[
{
"msg_contents": "These patches were created during an unrelated discussion about pgbench.\n\nPlease see emails [1] - [6] linked below, for the past discussion.\n\nIn brief:\n\n> $ pgbench -i -I dtGvp -s 500\n\nThe init-steps are severely under-documented in pgbench --help output.\nI think at least a pointer to the the pgbench docs should be mentioned\nin the pgbench --help output; an average user may not rush to read the\ncode to find the explanation, but a hint to where to find more details\nabout what the letters in --init-steps mean, would save them a lot of\ntime.\n\nPlease see attached 4 variants of the patch. Variant 1 simply tells\nthe reader to consult pgbench documentation. The second variant\nprovides a description for each of the letters, as the documentation\ndoes. The committer can pick the one they find suitable.\n\nThe text \", in the specified order\" is an important detail, that\nshould be included irrespective of the rest of the patch.\n\nMy preference would be to use the first variant, since the second one\nfeels too wordy for --help output. I believe we'll have to choose\nbetween these two; the alternatives will not make anyone happy.\n\nThese two variants show the two extremes; bare minimum vs. everything\nbut the kitchen sink. So one may feel the need to find a middle ground\nand provide a \"sufficient, but not too much\" variant. I have attempted\nthat in variants 3 and 4; also attached.\n\nThe third variant does away with the list of steps, and uses a\nparagraph to describe the letters. And the fourth variant makes that\nparagraph terse.\n\nIn the order of preference I'd choose variant 1, then 2. Variants 3\nand 4 feel like a significant degradation from variant 2.\n\nAttached samples.txt shows the snippets of --help output of each of\nthe variants/patches, for comparison.\n\nIn [6] below, Tristan showed preference for the second variant.\n\n[1] My complaint about -I initi_steps being severely under-documented\nin --help output\nhttps://www.postgresql.org/message-id/CABwTF4XMdHTxemhskad41Vj_hp2nPgifjwegOqR52_8-wEbv2Q%40mail.gmail.com\n\n[2] Tristan Partin agreeing with the complaint, and suggesting a patch\nwould be welcome\nhttps://www.postgresql.org/message-id/CT8BC7RXT33R.3CHYIXGD5NVHK%40gonk\n\n[3] Greg Smith agreeing and saying he'd welcome a few more words about\nthe init_steps in --help output\nhttps://www.postgresql.org/message-id/CAHLJuCUp5_VUo%2BRJ%2BpSnxeiiZfcstRtTubRP8%2Bu8NEqmrbp4aw%40mail.gmail.com\n\n[4] First set of patches\nhttps://www.postgresql.org/message-id/CABwTF4UKv43ZftJadsxs8%3Da07BmA1U4RU3W1qbmDAguVKJAmZw%40mail.gmail.com\n\n[5] Second set of patches\nhttps://www.postgresql.org/message-id/CABwTF4Ww42arY1Q88_iaraVLxqSU%2B8Yb4oKiTT5gD1sineog9w%40mail.gmail.com\n\n[6] Tristan showing preference for the second variant\nhttps://www.postgresql.org/message-id/CTBN5E2K2YSJ.3QYXGZ09JZXIW%40gonk\n\n+CC Tristan Partin and Greg Smith, since they were involved in the\ninitial thread.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 12 Jul 2023 00:42:14 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On 12.07.23 09:42, Gurjeet Singh wrote:\n> These two variants show the two extremes; bare minimum vs. everything\n> but the kitchen sink. So one may feel the need to find a middle ground\n> and provide a \"sufficient, but not too much\" variant. I have attempted\n> that in variants 3 and 4; also attached.\n\nIf you end up with variant 3 or 4, please use double quotes instead of \nsingle quotes.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 12:08:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On 2023-Jul-12, Gurjeet Singh wrote:\n\n> The init-steps are severely under-documented in pgbench --help output.\n> I think at least a pointer to the the pgbench docs should be mentioned\n> in the pgbench --help output; an average user may not rush to read the\n> code to find the explanation, but a hint to where to find more details\n> about what the letters in --init-steps mean, would save them a lot of\n> time.\n\nAgreed.\n\nI would do it the way `pg_waldump --rmgr=list` or `psql\n--help=variables` are handled: they give you a list of what is\nsupported. You don't have to put the list in the main --help output.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:41:47 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 3:08 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 12.07.23 09:42, Gurjeet Singh wrote:\n> > These two variants show the two extremes; bare minimum vs. everything\n> > but the kitchen sink. So one may feel the need to find a middle ground\n> > and provide a \"sufficient, but not too much\" variant. I have attempted\n> > that in variants 3 and 4; also attached.\n>\n> If you end up with variant 3 or 4, please use double quotes instead of\n> single quotes.\n\nWill do.\n\nI believe you're suggesting this because in the neighboring help text\nthe string literals use double quotes. I see that other utilities,\nsuch as psql also use double quotes in help text.\n\nIf there's a convention, documented somewhere in our sources, I'd love\nto know and learn other conventions.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 12 Jul 2023 10:47:56 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On 12.07.23 19:47, Gurjeet Singh wrote:\n>> If you end up with variant 3 or 4, please use double quotes instead of\n>> single quotes.\n> \n> Will do.\n> \n> I believe you're suggesting this because in the neighboring help text\n> the string literals use double quotes. I see that other utilities,\n> such as psql also use double quotes in help text.\n> \n> If there's a convention, documented somewhere in our sources, I'd love\n> to know and learn other conventions.\n\nhttps://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-QUOTATION-MARKS\n\n\n\n",
"msg_date": "Thu, 13 Jul 2023 13:29:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On 12.07.23 15:41, Alvaro Herrera wrote:\n> On 2023-Jul-12, Gurjeet Singh wrote:\n> \n>> The init-steps are severely under-documented in pgbench --help output.\n>> I think at least a pointer to the the pgbench docs should be mentioned\n>> in the pgbench --help output; an average user may not rush to read the\n>> code to find the explanation, but a hint to where to find more details\n>> about what the letters in --init-steps mean, would save them a lot of\n>> time.\n> \n> Agreed.\n> \n> I would do it the way `pg_waldump --rmgr=list` or `psql\n> --help=variables` are handled: they give you a list of what is\n> supported. You don't have to put the list in the main --help output.\n\nI think I prefer variant 2. Currently we only have 8 steps, so it might \nbe overkill to separate them out into a different option.\n\n\n",
"msg_date": "Tue, 19 Sep 2023 09:20:36 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello,\r\n\r\nI've reviewed all 4 of your patches, each one applies and builds correctly.\r\n\r\n> I think I prefer variant 2. Currently, we only have 8 steps, so it might \r\n> be overkill to separate them out into a different option.\r\n\r\n+1 to this from Peter. Variant 2 is nicely formatted with lots of information which I feel better solves the problem this patch is trying to address. \r\nBoth versions 3 and 4 are a bit too jumbled for my liking without adding anything significant, even the shortened version 4.\r\n\r\nIf we were to go with variant 1 however, I think it would add more to have a link to the pgbench documentation that refers to the different init steps. Perhaps on a new line just under where it says \"see pgbench documentation for a description of these steps\".\r\n\r\nOverall good patch, I'm a firm believer that more information is always better than less.\r\n\r\nTristen\r\n---------------\r\nSoftware Engineer\r\nHighGo Software Inc. (Canada)\r\[email protected]\r\nwww.highgo.ca",
"msg_date": "Fri, 22 Sep 2023 21:01:34 +0000",
"msg_from": "Tristen Raab <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
},
{
"msg_contents": "On 22.09.23 22:01, Tristen Raab wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> Hello,\n> \n> I've reviewed all 4 of your patches, each one applies and builds correctly.\n> \n>> I think I prefer variant 2. Currently, we only have 8 steps, so it might\n>> be overkill to separate them out into a different option.\n> \n> +1 to this from Peter. Variant 2 is nicely formatted with lots of information which I feel better solves the problem this patch is trying to address.\n\nCommitted variant 2. I just changed the initial capitalization of the \nsentences to be more consistent with the surrounding text.\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 22:29:23 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Better help output for pgbench -I init_steps"
}
] |
[
{
"msg_contents": "I realized that commit 19d8e2308bc5 (and 5753d4ee320b before that) added\na new output type to RelationGetIndexAttrBitmap but forgot to list its\neffect in the function's documenting comment. Here's a patch that\nupdates it, making it more specific (and IMO more readable). I also add\na comment to the enum definition, to remind people that the other one\nneeds to be modified.\n\nThis ought to be backpatched to 16.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)",
"msg_date": "Wed, 12 Jul 2023 16:37:16 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "RelationGetIndexAttrBitmap comment outdated"
},
{
"msg_contents": "> On 12 Jul 2023, at 16:37, Alvaro Herrera <[email protected]> wrote:\n> \n> I realized that commit 19d8e2308bc5 (and 5753d4ee320b before that) added\n> a new output type to RelationGetIndexAttrBitmap but forgot to list its\n> effect in the function's documenting comment. Here's a patch that\n> updates it, making it more specific (and IMO more readable). I also add\n> a comment to the enum definition, to remind people that the other one\n> needs to be modified.\n\nLGTM, a clear improvement.\n\n> This ought to be backpatched to 16.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:42:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RelationGetIndexAttrBitmap comment outdated"
}
] |
[
{
"msg_contents": "Hi All,\nhttps://www.postgresql.org/docs/current/datatype-numeric.html gives me\n\"bad gateway\" error. Attached screen shot. Date/Time datatype\ndocumentation is accessible at\nhttps://www.postgresql.org/docs/current/datatype-datetime.html.\n\nJust got this while wrapping up for the day. Didn't look at what's going wrong.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 12 Jul 2023 20:45:15 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "numeric datatype for current release not available"
},
{
"msg_contents": "On Wed, 12 Jul 2023, 23:15 Ashutosh Bapat, <[email protected]>\nwrote:\n\n> Hi All,\n> https://www.postgresql.org/docs/current/datatype-numeric.html gives me\n> \"bad gateway\" error. Attached screen shot. Date/Time datatype\n> documentation is accessible at\n> https://www.postgresql.org/docs/current/datatype-datetime.html.\n>\n> Just got this while wrapping up for the day. Didn't look at what's going\n> wrong.\n>\n\nit's working here. probably a transient error, it happens from time to\ntime.\n\n>\n\nOn Wed, 12 Jul 2023, 23:15 Ashutosh Bapat, <[email protected]> wrote:Hi All,\nhttps://www.postgresql.org/docs/current/datatype-numeric.html gives me\n\"bad gateway\" error. Attached screen shot. Date/Time datatype\ndocumentation is accessible at\nhttps://www.postgresql.org/docs/current/datatype-datetime.html.\n\nJust got this while wrapping up for the day. Didn't look at what's going wrong.it's working here. probably a transient error, it happens from time to time.",
"msg_date": "Wed, 12 Jul 2023 23:25:18 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric datatype for current release not available"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 8:55 PM Julien Rouhaud <[email protected]> wrote:\n>\n> it's working here. probably a transient error, it happens from time to time.\n\nThanks Julien for looking into it. It's working now.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 13 Jul 2023 09:44:34 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: numeric datatype for current release not available"
}
] |
[
{
"msg_contents": "Greetings,\n\nWith a simple insert such as\n\nINSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\n\nif a portal is used to get the results then the CommandStatus is not\nreturned on the execute only when the portal is closed. After looking at\nthis more it is really after all of the data is read which is consistent if\nyou don't use a portal, however it would be much more useful if we received\nthe CommandStatus after the insert was completed and before the data\n\nObviously I am biased by the JDBC API which would like to have\n\nPreparedStatement.execute() return the number of rows inserted\nwithout having to wait to read all of the rows returned\n\n\nDave Cramer\n\nGreetings,With a simple insert such as INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING idif a portal is used to get the results then the CommandStatus is not returned on the execute only when the portal is closed. After looking at this more it is really after all of the data is read which is consistent if you don't use a portal, however it would be much more useful if we received the CommandStatus after the insert was completed and before the dataObviously I am biased by the JDBC API which would like to havePreparedStatement.execute() return the number of rows inserted without having to wait to read all of the rows returnedDave Cramer",
"msg_date": "Wed, 12 Jul 2023 16:03:15 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:03 PM Dave Cramer <[email protected]> wrote:\n\n>\n> INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\n>\n> if a portal is used to get the results then the CommandStatus\n>\n\nIIUC the portal is not optional if you including the RETURNING clause.\n\nThere is no CommandStatus message in the protocol, the desired information\nis part of the command tag returned in the CommandComplete message. You\nget that at the end of the command, which has been defined as when any\nportal produced by the command has been fully executed.\n\nYou probably should add your desire to the Version 4 protocol ToDo on the\nwiki.\n\nhttps://wiki.postgresql.org/wiki/Todo#Wire_Protocol_Changes_.2F_v4_Protocol\n\nIf that ever becomes an active project working through the details of that\nlist for desirability and feasibility would be the first thing to happen.\n\nDavid J.\n\nOn Wed, Jul 12, 2023 at 1:03 PM Dave Cramer <[email protected]> wrote:INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING idif a portal is used to get the results then the CommandStatusIIUC the portal is not optional if you including the RETURNING clause.There is no CommandStatus message in the protocol, the desired information is part of the command tag returned in the CommandComplete message. You get that at the end of the command, which has been defined as when any portal produced by the command has been fully executed.You probably should add your desire to the Version 4 protocol ToDo on the wiki.https://wiki.postgresql.org/wiki/Todo#Wire_Protocol_Changes_.2F_v4_ProtocolIf that ever becomes an active project working through the details of that list for desirability and feasibility would be the first thing to happen.David J.",
"msg_date": "Wed, 12 Jul 2023 13:31:04 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:03 PM Dave Cramer <[email protected]> wrote:\n>\n> With a simple insert such as\n>\n> INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\n>\n> if a portal is used to get the results then the CommandStatus is not returned on the execute only when the portal is closed. After looking at this more it is really after all of the data is read which is consistent if you don't use a portal, however it would be much more useful if we received the CommandStatus after the insert was completed and before the data\n>\n> Obviously I am biased by the JDBC API which would like to have\n>\n> PreparedStatement.execute() return the number of rows inserted without having to wait to read all of the rows returned\n\nI believe if RETURNING clause is use, the protocol-level behaviour of\nINSERT is expected to match that of SELECT. If the SELECT command\nbehaves like that (resultset followed by CommandStatus), then I'd say\nthe INSERT RETURNING is behaving as expected.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 12 Jul 2023 13:37:14 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "Dave Cramer\n\n\nOn Wed, 12 Jul 2023 at 16:31, David G. Johnston <[email protected]>\nwrote:\n\n> On Wed, Jul 12, 2023 at 1:03 PM Dave Cramer <[email protected]> wrote:\n>\n>>\n>> INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\n>>\n>> if a portal is used to get the results then the CommandStatus\n>>\n>\n> IIUC the portal is not optional if you including the RETURNING clause.\n>\n From my testing it isn't required.\n\n>\n> There is no CommandStatus message in the protocol, the desired information\n> is part of the command tag returned in the CommandComplete message. You\n> get that at the end of the command, which has been defined as when any\n> portal produced by the command has been fully executed.\n>\n\nI could argue that the insert is fully completed whether I read the data or\nnot.\n\n>\n> You probably should add your desire to the Version 4 protocol ToDo on the\n> wiki.\n>\n> https://wiki.postgresql.org/wiki/Todo#Wire_Protocol_Changes_.2F_v4_Protocol\n>\n\nthx, will do.\n\nDave\n\n>\n\nDave CramerOn Wed, 12 Jul 2023 at 16:31, David G. Johnston <[email protected]> wrote:On Wed, Jul 12, 2023 at 1:03 PM Dave Cramer <[email protected]> wrote:INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING idif a portal is used to get the results then the CommandStatusIIUC the portal is not optional if you including the RETURNING clause.From my testing it isn't required. There is no CommandStatus message in the protocol, the desired information is part of the command tag returned in the CommandComplete message. You get that at the end of the command, which has been defined as when any portal produced by the command has been fully executed.I could argue that the insert is fully completed whether I read the data or not. You probably should add your desire to the Version 4 protocol ToDo on the wiki.https://wiki.postgresql.org/wiki/Todo#Wire_Protocol_Changes_.2F_v4_Protocolthx, will do.Dave",
"msg_date": "Wed, 12 Jul 2023 17:09:27 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nUmm ... you do realize that we return the rows on-the-fly?\nThe server does not know how many rows got inserted/returned\nuntil it's run the query to completion, at which point all\nthe data has already been sent to the client. There isn't\nany way to return the rowcount before the data, and it wouldn't\nbe some trivial protocol adjustment to make that work differently.\n(What it *would* be is expensive, because we'd have to store\nthose rows somewhere.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jul 2023 17:49:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 17:49, Tom Lane <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > Obviously I am biased by the JDBC API which would like to have\n> > PreparedStatement.execute() return the number of rows inserted\n> > without having to wait to read all of the rows returned\n>\n> Umm ... you do realize that we return the rows on-the-fly?\n>\nI do realize that.\n\n> The server does not know how many rows got inserted/returned\n>\nWell I haven't looked at the code, but it seems unintuitive that adding the\nreturning clause changes the semantics of insert.\n\nuntil it's run the query to completion, at which point all\n> the data has already been sent to the client. There isn't\n> any way to return the rowcount before the data, and it wouldn't\n> be some trivial protocol adjustment to make that work differently.\n> (What it *would* be is expensive, because we'd have to store\n> those rows somewhere.)\n>\nI wasn't asking for that, I just want the number of rows inserted.\n\nThanks,\n\nDave\n\nOn Wed, 12 Jul 2023 at 17:49, Tom Lane <[email protected]> wrote:Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nUmm ... you do realize that we return the rows on-the-fly?I do realize that. \nThe server does not know how many rows got inserted/returnedWell I haven't looked at the code, but it seems unintuitive that adding the returning clause changes the semantics of insert. \nuntil it's run the query to completion, at which point all\nthe data has already been sent to the client. There isn't\nany way to return the rowcount before the data, and it wouldn't\nbe some trivial protocol adjustment to make that work differently.\n(What it *would* be is expensive, because we'd have to store\nthose rows somewhere.)I wasn't asking for that, I just want the number of rows inserted.Thanks,Dave",
"msg_date": "Wed, 12 Jul 2023 17:59:29 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 2:59 PM Dave Cramer <[email protected]> wrote:\n\n> On Wed, 12 Jul 2023 at 17:49, Tom Lane <[email protected]> wrote:\n>\n>> Dave Cramer <[email protected]> writes:\n>> > Obviously I am biased by the JDBC API which would like to have\n>> > PreparedStatement.execute() return the number of rows inserted\n>> > without having to wait to read all of the rows returned\n>>\n>> Umm ... you do realize that we return the rows on-the-fly?\n>>\n> I do realize that.\n>\n>> The server does not know how many rows got inserted/returned\n>>\n> Well I haven't looked at the code, but it seems unintuitive that adding\n> the returning clause changes the semantics of insert.\n>\n>\nIt doesn't have to - the insertions are always \"as rows are produced\", it\nis just that in the non-returning case the final row can be sent to\n/dev/null instead of the client (IOW, there is always some destination).\nIn both cases the total number of rows inserted are only reliably known\nwhen the top executor node requests a new tuple and its immediate\npredecessor says \"no more rows present\".\n\nDavid J.\n\nOn Wed, Jul 12, 2023 at 2:59 PM Dave Cramer <[email protected]> wrote:On Wed, 12 Jul 2023 at 17:49, Tom Lane <[email protected]> wrote:Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nUmm ... you do realize that we return the rows on-the-fly?I do realize that. \nThe server does not know how many rows got inserted/returnedWell I haven't looked at the code, but it seems unintuitive that adding the returning clause changes the semantics of insert. It doesn't have to - the insertions are always \"as rows are produced\", it is just that in the non-returning case the final row can be sent to /dev/null instead of the client (IOW, there is always some destination). In both cases the total number of rows inserted are only reliably known when the top executor node requests a new tuple and its immediate predecessor says \"no more rows present\".David J.",
"msg_date": "Wed, 12 Jul 2023 15:12:50 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nHuh ... just how *is* PreparedStatement.execute() supposed\nto behave when the statement is an INSERT RETURNING?\n\nexecute() -> true\ngetResultSet() -> the rows\ngetMoreResults() -> false\ngetUpdateCount() -> number inserted?\n\nIt seems that would fit the portal's behavior easily enough.\n\nOr is the JDBC spec insisting on some other order?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 12 Jul 2023 20:00:47 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
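{
"msg_contents": null,
"editor_note": "For readers following the JDBC side of this thread, here is a minimal, self-contained sketch of the call sequence described in the preceding message (execute, getResultSet, getMoreResults, getUpdateCount) run against an INSERT ... RETURNING. The connection URL, the credentials, and the pre-existing test_table (as created in the reproducer later in the thread) are assumptions for illustration; whether getUpdateCount() ultimately reports the inserted-row count or -1 at that point is exactly the behaviour being debated, so the sketch only shows where that value would be observed.\n\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.ResultSet;\nimport java.sql.Statement;\n\npublic class InsertReturningSequence {\n    public static void main(String[] args) throws Exception {\n        // Connection details are placeholders, not part of the original thread.\n        try (Connection conn = DriverManager.getConnection(\n                \"jdbc:postgresql://localhost/postgres\", \"postgres\", \"postgres\");\n             Statement stmt = conn.createStatement()) {\n\n            // execute() returns true because INSERT ... RETURNING produces a result set.\n            boolean hasResultSet = stmt.execute(\n                \"INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\");\n            System.out.println(\"hasResultSet = \" + hasResultSet);\n\n            // getResultSet() yields the RETURNING rows.\n            try (ResultSet rs = stmt.getResultSet()) {\n                while (rs.next()) {\n                    System.out.println(\"id = \" + rs.getInt(1));\n                }\n            }\n\n            // getMoreResults() is expected to return false here (no further result sets).\n            boolean more = stmt.getMoreResults();\n            // The open question in the thread is what getUpdateCount() reports at this\n            // point for INSERT ... RETURNING: the inserted-row count, or -1.\n            System.out.println(\"more = \" + more + \", updateCount = \" + stmt.getUpdateCount());\n        }\n    }\n}\n\nPer the JDBC contract, a statement is exhausted when getMoreResults() returns false and getUpdateCount() returns -1, which is why a real count at that point is treated here as an open design question rather than settled behaviour."
},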
{
"msg_contents": "On Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > Obviously I am biased by the JDBC API which would like to have\n> > PreparedStatement.execute() return the number of rows inserted\n> > without having to wait to read all of the rows returned\n>\n> Huh ... just how *is* PreparedStatement.execute() supposed\n> to behave when the statement is an INSERT RETURNING?\n>\n\nIt's really executeUpdate which is supposed to return the number of rows\nupdated.\nWithout a cursor it returns right away as all of the results are returned\nby the server. However with cursor you have to wait until you fetch the\nrows before you can get the CommandComplete message which btw is wrong as\nit returns INSERT 0 0 instead of INSERT 2 0\n\nDave\n\nOn Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nHuh ... just how *is* PreparedStatement.execute() supposed\nto behave when the statement is an INSERT RETURNING?It's really executeUpdate which is supposed to return the number of rows updated.Without a cursor it returns right away as all of the results are returned by the server. However with cursor you have to wait until you fetch the rows before you can get the CommandComplete message which btw is wrong as it returns INSERT 0 0 instead of INSERT 2 0Dave",
"msg_date": "Wed, 12 Jul 2023 20:57:20 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:57 PM Dave Cramer <[email protected]> wrote:\n\n> On Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:\n>\n>> Dave Cramer <[email protected]> writes:\n>> > Obviously I am biased by the JDBC API which would like to have\n>> > PreparedStatement.execute() return the number of rows inserted\n>> > without having to wait to read all of the rows returned\n>>\n>> Huh ... just how *is* PreparedStatement.execute() supposed\n>> to behave when the statement is an INSERT RETURNING?\n>>\n>\n> It's really executeUpdate which is supposed to return the number of rows\n> updated.\n>\n\nRight, and executeUpdate is the wrong API method to use, in the PostgreSQL\nworld, when executing insert/update/delete with the non-SQL-standard\nreturning clause. executeQuery is the method to use. And execute() should\nbehave as if executeQuery was called, i.e., return true, which it is\ncapable of doing since it has resultSet data that it needs to handle.\n\nThe addition of returning turns the insert/update/delete into a select in\nterms of effective client-seen behavior.\n\nISTM that you are trying to make user-error less painful. While that is\nlaudable it apparently isn't practical. They can either discard the\nresults and get a count by omitting returning or obtain the result and\nderive the count by counting rows alongside whatever else they needed the\nreturned data for.\n\nDavid J.\n\nOn Wed, Jul 12, 2023 at 5:57 PM Dave Cramer <[email protected]> wrote:On Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nHuh ... just how *is* PreparedStatement.execute() supposed\nto behave when the statement is an INSERT RETURNING?It's really executeUpdate which is supposed to return the number of rows updated.Right, and executeUpdate is the wrong API method to use, in the PostgreSQL world, when executing insert/update/delete with the non-SQL-standard returning clause. executeQuery is the method to use. And execute() should behave as if executeQuery was called, i.e., return true, which it is capable of doing since it has resultSet data that it needs to handle.The addition of returning turns the insert/update/delete into a select in terms of effective client-seen behavior.ISTM that you are trying to make user-error less painful. While that is laudable it apparently isn't practical. They can either discard the results and get a count by omitting returning or obtain the result and derive the count by counting rows alongside whatever else they needed the returned data for.David J.",
"msg_date": "Wed, 12 Jul 2023 18:30:45 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
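{
"msg_contents": null,
"editor_note": "Below is a minimal sketch of the approach suggested in the preceding message: treat INSERT ... RETURNING as a query via executeQuery() and derive the inserted-row count on the client by counting the RETURNING rows. The connection URL, credentials, and the existing test_table are assumptions for illustration only; this is not presented as the driver's own handling of RETURN_GENERATED_KEYS.\n\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.PreparedStatement;\nimport java.sql.ResultSet;\n\npublic class CountViaReturning {\n    public static void main(String[] args) throws Exception {\n        // Connection details are placeholders, not part of the original thread.\n        try (Connection conn = DriverManager.getConnection(\n                \"jdbc:postgresql://localhost/postgres\", \"postgres\", \"postgres\");\n             PreparedStatement stmt = conn.prepareStatement(\n                 \"INSERT INTO test_table (cnt) VALUES (?), (?) RETURNING id\")) {\n\n            stmt.setInt(1, 1);\n            stmt.setInt(2, 2);\n\n            // Treat the statement as a query and count the RETURNING rows on the\n            // client side to obtain the number of rows inserted.\n            int inserted = 0;\n            try (ResultSet rs = stmt.executeQuery()) {\n                while (rs.next()) {\n                    System.out.println(\"generated id = \" + rs.getInt(\"id\"));\n                    inserted++;\n                }\n            }\n            System.out.println(\"rows inserted = \" + inserted);\n        }\n    }\n}\n\nWith the two-row VALUES list used throughout the thread, the counter ends at 2 regardless of fetch size, since the client simply consumes every returned row."
},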
{
"msg_contents": "On Wed, 12 Jul 2023 at 21:31, David G. Johnston <[email protected]>\nwrote:\n\n> On Wed, Jul 12, 2023 at 5:57 PM Dave Cramer <[email protected]> wrote:\n>\n>> On Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:\n>>\n>>> Dave Cramer <[email protected]> writes:\n>>> > Obviously I am biased by the JDBC API which would like to have\n>>> > PreparedStatement.execute() return the number of rows inserted\n>>> > without having to wait to read all of the rows returned\n>>>\n>>> Huh ... just how *is* PreparedStatement.execute() supposed\n>>> to behave when the statement is an INSERT RETURNING?\n>>>\n>>\n>> It's really executeUpdate which is supposed to return the number of rows\n>> updated.\n>>\n>\n> Right, and executeUpdate is the wrong API method to use, in the PostgreSQL\n> world, when executing insert/update/delete with the non-SQL-standard\n> returning clause. executeQuery is the method to use. And execute() should\n> behave as if executeQuery was called, i.e., return true, which it is\n> capable of doing since it has resultSet data that it needs to handle.\n>\n> The addition of returning turns the insert/update/delete into a select in\n> terms of effective client-seen behavior.\n>\n> ISTM that you are trying to make user-error less painful. While that is\n> laudable it apparently isn't practical. They can either discard the\n> results and get a count by omitting returning or obtain the result and\n> derive the count by counting rows alongside whatever else they needed the\n> returned data for.\n>\nAny comment on why the CommandComplete is incorrect ?\nIt returns INSERT 0 0 if a cursor is used\n\nDave\n\n>\n\nOn Wed, 12 Jul 2023 at 21:31, David G. Johnston <[email protected]> wrote:On Wed, Jul 12, 2023 at 5:57 PM Dave Cramer <[email protected]> wrote:On Wed, 12 Jul 2023 at 20:00, <[email protected]> wrote:Dave Cramer <[email protected]> writes:\n> Obviously I am biased by the JDBC API which would like to have\n> PreparedStatement.execute() return the number of rows inserted\n> without having to wait to read all of the rows returned\n\nHuh ... just how *is* PreparedStatement.execute() supposed\nto behave when the statement is an INSERT RETURNING?It's really executeUpdate which is supposed to return the number of rows updated.Right, and executeUpdate is the wrong API method to use, in the PostgreSQL world, when executing insert/update/delete with the non-SQL-standard returning clause. executeQuery is the method to use. And execute() should behave as if executeQuery was called, i.e., return true, which it is capable of doing since it has resultSet data that it needs to handle.The addition of returning turns the insert/update/delete into a select in terms of effective client-seen behavior.ISTM that you are trying to make user-error less painful. While that is laudable it apparently isn't practical. They can either discard the results and get a count by omitting returning or obtain the result and derive the count by counting rows alongside whatever else they needed the returned data for.Any comment on why the CommandComplete is incorrect ?It returns INSERT 0 0 if a cursor is usedDave",
"msg_date": "Thu, 13 Jul 2023 09:53:55 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:\n\n>\n> Any comment on why the CommandComplete is incorrect ?\n> It returns INSERT 0 0 if a cursor is used\n>\n\n Looking at DECLARE it is surprising that what you describe is even\npossible. Can you share a psql reproducer?\n\nDavid J.\n\nOn Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:Any comment on why the CommandComplete is incorrect ?It returns INSERT 0 0 if a cursor is used Looking at DECLARE it is surprising that what you describe is even possible. Can you share a psql reproducer?David J.",
"msg_date": "Thu, 13 Jul 2023 07:24:50 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Thu, 13 Jul 2023 at 10:24, David G. Johnston <[email protected]>\nwrote:\n\n> On Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:\n>\n>>\n>> Any comment on why the CommandComplete is incorrect ?\n>> It returns INSERT 0 0 if a cursor is used\n>>\n>\n> Looking at DECLARE it is surprising that what you describe is even\n> possible. Can you share a psql reproducer?\n>\n\napologies, we are using a portal, not a cursor.\nDave Cramer\n\n>\n\nOn Thu, 13 Jul 2023 at 10:24, David G. Johnston <[email protected]> wrote:On Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:Any comment on why the CommandComplete is incorrect ?It returns INSERT 0 0 if a cursor is used Looking at DECLARE it is surprising that what you describe is even possible. Can you share a psql reproducer?apologies, we are using a portal, not a cursor. Dave Cramer",
"msg_date": "Thu, 13 Jul 2023 21:07:39 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 6:07 PM Dave Cramer <[email protected]> wrote:\n\n> On Thu, 13 Jul 2023 at 10:24, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:\n>>\n>>>\n>>> Any comment on why the CommandComplete is incorrect ?\n>>> It returns INSERT 0 0 if a cursor is used\n>>>\n>>\n>> Looking at DECLARE it is surprising that what you describe is even\n>> possible. Can you share a psql reproducer?\n>>\n>\n> apologies, we are using a portal, not a cursor.\n>\n>\nStill the same basic request of providing a reproducer - ideally in psql.\n\nIIUC a portal has to be used for a prepared (extended query mode) result\nset returning query. v16 can now handle parameter binding so:\n\npostgres=# \\bind 4\npostgres=# insert into ins values ($1) returning id;\n id\n----\n 4\n(1 row)\n\nINSERT 0 1\n\nWhich gives the expected non-zero command tag row count result.\n\nDavid J.\n\nOn Thu, Jul 13, 2023 at 6:07 PM Dave Cramer <[email protected]> wrote:On Thu, 13 Jul 2023 at 10:24, David G. Johnston <[email protected]> wrote:On Thursday, July 13, 2023, Dave Cramer <[email protected]> wrote:Any comment on why the CommandComplete is incorrect ?It returns INSERT 0 0 if a cursor is used Looking at DECLARE it is surprising that what you describe is even possible. Can you share a psql reproducer?apologies, we are using a portal, not a cursor. Still the same basic request of providing a reproducer - ideally in psql.IIUC a portal has to be used for a prepared (extended query mode) result set returning query. v16 can now handle parameter binding so:postgres=# \\bind 4postgres=# insert into ins values ($1) returning id; id---- 4(1 row)INSERT 0 1Which gives the expected non-zero command tag row count result.David J.",
"msg_date": "Fri, 14 Jul 2023 08:53:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-12 20:57, Dave Cramer wrote:\n> Without a cursor it returns right away as all of the results are \n> returned\n> by the server. However with cursor you have to wait until you fetch the\n> rows before you can get the CommandComplete message which btw is wrong \n> as\n> it returns INSERT 0 0 instead of INSERT 2 0\n\nTo make sure I am following, was this describing a comparison of\ntwo different ways in Java, using JDBC, to perform the same operation,\none of which behaves as desired while the other doesn't? If so, for\nmy curiosity, what do both ways look like in Java?\n\nOr was it a comparison of two different operations, say one\nan INSERT RETURNING and the other something else?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 12:07:53 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "David,\n\nI will try to get a tcpdump file. Doing this in libpq seems challenging as\nI'm not aware of how to create a portal in psql.\n\nChap\n\nThe only difference is one instance uses a portal to fetch the results, the\nother (correct one) is a normal insert where all of the rows are returned\nimmediately\n\nthis is a reproducer in Java\n\nconn.prepareStatement(\"DROP TABLE IF EXISTS test_table\").execute();\nconn.prepareStatement(\"CREATE TABLE IF NOT EXISTS test_table (id\nSERIAL PRIMARY KEY, cnt INT NOT NULL)\").execute();\n\nfor (var fetchSize : List.of(0, 1, 2, 3)) {\n System.out.println(\"FetchSize=\" + fetchSize);\n\n try (var stmt = conn.prepareStatement(\"INSERT INTO test_table\n(cnt) VALUES (1), (2) RETURNING id\", RETURN_GENERATED_KEYS)) {\n stmt.setFetchSize(fetchSize);\n\n var ret = stmt.executeUpdate();\n System.out.println(\"executeUpdate result: \" + ret);\n\n var rs = stmt.getGeneratedKeys();\n System.out.print(\"ids: \");\n while (rs.next()) {\n System.out.print(rs.getInt(1) + \" \");\n }\n System.out.print(\"\\n\\n\");\n }\n}\n\nDave\n\nOn Fri, 14 Jul 2023 at 12:07, <[email protected]> wrote:\n\n> On 2023-07-12 20:57, Dave Cramer wrote:\n> > Without a cursor it returns right away as all of the results are\n> > returned\n> > by the server. However with cursor you have to wait until you fetch the\n> > rows before you can get the CommandComplete message which btw is wrong\n> > as\n> > it returns INSERT 0 0 instead of INSERT 2 0\n>\n> To make sure I am following, was this describing a comparison of\n> two different ways in Java, using JDBC, to perform the same operation,\n> one of which behaves as desired while the other doesn't? If so, for\n> my curiosity, what do both ways look like in Java?\n>\n> Or was it a comparison of two different operations, say one\n> an INSERT RETURNING and the other something else?\n>\n> Regards,\n> -Chap\n>\n\nDavid, I will try to get a tcpdump file. Doing this in libpq seems challenging as I'm not aware of how to create a portal in psql.ChapThe only difference is one instance uses a portal to fetch the results, the other (correct one) is a normal insert where all of the rows are returned immediatelythis is a reproducer in Javaconn.prepareStatement(\"DROP TABLE IF EXISTS test_table\").execute();\nconn.prepareStatement(\"CREATE TABLE IF NOT EXISTS test_table (id SERIAL PRIMARY KEY, cnt INT NOT NULL)\").execute();\n\nfor (var fetchSize : List.of(0, 1, 2, 3)) {\n System.out.println(\"FetchSize=\" + fetchSize);\n\n try (var stmt = conn.prepareStatement(\"INSERT INTO test_table (cnt) VALUES (1), (2) RETURNING id\", RETURN_GENERATED_KEYS)) {\n stmt.setFetchSize(fetchSize);\n\n var ret = stmt.executeUpdate();\n System.out.println(\"executeUpdate result: \" + ret);\n\n var rs = stmt.getGeneratedKeys();\n System.out.print(\"ids: \");\n while (rs.next()) {\n System.out.print(rs.getInt(1) + \" \");\n }\n System.out.print(\"\\n\\n\");\n }\n}DaveOn Fri, 14 Jul 2023 at 12:07, <[email protected]> wrote:On 2023-07-12 20:57, Dave Cramer wrote:\n> Without a cursor it returns right away as all of the results are \n> returned\n> by the server. However with cursor you have to wait until you fetch the\n> rows before you can get the CommandComplete message which btw is wrong \n> as\n> it returns INSERT 0 0 instead of INSERT 2 0\n\nTo make sure I am following, was this describing a comparison of\ntwo different ways in Java, using JDBC, to perform the same operation,\none of which behaves as desired while the other doesn't? 
If so, for\nmy curiosity, what do both ways look like in Java?\n\nOr was it a comparison of two different operations, say one\nan INSERT RETURNING and the other something else?\n\nRegards,\n-Chap",
"msg_date": "Fri, 14 Jul 2023 12:29:57 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 9:30 AM Dave Cramer <[email protected]> wrote:\n\n> David,\n>\n> I will try to get a tcpdump file. Doing this in libpq seems challenging as\n> I'm not aware of how to create a portal in psql.\n>\n\nYeah, apparently psql does something special (like ignoring it...) with its\nFETCH_COUNT variable (set to 2 below as evidenced by the first query) for\nthe insert returning case. As documented since the command itself is not\nselect or values the test in is_select_command returns false and the branch:\n\n else if (pset.fetch_count <= 0 || pset.gexec_flag ||\npset.crosstab_flag || !is_select_command(query))\n{\n/* Default fetch-it-all-and-print mode */\n\nIs chosen.\n\nFixing that test in some manner and recompiling psql seems like it should\nbe the easiest way to produce a core-only test case.\n\npostgres=# select * from (Values (1),(2),(30000),(40000)) vals (v);\n v\n---\n 1\n 2\n 30000\n 40000\n(4 rows)\n\npostgres=# \\bind 5 6 70000 80000\npostgres=# insert into ins values ($1),($2),($3),($4) returning id;\n id\n-------\n 5\n 6\n 70000\n 80000\n(4 rows)\n\nINSERT 0 4\n\nI was hoping to see the INSERT RETURNING query have a 4 width header\ninstead of 7.\n\nDavid J.\n\nOn Fri, Jul 14, 2023 at 9:30 AM Dave Cramer <[email protected]> wrote:David, I will try to get a tcpdump file. Doing this in libpq seems challenging as I'm not aware of how to create a portal in psql.Yeah, apparently psql does something special (like ignoring it...) with its FETCH_COUNT variable (set to 2 below as evidenced by the first query) for the insert returning case. As documented since the command itself is not select or values the test in is_select_command returns false and the branch: else if (pset.fetch_count <= 0 || pset.gexec_flag ||\t\t\t pset.crosstab_flag || !is_select_command(query))\t{\t\t/* Default fetch-it-all-and-print mode */Is chosen.Fixing that test in some manner and recompiling psql seems like it should be the easiest way to produce a core-only test case.postgres=# select * from (Values (1),(2),(30000),(40000)) vals (v); v--- 1 2 30000 40000(4 rows)postgres=# \\bind 5 6 70000 80000postgres=# insert into ins values ($1),($2),($3),($4) returning id; id------- 5 6 70000 80000(4 rows)INSERT 0 4I was hoping to see the INSERT RETURNING query have a 4 width header instead of 7.David J.",
"msg_date": "Fri, 14 Jul 2023 09:50:53 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 9:50 AM David G. Johnston <\[email protected]> wrote:\n\n>\n> Fixing that test in some manner and recompiling psql seems like it should\n> be the easiest way to produce a core-only test case.\n>\n>\nApparently not - since it (ExecQueryUsingCursor) literally wraps the query\nin a DECLARE CURSOR SQL Command which prohibits INSERT...\n\nI suppose we'd have to write a psql equivalent of ExecQueryUsingPortal that\niterates over via fetch to make this work...probably more than I'm willing\nto try.\n\nDavid J.\n\nOn Fri, Jul 14, 2023 at 9:50 AM David G. Johnston <[email protected]> wrote:Fixing that test in some manner and recompiling psql seems like it should be the easiest way to produce a core-only test case.Apparently not - since it (ExecQueryUsingCursor) literally wraps the query in a DECLARE CURSOR SQL Command which prohibits INSERT...I suppose we'd have to write a psql equivalent of ExecQueryUsingPortal that iterates over via fetch to make this work...probably more than I'm willing to try.David J.",
"msg_date": "Fri, 14 Jul 2023 09:56:49 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "See attached pcap file\n\nafter the execute of the portal it returns INSERT 0 0\nDave Cramer\n\n\nOn Fri, 14 Jul 2023 at 12:57, David G. Johnston <[email protected]>\nwrote:\n\n> On Fri, Jul 14, 2023 at 9:50 AM David G. Johnston <\n> [email protected]> wrote:\n>\n>>\n>> Fixing that test in some manner and recompiling psql seems like it should\n>> be the easiest way to produce a core-only test case.\n>>\n>>\n> Apparently not - since it (ExecQueryUsingCursor) literally wraps the query\n> in a DECLARE CURSOR SQL Command which prohibits INSERT...\n>\n> I suppose we'd have to write a psql equivalent of ExecQueryUsingPortal\n> that iterates over via fetch to make this work...probably more than I'm\n> willing to try.\n>\n> David J.\n>",
"msg_date": "Fri, 14 Jul 2023 12:58:18 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 12:58, Dave Cramer wrote:\n> See attached pcap file\n\nSo if the fetch count is zero and no portal is needed,\nor if the fetch count exceeds the row count and the command\ncompletion follows directly with no suspension of the portal, then\nit comes with the correct count, but if the portal gets suspended,\nthen the later command completion reports a zero count?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 13:39:21 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 13:39, <[email protected]> wrote:\n\n> On 2023-07-14 12:58, Dave Cramer wrote:\n> > See attached pcap file\n>\n> So if the fetch count is zero and no portal is needed,\n> or if the fetch count exceeds the row count and the command\n> completion follows directly with no suspension of the portal, then\n> it comes with the correct count, but if the portal gets suspended,\n> then the later command completion reports a zero count?\n>\n>\nSeems so, yes.\n\nDave\n\nOn Fri, 14 Jul 2023 at 13:39, <[email protected]> wrote:On 2023-07-14 12:58, Dave Cramer wrote:\n> See attached pcap file\n\nSo if the fetch count is zero and no portal is needed,\nor if the fetch count exceeds the row count and the command\ncompletion follows directly with no suspension of the portal, then\nit comes with the correct count, but if the portal gets suspended,\nthen the later command completion reports a zero count?\nSeems so, yes.Dave",
"msg_date": "Fri, 14 Jul 2023 14:00:29 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 10:39 AM <[email protected]> wrote:\n\n> On 2023-07-14 12:58, Dave Cramer wrote:\n> > See attached pcap file\n>\n> So if the fetch count is zero and no portal is needed,\n> or if the fetch count exceeds the row count and the command\n> completion follows directly with no suspension of the portal, then\n> it comes with the correct count, but if the portal gets suspended,\n> then the later command completion reports a zero count?\n>\n>\nI cannot really understand that output other than to confirm that all\nqueries had returning and one of them showed INSERT 0 0\n\nIs there some magic set of arguments I should be using besides: tcpdump -Ar\nfilename ?\n\nBecause of the returning they all need a portal so far as the server is\nconcerned and the server will obligingly send the contents of the portal\nback to the client.\n\nI can definitely believe a bug exists in the intersection of a non-SELECT\nquery and a less-than-complete fetch count specification. There doesn't\nseem to be any place in the core testing framework to actually test out the\ninteraction though...I even tried using plpgsql, which lets me open/execute\na plpgsql cursor with insert returning (which SQL prohibits) but we can't\nget access to the command tag itself there. The ROW_COUNT variable likely\ntracks actual rows fetched or seen (in the case of MOVE).\n\nWhat I kinda suspect might be happening with a portal suspend is that the\nsuspension loop only ends when the final fetch attempt find zero rows to\nreturn and thus the final count ends up being zero instead of the\ncumulative sum over all portal scans.\n\nDavid J.\n\nOn Fri, Jul 14, 2023 at 10:39 AM <[email protected]> wrote:On 2023-07-14 12:58, Dave Cramer wrote:\n> See attached pcap file\n\nSo if the fetch count is zero and no portal is needed,\nor if the fetch count exceeds the row count and the command\ncompletion follows directly with no suspension of the portal, then\nit comes with the correct count, but if the portal gets suspended,\nthen the later command completion reports a zero count?I cannot really understand that output other than to confirm that all queries had returning and one of them showed INSERT 0 0Is there some magic set of arguments I should be using besides: tcpdump -Ar filename ?Because of the returning they all need a portal so far as the server is concerned and the server will obligingly send the contents of the portal back to the client.I can definitely believe a bug exists in the intersection of a non-SELECT query and a less-than-complete fetch count specification. There doesn't seem to be any place in the core testing framework to actually test out the interaction though...I even tried using plpgsql, which lets me open/execute a plpgsql cursor with insert returning (which SQL prohibits) but we can't get access to the command tag itself there. The ROW_COUNT variable likely tracks actual rows fetched or seen (in the case of MOVE).What I kinda suspect might be happening with a portal suspend is that the suspension loop only ends when the final fetch attempt find zero rows to return and thus the final count ends up being zero instead of the cumulative sum over all portal scans.David J.",
"msg_date": "Fri, 14 Jul 2023 11:19:40 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-12 21:30, David G. Johnston wrote:\n> Right, and executeUpdate is the wrong API method to use, in the \n> PostgreSQL\n> world, when executing insert/update/delete with the non-SQL-standard\n> returning clause. ... ISTM that you are trying to make user-error less\n> painful.\n\nIn Dave's Java reproducer, no user-error has been made, because the user\nsupplied a plain INSERT with the RETURN_GENERATED_KEYS option, and the\nRETURNING clause has been added by the JDBC driver. So the user expects\nexecuteUpdate to be the right method, and return the row count, and\ngetGeneratedKeys() to then return the rows.\n\nI've seen a possibly even more interesting result using pgjdbc-ng with\nprotocol.trace=true:\n\nFetchSize=0\n<P<D<S\n> 1.>t.>T$>Z*\n<B<E<S\n> 2.>D.>D.>C.>Z*\nexecuteUpdate result: 2\nids: 1 2\n\nFetchSize=1\n<B<E<H\n> 2.>D.>s*\nexecuteUpdate result: -1\nids: 3 <E<H\n> D.>s*\n4 <E<H\n> C*\n<C<S\n> 3.>Z*\n\nFetchSize=2\n<B<E<H\n> 2.>D.>D.>s*\nexecuteUpdate result: -1\nids: 5 6 <E<H\n> C*\n<C<S\n> 3.>Z*\n\nFetchSize=3\n<B<E<H\n> 2.>D.>D.>C*\n<C<S\n> 3.>Z*\nexecuteUpdate result: 2\nids: 7 8\n\n\nUnless there's some interleaving of trace and stdout messages happening\nhere, I think pgjdbc-ng is not even collecting all the returned rows\nin the suspended-cursor case before executeUpdate returns, but keeping\nthe cursor around for getGeneratedKeys() to use, so executeUpdate\nreturns -1 before even having seen the later command complete, and would\nstill do that even if the command complete message had the right count.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 14:34:46 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 14:19, David G. Johnston wrote:\n> Is there some magic set of arguments I should be using besides: tcpdump \n> -Ar\n> filename ?\n\nI opened it with Wireshark, which has a pgsql protocol decoder.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 14:39:27 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 14:34, <[email protected]> wrote:\n\n> On 2023-07-12 21:30, David G. Johnston wrote:\n> > Right, and executeUpdate is the wrong API method to use, in the\n> > PostgreSQL\n> > world, when executing insert/update/delete with the non-SQL-standard\n> > returning clause. ... ISTM that you are trying to make user-error less\n> > painful.\n>\n> In Dave's Java reproducer, no user-error has been made, because the user\n> supplied a plain INSERT with the RETURN_GENERATED_KEYS option, and the\n> RETURNING clause has been added by the JDBC driver. So the user expects\n> executeUpdate to be the right method, and return the row count, and\n> getGeneratedKeys() to then return the rows.\n>\n> I've seen a possibly even more interesting result using pgjdbc-ng with\n> protocol.trace=true:\n>\n> FetchSize=0\n> <P<D<S\n> > 1.>t.>T$>Z*\n> <B<E<S\n> > 2.>D.>D.>C.>Z*\n> executeUpdate result: 2\n> ids: 1 2\n>\n> FetchSize=1\n> <B<E<H\n> > 2.>D.>s*\n> executeUpdate result: -1\n> ids: 3 <E<H\n> > D.>s*\n> 4 <E<H\n> > C*\n> <C<S\n> > 3.>Z*\n>\n> FetchSize=2\n> <B<E<H\n> > 2.>D.>D.>s*\n> executeUpdate result: -1\n> ids: 5 6 <E<H\n> > C*\n> <C<S\n> > 3.>Z*\n>\n> FetchSize=3\n> <B<E<H\n> > 2.>D.>D.>C*\n> <C<S\n> > 3.>Z*\n> executeUpdate result: 2\n> ids: 7 8\n>\n>\n> Unless there's some interleaving of trace and stdout messages happening\n> here, I think pgjdbc-ng is not even collecting all the returned rows\n> in the suspended-cursor case before executeUpdate returns, but keeping\n> the cursor around for getGeneratedKeys() to use, so executeUpdate\n> returns -1 before even having seen the later command complete, and would\n> still do that even if the command complete message had the right count.\n>\n\nMy guess is that pgjdbc-ng sees the -1 and doesn't bother looking any\nfurther\n\nEither way pgjdbc-ng is a dead project so I'm not so concerned about it.\n\nDave\n\n>\n> Regards,\n> -Chap\n>\n\nOn Fri, 14 Jul 2023 at 14:34, <[email protected]> wrote:On 2023-07-12 21:30, David G. Johnston wrote:\n> Right, and executeUpdate is the wrong API method to use, in the \n> PostgreSQL\n> world, when executing insert/update/delete with the non-SQL-standard\n> returning clause. ... ISTM that you are trying to make user-error less\n> painful.\n\nIn Dave's Java reproducer, no user-error has been made, because the user\nsupplied a plain INSERT with the RETURN_GENERATED_KEYS option, and the\nRETURNING clause has been added by the JDBC driver. 
So the user expects\nexecuteUpdate to be the right method, and return the row count, and\ngetGeneratedKeys() to then return the rows.\n\nI've seen a possibly even more interesting result using pgjdbc-ng with\nprotocol.trace=true:\n\nFetchSize=0\n<P<D<S\n> 1.>t.>T$>Z*\n<B<E<S\n> 2.>D.>D.>C.>Z*\nexecuteUpdate result: 2\nids: 1 2\n\nFetchSize=1\n<B<E<H\n> 2.>D.>s*\nexecuteUpdate result: -1\nids: 3 <E<H\n> D.>s*\n4 <E<H\n> C*\n<C<S\n> 3.>Z*\n\nFetchSize=2\n<B<E<H\n> 2.>D.>D.>s*\nexecuteUpdate result: -1\nids: 5 6 <E<H\n> C*\n<C<S\n> 3.>Z*\n\nFetchSize=3\n<B<E<H\n> 2.>D.>D.>C*\n<C<S\n> 3.>Z*\nexecuteUpdate result: 2\nids: 7 8\n\n\nUnless there's some interleaving of trace and stdout messages happening\nhere, I think pgjdbc-ng is not even collecting all the returned rows\nin the suspended-cursor case before executeUpdate returns, but keeping\nthe cursor around for getGeneratedKeys() to use, so executeUpdate\nreturns -1 before even having seen the later command complete, and would\nstill do that even if the command complete message had the right count.My guess is that pgjdbc-ng sees the -1 and doesn't bother looking any furtherEither way pgjdbc-ng is a dead project so I'm not so concerned about it.Dave \n\nRegards,\n-Chap",
"msg_date": "Fri, 14 Jul 2023 14:47:52 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 11:34 AM <[email protected]> wrote:\n\n> On 2023-07-12 21:30, David G. Johnston wrote:\n> > Right, and executeUpdate is the wrong API method to use, in the\n> > PostgreSQL\n> > world, when executing insert/update/delete with the non-SQL-standard\n> > returning clause. ... ISTM that you are trying to make user-error less\n> > painful.\n>\n> In Dave's Java reproducer, no user-error has been made, because the user\n> supplied a plain INSERT with the RETURN_GENERATED_KEYS option, and the\n> RETURNING clause has been added by the JDBC driver. So the user expects\n> executeUpdate to be the right method, and return the row count, and\n> getGeneratedKeys() to then return the rows.\n>\n>\nThat makes more sense, though I don't understand how the original desire of\nhaving the count appear before the actual rows would materially\nbenefit that feature.\n\nI agree that the documented contract of the insert command tag says it\nreports the size of the entire tuple store maintained by the server during\nthe transaction instead of just the most recent count on subsequent fetches.\n\nDavid J.",
"msg_date": "Fri, 14 Jul 2023 12:06:19 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 14:19, David G. Johnston wrote:\n> Because of the returning they all need a portal so far as the server is\n> concerned and the server will obligingly send the contents of the \n> portal\n> back to the client.\n\nDave's pcap file, for the fetch count 0 case, does not show any\nportal name used in the bind, describe, or execute messages, or\nany portal close message issued by the client afterward. The server\nmay be using a portal in that case, but it seems more transparent\nto the client when fetch count is zero.\n\nPerhaps an easy rule would be, if the driver itself adds RETURNING\nbecause of a RETURN_GENERATED_KEYS option, it should also force the\nfetch count to zero and collect all the returned rows before\nexecuteUpdate returns, and then it will have the right count\nto return?\n\nIt seems that any approach leaving any of the rows unfetched at\nthe time executeUpdate returns might violate a caller's expectation\nthat the whole outcome is known by that point.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 15:40:05 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 15:40, <[email protected]> wrote:\n\n> On 2023-07-14 14:19, David G. Johnston wrote:\n> > Because of the returning they all need a portal so far as the server is\n> > concerned and the server will obligingly send the contents of the\n> > portal\n> > back to the client.\n>\n> Dave's pcap file, for the fetch count 0 case, does not show any\n> portal name used in the bind, describe, or execute messages, or\n> any portal close message issued by the client afterward. The server\n> may be using a portal in that case, but it seems more transparent\n> to the client when fetch count is zero.\n>\n>\nThere is no portal for fetch count 0.\n\n\n> Perhaps an easy rule would be, if the driver itself adds RETURNING\n> because of a RETURN_GENERATED_KEYS option, it should also force the\n> fetch count to zero and collect all the returned rows before\n> executeUpdate returns, and then it will have the right count\n> to return?\n>\n\nWell that kind of negates the whole point of using a cursor in the case\nwhere you have a large result set.\n\nThe rows are subsequently fetched in rs.next()\n\nSolves one problem, but creates another.\n\nDave",
"msg_date": "Fri, 14 Jul 2023 15:49:33 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> I agree that the documented contract of the insert command tag says it\n> reports the size of the entire tuple store maintained by the server during\n> the transaction instead of just the most recent count on subsequent fetches.\n\nWhere do you see that documented, exactly? I looked in the protocol\nchapter and didn't find anything definite either way.\n\nI'm quite prepared to believe there are bugs here, since this whole\nset of behaviors is unreachable via libpq: you can't get it to send\nan Execute with a count other than zero (ie, fetch all), nor is it\nprepared to deal with the PortalSuspended messages it'd get if it did.\n\nI think that the behavior is arising from this bit in PortalRun:\n\n switch (portal->strategy)\n {\n ...\n case PORTAL_ONE_RETURNING:\n ...\n\n /*\n * If we have not yet run the command, do so, storing its\n * results in the portal's tuplestore. But we don't do that\n * for the PORTAL_ONE_SELECT case.\n */\n if (portal->strategy != PORTAL_ONE_SELECT && !portal->holdStore)\n FillPortalStore(portal, isTopLevel);\n\n /*\n * Now fetch desired portion of results.\n */\n nprocessed = PortalRunSelect(portal, true, count, dest);\n\n /*\n * If the portal result contains a command tag and the caller\n * gave us a pointer to store it, copy it and update the\n * rowcount.\n */\n if (qc && portal->qc.commandTag != CMDTAG_UNKNOWN)\n {\n CopyQueryCompletion(qc, &portal->qc);\n------>>> qc->nprocessed = nprocessed;\n }\n\n /* Mark portal not active */\n portal->status = PORTAL_READY;\n\n /*\n * Since it's a forward fetch, say DONE iff atEnd is now true.\n */\n result = portal->atEnd;\n\nThe marked line is, seemingly intentionally, overriding the portal's\nrowcount (which ought to count the whole query result) with the number\nof rows processed in the current fetch. Chap's theory that that's\nalways zero when this is being driven by JDBC seems plausible,\nsince the query status won't be returned to the client unless we\ndetect end-of-portal (otherwise we just send PortalSuspended).\n\nIt seems plausible to me that we should just remove that marked line.\nNot sure about the compatibility implications, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jul 2023 15:50:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 15:49, Dave Cramer wrote:\n> On Fri, 14 Jul 2023 at 15:40, <[email protected]> wrote:\n>> Perhaps an easy rule would be, if the driver itself adds RETURNING\n>> because of a RETURN_GENERATED_KEYS option, it should also force the\n>> fetch count to zero and collect all the returned rows before\n>> executeUpdate returns, and then it will have the right count\n>> to return?\n> \n> Well that kind of negates the whole point of using a cursor in the case\n> where you have a large result set.\n> \n> The rows are subsequently fetched in rs.next()\n\nI guess it comes down, again, to the question of what kind of thing\nthe API client thinks it is doing, when it issues an INSERT with\nthe RETURN_GENERATED_KEYS option.\n\nAn API client issuing a plain INSERT knows it is the kind of thing\nfor which executeUpdate() is appropriate, and the complete success\nor failure outcome is known when that returns.\n\nAn API client issuing its own explicit INSERT RETURNING knows it\nis the kind of thing for which executeQuery() is appropriate, and\nthat some processing (and possibly ereporting) may be left to\noccur while working through the ResultSet.\n\nBut now how about this odd hybrid, where the API client wrote\na plain INSERT, but added the RETURN_GENERATED_KEYS option?\nThe rewritten query is the kind of thing the server and the\ndriver need to treat as a query, but to the API client it still\nappears the kind of thing for which executeUpdate() is suited.\nThe generated keys can then be examined--in the form of a\nResultSet--but one obtained with the special method\ngetGeneratedKeys(), rather than the usual way of getting the\nResultSet from a query.\n\nShould the client then assume the semantics of executeUpdate\nhave changed for this case, the outcome isn't fully known yet,\nand errors could come to light while examining the returned\nkeys? Or should it still assume that executeUpdate has its\nusual meaning, all the work is done by that point, and the\nResultSet from getGeneratedKeys() is simply a convenient,\nfamiliar interface for examining what came back?\n\nI'm not sure if there's a firm answer to that one way or the\nother in the formal JDBC spec, but the latter seems perhaps\nsafer to me.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:32:39 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 16:32, <[email protected]> wrote:\n\n> On 2023-07-14 15:49, Dave Cramer wrote:\n> > On Fri, 14 Jul 2023 at 15:40, <[email protected]> wrote:\n> >> Perhaps an easy rule would be, if the driver itself adds RETURNING\n> >> because of a RETURN_GENERATED_KEYS option, it should also force the\n> >> fetch count to zero and collect all the returned rows before\n> >> executeUpdate returns, and then it will have the right count\n> >> to return?\n> >\n> > Well that kind of negates the whole point of using a cursor in the case\n> > where you have a large result set.\n> >\n> > The rows are subsequently fetched in rs.next()\n>\n> I guess it comes down, again, to the question of what kind of thing\n> the API client thinks it is doing, when it issues an INSERT with\n> the RETURN_GENERATED_KEYS option.\n>\n> An API client issuing a plain INSERT knows it is the kind of thing\n> for which executeUpdate() is appropriate, and the complete success\n> or failure outcome is known when that returns.\n>\n> An API client issuing its own explicit INSERT RETURNING knows it\n> is the kind of thing for which executeQuery() is appropriate, and\n> that some processing (and possibly ereporting) may be left to\n> occur while working through the ResultSet.\n>\n> But now how about this odd hybrid, where the API client wrote\n> a plain INSERT, but added the RETURN_GENERATED_KEYS option?\n> The rewritten query is the kind of thing the server and the\n> driver need to treat as a query, but to the API client it still\n> appears the kind of thing for which executeUpdate() is suited.\n> The generated keys can then be examined--in the form of a\n> ResultSet--but one obtained with the special method\n> getGeneratedKeys(), rather than the usual way of getting the\n> ResultSet from a query.\n>\n> Should the client then assume the semantics of executeUpdate\n> have changed for this case, the outcome isn't fully known yet,\n> and errors could come to light while examining the returned\n> keys? 
Or should it still assume that executeUpdate has its\n> usual meaning, all the work is done by that point, and the\n> ResultSet from getGeneratedKeys() is simply a convenient,\n> familiar interface for examining what came back?\n>\n\nThe fly in the ointment here is when they setFetchSize and we decide to use\na Portal under the covers.\n\nI'm willing to document around this since it looks pretty unlikely that\nchanging the behaviour in the server is a non-starter.\n\n\n>\n> I'm not sure if there's a firm answer to that one way or the\n> other in the formal JDBC spec, but the latter seems perhaps\n> safer to me.\n>\n\nI'll leave the user to decide their own fate.\n\nDave\n\n>\n> Regards,\n> -Chap\n>",
"msg_date": "Fri, 14 Jul 2023 17:02:43 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 17:02, Dave Cramer wrote:\n> The fly in the ointment here is when they setFetchSize and we decide to\n> use a Portal under the covers.\n\nA person might language-lawyer about whether setFetchSize even applies\nto the kind of thing done with executeUpdate.\n\nHmm ... the apidoc for setFetchSize says \"Gives the JDBC driver a hint\nas to the number of rows that should be fetched from the database when\nmore rows are needed for ResultSet objects generated by this Statement.\"\n\nSo ... it appears to apply to any \"ResultSet objects generated by this\nStatement\", and getGeneratedKeys returns a ResultSet, so maybe\nsetFetchSize should apply to it.\n\nOTOH, setFetchSize has @since 1.2, and getGeneratedKeys @since 1.4.\nAt the time setFetchSize was born, the only way you could get a \nResultSet\nwas from the kind of thing you'd use executeQuery for.\n\nSo when getGeneratedKeys was later added, a way of getting a ResultSet\nafter an executeUpdate, did they consciously intend it to come under\nthe jurisdiction of existing apidoc that concerned the fetch size of\na ResultSet you wanted from executeQuery?\n\nFull employment for language lawyers.\n\nMoreover, the apidoc does say the fetch size is \"a hint\", and also that\nit applies \"when more rows are needed\" from the ResultSet.\n\nSo it's technically not a misbehavior to disregard the hint, and you're\nnot even disregarding the hint if you fetch all the rows at once, \nbecause\nthen more rows can't be needed. :)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 17:31:15 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 12:51 PM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > I agree that the documented contract of the insert command tag says it\n> > reports the size of the entire tuple store maintained by the server\n> during\n> > the transaction instead of just the most recent count on subsequent\n> fetches.\n>\n> Where do you see that documented, exactly? I looked in the protocol\n> chapter and didn't find anything definite either way.\n>\n\nOn successful completion, an INSERT command returns a command tag of the\nform\n\nINSERT oid count\nThe count is the number of rows inserted or updated.\n\nhttps://www.postgresql.org/docs/current/sql-insert.html\n\nIt doesn't, nor should, have any qualifications about not applying to the\nreturning case and definitely shouldn't change based upon use of FETCH on\nthe unnamed portal.\n\nI'm quite prepared to believe there are bugs here, since this whole\n> set of behaviors is unreachable via libpq: you can't get it to send\n> an Execute with a count other than zero (ie, fetch all), nor is it\n> prepared to deal with the PortalSuspended messages it'd get if it did.\n>\n> I think that the behavior is arising from this bit in PortalRun:\n>\n> if (qc && portal->qc.commandTag != CMDTAG_UNKNOWN)\n> {\n> CopyQueryCompletion(qc, &portal->qc);\n> ------>>> qc->nprocessed = nprocessed;\n> }\n>\n\nI came to the same conclusion. The original introduction of that line\nreplaced string munging \"SELECT \" + nprocessed; so this code never even\nconsidered INSERT as being in scope and indeed wanted to return the per-run\nvalue in a fetch-list context.\n\nhttps://github.com/postgres/postgres/commit/2f9661311b83dc481fc19f6e3bda015392010a40#diff-f66f9adc3dfc98f2ede2e96691843b75128689a8cb9b79ae68d53dc749c3b22bL781\n\nI think I see why simple removal of that line is sufficient as the\ncopied-from &portal->qc already has the result of executing the underlying\ninsert query. That said, the paranoid approach would seem to be to keep\nthe assignment but only use it when we aren't dealing with the returning\ncase.\n\ndiff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c\nindex 5565f200c3..5e75141f0b 100644\n--- a/src/backend/tcop/pquery.c\n+++ b/src/backend/tcop/pquery.c\n@@ -775,7 +775,8 @@ PortalRun(Portal portal, long count, bool isTopLevel,\nbool run_once,\n if (qc && portal->qc.commandTag !=\nCMDTAG_UNKNOWN)\n {\n CopyQueryCompletion(qc,\n&portal->qc);\n- qc->nprocessed = nprocessed;\n+ if (portal->strategy !=\nPORTAL_ONE_RETURNING)\n+ qc->nprocessed = nprocessed;\n }\n\n /* Mark portal not active */\n\n\n> It seems plausible to me that we should just remove that marked line.\n> Not sure about the compatibility implications, though.\n>\n>\nI believe it is a bug worth fixing, save driver writers processing time\ngetting a count when the command tag is supposed to be providing it to them\nusing compute time already spent anyways.\n\nDavid J.\n\nOn Fri, Jul 14, 2023 at 12:51 PM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> I agree that the documented contract of the insert command tag says it\n> reports the size of the entire tuple store maintained by the server during\n> the transaction instead of just the most recent count on subsequent fetches.\n\nWhere do you see that documented, exactly? 
I looked in the protocol\nchapter and didn't find anything definite either way.On successful completion, an INSERT command returns a command tag of the formINSERT oid countThe count is the number of rows inserted or updated.https://www.postgresql.org/docs/current/sql-insert.htmlIt doesn't, nor should, have any qualifications about not applying to the returning case and definitely shouldn't change based upon use of FETCH on the unnamed portal.\nI'm quite prepared to believe there are bugs here, since this whole\nset of behaviors is unreachable via libpq: you can't get it to send\nan Execute with a count other than zero (ie, fetch all), nor is it\nprepared to deal with the PortalSuspended messages it'd get if it did.\n\nI think that the behavior is arising from this bit in PortalRun:\n if (qc && portal->qc.commandTag != CMDTAG_UNKNOWN)\n {\n CopyQueryCompletion(qc, &portal->qc);\n------>>> qc->nprocessed = nprocessed;\n }I came to the same conclusion. The original introduction of that line replaced string munging \"SELECT \" + nprocessed; so this code never even considered INSERT as being in scope and indeed wanted to return the per-run value in a fetch-list context.https://github.com/postgres/postgres/commit/2f9661311b83dc481fc19f6e3bda015392010a40#diff-f66f9adc3dfc98f2ede2e96691843b75128689a8cb9b79ae68d53dc749c3b22bL781I think I see why simple removal of that line is sufficient as the copied-from &portal->qc already has the result of executing the underlying insert query. That said, the paranoid approach would seem to be to keep the assignment but only use it when we aren't dealing with the returning case.diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.cindex 5565f200c3..5e75141f0b 100644--- a/src/backend/tcop/pquery.c+++ b/src/backend/tcop/pquery.c@@ -775,7 +775,8 @@ PortalRun(Portal portal, long count, bool isTopLevel, bool run_once, if (qc && portal->qc.commandTag != CMDTAG_UNKNOWN) { CopyQueryCompletion(qc, &portal->qc);- qc->nprocessed = nprocessed;+ if (portal->strategy != PORTAL_ONE_RETURNING)+ qc->nprocessed = nprocessed; } /* Mark portal not active */ \nIt seems plausible to me that we should just remove that marked line.\nNot sure about the compatibility implications, though.I believe it is a bug worth fixing, save driver writers processing time getting a count when the command tag is supposed to be providing it to them using compute time already spent anyways.David J.",
"msg_date": "Fri, 14 Jul 2023 14:33:14 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 17:31, Chapman Flack wrote:\n> So when getGeneratedKeys was later added, a way of getting a ResultSet\n> after an executeUpdate, did they consciously intend it to come under\n> the jurisdiction of existing apidoc that concerned the fetch size of\n> a ResultSet you wanted from executeQuery?\n> ...\n> Moreover, the apidoc does say the fetch size is \"a hint\", and also that\n> it applies \"when more rows are needed\" from the ResultSet.\n> \n> So it's technically not a misbehavior to disregard the hint, and you're\n> not even disregarding the hint if you fetch all the rows at once, \n> because\n> then more rows can't be needed. :)\n\n... and just to complete the thought, the apidoc for executeUpdate \nleaves\nno wiggle room for what that method returns: for DML, it has to be the\nrow count.\n\nSo if the only way to get the accurate row count is to fetch all the\nRETURN_GENERATED_KEYS rows at once, either to count them locally or\nto find the count in the completion message that follows them, that\nmandate seems stronger than any hint from setFetchSize.\n\nIf someone really does want to do a huge INSERT and get the generated\nvalues back in increments, it might be clearer to write an explicit\nINSERT RETURNING and issue it with executeQuery, where everything will\nwork as expected.\n\nI am also thinking someone might possibly allocate one Statement to\nuse for some number of executeQuery and executeUpdate calls, and might\ncall setFetchSize as a hint for the queries, but not expect it to have\neffects spilling over to executeUpdate.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 18:12:32 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
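As a sketch of the alternative suggested above (an explicit INSERT ... RETURNING issued through executeQuery, so a large RETURNING result can be streamed batch by batch), something like the following might be used; the table and connection details are again hypothetical, and the row count is simply taken from iterating the ResultSet rather than from the command tag.

    import java.sql.*;

    public class InsertReturningQuerySketch
    {
        public static void main(String[] args) throws Exception
        {
            // Hypothetical DSN and table, as in the earlier sketch.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/postgres", "postgres", ""))
            {
                con.setAutoCommit(false);   // needed for the driver to stream via a portal

                try (PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO test_table (val) " +
                         "SELECT 'v' || g FROM generate_series(1, 100000) g " +
                         "RETURNING id"))
                {
                    ps.setFetchSize(1000);  // read the RETURNING rows in batches

                    long rows = 0;
                    try (ResultSet rs = ps.executeQuery())
                    {
                        while (rs.next())
                            rows++;         // process rs.getLong("id") as needed
                    }
                    System.out.println("inserted rows: " + rows);
                }
                con.commit();
            }
        }
    }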
{
"msg_contents": "On Fri, Jul 14, 2023 at 3:12 PM Chapman Flack <[email protected]> wrote:\n\n> If someone really does want to do a huge INSERT and get the generated\n> values back in increments, it might be clearer to write an explicit\n> INSERT RETURNING and issue it with executeQuery, where everything will\n> work as expected.\n>\n>\nFor PostgreSQL this is even moreso (i.e., huge means count > 1) since the\norder of rows in the returning clause is not promised to be related to the\norder of the rows as seen in the supplied insert command. A manual insert\nreturning should ask for not only any auto-generated column but also the\nset of columns that provide the unique natural key.\n\nDavid J.",
"msg_date": "Fri, 14 Jul 2023 15:22:34 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-07-14 18:22, David G. Johnston wrote:\n> For PostgreSQL this is even moreso (i.e, huge means count > 1) since \n> the\n> order of rows in the returning clause is not promised to be related to \n> the\n> order of the rows as seen in the supplied insert command. A manual \n> insert\n> returning should ask for not only any auto-generated column but also \n> the\n> set of columns that provide the unique natural key.\n\nYikes!\n\nThat sounds like something that (if it's feasible) the driver's\nrewriting for RETURN_GENERATED_KEYS should try to do ... the\ndriver is already expected to be smart enough to know which\ncolumns the generated keys are ... should it also try to rewrite\nthe query in some way to get a meaningful order of the results?\n\nBut then ... the apidoc for getGeneratedKeys is completely\nsilent on the ordering anyway. I get the feeling this whole\ncorner of the JDBC API may have been thought out only as far\nas issuing a single-row INSERT at a time and getting its\nassigned keys back.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 14 Jul 2023 18:39:27 -0400",
"msg_from": "Chapman Flack <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On 2023-Jul-14, Dave Cramer wrote:\n\n> David,\n> \n> I will try to get a tcpdump file. Doing this in libpq seems challenging as\n> I'm not aware of how to create a portal in psql.\n\nYou can probably have a look at src/test/modules/libpq_pipeline/libpq_pipeline.c\nas a basis to write some test code for this.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"If you want to have good ideas, you must have many ideas. Most of them\nwill be wrong, and what you have to learn is which ones to throw away.\"\n (Linus Pauling)\n\n\n",
"msg_date": "Mon, 17 Jul 2023 09:37:41 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 2:49 PM Tom Lane <[email protected]> wrote:\n\n> Dave Cramer <[email protected]> writes:\n> > Obviously I am biased by the JDBC API which would like to have\n> > PreparedStatement.execute() return the number of rows inserted\n> > without having to wait to read all of the rows returned\n>\n> Umm ... you do realize that we return the rows on-the-fly?\n> The server does not know how many rows got inserted/returned\n> until it's run the query to completion, at which point all\n> the data has already been sent to the client\n>\n\nDoesn't this code contradict that statement?\n\nsrc/backend/tcop/pquery.c\n/*\n* If we have not yet run the command, do so, storing its\n* results in the portal's tuplestore. But we don't do that\n* for the PORTAL_ONE_SELECT case.\n*/\nif (portal->strategy != PORTAL_ONE_SELECT && !portal->holdStore)\n FillPortalStore(portal, isTopLevel);\n/*\n* Now fetch desired portion of results.\n*/\nnprocessed = PortalRunSelect(portal, true, count, dest);\n\n\nNot sure we'd want to lock ourselves into this implementation but at least\nas it stands now we could send a message with the portal size after calling\nFillPortalStore and prior to calling PortalRunSelect.\n\nDavid J.",
"msg_date": "Mon, 17 Jul 2023 17:17:27 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CommandStatus from insert returning when using a portal."
}
] |
[
{
"msg_contents": "Hi,\n\nI was reading the jit implementation and I noticed that the lljit field of\nLLVMJitHandle is being assigned twice in the llvm_compile_module function; is this\ncorrect? I'm attaching a proposed fix that removes the second assignment. I\nran meson test and all tests have passed.\n\n--\nMatheus Alcantara",
"msg_date": "Thu, 13 Jul 2023 00:22:13 +0000",
"msg_from": "Matheus Alcantara <[email protected]>",
"msg_from_op": true,
"msg_subject": "Duplicated LLVMJitHandle->lljit assignment?"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 5:22 PM Matheus Alcantara <[email protected]> wrote:\n>\n> I was reading the jit implementation and I notice that the lljit field of\n> LLVMJitHandle is being assigned twice on llvm_compile_module function, is this\n> correct? I'm attaching a supposed fixes that removes the second assignment. I\n> ran meson test and all tests have pass.\n\n- handle->lljit = compile_orc;\n\nLGTM.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 12 Jul 2023 18:41:37 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicated LLVMJitHandle->lljit assignment?"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 06:41:37PM -0700, Gurjeet Singh wrote:\n> LGTM.\n\nIndeed. It looks like a small thinko from 6c57f2e when support for\nLLVM12 was added.\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 12:39:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicated LLVMJitHandle->lljit assignment?"
}
] |
[
{
"msg_contents": "We're observing a few cases with lockmanager spikes in a few quite loaded\nsystems.\n\nThese cases are different; queries are different, Postgres versions are 12,\n13, and 14.\n\nBut in all cases, servers are quite beefy (96-128 vCPUs, ~600-800 GiB)\nreceiving a lot of TPS (a few dozen thousand). Most queries that\nsuffer from wait_event=lockmanager involve a substantial number of\ntables/indexes, often with partitioning.\n\nFP_LOCK_SLOTS_PER_BACKEND is now hard-coded 16 in storage/proc.h – and it\nis now very easy to hit this threshold in a loaded system, especially, for\nexample, if a table with a dozen indexes was partitioned. It seems any\nsystem with good growth hits it sooner or later.\n\nI wonder, would it make sense to:\n1) either consider increasing this hard-coded value, taking into account\nthat 16 seems to be very low for modern workloads, schemas, and hardware –\nsay, it could be 64,\n2) or even make it configurable – a new GUC.\n\nThanks,\nNikolay Samokhvalov\nFounder, Postgres.ai",
"msg_date": "Wed, 12 Jul 2023 22:02:14 -0700",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
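One rough way to quantify how much a system is affected by this is to sample pg_stat_activity for backends waiting on the lock manager LWLocks. A minimal JDBC sketch, assuming a hypothetical local connection and a role allowed to see other sessions; the wait event is spelled LockManager on 13 and later and lock_manager on 12:

    import java.sql.*;

    public class LockManagerWaitSampler
    {
        public static void main(String[] args) throws Exception
        {
            // Hypothetical DSN; adjust host, database, and credentials as needed.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/postgres", "postgres", "");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT count(*) FROM pg_stat_activity " +
                     "WHERE wait_event_type = 'LWLock' " +
                     "AND wait_event IN ('LockManager', 'lock_manager')"))
            {
                for (int i = 0; i < 60; i++)    // one sample per second for a minute
                {
                    try (ResultSet rs = ps.executeQuery())
                    {
                        rs.next();
                        System.out.println("backends waiting on lock manager LWLocks: " + rs.getLong(1));
                    }
                    Thread.sleep(1000);
                }
            }
        }
    }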
{
"msg_contents": "\n\nOn 7/13/23 07:02, Nikolay Samokhvalov wrote:\n> We're observing a few cases with lockmanager spikes in a few quite\n> loaded systems.\n> \n> These cases are different; queries are different, Postgres versions are\n> 12, 13, and 14.\n> \n> But in all cases, servers are quite beefy (96-128 vCPUs, ~600-800 GiB)\n> receiving a lot of TPS (a few dozens of thousands). Most queries that\n> struggle from wait_event=lockmanager involve a substantial number of\n> tables/indexes, often with partitioning.\n> \n> FP_LOCK_SLOTS_PER_BACKEND is now hard-coded 16 in storage/proc.h – and\n> it is now very easy to hit this threshold in a loaded system,\n> especially, for example, if a table with a dozen of indexes was\n> partitioned. It seems any system with good growth hits it sooner or later.\n> \n> I wonder, would it make sense to:\n> 1) either consider increasing this hard-coded value, taking into account\n> that 16 seems to be very low for modern workloads, schemas, and hardware\n> – say, it could be 64,\n\nWell, that has a cost too, as it makes PGPROC larger, right? At the\nmoment that struct is already ~880B / 14 cachelines, adding 48 XIDs\nwould make it +192B / +3 cachelines. I doubt that won't impact other\ncommon workloads ...\n\nHowever, the lmgr/README says this is meant to alleviate contention on\nthe lmgr partition locks. Wouldn't it be better to increase the number\nof those locks, without touching the PGPROC stuff? Could this be tuned\nusing some heuristics based on number of connections?\n\n> 2) or even make it configurable – a new GUC.\n> \n\nI'm rather strongly against adding yet another GUC for this low-level\nstuff. We already have enough of such knobs it's almost impossible for\nregular users to actually tune the system without doing something wrong.\nI'd even say it's actively harmful, especially if it's aimed at very\ncommon setups/workloads (like here).\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jul 2023 11:30:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "I thought it might be helpful to share some more details from one of the\ncase studies behind Nik's suggestion.\n\nBursty contention on lock_manager lwlocks recently became a recurring cause\nof query throughput drops for GitLab.com, and we got to study the behavior\nvia USDT and uprobe instrumentation along with more conventional\nobservations (see\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301). This\nturned up some interesting finds, and I thought sharing some of that\nresearch might be helpful.\n\nResults so far suggest that increasing FP_LOCK_SLOTS_PER_BACKEND would have\na much larger positive impact than any other mitigation strategy we have\nevaluated. Rather than reducing hold duration or collision rate, adding\nfastpath slots reduces the frequency of even having to acquire those\nlock_manager lwlocks. I suspect this would be helpful for many other\nworkloads, particularly those having high frequency queries whose tables\ncollectively have more than about 16 or indexes.\n\nLowering the lock_manager lwlock acquisition rate means lowering its\ncontention rate (and probably also its contention duration, since exclusive\nmode forces concurrent lockers to queue).\n\nI'm confident this would help our workload, and I strongly suspect it would\nbe generally helpful by letting queries use fastpath locking more often.\n\n> However, the lmgr/README says this is meant to alleviate contention on\n> the lmgr partition locks. Wouldn't it be better to increase the number\n> of those locks, without touching the PGPROC stuff?\n\nThat was my first thought too, but growing the lock_manager lwlock tranche\nisn't nearly as helpful.\n\nOn the slowpath, each relation's lock tag deterministically hashes onto a\nspecific lock_manager lwlock, so growing the number of lock_manager lwlocks\njust makes it less likely for two or more frequently locked relations to\nhash onto the same lock_manager.\n\nIn contrast, growing the number of fastpath slots completely avoids calls\nto the slowpath (i.e. no need to acquire a lock_manager lwlock).\n\nThe saturation condition we'd like to solve is heavy contention on one or\nmore of the lock_manager lwlocks. Since that is driven by the slowpath\nacquisition rate of heavyweight locks, avoiding the slowpath is better than\njust moderately reducing the contention on the slowpath.\n\nTo be fair, increasing the number of lock_manager locks definitely can help\nto a certain extent, but it doesn't cover an important general case. As a\nthought experiment, suppose we increase the lock_manager tranche to some\narbitrarily large size, larger than the number of relations in the db.\nThis unrealistically large size means we have the best case for avoiding\ncollisions -- each relation maps uniquely onto its own lock_manager\nlwlock. That helps a lot in the case where the workload is spread among\nmany non-overlapping sets of relations. But it doesn't help a workload\nwhere any one table is accessed frequently via slowpath locking.\n\nContinuing the thought experiment, if that frequently queried table has 16\nor more indexes, or if it is joined to other tables that collectively add\nup to over 16 relations, then each of those queries is guaranteed to have\nto use the slowpath and acquire the deterministically associated\nlock_manager lwlocks.\n\nSo growing the tranche of lock_manager lwlocks would help for some\nworkloads, while other workloads would not be helped much at all. 
(As a\nconcrete example, a workload at GitLab has several frequently queried\ntables with over 16 indexes that consequently always use at least some\nslowpath locks.)\n\nFor additional context:\n\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#what-influences-lock_manager-lwlock-acquisition-rate\nSummarizes the pathology and its current mitigations.\n\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678\nDocuments the supporting research methodology.\n\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365370510\nWhat code paths require an exclusive mode lwlock for lock_manager?\n\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142\nComparison of fastpath vs. slowpath locking, including quantifying the rate\ndifference.\n\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726\nConfirms the acquisition rate of lock_manager locks is not uniform. The\nsampled workload has a 3x difference in the most vs. least frequently\nacquired lock_manager lock, corresponding to the workload's most frequently\naccessed relations.\n\n> Well, that has a cost too, as it makes PGPROC larger, right? At the\n> moment that struct is already ~880B / 14 cachelines, adding 48 XIDs\n> would make it +192B / +3 cachelines. I doubt that won't impact other\n> common workloads ...\n\nThat's true; growing the data structure may affect L2/L3 cache hit rates\nwhen touching PGPROC. Is that cost worth the benefit of using fastpath for\na higher percentage of table locks? The answer may be workload- and\nplatform-specific. Exposing this as a GUC gives the admin a way to make a\ndifferent choice if our default (currently 16) is bad for them.\n\nI share your reluctance to add another low-level tunable, but like many\nother GUCs, having a generally reasonable default that can be adjusted is\nbetter than forcing folks to fork postgres to adjust a compile-time\nconstant. And unfortunately I don't see a better way to solve this\nproblem. Growing the lock_manager lwlock tranche isn't as effective,\nbecause it doesn't help workloads where one or more relations are locked\nfrequently enough to hit this saturation point.\n\nHandling a larger percentage of heavyweight lock acquisitions via fastpath\ninstead of slowpath seems likely to help many high-throughput workloads,\nsince it avoids having to exclusively acquire an lwlock. It seemed like\nthe least intrusive general-purpose solution we've come up with so far.\nThat's why we wanted to solicit feedback or new ideas from the community.\nCurrently, the only options folks have to solve this class of saturation\nare through some combination of schema changes, application changes,\nvertical scaling, and spreading the query rate among more postgres\ninstances. Those are not feasible and efficient options. Lacking a better\nsolution, exposing a GUC that rarely needs tuning seems reasonable to me.\n\nAnyway, hopefully the extra context is helpful! Please do share your\nthoughts.\n\n-- \n*Matt Smiley* | Staff Site Reliability Engineer at GitLab",
"msg_date": "Wed, 2 Aug 2023 16:51:29 -0700",
"msg_from": "Matt Smiley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
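A quick way to see the 16-slot fastpath limit from a client is to run a query against a relation whose indexes or partitions push it past 16 locked relations, then look at pg_locks.fastpath for the same backend while the locks are still held. A minimal sketch, assuming a hypothetical table wide_table and local connection; relation locks acquired by the first query stay visible until commit, plus a few catalog relations locked by the pg_locks query itself:

    import java.sql.*;

    public class FastpathLockCheck
    {
        public static void main(String[] args) throws Exception
        {
            // Hypothetical DSN and table; "wide_table" stands in for any relation
            // whose indexes/partitions take a single query past 16 relation locks.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/postgres", "postgres", ""))
            {
                con.setAutoCommit(false);   // keep the locks until commit so pg_locks can show them

                try (Statement st = con.createStatement())
                {
                    st.executeQuery("SELECT * FROM wide_table WHERE id = 1").close();

                    // Count how many of this backend's relation locks used the fastpath
                    // versus the shared lock tables (the slowpath).
                    try (ResultSet rs = st.executeQuery(
                             "SELECT fastpath, count(*) FROM pg_locks " +
                             "WHERE pid = pg_backend_pid() AND locktype = 'relation' " +
                             "GROUP BY fastpath"))
                    {
                        while (rs.next())
                            System.out.println("fastpath=" + rs.getBoolean(1)
                                               + " relations=" + rs.getLong(2));
                    }
                }
                con.commit();
            }
        }
    }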
{
"msg_contents": "On 8/3/23 01:51, Matt Smiley wrote:\n> I thought it might be helpful to share some more details from one of the\n> case studies behind Nik's suggestion.\n> \n> Bursty contention on lock_manager lwlocks recently became a recurring\n> cause of query throughput drops for GitLab.com, and we got to study the\n> behavior via USDT and uprobe instrumentation along with more\n> conventional observations (see\n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301\n> <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301>). \n> This turned up some interesting finds, and I thought sharing some of\n> that research might be helpful.\n> \n\nThe analysis in the linked gitlab issue is pretty amazing. I wasn't\nplanning to argue against the findings anyway, but plenty of data\nsupporting the conclusions is good.\n\nI'm not an expert on locking, so some of the stuff I say may be\ntrivially obvious - it's just me thinking about ...\n\nI wonder what's the rough configuration of those systems, though. Both\nthe hardware and PostgreSQL side. How many cores / connections, etc.?\n\n\n> Results so far suggest that increasing FP_LOCK_SLOTS_PER_BACKEND would\n> have a much larger positive impact than any other mitigation strategy we\n> have evaluated. Rather than reducing hold duration or collision rate,\n> adding fastpath slots reduces the frequency of even having to acquire\n> those lock_manager lwlocks. I suspect this would be helpful for many\n> other workloads, particularly those having high frequency queries whose\n> tables collectively have more than about 16 or indexes.\n> \n\nYes, I agree with that. Partitioning makes this issue works, I guess.\nSchemas with indexes on every column are disturbingly common these days\ntoo, which hits the issue too ...\n\n> Lowering the lock_manager lwlock acquisition rate means lowering its\n> contention rate (and probably also its contention duration, since\n> exclusive mode forces concurrent lockers to queue).\n> \n> I'm confident this would help our workload, and I strongly suspect it\n> would be generally helpful by letting queries use fastpath locking more\n> often.\n> \n\nOK\n\n>> However, the lmgr/README says this is meant to alleviate contention on\n>> the lmgr partition locks. Wouldn't it be better to increase the number\n>> of those locks, without touching the PGPROC stuff?\n> \n> That was my first thought too, but growing the lock_manager lwlock\n> tranche isn't nearly as helpful.\n> \n> On the slowpath, each relation's lock tag deterministically hashes onto\n> a specific lock_manager lwlock, so growing the number of lock_manager\n> lwlocks just makes it less likely for two or more frequently locked\n> relations to hash onto the same lock_manager.\n> \n\nHmmm, so if we have a query that joins 16 tables, or a couple tables\nwith indexes, all backends running this will acquire exactly the same\npartition locks. And we're likely acquiring them in exactly the same\norder (to lock objects in the same order because of deadlocks), making\nthe issue worse.\n\n> In contrast, growing the number of fastpath slots completely avoids\n> calls to the slowpath (i.e. no need to acquire a lock_manager lwlock).\n> \n> The saturation condition we'd like to solve is heavy contention on one\n> or more of the lock_manager lwlocks. 
Since that is driven by the\n> slowpath acquisition rate of heavyweight locks, avoiding the slowpath is\n> better than just moderately reducing the contention on the slowpath.\n> \n> To be fair, increasing the number of lock_manager locks definitely can\n> help to a certain extent, but it doesn't cover an important general\n> case. As a thought experiment, suppose we increase the lock_manager\n> tranche to some arbitrarily large size, larger than the number of\n> relations in the db. This unrealistically large size means we have the\n> best case for avoiding collisions -- each relation maps uniquely onto\n> its own lock_manager lwlock. That helps a lot in the case where the\n> workload is spread among many non-overlapping sets of relations. But it\n> doesn't help a workload where any one table is accessed frequently via\n> slowpath locking.\n> \n\nUnderstood.\n\n> Continuing the thought experiment, if that frequently queried table has\n> 16 or more indexes, or if it is joined to other tables that collectively\n> add up to over 16 relations, then each of those queries is guaranteed to\n> have to use the slowpath and acquire the deterministically associated\n> lock_manager lwlocks.\n> \n> So growing the tranche of lock_manager lwlocks would help for some\n> workloads, while other workloads would not be helped much at all. (As a\n> concrete example, a workload at GitLab has several frequently queried\n> tables with over 16 indexes that consequently always use at least some\n> slowpath locks.)\n> \n> For additional context:\n> \n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#what-influences-lock_manager-lwlock-acquisition-rate <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#what-influences-lock_manager-lwlock-acquisition-rate>\n> Summarizes the pathology and its current mitigations.\n> \n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678>\n> Documents the supporting research methodology.\n> \n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365370510 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365370510>\n> What code paths require an exclusive mode lwlock for lock_manager?\n> \n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142>\n> Comparison of fastpath vs. slowpath locking, including quantifying the\n> rate difference.\n> \n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726>\n> Confirms the acquisition rate of lock_manager locks is not uniform. The\n> sampled workload has a 3x difference in the most vs. least frequently\n> acquired lock_manager lock, corresponding to the workload's most\n> frequently accessed relations.\n> \n\nThose are pretty great pieces of information. I wonder if some of the\nmeasurements may be affecting the observation (by consuming too much\nCPU, making the contention worse), but overall it seems convincing.\n\nWould it be difficult to sample just a small fraction of the calls? Say,\n1%, to get good histograms/estimated with acceptable CPU usage.\n\nIn any case, it's a great source of information to reproduce the issue\nand evaluate possible fixes.\n\n>> Well, that has a cost too, as it makes PGPROC larger, right? 
At the\n>> moment that struct is already ~880B / 14 cachelines, adding 48 XIDs\n>> would make it +192B / +3 cachelines. I doubt that won't impact other\n>> common workloads ...\n> \n> That's true; growing the data structure may affect L2/L3 cache hit rates\n> when touching PGPROC. Is that cost worth the benefit of using fastpath\n> for a higher percentage of table locks? The answer may be workload- and\n> platform-specific. Exposing this as a GUC gives the admin a way to make\n> a different choice if our default (currently 16) is bad for them.\n> \n\nAfter looking at the code etc. I think the main trade-off here is going\nto be the cost of searching the fpRelId array. At the moment it's\nsearched linearly, which is cheap for 16 locks. But at some point it'll\nbecome as expensive as updating the slowpath, and the question is when.\n\nI wonder if we could switch to a more elaborate strategy if the number\nof locks is high enough. Say, a hash table, or some hybrid approach.\n\n> I share your reluctance to add another low-level tunable, but like many\n> other GUCs, having a generally reasonable default that can be adjusted\n> is better than forcing folks to fork postgres to adjust a compile-time\n> constant. And unfortunately I don't see a better way to solve this\n> problem. Growing the lock_manager lwlock tranche isn't as effective,\n> because it doesn't help workloads where one or more relations are locked\n> frequently enough to hit this saturation point.\n> \n\nI understand. I have two concerns:\n\n1) How would the users know they need to tune this / determine what's\nthe right value, and what's the right value for their system.\n\n2) Having to deal with misconfigured systems as people tend to blindly\ntune everything to 100x the default, because more is better :-(\n\n\n> Handling a larger percentage of heavyweight lock acquisitions via\n> fastpath instead of slowpath seems likely to help many high-throughput\n> workloads, since it avoids having to exclusively acquire an lwlock. It\n> seemed like the least intrusive general-purpose solution we've come up\n> with so far. That's why we wanted to solicit feedback or new ideas from\n> the community. Currently, the only options folks have to solve this\n> class of saturation are through some combination of schema changes,\n> application changes, vertical scaling, and spreading the query rate\n> among more postgres instances. Those are not feasible and efficient\n> options. Lacking a better solution, exposing a GUC that rarely needs\n> tuning seems reasonable to me.\n> \n> Anyway, hopefully the extra context is helpful! Please do share your\n> thoughts.\n> \n\nAbsolutely! I think the next step for me is to go through the analysis\nagain, and try to construct a couple of different workloads hitting this\nin some way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Aug 2023 22:39:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
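Tomas's observation that every backend running the same query contends on the same partition locks follows from the deterministic hash-to-partition mapping. The sketch below restates that mapping in standalone form; NUM_LOCK_PARTITIONS = 16 matches the sixteen lock_manager lwlocks mentioned elsewhere in the thread, while locktag_hash() is a toy stand-in for the real LockTagHashCode().

#include <stdint.h>

#define LOG2_NUM_LOCK_PARTITIONS  4
#define NUM_LOCK_PARTITIONS       (1 << LOG2_NUM_LOCK_PARTITIONS)   /* 16 */

/* Toy hash; the real LockTagHashCode() hashes the whole LOCKTAG struct. */
static uint32_t
locktag_hash(uint32_t dboid, uint32_t reloid)
{
    uint32_t h = dboid * 0x9e3779b9u ^ reloid;
    h ^= h >> 16;
    return h;
}

/* Deterministic: a given relation always maps to the same partition, so all
 * backends slow-path-locking that relation serialize on one lock_manager
 * LWLock, and a query touching N relations walks N (mostly fixed) partitions. */
static int
lock_partition_for(uint32_t dboid, uint32_t reloid)
{
    return (int) (locktag_hash(dboid, reloid) % NUM_LOCK_PARTITIONS);
}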
{
"msg_contents": "\n\nOn 8/3/23 22:39, Tomas Vondra wrote:\n> On 8/3/23 01:51, Matt Smiley wrote:\n>> I thought it might be helpful to share some more details from one of the\n>> case studies behind Nik's suggestion.\n>>\n>> Bursty contention on lock_manager lwlocks recently became a recurring\n>> cause of query throughput drops for GitLab.com, and we got to study the\n>> behavior via USDT and uprobe instrumentation along with more\n>> conventional observations (see\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301\n>> <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301>). \n>> This turned up some interesting finds, and I thought sharing some of\n>> that research might be helpful.\n>>\n> \n> The analysis in the linked gitlab issue is pretty amazing. I wasn't\n> planning to argue against the findings anyway, but plenty of data\n> supporting the conclusions is good.\n> \n> I'm not an expert on locking, so some of the stuff I say may be\n> trivially obvious - it's just me thinking about ...\n> \n> I wonder what's the rough configuration of those systems, though. Both\n> the hardware and PostgreSQL side. How many cores / connections, etc.?\n> \n> \n>> Results so far suggest that increasing FP_LOCK_SLOTS_PER_BACKEND would\n>> have a much larger positive impact than any other mitigation strategy we\n>> have evaluated. Rather than reducing hold duration or collision rate,\n>> adding fastpath slots reduces the frequency of even having to acquire\n>> those lock_manager lwlocks. I suspect this would be helpful for many\n>> other workloads, particularly those having high frequency queries whose\n>> tables collectively have more than about 16 or indexes.\n>>\n> \n> Yes, I agree with that. Partitioning makes this issue works, I guess.\n> Schemas with indexes on every column are disturbingly common these days\n> too, which hits the issue too ...\n> \n>> Lowering the lock_manager lwlock acquisition rate means lowering its\n>> contention rate (and probably also its contention duration, since\n>> exclusive mode forces concurrent lockers to queue).\n>>\n>> I'm confident this would help our workload, and I strongly suspect it\n>> would be generally helpful by letting queries use fastpath locking more\n>> often.\n>>\n> \n> OK\n> \n>>> However, the lmgr/README says this is meant to alleviate contention on\n>>> the lmgr partition locks. Wouldn't it be better to increase the number\n>>> of those locks, without touching the PGPROC stuff?\n>>\n>> That was my first thought too, but growing the lock_manager lwlock\n>> tranche isn't nearly as helpful.\n>>\n>> On the slowpath, each relation's lock tag deterministically hashes onto\n>> a specific lock_manager lwlock, so growing the number of lock_manager\n>> lwlocks just makes it less likely for two or more frequently locked\n>> relations to hash onto the same lock_manager.\n>>\n> \n> Hmmm, so if we have a query that joins 16 tables, or a couple tables\n> with indexes, all backends running this will acquire exactly the same\n> partition locks. And we're likely acquiring them in exactly the same\n> order (to lock objects in the same order because of deadlocks), making\n> the issue worse.\n> \n>> In contrast, growing the number of fastpath slots completely avoids\n>> calls to the slowpath (i.e. no need to acquire a lock_manager lwlock).\n>>\n>> The saturation condition we'd like to solve is heavy contention on one\n>> or more of the lock_manager lwlocks. 
Since that is driven by the\n>> slowpath acquisition rate of heavyweight locks, avoiding the slowpath is\n>> better than just moderately reducing the contention on the slowpath.\n>>\n>> To be fair, increasing the number of lock_manager locks definitely can\n>> help to a certain extent, but it doesn't cover an important general\n>> case. As a thought experiment, suppose we increase the lock_manager\n>> tranche to some arbitrarily large size, larger than the number of\n>> relations in the db. This unrealistically large size means we have the\n>> best case for avoiding collisions -- each relation maps uniquely onto\n>> its own lock_manager lwlock. That helps a lot in the case where the\n>> workload is spread among many non-overlapping sets of relations. But it\n>> doesn't help a workload where any one table is accessed frequently via\n>> slowpath locking.\n>>\n> \n> Understood.\n> \n>> Continuing the thought experiment, if that frequently queried table has\n>> 16 or more indexes, or if it is joined to other tables that collectively\n>> add up to over 16 relations, then each of those queries is guaranteed to\n>> have to use the slowpath and acquire the deterministically associated\n>> lock_manager lwlocks.\n>>\n>> So growing the tranche of lock_manager lwlocks would help for some\n>> workloads, while other workloads would not be helped much at all. (As a\n>> concrete example, a workload at GitLab has several frequently queried\n>> tables with over 16 indexes that consequently always use at least some\n>> slowpath locks.)\n>>\n>> For additional context:\n>>\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#what-influences-lock_manager-lwlock-acquisition-rate <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#what-influences-lock_manager-lwlock-acquisition-rate>\n>> Summarizes the pathology and its current mitigations.\n>>\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678>\n>> Documents the supporting research methodology.\n>>\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365370510 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365370510>\n>> What code paths require an exclusive mode lwlock for lock_manager?\n>>\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142>\n>> Comparison of fastpath vs. slowpath locking, including quantifying the\n>> rate difference.\n>>\n>> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726 <https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726>\n>> Confirms the acquisition rate of lock_manager locks is not uniform. The\n>> sampled workload has a 3x difference in the most vs. least frequently\n>> acquired lock_manager lock, corresponding to the workload's most\n>> frequently accessed relations.\n>>\n> \n> Those are pretty great pieces of information. I wonder if some of the\n> measurements may be affecting the observation (by consuming too much\n> CPU, making the contention worse), but overall it seems convincing.\n> \n> Would it be difficult to sample just a small fraction of the calls? 
Say,\n> 1%, to get good histograms/estimated with acceptable CPU usage.\n> \n> In any case, it's a great source of information to reproduce the issue\n> and evaluate possible fixes.\n> \n>>> Well, that has a cost too, as it makes PGPROC larger, right? At the\n>>> moment that struct is already ~880B / 14 cachelines, adding 48 XIDs\n>>> would make it +192B / +3 cachelines. I doubt that won't impact other\n>>> common workloads ...\n>>\n>> That's true; growing the data structure may affect L2/L3 cache hit rates\n>> when touching PGPROC. Is that cost worth the benefit of using fastpath\n>> for a higher percentage of table locks? The answer may be workload- and\n>> platform-specific. Exposing this as a GUC gives the admin a way to make\n>> a different choice if our default (currently 16) is bad for them.\n>>\n> \n> After looking at the code etc. I think the main trade-off here is going\n> to be the cost of searching the fpRelId array. At the moment it's\n> searched linearly, which is cheap for 16 locks. But at some point it'll\n> become as expensive as updating the slowpath, and the question is when.\n> \n> I wonder if we could switch to a more elaborate strategy if the number\n> of locks is high enough. Say, a hash table, or some hybrid approach.\n> \n>> I share your reluctance to add another low-level tunable, but like many\n>> other GUCs, having a generally reasonable default that can be adjusted\n>> is better than forcing folks to fork postgres to adjust a compile-time\n>> constant. And unfortunately I don't see a better way to solve this\n>> problem. Growing the lock_manager lwlock tranche isn't as effective,\n>> because it doesn't help workloads where one or more relations are locked\n>> frequently enough to hit this saturation point.\n>>\n> \n> I understand. I have two concerns:\n> \n> 1) How would the users know they need to tune this / determine what's\n> the right value, and what's the right value for their system.\n> \n> 2) Having to deal with misconfigured systems as people tend to blindly\n> tune everything to 100x the default, because more is better :-(\n> \n> \n>> Handling a larger percentage of heavyweight lock acquisitions via\n>> fastpath instead of slowpath seems likely to help many high-throughput\n>> workloads, since it avoids having to exclusively acquire an lwlock. It\n>> seemed like the least intrusive general-purpose solution we've come up\n>> with so far. That's why we wanted to solicit feedback or new ideas from\n>> the community. Currently, the only options folks have to solve this\n>> class of saturation are through some combination of schema changes,\n>> application changes, vertical scaling, and spreading the query rate\n>> among more postgres instances. Those are not feasible and efficient\n>> options. Lacking a better solution, exposing a GUC that rarely needs\n>> tuning seems reasonable to me.\n>>\n>> Anyway, hopefully the extra context is helpful! Please do share your\n>> thoughts.\n>>\n> \n> Absolutely! I think the next step for me is to go through the analysis\n> again, and try to construct a couple of different workloads hitting this\n> in some way.\n> \n\nFWIW I did some progress on this - I think I managed to reproduce the\nissue on a synthetic workload (with a partitioned table, using variable\nnumber of partitions / indexes). 
It's hard to say for sure how serious\nthe reproduced cases are, but I do see spikes of lock_manager wait\nevents, and so on.\n\nThat however brings me to the second step - I was planning to increase\nthe FP_LOCK_SLOTS_PER_BACKEND value and see how much it helps, and also\nmeasure some of the negative impact, to get a better idea what the trade\noffs are.\n\nBut I quickly realized it's far more complicated than just increasing\nthe define. The thing is, it's not enough to make fpRelId larger,\nthere's also fpLockBits, tracking additional info about the locks (lock\nmode etc.). But FP_LOCK_SLOTS_PER_BACKEND does not affect that, it's\njust that int64 is enough to store the bits for 16 fastpath locks.\n\nNote: This means one of the \"mitigations\" in the analysis (just rebuild\npostgres with custom FP_LOCK_SLOTS_PER_BACKEND value) won't work.\n\nI tried to fix that in a naive way (replacing it with an int64 array,\nwith one value for 16 locks), but I must be missing something as there\nare locking failures.\n\nI'm not sure I'll have time to hack on this soon, but if someone else\nwants to take a stab at it and produce a minimal patch, I might be able\nto run more tests on it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 6 Aug 2023 16:44:50 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
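The coupling Tomas hit is that the slot count and the width of fpLockBits are tied together through the bitmap macros. The first macro below mirrors the shape of FAST_PATH_GET_BITS in lock.c; the "grouped" variant underneath is only a guess at how an array-of-words generalization might look, not his patch.

#include <stdint.h>

#define FAST_PATH_BITS_PER_SLOT    3
#define FP_LOCK_SLOTS_PER_BACKEND  16          /* 16 * 3 = 48 bits, fits one uint64 */
#define FAST_PATH_MASK             ((1u << FAST_PATH_BITS_PER_SLOT) - 1)

/* Today: one word holds everything (shape of the lock.c macro). */
#define FAST_PATH_GET_BITS(bits, n) \
    (((bits) >> (FAST_PATH_BITS_PER_SLOT * (n))) & FAST_PATH_MASK)

/* Hypothetical generalization: one uint64 per group of 16 slots, so the slot
 * count can grow without widening any single word. */
#define FP_SLOTS_PER_GROUP  16
#define FP_GROUP(n)         ((n) / FP_SLOTS_PER_GROUP)
#define FP_INGROUP(n)       ((n) % FP_SLOTS_PER_GROUP)

static inline unsigned
fp_get_bits(const uint64_t *lockbits, int n)
{
    return (unsigned)
        ((lockbits[FP_GROUP(n)] >> (FAST_PATH_BITS_PER_SLOT * FP_INGROUP(n)))
         & FAST_PATH_MASK);
}

Every place that reads or writes fpLockBits (grant, release, and the transfer to the shared table when another backend takes a strong lock) has to agree on the new addressing, which is presumably where a naive conversion starts to misbehave.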
{
"msg_contents": "Hi,\n\nOn 2023-08-02 16:51:29 -0700, Matt Smiley wrote:\n> I thought it might be helpful to share some more details from one of the\n> case studies behind Nik's suggestion.\n> \n> Bursty contention on lock_manager lwlocks recently became a recurring cause\n> of query throughput drops for GitLab.com, and we got to study the behavior\n> via USDT and uprobe instrumentation along with more conventional\n> observations (see\n> https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301). This\n> turned up some interesting finds, and I thought sharing some of that\n> research might be helpful.\n\nHm, I'm curious whether you have a way to trigger the issue outside of your\nprod environment. Mainly because I'm wondering if you're potentially hitting\nthe issue fixed in a4adc31f690 - we ended up not backpatching that fix, so\nyou'd not see the benefit unless you reproduced the load in 16+.\n\nI'm also wondering if it's possible that the reason for the throughput drops\nare possibly correlated with heavyweight contention or higher frequency access\nto the pg_locks view. Deadlock checking and the locks view acquire locks on\nall lock manager partitions... So if there's a bout of real lock contention\n(for longer than deadlock_timeout)...\n\n\nGiven that most of your lock manager traffic comes from query planning - have\nyou evaluated using prepared statements more heavily?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 6 Aug 2023 13:00:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
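Andres's caveat about pg_locks and the deadlock checker refers to code paths that sweep every lock_manager partition at once. The fragment below paraphrases the relevant part of GetLockStatusData() in lock.c; it is backend-internal code, shown only to make the pattern concrete, and the deadlock checker does the analogous sweep in exclusive mode.

/* Paraphrase of GetLockStatusData(): a pg_locks read locks all partitions in
 * index order, copies out the lock table, then releases in reverse order.
 * While it holds them, every backend's slow-path lock traffic waits. */
for (i = 0; i < NUM_LOCK_PARTITIONS; i++)
    LWLockAcquire(LockHashPartitionLockByIndex(i), LW_SHARED);

/* ... copy out every PROCLOCK entry (fast-path entries are gathered
 *     separately under each backend's fpInfoLock) ... */

for (i = NUM_LOCK_PARTITIONS; --i >= 0;)
    LWLockRelease(LockHashPartitionLockByIndex(i));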
{
"msg_contents": "On 8/6/23 16:44, Tomas Vondra wrote:\n> ...\n>\n> I'm not sure I'll have time to hack on this soon, but if someone else\n> wants to take a stab at it and produce a minimal patch, I might be able\n> to run more tests on it.\n> \n\nNah, I gave it another try, handling the bitmap in a different way, and\nthat happened to work sufficiently well. So here are some results and\nalso the bench script.\n\nNote: I haven't reproduced the terrible regressions described in this\nthread. Either I don't have a system large enough, the workload may not\nbe exactly right, or maybe it's due to the commit missing in older\nbranches (mentioned by Andres). Still, the findings seem interesting.\n\nThe workload is very simple - create a table \"t\" with certain number of\npartitiones and indexes, add certain number of rows (100, 10k or 1M) and\ndo \"select count(*) from t\". And measure throughput. There's also a\nscript collecting wait-event/lock info, but I haven't looked at that.\n\nI did this for current master (17dev), with two patch versions.\n\nmaster - current master, with 16 fast-path slots\n\nv1 increases the number of slots to 64, but switches to a single array\ncombining the bitmap and OIDs. I'm sure the original approach (separate\nbitmap) can be made to work, and it's possible this is responsible for a\nsmall regression in some runs (more about it in a minute).\n\nv2 was an attempt to address the small regressions in v1, which may be\ndue to having to search larger arrays. The core always walks the whole\narray, even if we know there have never been that many entries. So this\ntries to track the last used slot, and stop the loops earlier.\n\nThe attached PDF visualizes the results, and differences between master\nand the two patches. It plots throughput against number of rows / tables\nand indexes, and also concurrent clients.\n\nThe last two columns show throughput vs. master, with a simple color\nscale: green - speedup (good), red - regression (bad).\n\nLet's talk about the smallest data set (100 rows). The 10k test has the\nsame behavior, with smaller differences (as the locking accounts for a\nsmaller part of the total duration). On the 1M data set the patches make\nalmost no difference.\n\nThere's pretty clear flip once we reach 16 partitions - on master the\nthroughput drops from 310k tps to 210k tps (for 32 clients, the machine\nhas 32 cores). With both patches, the drop is to only about 240k tps, so\n~20% improvement compared to master.\n\nThe other interesting thing is behavior with many clients:\n\n 1 16 32 64 96 128\n master 17603 169132 204870 199390 199662 196774\n v1 17062 180475 234644 266372 267105 265885\n v2 18972 187294 242838 275064 275931 274332\n\nSo the master \"stagnates\" or maybe even drops off, while with both\npatches the throughput continues to grow beyond 32 clients. This is even\nmore obvious for 32 or 64 partitions - for 32, the results are\n\n 1 16 32 64 96 128\n master 11292 93783 111223 102580 95197 87800\n v1 12025 123503 168382 179952 180191 179846\n v2 12501 126438 174255 185435 185332 184938\n\nThat's a pretty massive improvement, IMO. Would help OLTP scalability.\n\nFor 10k rows the patterns is the same, although the differences are less\nsignificant. For 1M rows there's no speedup.\n\nThe bad news is this seems to have negative impact on cases with few\npartitions, that'd fit into 16 slots. Which is not surprising, as the\ncode has to walk longer arrays, it probably affects caching etc. 
So this\nwould hurt the systems that don't use that many relations - not much,\nbut still.\n\nThe regression appears to be consistently ~3%, and v2 aimed to improve\nthat - at least for the case with just 100 rows. It even gains ~5% in a\ncouple cases. It's however a bit strange v2 doesn't really help the two\nlarger cases.\n\nOverall, I think this seems interesting - it's hard to not like doubling\nthe throughput in some cases. Yes, it's 100 rows only, and the real\nimprovements are bound to be smaller, it would help short OLTP queries\nthat only process a couple rows.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 7 Aug 2023 12:51:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
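The v2 tweak described above (remember the highest slot ever used and stop the scans there) can be sketched as follows. This is a guess at the shape of the experiment, written standalone with invented names; it is not Tomas's actual patch.

#include <stdint.h>

#define FP_SLOTS 64                      /* the experimental slot count from the mail */

typedef struct FastPathState
{
    uint32_t relid[FP_SLOTS];
    uint8_t  modes[FP_SLOTS];            /* stand-in for the packed mode bitmap */
    int      high_water;                 /* one past the highest slot ever used */
} FastPathState;

/* Grant: scan only up to high_water, remembering the first free slot, much
 * like the existing 16-slot loop; backends that never hold many locks keep
 * their old short scan even though the array is now four times larger. */
static int
fp_grant(FastPathState *fp, uint32_t relid, uint8_t mode)
{
    int unused = -1;

    for (int f = 0; f < fp->high_water; f++)
    {
        if (fp->modes[f] == 0)
        {
            if (unused < 0)
                unused = f;
        }
        else if (fp->relid[f] == relid)
        {
            fp->modes[f] |= mode;        /* already cached: just add the mode bit */
            return f;
        }
    }
    if (unused < 0)
    {
        if (fp->high_water >= FP_SLOTS)
            return -1;                   /* full: caller falls back to the slow path */
        unused = fp->high_water++;
    }
    fp->relid[unused] = relid;
    fp->modes[unused] = mode;
    return unused;
}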
{
"msg_contents": "On Mon, Aug 07, 2023 at 12:51:24PM +0200, Tomas Vondra wrote:\n> The bad news is this seems to have negative impact on cases with few\n> partitions, that'd fit into 16 slots. Which is not surprising, as the\n> code has to walk longer arrays, it probably affects caching etc. So this\n> would hurt the systems that don't use that many relations - not much,\n> but still.\n> \n> The regression appears to be consistently ~3%, and v2 aimed to improve\n> that - at least for the case with just 100 rows. It even gains ~5% in a\n> couple cases. It's however a bit strange v2 doesn't really help the two\n> larger cases.\n> \n> Overall, I think this seems interesting - it's hard to not like doubling\n> the throughput in some cases. Yes, it's 100 rows only, and the real\n> improvements are bound to be smaller, it would help short OLTP queries\n> that only process a couple rows.\n\nIndeed. I wonder whether we could mitigate the regressions by using SIMD\nintrinsics in the loops. Or auto-vectorization, if that is possible.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 09:56:51 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 6:51 AM Tomas Vondra\n<[email protected]> wrote:\n> The regression appears to be consistently ~3%, and v2 aimed to improve\n> that - at least for the case with just 100 rows. It even gains ~5% in a\n> couple cases. It's however a bit strange v2 doesn't really help the two\n> larger cases.\n\nTo me, that's an absolutely monumental regression for a change like\nthis. Sure, lots of people have partitioned tables. But also, lots of\npeople don't. Penalizing very simple queries by 2-3% seems completely\nover the top to me. I can't help wondering whether there's actually\nsomething wrong with the test, or the coding, because that seems huge\nto me.\n\nI would also argue that the results are actually not that great,\nbecause once you get past 64 partitions you're right back where you\nstarted, or maybe worse off. To me, there's nothing magical about\ncases between 16 and 64 relations that makes them deserve special\ntreatment - plenty of people are going to want to use hundreds of\npartitions, and even if you only use a few dozen, this isn't going to\nhelp as soon as you join two or three partitioned tables, and I\nsuspect it hurts whenever it doesn't help.\n\nI think we need a design that scales better. I don't really know what\nthat would look like, exactly, but your idea of a hybrid approach\nseems like it might be worth further consideration. We don't have to\nstore an infinite number of fast-path locks in an array that we search\nlinearly, and it might be better that if we moved to some other\napproach we could avoid some of the regression. You mentioned a hash\ntable; a partially associative cache might also be worth considering,\nlike having an array of 1k slots but dividing it logically into 128\nbins of 16 slots each and only allowing an OID to be recorded in the\nbin whose low 7 bits match the low 7 bits of the OID.\n\nBut maybe first we need to understand where all the CPU cycles are\ngoing, because maybe that's optimizing completely the wrong thing and,\nagain, it seems like an awfully large hit.\n\nOf course, another thing we could do is try to improve the main lock\nmanager somehow. I confess that I don't have a great idea for that at\nthe moment, but the current locking scheme there is from a very, very\nlong time ago and clearly wasn't designed with modern hardware in\nmind.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 13:05:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
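Robert's partially associative layout might look roughly like the sketch below. The sizes are illustrative only; since 128 bins of 16 slots would be 2048 entries rather than 1k, this version uses 128 bins of 8 slots (1024 total), still selected by the low 7 bits of the OID as he describes. All names are invented.

#include <stdint.h>

#define FP_BINS           128            /* 2^7 bins, chosen by the OID's low 7 bits */
#define FP_SLOTS_PER_BIN  8              /* 128 * 8 = 1024 entries per backend */

typedef struct FastPathBin
{
    uint32_t relid[FP_SLOTS_PER_BIN];
    uint8_t  modes[FP_SLOTS_PER_BIN];    /* 0 means the slot is unused */
} FastPathBin;

typedef struct FastPathSet
{
    FastPathBin bins[FP_BINS];
} FastPathSet;

/* An OID may only live in one bin, so both lookup and insertion scan at most
 * FP_SLOTS_PER_BIN entries no matter how large the whole structure grows. */
static inline FastPathBin *
fp_bin_for(FastPathSet *set, uint32_t relid)
{
    return &set->bins[relid & (FP_BINS - 1)];
}

static int
fp_insert(FastPathSet *set, uint32_t relid, uint8_t mode)
{
    FastPathBin *bin = fp_bin_for(set, relid);
    int unused = -1;

    for (int i = 0; i < FP_SLOTS_PER_BIN; i++)
    {
        if (bin->modes[i] == 0)
        {
            if (unused < 0)
                unused = i;
        }
        else if (bin->relid[i] == relid)
        {
            bin->modes[i] |= mode;       /* already present: add the mode bit */
            return i;
        }
    }
    if (unused < 0)
        return -1;                       /* bin full: this lock takes the regular slow path */
    bin->relid[unused] = relid;
    bin->modes[unused] = mode;
    return unused;
}

The trade-off is that a single bin can fill up while others sit empty, pushing some locks back onto the slow path; whether that matters in practice depends on how relation OIDs distribute over their low bits.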
{
"msg_contents": "On 8/7/23 19:05, Robert Haas wrote:\n> On Mon, Aug 7, 2023 at 6:51 AM Tomas Vondra\n> <[email protected]> wrote:\n>> The regression appears to be consistently ~3%, and v2 aimed to improve\n>> that - at least for the case with just 100 rows. It even gains ~5% in a\n>> couple cases. It's however a bit strange v2 doesn't really help the two\n>> larger cases.\n> \n> To me, that's an absolutely monumental regression for a change like\n> this. Sure, lots of people have partitioned tables. But also, lots of\n> people don't. Penalizing very simple queries by 2-3% seems completely\n> over the top to me. I can't help wondering whether there's actually\n> something wrong with the test, or the coding, because that seems huge\n> to me.\n> \n\nI'm the first to admit the coding (in my patches) is far from perfect,\nand this may easily be a consequence of that. My motivation was to get\nsome quick measurements for the \"bad case\".\n\n> I would also argue that the results are actually not that great,\n> because once you get past 64 partitions you're right back where you\n> started, or maybe worse off. To me, there's nothing magical about\n> cases between 16 and 64 relations that makes them deserve special\n> treatment - plenty of people are going to want to use hundreds of\n> partitions, and even if you only use a few dozen, this isn't going to\n> help as soon as you join two or three partitioned tables, and I\n> suspect it hurts whenever it doesn't help.\n> \n\nThat's true, but doesn't that apply to any cache that can overflow? You\ncould make the same argument about the default value of 16 slots - why\nnot to have just 8?\n\nFWIW I wasn't really suggesting we should increase the value to 64, I\nwas just trying to get a better idea of the costs at play (fast-path\ncache maintenance and regular locking).\n\n> I think we need a design that scales better. I don't really know what\n> that would look like, exactly, but your idea of a hybrid approach\n> seems like it might be worth further consideration. We don't have to\n> store an infinite number of fast-path locks in an array that we search\n> linearly, and it might be better that if we moved to some other\n> approach we could avoid some of the regression. You mentioned a hash\n> table; a partially associative cache might also be worth considering,\n> like having an array of 1k slots but dividing it logically into 128\n> bins of 16 slots each and only allowing an OID to be recorded in the\n> bin whose low 7 bits match the low 7 bits of the OID.\n\nYes, I agree. I don't know if this particular design would be the right\none (1000 elements seems a bit too much for something included right in\nPGPROC). But yeah, something that flips from linear search to something\nelse would be reasonable.\n\n> \n> But maybe first we need to understand where all the CPU cycles are\n> going, because maybe that's optimizing completely the wrong thing and,\n> again, it seems like an awfully large hit.\n> \n\nRight. We're mostly just guessing what the issue is.\n\n> Of course, another thing we could do is try to improve the main lock\n> manager somehow. I confess that I don't have a great idea for that at\n> the moment, but the current locking scheme there is from a very, very\n> long time ago and clearly wasn't designed with modern hardware in\n> mind.\n> \n\nNo idea, but I'd bet some of the trade offs may need re-evaluation.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Aug 2023 21:02:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "On 8/7/23 18:56, Nathan Bossart wrote:\n> On Mon, Aug 07, 2023 at 12:51:24PM +0200, Tomas Vondra wrote:\n>> The bad news is this seems to have negative impact on cases with few\n>> partitions, that'd fit into 16 slots. Which is not surprising, as the\n>> code has to walk longer arrays, it probably affects caching etc. So this\n>> would hurt the systems that don't use that many relations - not much,\n>> but still.\n>>\n>> The regression appears to be consistently ~3%, and v2 aimed to improve\n>> that - at least for the case with just 100 rows. It even gains ~5% in a\n>> couple cases. It's however a bit strange v2 doesn't really help the two\n>> larger cases.\n>>\n>> Overall, I think this seems interesting - it's hard to not like doubling\n>> the throughput in some cases. Yes, it's 100 rows only, and the real\n>> improvements are bound to be smaller, it would help short OLTP queries\n>> that only process a couple rows.\n> \n> Indeed. I wonder whether we could mitigate the regressions by using SIMD\n> intrinsics in the loops. Or auto-vectorization, if that is possible.\n> \n\nMaybe, but from what I know about SIMD it would require a lot of changes\nto the design, so that the loops don't mix accesses to different PGPROC\nfields (fpLockBits, fpRelId) and so on. But I think it'd be better to\njust stop walking the whole array regularly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Aug 2023 21:08:48 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
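Nathan's SIMD suggestion and Tomas's caveat both come down to memory layout: the relid scan only vectorizes well if the hot loop reads a flat array of OIDs and nothing else. Below is a minimal standalone sketch of that shape; whether gcc/clang actually auto-vectorize it profitably would need to be measured, and the names are invented.

#include <stdint.h>

#define FP_SLOTS 64

/* Structure-of-arrays: the search loop touches only `relid`, a contiguous run
 * of 32-bit values, while the packed mode bits live in a separate array. */
typedef struct FastPathSoA
{
    uint32_t relid[FP_SLOTS];                       /* 0 (InvalidOid) when a slot is free */
    uint64_t lockbits[(FP_SLOTS * 3 + 63) / 64];    /* 3 mode bits per slot */
} FastPathSoA;

/* No early break: a conditional select per element keeps the loop eligible for
 * if-conversion and vectorization.  Relies on a relid appearing at most once
 * and on freed slots having relid reset to 0. */
static int
fp_find(const FastPathSoA *fp, uint32_t relid)
{
    int found = -1;

    for (int i = 0; i < FP_SLOTS; i++)
        if (fp->relid[i] == relid)
            found = i;
    return found;
}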
{
"msg_contents": "On Mon, Aug 7, 2023 at 3:02 PM Tomas Vondra\n<[email protected]> wrote:\n> > I would also argue that the results are actually not that great,\n> > because once you get past 64 partitions you're right back where you\n> > started, or maybe worse off. To me, there's nothing magical about\n> > cases between 16 and 64 relations that makes them deserve special\n> > treatment - plenty of people are going to want to use hundreds of\n> > partitions, and even if you only use a few dozen, this isn't going to\n> > help as soon as you join two or three partitioned tables, and I\n> > suspect it hurts whenever it doesn't help.\n>\n> That's true, but doesn't that apply to any cache that can overflow? You\n> could make the same argument about the default value of 16 slots - why\n> not to have just 8?\n\nYes and no. I mean, there are situations where when the cache\noverflows, you still get a lot of benefit out of the entries that you\nare able to cache, as when the frequency of access follows some kind\nof non-uniform distribution, Zipfian or decreasing geometrically or\nwhatever. There are also situations where you can just make the cache\nbig enough that as a practical matter it's never going to overflow. I\ncan't think of a PostgreSQL-specific example right now, but if you\nfind that a 10-entry cache of other people living in your house isn't\ngood enough, a 200-entry cache should solve the problem for nearly\neveryone alive. If that doesn't cause a resource crunch, crank up the\ncache size and forget about it. But here we have neither of those\nsituations. The access frequency is basically uniform, and the cache\nsize needed to avoid overflows seems to be unrealistically large, at\nleast given the current design. So I think that in this case upping\nthe cache size figures to be much less effective than in some other\ncases.\n\nIt's also a bit questionable whether \"cache\" is even the right word\nhere. I'd say it isn't, because it's not like the information in the\nfast-path locking structures is a subset of the full information\nstored elsewhere. Whatever information is stored there is canonical\nfor those entries.\n\n> Yes, I agree. I don't know if this particular design would be the right\n> one (1000 elements seems a bit too much for something included right in\n> PGPROC). But yeah, something that flips from linear search to something\n> else would be reasonable.\n\nYeah ... or there could be a few slots in the PGPROC and then a bit\nindicating whether to jump to a larger shared memory structure located\nin a separate array. Not sure exactly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:21:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
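The hybrid Robert floats here (a few slots kept inline in PGPROC, with a flag pointing at a larger per-backend overflow area elsewhere in shared memory) might be shaped like this. Everything below is invented to illustrate the idea, including the sizes.

#include <stdbool.h>
#include <stdint.h>

#define FP_INLINE_SLOTS    16        /* stays in PGPROC, so the common case is unchanged */
#define FP_OVERFLOW_SLOTS  112       /* lives in a separate shared-memory array */

typedef struct FastPathInline
{
    uint32_t relid[FP_INLINE_SLOTS];
    uint8_t  modes[FP_INLINE_SLOTS];
    bool     has_overflow;           /* the "jump to the big structure" bit */
} FastPathInline;

typedef struct FastPathOverflow      /* indexed by backend, not embedded in PGPROC */
{
    uint32_t relid[FP_OVERFLOW_SLOTS];
    uint8_t  modes[FP_OVERFLOW_SLOTS];
} FastPathOverflow;

/* Readers (say, a backend taking a strong lock that must scan everyone's
 * fast-path state) pay for the overflow scan only when the owner has spilled. */
static int
fp_find(const FastPathInline *in, const FastPathOverflow *ovf, uint32_t relid)
{
    for (int i = 0; i < FP_INLINE_SLOTS; i++)
        if (in->modes[i] != 0 && in->relid[i] == relid)
            return i;

    if (!in->has_overflow)
        return -1;

    for (int i = 0; i < FP_OVERFLOW_SLOTS; i++)
        if (ovf->modes[i] != 0 && ovf->relid[i] == relid)
            return FP_INLINE_SLOTS + i;

    return -1;
}

This keeps PGPROC small for the workloads Robert is worried about regressing, while letting lock-hungry backends overflow without touching the shared lock table.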
{
"msg_contents": "Thank you Tomas! I really appreciate your willingness to dig in here and\nhelp us out! The rest of my replies are inline below.\n\nOn Thu, Aug 3, 2023 at 1:39 PM Tomas Vondra <[email protected]>\nwrote:\n\n> The analysis in the linked gitlab issue is pretty amazing. I wasn't\n> planning to argue against the findings anyway, but plenty of data\n> supporting the conclusions is good.\n>\n\nThank you! I totally agree, having supporting data is so helpful.\n\nI'm not an expert on locking, so some of the stuff I say may be\n> trivially obvious - it's just me thinking about ...\n>\n\nAbsolutely makes sense to check assumptions, etc. Thanks for being open!\nFor what it's worth, I've also been working with Postgres for many years,\nand I love that it keeps teaching me new things, this topic being just the\nlatest.\n\nI wonder what's the rough configuration of those systems, though. Both\n> the hardware and PostgreSQL side. How many cores / connections, etc.?\n>\n\nEach of the postgres hosts had 96 vCPUs and at peak handled roughly 80\nconcurrently active connections.\n\nFor purposes of reproducing the pathology, I think we can do so with a\nsingle postgres instance. We will need a high enough query rate to push\nthe bottleneck to lock_manager lwlock contention. The simplest way to do\nso is probably to give it a small dataset that fits easily in cache and run\nseveral concurrent client connections doing cheap single-row queries, each\nin its own transaction, against a target table that has either many indexes\nor partitions or both.\n\nFor context, here's a brief summary of the production environment where we\nfirst observed this pathology:\nThe writable primary postgres instance has several streaming replicas, used\nfor read-only portions of the workload. All of them run on equivalent\nhardware. Most of the research focuses on the streaming replica postgres\ninstances, although the same pathology has been observed in the writable\nprimary node as well. The general topology is thousands of client\nconnections fanning down into several pgbouncer instances per Postgres\ninstance. From each Postgres instance's perspective, its workload\ngenerally has a daily peak of roughly 80 concurrently active backends\nsupporting a throughput of 75K transactions second, where most transactions\nrun a single query.\n\nYes, I agree with that. Partitioning makes this issue works, I guess.\n> Schemas with indexes on every column are disturbingly common these days\n> too, which hits the issue too ...\n>\n\nAgreed.\n\n\n> Those are pretty great pieces of information. I wonder if some of the\n> measurements may be affecting the observation (by consuming too much\n> CPU, making the contention worse), but overall it seems convincing.\n>\n\nYes, definitely overhead is a concern, glad you asked!\n\nHere are my notes on the overhead for each bpftrace script:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1357834678\n\nHere is a summary of where that overhead comes from:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365310956\n\nHere are more generic benchmark results for uprobe overhead:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1383\n\nBriefly, we generally expect the instrumentation overhead to be roughly\n1-2 microseconds per call to the instrumented instruction. 
It partly\ndepends on what we're doing in the instrumentation, but most of that\noverhead is just the interrupt-handling to transfer control flow to/from\nthe BPF code.\n\nWould it be difficult to sample just a small fraction of the calls? Say,\n> 1%, to get good histograms/estimated with acceptable CPU usage.\n>\n\nThat would be great, but since the overhead comes mostly from the control\ntransfer, it wouldn't help to put sampling logic in the tracer itself. The\nmain way to mitigate that overhead is to choose instrumentation points\nwhere the call rate is tolerably low. That's why the only instrumentation\nI used for more than a few seconds at a time were the \"low overhead\"\nscripts that instrument only the stalled call to LWLockAcquire.\n\n\n> In any case, it's a great source of information to reproduce the issue\n> and evaluate possible fixes.\n>\n\nThanks, that's my hope!\n\n\n> After looking at the code etc. I think the main trade-off here is going\n> to be the cost of searching the fpRelId array. At the moment it's\n> searched linearly, which is cheap for 16 locks. But at some point it'll\n> become as expensive as updating the slowpath, and the question is when.\n>\n> I wonder if we could switch to a more elaborate strategy if the number\n> of locks is high enough. Say, a hash table, or some hybrid approach.\n>\n\nInteresting idea! I was hoping a linear search would stay cheap enough but\nyou're right, it's going to become too inefficient at some point. It might\nmake sense to start with just blackbox timing or throughput measurements,\nbecause directly measuring that search duration may not be cheap. To\nobserve durations via BPF, we have to instrument 2 points (e.g. function\nentry and return, or more generally the instructions before and after the\ncritical section we're observing). For code called as frequently as\nLWLockAcquire, that overhead would be prohibitively expensive, so we might\nneed to measure it natively with counters for each histogram bucket we care\nabout. Just thinking ahead; we don't need to deal with this yet, I guess.\n\n\n> I understand. I have two concerns:\n>\n> 1) How would the users know they need to tune this / determine what's\n> the right value, and what's the right value for their system.\n>\n> 2) Having to deal with misconfigured systems as people tend to blindly\n> tune everything to 100x the default, because more is better :-(\n>\n\nThese are very valid concerns. Thanks for articulating them!\n\nFor point 1:\nThe advice might be to only increase the number of slots for fastpath locks\nper xact if sampling pg_stat_activity frequently shows \"lock_manager\"\nwait_events affecting a significant percentage of your non-idle backends.\nAnd increase it as little as possible, due to the extra overhead incurred\nwhen checking locks. For calibration purposes, I polled pg_locks\nperiodically, counting the number of slowpath locks where the lock mode was\nweak enough to qualify for fastpath if there had been enough fastpath slots\navailable. See the graph and SQL at the bottom of this note:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365595142\n.\n\nFor point 2:\nMaybe we should put a ceiling on the allowed value. Maybe 64 or 128 or\n256? Probably should depend on the cost of searching the fpRelid array, as\nyou described earlier.\n\nThank you Tomas! I really appreciate your willingness to dig in here and help us out! 
",
"msg_date": "Mon, 7 Aug 2023 12:26:59 -0700",
"msg_from": "Matt Smiley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "\n\nOn 8/7/23 21:21, Robert Haas wrote:\n> On Mon, Aug 7, 2023 at 3:02 PM Tomas Vondra\n> <[email protected]> wrote:\n>>> I would also argue that the results are actually not that great,\n>>> because once you get past 64 partitions you're right back where you\n>>> started, or maybe worse off. To me, there's nothing magical about\n>>> cases between 16 and 64 relations that makes them deserve special\n>>> treatment - plenty of people are going to want to use hundreds of\n>>> partitions, and even if you only use a few dozen, this isn't going to\n>>> help as soon as you join two or three partitioned tables, and I\n>>> suspect it hurts whenever it doesn't help.\n>>\n>> That's true, but doesn't that apply to any cache that can overflow? You\n>> could make the same argument about the default value of 16 slots - why\n>> not to have just 8?\n> \n> Yes and no. I mean, there are situations where when the cache\n> overflows, you still get a lot of benefit out of the entries that you\n> are able to cache, as when the frequency of access follows some kind\n> of non-uniform distribution, Zipfian or decreasing geometrically or\n> whatever. There are also situations where you can just make the cache\n> big enough that as a practical matter it's never going to overflow. I\n> can't think of a PostgreSQL-specific example right now, but if you\n> find that a 10-entry cache of other people living in your house isn't\n> good enough, a 200-entry cache should solve the problem for nearly\n> everyone alive. If that doesn't cause a resource crunch, crank up the\n> cache size and forget about it. But here we have neither of those\n> situations. The access frequency is basically uniform, and the cache\n> size needed to avoid overflows seems to be unrealistically large, at\n> least given the current design. So I think that in this case upping\n> the cache size figures to be much less effective than in some other\n> cases.\n> \n\nWhy would the access frequency be uniform? In particular, there's a huge\nvariability in how long the locks need to exist - IIRC we may be keeping\nlocks for tables for a long time, but not for indexes. From this POV it\nmight be better to do fast-path locking for indexes, no?\n\n> It's also a bit questionable whether \"cache\" is even the right word\n> here. I'd say it isn't, because it's not like the information in the\n> fast-path locking structures is a subset of the full information\n> stored elsewhere. Whatever information is stored there is canonical\n> for those entries.\n> \n\nRight. Calling this a cache might be a bit misleading.\n\n>> Yes, I agree. I don't know if this particular design would be the right\n>> one (1000 elements seems a bit too much for something included right in\n>> PGPROC). But yeah, something that flips from linear search to something\n>> else would be reasonable.\n> \n> Yeah ... or there could be a few slots in the PGPROC and then a bit\n> indicating whether to jump to a larger shared memory structure located\n> in a separate array. Not sure exactly.\n> \n\nMaybe, but isn't that mostly what the regular non-fast-path locking\ndoes? Wouldn't that defeat the whole purpose of fast-path locking?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Aug 2023 21:48:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 3:48 PM Tomas Vondra\n<[email protected]> wrote:\n> Why would the access frequency be uniform? In particular, there's a huge\n> variability in how long the locks need to exist - IIRC we may be keeping\n> locks for tables for a long time, but not for indexes. From this POV it\n> might be better to do fast-path locking for indexes, no?\n\nIf you're not using explicit transactions, you take a bunch of locks\nat the start of a statement and then release all of them at the end.\nNone of the locks stick around so fast-path locking structure goes\nthrough cycles where it starts out empty, fills up to N items, and\nthen goes back to empty. If you visualize it as a cache, we're\nflushing the entire cache at the end of every operation.\n\nIf you run multiple statements in a transaction, the locks will be\nkept until the end of the transaction, once acquired. So then you\ncould start with a small number and gradually accumulate more. But\nthen you're going to release them all at once at the end.\n\nThe main thing that matters here seems to be whether or not all of the\nlocks can go through the fast-path mechanism, or how many have to go\nthrough the regular mechanism. It shouldn't matter, AFAICS, *which\nones* go through the fast-path mechanism. If you think it does, I'd\nlike to hear why - it's possible I'm missing something here.\n\n> Maybe, but isn't that mostly what the regular non-fast-path locking\n> does? Wouldn't that defeat the whole purpose of fast-path locking?\n\nI don't think so. The main lock manager has two flaws that hinder\nperformance in comparison with the fast-path mechanism. The first, but\nless important, one is that the data structures are just a lot\nsimpler. For access to a small number of fixed-size elements, a C\narray is hard to beat, and the main lock manager data structures are a\nlot more complex. The second one, which I think is more important, is\nthat we've essentially flipped the ordering of the primary key. In the\nmain lock manager, you start by hashing the locked object and that\ngives you a partition number and you then take that partition lock.\nThen, you iterate through a list of backends that have that object\nlocked. This means that if a lot of people are taking locks on the\nsame object, even if there's no actual conflict between the lock\nmodes, you still get a lot of contention. But in the fast-path\nmechanism, it's reversed: first, you go to the shared memory *for your\nbackend* and then you search through it for the particular locked\nobject at issue. So basically the main lock manager treats the primary\nkey as (what, who) while the fast-path mechanism treats it as (who,\nwhat). And that gets rid of a ton of contention because then different\nbackends locking the same object (in sufficiently weak lock modes)\nnever touch the same cache lines, so there's actually zero contention.\nThat is, I believe, the most important thing about the fast-path\nlocking system.\n\nWhat I've just said is slightly complicated by the existence of\nFastPathStrongRelationLockData, which is concurrently accessed by all\nbackends when using fast-path locking, but it's only read-only as\nnobody actually takes a strong lock (like an AccessExclusiveLock on a\ntable). So you probably do get some cache line effects there, but\nbecause it's read-only, they don't cause too much of a headache.\n\nWe do have to be careful that the overhead of checking multiple\nlocking data structures doesn't add up to a problem, for sure. 
But\nthere can still, I believe, be a lot of benefit in dividing up access\nfirst by \"who\" and then by \"what\" for weak relation locks even if the\nper-backend data structures become more complex. Or at least I hope\nso.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 16:31:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Hi Andres, thanks for helping! Great questions, replies are inline below.\n\nOn Sun, Aug 6, 2023 at 1:00 PM Andres Freund <[email protected]> wrote:\n\n> Hm, I'm curious whether you have a way to trigger the issue outside of your\n> prod environment. Mainly because I'm wondering if you're potentially\n> hitting\n> the issue fixed in a4adc31f690 - we ended up not backpatching that fix, so\n> you'd not see the benefit unless you reproduced the load in 16+.\n>\n\nThanks for sharing this!\n\nI have not yet written a reproducer since we see this daily in production.\nI have a sketch of a few ways that I think will reproduce the behavior\nwe're observing, but haven't had time to implement it.\n\nI'm not sure if we're seeing this behavior in production, but it's\ndefinitely an interesting find. Currently we are running postgres 12.11,\nwith an upcoming upgrade to 15 planned. Good to know there's a potential\nimprovement waiting in 16. I noticed that in LWLockAcquire the call to\nLWLockDequeueSelf occurs (\nhttps://github.com/postgres/postgres/blob/REL_12_11/src/backend/storage/lmgr/lwlock.c#L1218)\ndirectly between the unsuccessful attempt to immediately acquire the lock\nand reporting the backend's wait event. The distinctive indicators we have\nbeen using for this pathology are that \"lock_manager\" wait_event and its\nassociated USDT probe (\nhttps://github.com/postgres/postgres/blob/REL_12_11/src/backend/storage/lmgr/lwlock.c#L1236-L1237),\nboth of which occur after whatever overhead is incurred by\nLWLockDequeueSelf. As you mentioned in your commit message, that overhead\nis hard to detect. My first impression is that whatever overhead it incurs\nis in addition to what we are investigating.\n\n\n> I'm also wondering if it's possible that the reason for the throughput\n> drops\n> are possibly correlated with heavyweight contention or higher frequency\n> access\n> to the pg_locks view. Deadlock checking and the locks view acquire locks on\n> all lock manager partitions... So if there's a bout of real lock contention\n> (for longer than deadlock_timeout)...\n>\n\nGreat questions, but we ruled that out. The deadlock_timeout is 5 seconds,\nso frequently hitting that would massively violate SLO and would alert the\non-call engineers. The pg_locks view is scraped a couple times per minute\nfor metrics collection, but the lock_manager lwlock contention can be\nobserved thousands of times every second, typically with very short\ndurations. The following example (captured just now) shows the number of\ntimes per second over a 10-second window that any 1 of the 16\n\"lock_manager\" lwlocks was contended:\n\nmsmiley@patroni-main-2004-103-db-gprd.c.gitlab-production.internal:~$ sudo\n./bpftrace -e 'usdt:/usr/lib/postgresql/12/bin/postgres:lwlock__wait__start\n/str(arg0) == \"lock_manager\"/ { @[arg1] = count(); } interval:s:1 {\nprint(@); clear(@); } interval:s:10 { exit(); }'\nAttaching 5 probes...\n@[0]: 12122\n@[0]: 12888\n@[0]: 13011\n@[0]: 13348\n@[0]: 11461\n@[0]: 10637\n@[0]: 10892\n@[0]: 12334\n@[0]: 11565\n@[0]: 11596\n\nTypically that contention only lasts a couple microseconds. But the long\ntail can sometimes be much slower. Details here:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365159507\n.\n\nGiven that most of your lock manager traffic comes from query planning -\n> have\n> you evaluated using prepared statements more heavily?\n>\n\nYes, there are unrelated obstacles to doing so -- that's a separate can of\nworms, unfortunately. 
But in this pathology, even if we used prepared\nstatements, the backend would still need to reacquire the same locks during\neach executing transaction. So in terms of lock acquisition rate, whether\nit's via the planner or executor doing it, the same relations have to be\nlocked.",
"msg_date": "Mon, 7 Aug 2023 13:59:26 -0700",
"msg_from": "Matt Smiley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-07 13:59:26 -0700, Matt Smiley wrote:\n> I have not yet written a reproducer since we see this daily in production.\n> I have a sketch of a few ways that I think will reproduce the behavior\n> we're observing, but haven't had time to implement it.\n> \n> I'm not sure if we're seeing this behavior in production\n\nIt might be worth for you to backpatch\n\ncommit 92daeca45df\nAuthor: Andres Freund <[email protected]>\nDate: 2022-11-21 20:34:17 -0800\n\n Add wait event for pg_usleep() in perform_spin_delay()\n\ninto 12. That should be low risk and have only trivially resolvable\nconflicts. Alternatively, you could use bpftrace et al to set a userspace\nprobe on perform_spin_delay().\n\n\n> , but it's definitely an interesting find. Currently we are running\n> postgres 12.11, with an upcoming upgrade to 15 planned. Good to know\n> there's a potential improvement waiting in 16. I noticed that in\n> LWLockAcquire the call to LWLockDequeueSelf occurs (\n> https://github.com/postgres/postgres/blob/REL_12_11/src/backend/storage/lmgr/lwlock.c#L1218)\n> directly between the unsuccessful attempt to immediately acquire the lock\n> and reporting the backend's wait event.\n\nThat's normal.\n\n\n\n> > I'm also wondering if it's possible that the reason for the throughput\n> > drops\n> > are possibly correlated with heavyweight contention or higher frequency\n> > access\n> > to the pg_locks view. Deadlock checking and the locks view acquire locks on\n> > all lock manager partitions... So if there's a bout of real lock contention\n> > (for longer than deadlock_timeout)...\n> >\n> \n> Great questions, but we ruled that out. The deadlock_timeout is 5 seconds,\n> so frequently hitting that would massively violate SLO and would alert the\n> on-call engineers. The pg_locks view is scraped a couple times per minute\n> for metrics collection, but the lock_manager lwlock contention can be\n> observed thousands of times every second, typically with very short\n> durations. The following example (captured just now) shows the number of\n> times per second over a 10-second window that any 1 of the 16\n> \"lock_manager\" lwlocks was contended:\n\nSome short-lived contention is fine and expected - the question is how long\nthe waits are...\n\nUnfortunately my experience is that the overhead of bpftrace means that\nanalyzing things like this with bpftrace is very hard... :(.\n\n\n> > Given that most of your lock manager traffic comes from query planning -\n> > have you evaluated using prepared statements more heavily?\n> >\n> \n> Yes, there are unrelated obstacles to doing so -- that's a separate can of\n> worms, unfortunately. But in this pathology, even if we used prepared\n> statements, the backend would still need to reacquire the same locks during\n> each executing transaction. So in terms of lock acquisition rate, whether\n> it's via the planner or executor doing it, the same relations have to be\n> locked.\n\nPlanning will often lock more database objects than query execution. Which can\nkeep you using fastpath locks for longer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Aug 2023 14:16:25 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": ">\n> Why would the access frequency be uniform? In particular, there's a huge\n> variability in how long the locks need to exist\n>\n\nAs a supporting data point, our example production workload shows a 3x\ndifference between the most versus least frequently contended lock_manager\nlock:\nhttps://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726\n\nSince we deterministically distribute relations among those 16 lock_manager\nlwlocks by hashing their lock tag, we can probably assume a roughly uniform\nnumber of relations are being managed by each lock_manager lock, but the\ndemand (and contention) for them is non-uniform. This 3x spread\ncorroborates the intuition that some relations are locked more frequently\nthan others (that being both a schema- and workload-specific property).\n\nSince we're contemplating a new hashing scheme, I wonder how we could\naccommodate that kind of asymmetry, where some relations are locked more\nfrequently than others.\n\n\nWhy would the access frequency be uniform? In particular, there's a huge\nvariability in how long the locks need to existAs a supporting data point, our example production workload shows a 3x difference between the most versus least frequently contended lock_manager lock:https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2301#note_1365630726Since we deterministically distribute relations among those 16 lock_manager lwlocks by hashing their lock tag, we can probably assume a roughly uniform number of relations are being managed by each lock_manager lock, but the demand (and contention) for them is non-uniform. This 3x spread corroborates the intuition that some relations are locked more frequently than others (that being both a schema- and workload-specific property).Since we're contemplating a new hashing scheme, I wonder how we could accommodate that kind of asymmetry, where some relations are locked more frequently than others.",
"msg_date": "Mon, 7 Aug 2023 14:19:10 -0700",
"msg_from": "Matt Smiley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-07 13:05:32 -0400, Robert Haas wrote:\n> I would also argue that the results are actually not that great,\n> because once you get past 64 partitions you're right back where you\n> started, or maybe worse off. To me, there's nothing magical about\n> cases between 16 and 64 relations that makes them deserve special\n> treatment - plenty of people are going to want to use hundreds of\n> partitions, and even if you only use a few dozen, this isn't going to\n> help as soon as you join two or three partitioned tables, and I\n> suspect it hurts whenever it doesn't help.\n> \n> I think we need a design that scales better. I don't really know what\n> that would look like, exactly, but your idea of a hybrid approach\n> seems like it might be worth further consideration. We don't have to\n> store an infinite number of fast-path locks in an array that we search\n> linearly, and it might be better that if we moved to some other\n> approach we could avoid some of the regression.\n\nMy gut feeling is that the state for fast path locks doesn't live in quite the\nright place.\n\nWhat if fast path locks entered PROCLOCK into the shared hashtable, just like\nwith normal locks, the first time a lock is acquired by a backend. Except that\nwe'd set a flag indicating the lock is a fastpath lock. When the lock is\nreleased, neither the LOCALLOCK nor the PROCLOCK entry would be\nremoved. Instead, the LOCK/PROCLOCK would be modified to indicate that the\nlock is not held anymore.\n\nThat itself wouldn't buy us much - we'd still need to do a lookup in the\nshared hashtable. But, by the time we decide whether to use fast path locks,\nwe've already done a hash lookup in the LOCALLOCK hashtable. Because the\nPROCLOCK entry would continue to exist, we can use LOCALLOCK->proclock to get\nthe PROCLOCK entry without a shared hash table lookup.\n\nAcquiring a strong lock on a fastpath lock would basically entail modifying\nall the relevant PROCLOCKs atomically to indicate that fast path locks aren't\npossible anymore. Acquiring a fast path lock would just require atomically\nmodifying the PROCLOCK to indicate that the lock is held.\n\nOn a first blush, this sounds like it could end up being fairly clean and\ngeneric?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Aug 2023 14:36:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-07 14:36:48 -0700, Andres Freund wrote:\n> What if fast path locks entered PROCLOCK into the shared hashtable, just like\n> with normal locks, the first time a lock is acquired by a backend. Except that\n> we'd set a flag indicating the lock is a fastpath lock. When the lock is\n> released, neither the LOCALLOCK nor the PROCLOCK entry would be\n> removed. Instead, the LOCK/PROCLOCK would be modified to indicate that the\n> lock is not held anymore.\n> \n> That itself wouldn't buy us much - we'd still need to do a lookup in the\n> shared hashtable. But, by the time we decide whether to use fast path locks,\n> we've already done a hash lookup in the LOCALLOCK hashtable. Because the\n> PROCLOCK entry would continue to exist, we can use LOCALLOCK->proclock to get\n> the PROCLOCK entry without a shared hash table lookup.\n> \n> Acquiring a strong lock on a fastpath lock would basically entail modifying\n> all the relevant PROCLOCKs atomically to indicate that fast path locks aren't\n> possible anymore. Acquiring a fast path lock would just require atomically\n> modifying the PROCLOCK to indicate that the lock is held.\n> \n> On a first blush, this sounds like it could end up being fairly clean and\n> generic?\n\nOn 2023-08-07 13:05:32 -0400, Robert Haas wrote:\n> Of course, another thing we could do is try to improve the main lock\n> manager somehow. I confess that I don't have a great idea for that at\n> the moment, but the current locking scheme there is from a very, very\n> long time ago and clearly wasn't designed with modern hardware in\n> mind.\n\nI think the biggest flaw of the locking scheme is that the LockHash locks\nprotect two, somewhat independent, things:\n1) the set of currently lockable objects, i.e. the entries in the hash table [partition]\n2) the state of all the locks [in a partition]\n\nIt'd not be that hard to avoid the shared hashtable lookup in a number of\ncases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest\nabove. But we can't, in general, avoid the lock on the partition anyway, as\nthe each lock's state is also protected by the partition lock.\n\nThe amount of work to do a lookup in the shared hashtable and/or create a new\nentry therein, is quite bound. But the work for acquiring a lock is much less\nso. We'll e.g. often have to iterate over the set of lock holders etc.\n\nI think we ought to investigate whether pushing down the locking for the \"lock\nstate\" into the individual locks is worth it. That way the partitioned lock\nwould just protect the hashtable.\n\nThe biggest issue I see is deadlock checking. Right now acquiring all lock\npartitions gives you a consistent view of all the non-fastpath locks - and\nfastpath locks can't participate in deadlocks. Any scheme that makes \"lock\nstate\" locking in general more granular, will make it next to impossible to\nhave a similarly consistent view of all locks. I'm not sure the current\ndegree of consistency is required however - the lockers participating in a\nlock cycle, pretty much by definition, are blocked.\n\n\nA secondary issue is that making the locks more granular could affect the\nhappy path measurably - we'd need two atomics for each heavyweight lock\nacquisition, not one. But if we cached the lookup in the shared hashtable,\nwe'd commonly be able to skip the hashtable lookup...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:05:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 6:05 PM Andres Freund <[email protected]> wrote:\n> I think the biggest flaw of the locking scheme is that the LockHash locks\n> protect two, somewhat independent, things:\n> 1) the set of currently lockable objects, i.e. the entries in the hash table [partition]\n> 2) the state of all the locks [in a partition]\n>\n> It'd not be that hard to avoid the shared hashtable lookup in a number of\n> cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest\n> above. But we can't, in general, avoid the lock on the partition anyway, as\n> the each lock's state is also protected by the partition lock.\n\nYes, and that's a huge problem. The main selling point of the whole\nfast-path mechanism is to ease the pressure on the lock manager\npartition locks, and if we did something like what you described in\nthe previous email without changing the locking regimen, we'd bring\nall of that contention back. I'm pretty sure that would suck.\n\n> The amount of work to do a lookup in the shared hashtable and/or create a new\n> entry therein, is quite bound. But the work for acquiring a lock is much less\n> so. We'll e.g. often have to iterate over the set of lock holders etc.\n>\n> I think we ought to investigate whether pushing down the locking for the \"lock\n> state\" into the individual locks is worth it. That way the partitioned lock\n> would just protect the hashtable.\n\nI think this would still suck. Suppose you put an LWLock or slock_t in\neach LOCK. If you now run a lot of select queries against the same\ntable (e.g. pgbench -S -c 64 -j 64), everyone is going to fight over\nthe lock counts for that table. Here again, the value of the fast-path\nsystem is that it spreads out the contention in ways that approaches\nlike this can't do.\n\nOr, hmm, maybe what you're really suggesting is pushing the state down\ninto each PROCLOCK rather than each LOCK. That would be more promising\nif we could do it, because that is per-lock *and also per-backend*.\nBut you can't decide from looking at a single PROCLOCK whether a new\nlock at some given lock mode is grantable or not, at least not with\nthe current PROCLOCK representation.\n\nI think any workable solution here has to allow a backend to take a\nweak relation lock without contending with other backends trying to\ntake the same weak relation lock (provided there are no strong\nlockers). Maybe backends should be able to allocate PROCLOCKs and\nrecord weak relation locks there without actually linking them up to\nLOCK objects, or something like that. Anyone who wants a strong lock\nmust first go and find all of those objects for the LOCK they want and\nconnect them up to that LOCK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Aug 2023 16:44:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-08 16:44:37 -0400, Robert Haas wrote:\n> On Mon, Aug 7, 2023 at 6:05 PM Andres Freund <[email protected]> wrote:\n> > I think the biggest flaw of the locking scheme is that the LockHash locks\n> > protect two, somewhat independent, things:\n> > 1) the set of currently lockable objects, i.e. the entries in the hash table [partition]\n> > 2) the state of all the locks [in a partition]\n> >\n> > It'd not be that hard to avoid the shared hashtable lookup in a number of\n> > cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest\n> > above. But we can't, in general, avoid the lock on the partition anyway, as\n> > the each lock's state is also protected by the partition lock.\n> \n> Yes, and that's a huge problem. The main selling point of the whole\n> fast-path mechanism is to ease the pressure on the lock manager\n> partition locks, and if we did something like what you described in\n> the previous email without changing the locking regimen, we'd bring\n> all of that contention back. I'm pretty sure that would suck.\n\nYea - I tried to outline how I think we could implement the fastpath locking\nscheme in a less limited way in the earlier email, that I had quoted above\nthis bit. Here I was pontificating on what we possibly should do in addition\nto that. I think even if we had \"unlimited\" fastpath locking, there's still\nenough pressure on the lock manager locks that it's worth improving the\noverall locking scheme.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Aug 2023 15:04:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "On 8/8/23 3:04 PM, Andres Freund wrote:\n> On 2023-08-08 16:44:37 -0400, Robert Haas wrote:\n>> On Mon, Aug 7, 2023 at 6:05 PM Andres Freund <[email protected]> wrote:\n>>> I think the biggest flaw of the locking scheme is that the LockHash locks\n>>> protect two, somewhat independent, things:\n>>> 1) the set of currently lockable objects, i.e. the entries in the hash table [partition]\n>>> 2) the state of all the locks [in a partition]\n>>>\n>>> It'd not be that hard to avoid the shared hashtable lookup in a number of\n>>> cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest\n>>> above. But we can't, in general, avoid the lock on the partition anyway, as\n>>> the each lock's state is also protected by the partition lock.\n>>\n>> Yes, and that's a huge problem. The main selling point of the whole\n>> fast-path mechanism is to ease the pressure on the lock manager\n>> partition locks, and if we did something like what you described in\n>> the previous email without changing the locking regimen, we'd bring\n>> all of that contention back. I'm pretty sure that would suck.\n> \n> Yea - I tried to outline how I think we could implement the fastpath locking\n> scheme in a less limited way in the earlier email, that I had quoted above\n> this bit. Here I was pontificating on what we possibly should do in addition\n> to that. I think even if we had \"unlimited\" fastpath locking, there's still\n> enough pressure on the lock manager locks that it's worth improving the\n> overall locking scheme.\n\n\nHas anyone considered whether increasing NUM_LOCK_PARTITIONS to\nsomething bigger than 16 might offer cheap/easy/small short-term\nimprovements while folks continue to think about the bigger long-term ideas?\n\ncf.\nhttps://www.postgresql.org/message-id/flat/VI1PR05MB620666631A41186ACC3FC91ACFC70%40VI1PR05MB6206.eurprd05.prod.outlook.com\n\nI haven't looked deeply into it myself yet. Didn't see a mention in this\nthread or in Matt's gitlab research ticket. Maybe it doesn't actually\nhelp. Anyway Alexander Pyhalov's email about LWLock optimization and\nNUM_LOCK_PARTITIONS is out there, and I wondered about this.\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nPerformance Engineer\nAmazon Web Services\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 12:09:06 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
},
{
"msg_contents": "Good morning!\n\nFYI: I know many people are/were tracking this email thread rather\nthan the newer and more recent one \"scalability bottlenecks with\n(many) partitions (and more)\", but please see [1] [2] , where Tomas\ncommitted enhanced fast-path locking to the master(18).\n\nThanks Tomas for persistence on this!\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/E1ss4gX-000IvX-63%40gemulon.postgresql.org\n[2] - https://www.postgresql.org/message-id/7c1eeafb-2375-4ff6-8469-0640d52d44ed%40vondra.me\n\n\n",
"msg_date": "Mon, 23 Sep 2024 09:04:02 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configurable FP_LOCK_SLOTS_PER_BACKEND"
}
] |
[
{
"msg_contents": "Hi,\n After I create the same name index on the heap table and the temporary\ntable, I can only get the temporary table's index by \\di+.\n\ncreate table t1(c1 int);\ncreate temp table t2(c1 int);\n\ncreate index idx1 on t1(c1);\n\\di+\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n--------+------+-------+-------+-------+--------+-------------\n public | idx1 | index | zhrt | t1 | 128 kB |\n(1 row)\n\ncreate index idx1 on t2(c1);\n\\di+\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description\n-------------+------+-------+-------+-------+--------+-------------\n pg_temp_298 | idx1 | index | zhrt | t2 | 128 kB |\n(1 row)\n\nIs it the expected bavior?\n\nHi, After I create the same name index on the heap table and the temporary table, I can only get the temporary table's index by \\di+.create table t1(c1 int);\ncreate temp table t2(c1 int);\n\ncreate index idx1 on t1(c1);\n\\di+\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description \n--------+------+-------+-------+-------+--------+-------------\n public | idx1 | index | zhrt | t1 | 128 kB | \n(1 row)\n\ncreate index idx1 on t2(c1);\n\\di+\n List of relations\n Schema | Name | Type | Owner | Table | Size | Description \n-------------+------+-------+-------+-------+--------+-------------\n pg_temp_298 | idx1 | index | zhrt | t2 | 128 kB | \n(1 row)\nIs it the expected bavior?",
"msg_date": "Thu, 13 Jul 2023 15:17:17 +0800",
"msg_from": "=?UTF-8?B?44OV44OW44Kt44OA44Kk44K544Kt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\\di+ cannot show the same name indexes"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jul 13, 2023 at 03:17:17PM +0800, フブキダイスキ wrote:\n> After I create the same name index on the heap table and the temporary\n> table, I can only get the temporary table's index by \\di+.\n>\n> create table t1(c1 int);\n> create temp table t2(c1 int);\n>\n> create index idx1 on t1(c1);\n> \\di+\n> List of relations\n> Schema | Name | Type | Owner | Table | Size | Description\n> --------+------+-------+-------+-------+--------+-------------\n> public | idx1 | index | zhrt | t1 | 128 kB |\n> (1 row)\n>\n> create index idx1 on t2(c1);\n> \\di+\n> List of relations\n> Schema | Name | Type | Owner | Table | Size | Description\n> -------------+------+-------+-------+-------+--------+-------------\n> pg_temp_298 | idx1 | index | zhrt | t2 | 128 kB |\n> (1 row)\n>\n> Is it the expected bavior?\n\nYes, since the pg_temp schema has higher priority and those command will not\nshow multiple objects for the same non qualified name. You can either change\nthe priority with something like\n\nSET search_path TO public, pg_temp;\n\nto look at public (or any other schema) first, or explicitly ask for the schema\nyou want, e.g. \\di+ public.*\n\n\n",
"msg_date": "Thu, 13 Jul 2023 15:29:03 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \\di+ cannot show the same name indexes"
}
] |
[
{
"msg_contents": "Hi,\r\nI have a question about the routine \"GetNonHistoricCatalogSnapshot\".\r\nIt has a param \"Oid relid\". It firstly\r\nchecks if the relation has systemcache or if it is in \"RelationInvalidatesSnapshotsOnly\" related relations.\r\nIf yes, it will invalidate the CatalogSnapshot.\r\n\r\nI just wonder in which situation the developer tries to scan a non-catalog table by CatalogSnapshot.\r\n\r\nBy the way, in the routine \" SnapshotSetCommandId\", there is a comment\r\n\r\n /* Should we do the same with CatalogSnapshot? */\r\n\r\nFrom my point of view, there is no need to update the curcid of CatalogSnapshot, as the CatalogSnapshot\r\nwill be invalidated if there are any updates on the catalog tables in the current transaction.\r\n\r\nIf I misunderstood, please correct me!\r\n\r\nBest regards, xiaoran\r\n\n\n\n\n\n\n\n\r\nHi,\n\r\nI have a question about the routine \"GetNonHistoricCatalogSnapshot\".\n\nIt has a param \"Oid relid\". It firstly\n\r\nchecks if the relation has systemcache or if it is in \"RelationInvalidatesSnapshotsOnly\" related relations.\n\r\nIf yes, it will invalidate the CatalogSnapshot.\n\n\n\n\r\nI just wonder in which situation the developer tries to scan a non-catalog table by CatalogSnapshot. \n\n\n\n\n\n\r\nBy the way, in the routine \" SnapshotSetCommandId\", there is a comment \n\n\n\n\n\n /* Should we do the same with CatalogSnapshot? */\n\n\n\n\n\r\nFrom my point of view, there is no need to update the curcid of CatalogSnapshot, as the CatalogSnapshot\n\r\nwill be invalidated if there are any updates on the catalog tables in the current transaction.\n\n\n\n\n\nIf I misunderstood, please correct me!\n\n\n\n\n\n\n\r\nBest regards, xiaoran",
"msg_date": "Thu, 13 Jul 2023 08:55:18 +0000",
"msg_from": "Xiaoran Wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "About `GetNonHistoricCatalogSnapshot`: does it allow developers to\n use catalog snapshot to scan non-catalog tables?"
}
] |
[
{
"msg_contents": "Hi,\n\nMy collegue Konstantin (cc-ed) noticed that the GiST code of intarray\nmay leak memory in certain index operations:\n\n> g_intbig_compress(...):\n> [...]\n> ArrayType *in = DatumGetArrayTypeP(entry->key);\n> [...]\n> if (in != DatumGetArrayTypeP(entry->key))\n> pfree(in);\n\nDatumGetArrayTypeP will allocate a new, uncompressed copy if the given\nDatum is compressed. So in this code, if entry->key is compressed we'd\nallocate two decompressed copies, while only we only deallocate the\nfirst of these two. I believe the attached patch fixes the issue.\n\nIt looks like this bug has existed since the code was first committed,\nso backpatching would go back to 11 if this is an active issue.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Thu, 13 Jul 2023 14:02:36 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential memory leak in contrib/intarray's g_intbig_compress"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> My collegue Konstantin (cc-ed) noticed that the GiST code of intarray\n> may leak memory in certain index operations:\n\nCan you demonstrate an actual problem here, that is a query-lifespan leak?\n\nIMO, the coding rule in the GiST and GIN AMs is that the AM code is\nresponsible for running all opclass-supplied functions in suitably\nshort-lived memory contexts, so that leaks within those functions\ndon't cause problems. This is different from btree which requires\ncomparison functions to not leak. The rationale for having different\nconventions is that btree comparison functions are typically simple\nenough to be able to deal with such a restriction, whereas GiST\nand GIN opclasses are often complex critters for which it'd be too\nbug-prone to insist on leakproofness. So it seems worth the cost\nto make the AM itself set up a throwaway memory context.\n\n(I don't recall offhand about which rule the other AMs use.\nI'm also not sure where or if this choice is documented.)\n\nIn the case at hand, I think the pfree is useless and was installed\nby somebody who had mis-extrapolated from btree rules. So what\nI'm thinking would be sufficient is to drop it altogether:\n\n-\t\tif (in != DatumGetArrayTypeP(entry->key))\n- \t\t\tpfree(in);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jul 2023 11:20:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential memory leak in contrib/intarray's g_intbig_compress"
},
{
"msg_contents": "I wrote:\n> Matthias van de Meent <[email protected]> writes:\n>> My collegue Konstantin (cc-ed) noticed that the GiST code of intarray\n>> may leak memory in certain index operations:\n\n> Can you demonstrate an actual problem here, that is a query-lifespan leak?\n\n> IMO, the coding rule in the GiST and GIN AMs is that the AM code is\n> responsible for running all opclass-supplied functions in suitably\n> short-lived memory contexts, so that leaks within those functions\n> don't cause problems.\n\nI tried adding \"MemoryContextStats(CurrentMemoryContext);\" at the top\nof g_intbig_compress() and running the intarray regression tests\n(which do reach the pfree in question). This confirmed that the\ncompress function is always called in the \"GiST temporary context\"\nmade by createTempGistContext. Also, the amount of memory reported as\nconsumed didn't seem to vary when I removed the pfree, which indicates\nthat we do manage to reset that context often enough that leakage here\ndoesn't matter. It's hard to make an exact comparison because of\nGiST's habit of randomizing page-split decisions, so that the sequence\nof calls to the compress function isn't identical from one run to the\nnext. But at least in the cases exercised by the regression tests,\nwe do not need that pfree --- and if you believe the comment for\ncreateTempGistContext, it would be a GiST bug not an intarray bug\nif we did.\n\nI'll go remove the pfree. Perhaps there is a TODO item here to\nimprove the documentation about these memory management conventions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jul 2023 12:22:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential memory leak in contrib/intarray's g_intbig_compress"
},
{
"msg_contents": "On Thu, 13 Jul 2023 at 17:20, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > My collegue Konstantin (cc-ed) noticed that the GiST code of intarray\n> > may leak memory in certain index operations:\n>\n> Can you demonstrate an actual problem here, that is a query-lifespan leak?\n>\n> IMO, the coding rule in the GiST and GIN AMs is that the AM code is\n> responsible for running all opclass-supplied functions in suitably\n> short-lived memory contexts, so that leaks within those functions\n> don't cause problems. This is different from btree which requires\n> comparison functions to not leak. The rationale for having different\n> conventions is that btree comparison functions are typically simple\n> enough to be able to deal with such a restriction, whereas GiST\n> and GIN opclasses are often complex critters for which it'd be too\n> bug-prone to insist on leakproofness. So it seems worth the cost\n> to make the AM itself set up a throwaway memory context.\n>\n> (I don't recall offhand about which rule the other AMs use.\n> I'm also not sure where or if this choice is documented.)\n>\n> In the case at hand, I think the pfree is useless and was installed\n> by somebody who had mis-extrapolated from btree rules. So what\n> I'm thinking would be sufficient is to drop it altogether:\n>\n> - if (in != DatumGetArrayTypeP(entry->key))\n> - pfree(in);\n\nLooks like it's indeed a useless pfree call here - all paths that I\ncould find that lead to GiST's compress procedure are encapsulated in\na temporary context that is reset fairly quickly after use (at most\nthis memory context would live for the duration of the recursive\nsplitting of pages up the tree, but I haven't verified this\nhypotheses).\n\nThere are similar pfree calls in the _int_gist.c file's g_int_compress\nfunction, which made me think we do need to clean up after use, but\nindeed these pfrees are useless (or even harmful if bug #17888 can be\ntrusted)\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Thu, 13 Jul 2023 18:28:39 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Potential memory leak in contrib/intarray's g_intbig_compress"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 06:28:39PM +0200, Matthias van de Meent wrote:\n> There are similar pfree calls in the _int_gist.c file's g_int_compress\n> function, which made me think we do need to clean up after use, but\n> indeed these pfrees are useless (or even harmful if bug #17888 can be\n> trusted)\n\nIndeed, all these are in a GiST temporary context. So you'd mean\nsomething like the attached perhaps, for both the decompress and\ncompress paths?\n--\nMichael",
"msg_date": "Fri, 14 Jul 2023 14:57:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential memory leak in contrib/intarray's g_intbig_compress"
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 07:57, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jul 13, 2023 at 06:28:39PM +0200, Matthias van de Meent wrote:\n> > There are similar pfree calls in the _int_gist.c file's g_int_compress\n> > function, which made me think we do need to clean up after use, but\n> > indeed these pfrees are useless (or even harmful if bug #17888 can be\n> > trusted)\n>\n> Indeed, all these are in a GiST temporary context. So you'd mean\n> something like the attached perhaps, for both the decompress and\n> compress paths?\n\nYes, looks good to me. Thanks!\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 2 Aug 2023 13:44:42 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Potential memory leak in contrib/intarray's g_intbig_compress"
}
] |
[
{
"msg_contents": "Hello, my name is Matheus Farias and this is the first time that I'm\nsending an email to the pgsql-hackers list. I'm a software developer intern\nat Bitnine Global Inc. and, along with other interns, we've been working on\nupdating Apache AGE with the latest version of Postgres, the REL_16_BETA\nversion. One of the main problems that we are facing is that the code was\nreworked to update the permission checking and now some of the queries\nreturn ERROR: invalid perminfoindex <rte->perminfoindex> in RTE with relid\n<rte->relid>. This occurs due to one of the RTEs having perminfoindex = 0\nand the relid containing a value.\n\nAGE is a Postgres extension which allows us to execute openCypher commands\nto create a graph with nodes and edges. There are two main tables that are\ncreated: _ag_label_vertex and _ag_label_edge. Both of them will be the\nparent label tables of every other vertex/edge label we create.\n\nWhen we do a simple MATCH query to find all nodes with the v label:\n\nSELECT * FROM cypher('cypher_set', $$MATCH (n:v)RETURN n\n$$) AS (node agtype);\n\n\ninside the add_rtes_to_flat_rtable() function, it goes inside a loop\nwhere we can see the stored RTEs in root->parse->rtable:\n\n// I've simplified what every RTE shows.\n\nroot->parse->rtable\n[\n (rtekind = RTE_SUBQUERY, relid = 0, perminfoindex = 0),\n (rtekind = RTE_SUBQUERY, relid = 0, perminfoindex = 0),\n (rtekind = RTE_SUBQUERY, relid = 0, perminfoindex = 0),\n (rtekind = RTE_RELATION, relid = 16991, perminfoindex = 1)\n]\n\nBut executing the query with a simple SET clause:\n\nSELECT * FROM cypher('cypher_set', $$MATCH (n) SET n.i = 3\n$$) AS (a agtype);\n\nOne of the RTEs of the RTE_RELATION type and relid with a not null\nvalue has perminfoindex = 0\n\nroot->parse->rtable\n[\n (rtekind = RTE_SUBQUERY, relid = 0, perminfoindex = 0),\n (rtekind = RTE_RELATION, relid = 16971, perminfoindex = 1),\n (rtekind = RTE_RELATION, relid = 16971, perminfoindex = 1),\n (rtekind = RTE_RELATION, relid = 16991, perminfoindex = 0)\n]\n\nthe relid = 16991 is related to the child vertex label and the relid =\n16971 related to the parent vertex label:\n\nSELECT to_regclass('cypher_set._ag_label_vertex')::oid;\n to_regclass -------------\n 16971\nSELECT to_regclass('cypher_set.v')::oid;\n to_regclass -------------\n 16991\n\nWith further inspection in AGE's code, after executing the SET query,\nit goes inside transform_cypher_clause_as_subquery() function and the\nParseNamespaceItem has the following values:\n\n {p_names = 0x1205638, p_rte = 0x11edb70, p_rtindex = 1, p_perminfo =\n0x7f7f7f7f7f7f7f7f,\n p_nscolumns = 0x1205848, p_rel_visible = true, p_cols_visible =\ntrue, p_lateral_only = false,\n p_lateral_ok = true}\n\nAnd the pnsi->p_rte has:\n\n{type = T_RangeTblEntry, rtekind = RTE_SUBQUERY, relid = 0, relkind =\n0 '\\000', rellockmode = 0,\n tablesample = 0x0, perminfoindex = 0, subquery = 0x11ed710,\nsecurity_barrier = false,\n jointype = JOIN_INNER, joinmergedcols = 0, joinaliasvars = 0x0,\njoinleftcols = 0x0, joinrightcols = 0x0,\n join_using_alias = 0x0, functions = 0x0, funcordinality = false,\ntablefunc = 0x0, values_lists = 0x0,\n ctename = 0x0, ctelevelsup = 0, self_reference = false, coltypes =\n0x0, coltypmods = 0x0,\n colcollations = 0x0, enrname = 0x0, enrtuples = 0, alias =\n0x12055f0, eref = 0x1205638, lateral = false,\n inh = false, inFromCl = true, securityQuals = 0x0}\n\nThen it calls addNSItemToQuery(pstate, pnsi, true, false, true);. 
This\nfunction adds the given nsitem/RTE as a top-level entry in the pstate's\njoin list and/or namespace list. I've been thinking if adding the\nnsitem/RTE like this won't cause this error?\n\nAlso in handle_prev_clause it has the following line, which is going to add\nall the rte's attributes to the current queries targetlist which, again,\nI'm not sure if that's what causing the problem because the relid of the\nrte is 0:\n\nquery->targetList = expandNSItemAttrs(pstate, pnsi, 0, true, -1);\n\nIf someone knows more about it, I would be grateful for any kind of\nanswer or help. AGE's source code can be found here:\nhttps://github.com/apache/age",
"msg_date": "Thu, 13 Jul 2023 16:14:32 -0300",
"msg_from": "Farias de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Farias de Oliveira <[email protected]> writes:\n> With further inspection in AGE's code, after executing the SET query,\n> it goes inside transform_cypher_clause_as_subquery() function and the\n> ParseNamespaceItem has the following values:\n\n> {p_names = 0x1205638, p_rte = 0x11edb70, p_rtindex = 1, p_perminfo =\n> 0x7f7f7f7f7f7f7f7f,\n> p_nscolumns = 0x1205848, p_rel_visible = true, p_cols_visible =\n> true, p_lateral_only = false,\n> p_lateral_ok = true}\n\nHmm, that uninitialized value for p_perminfo is pretty suspicious.\nI see that transformFromClauseItem and buildNSItemFromLists both\ncreate ParseNamespaceItems without bothering to fill p_perminfo,\nwhile buildNSItemFromTupleDesc fills it per the caller and\naddRangeTableEntryForJoin always sets it to NULL. I think we\nought to make the first two set it to NULL as well, because\nuninitialized fields are invariably a bad idea (even though the\nlack of valgrind complaints says that the core code is managing\nto avoid touching those fields).\n\nIf we do that, is it sufficient to resolve your problem?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jul 2023 18:12:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In Postgres 16 BETA,\n should the ParseNamespaceItem have the same index as it's RangeTableEntry?"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 7:12 AM Tom Lane <[email protected]> wrote:\n> Farias de Oliveira <[email protected]> writes:\n> > With further inspection in AGE's code, after executing the SET query,\n> > it goes inside transform_cypher_clause_as_subquery() function and the\n> > ParseNamespaceItem has the following values:\n>\n> > {p_names = 0x1205638, p_rte = 0x11edb70, p_rtindex = 1, p_perminfo =\n> > 0x7f7f7f7f7f7f7f7f,\n> > p_nscolumns = 0x1205848, p_rel_visible = true, p_cols_visible =\n> > true, p_lateral_only = false,\n> > p_lateral_ok = true}\n>\n> Hmm, that uninitialized value for p_perminfo is pretty suspicious.\n> I see that transformFromClauseItem and buildNSItemFromLists both\n> create ParseNamespaceItems without bothering to fill p_perminfo,\n> while buildNSItemFromTupleDesc fills it per the caller and\n> addRangeTableEntryForJoin always sets it to NULL. I think we\n> ought to make the first two set it to NULL as well, because\n> uninitialized fields are invariably a bad idea (even though the\n> lack of valgrind complaints says that the core code is managing\n> to avoid touching those fields).\n\nAgreed, I'll go ahead and fix that.\n\n> If we do that, is it sufficient to resolve your problem?\n\nHmm, I'm afraid maybe not, because if the above were the root issue,\nwe'd have seen a segfault and not the error the OP mentioned? I'm\nthinking the issue is that their code appears to be consing up an RTE\nthat the core code (getRTEPermissionInfo() most likely via\nmarkRTEForSelectPriv()) is not expecting to be called with? I would\nbe helpful to see a backtrace when the error occurs to be sure.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Jul 2023 12:05:11 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Thanks Amit and Tom for the quick response. I have attached a file that\ncontains the execution of the code via GDB and also what the backtrace\ncommand shows when it gets the error. If I forgot to add something or if it\nis necessary to add anything else, please let me know.\n\nThank you,\nMatheus Farias\n\nEm sex., 14 de jul. de 2023 às 00:05, Amit Langote <[email protected]>\nescreveu:\n\n> On Fri, Jul 14, 2023 at 7:12 AM Tom Lane <[email protected]> wrote:\n> > Farias de Oliveira <[email protected]> writes:\n> > > With further inspection in AGE's code, after executing the SET query,\n> > > it goes inside transform_cypher_clause_as_subquery() function and the\n> > > ParseNamespaceItem has the following values:\n> >\n> > > {p_names = 0x1205638, p_rte = 0x11edb70, p_rtindex = 1, p_perminfo =\n> > > 0x7f7f7f7f7f7f7f7f,\n> > > p_nscolumns = 0x1205848, p_rel_visible = true, p_cols_visible =\n> > > true, p_lateral_only = false,\n> > > p_lateral_ok = true}\n> >\n> > Hmm, that uninitialized value for p_perminfo is pretty suspicious.\n> > I see that transformFromClauseItem and buildNSItemFromLists both\n> > create ParseNamespaceItems without bothering to fill p_perminfo,\n> > while buildNSItemFromTupleDesc fills it per the caller and\n> > addRangeTableEntryForJoin always sets it to NULL. I think we\n> > ought to make the first two set it to NULL as well, because\n> > uninitialized fields are invariably a bad idea (even though the\n> > lack of valgrind complaints says that the core code is managing\n> > to avoid touching those fields).\n>\n> Agreed, I'll go ahead and fix that.\n>\n> > If we do that, is it sufficient to resolve your problem?\n>\n> Hmm, I'm afraid maybe not, because if the above were the root issue,\n> we'd have seen a segfault and not the error the OP mentioned? I'm\n> thinking the issue is that their code appears to be consing up an RTE\n> that the core code (getRTEPermissionInfo() most likely via\n> markRTEForSelectPriv()) is not expecting to be called with? I would\n> be helpful to see a backtrace when the error occurs to be sure.\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Fri, 14 Jul 2023 11:30:38 -0300",
"msg_from": "Farias de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Farias de Oliveira <[email protected]> writes:\n> 3905\t\t\telog(ERROR, \"invalid perminfoindex %u in RTE with relid %u\",\n> (gdb) bt\n> #0 getRTEPermissionInfo (rteperminfos=0x138a500, rte=0x138a6b0) at parse_relation.c:3905\n> #1 0x0000000000676e29 in GetResultRTEPermissionInfo (relinfo=relinfo@entry=0x13b8f50, estate=estate@entry=0x138ce48)\n> at execUtils.c:1412\n> #2 0x0000000000677c30 in ExecGetUpdatedCols (relinfo=relinfo@entry=0x13b8f50, estate=estate@entry=0x138ce48)\n> at execUtils.c:1321\n> #3 0x0000000000677cd7 in ExecGetAllUpdatedCols (relinfo=relinfo@entry=0x13b8f50, estate=estate@entry=0x138ce48)\n> at execUtils.c:1362\n> #4 0x000000000066b9bf in ExecUpdateLockMode (estate=estate@entry=0x138ce48, relinfo=relinfo@entry=0x13b8f50) at execMain.c:2385\n> #5 0x00007f197fb19a8d in update_entity_tuple (resultRelInfo=<optimized out>, resultRelInfo@entry=0x13b8f50, \n> elemTupleSlot=elemTupleSlot@entry=0x13b9730, estate=estate@entry=0x138ce48, old_tuple=0x13bae80)\n> at src/backend/executor/cypher_set.c:120\n> #6 0x00007f197fb1a2ff in process_update_list (node=node@entry=0x138d0c8) at src/backend/executor/cypher_set.c:595\n> #7 0x00007f197fb1a348 in process_all_tuples (node=node@entry=0x138d0c8) at src/backend/executor/cypher_set.c:212\n> #8 0x00007f197fb1a455 in exec_cypher_set (node=0x138d0c8) at src/backend/executor/cypher_set.c:641\n\nSo apparently, what we have here is a result-relation RTE that has\nno permissions info associated. It's not clear to me whether that\nis a bug in building the parse tree, or an expectable situation\nthat GetResultRTEPermissionInfo ought to be coping with. I'm\ninclined to bet on the former though, and to guess that AGE is\nmissing dealing with the new RTEPermissionInfo structs in someplace\nor other. I'm afraid that allowing GetResultRTEPermissionInfo to\nlet this pass without comment would mask actual bugs, so fixing\nit at that end doesn't seem attractive.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jul 2023 11:16:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In Postgres 16 BETA,\n should the ParseNamespaceItem have the same index as it's RangeTableEntry?"
},
{
"msg_contents": "I believe I have found something interesting that might be the root of the\nproblem with RTEPermissionInfo. But I do not know how to fix it exactly. In\nAGE's code, the execution of it goes through a function called\nanalyze_cypher_clause() which does the following:\n\nstatic Query *analyze_cypher_clause(transform_method transform,\n cypher_clause *clause,\n cypher_parsestate *parent_cpstate)\n{\n cypher_parsestate *cpstate;\n Query *query;\n ParseState *parent_pstate = (ParseState*)parent_cpstate;\n ParseState *pstate;\n\n cpstate = make_cypher_parsestate(parent_cpstate);\n pstate = (ParseState*)cpstate;\n\n /* copy the expr_kind down to the child */\n pstate->p_expr_kind = parent_pstate->p_expr_kind;\n\n query = transform(cpstate, clause);\n\n advance_transform_entities_to_next_clause(cpstate->entities);\n\n parent_cpstate->entities = list_concat(parent_cpstate->entities,\n cpstate->entities);\n\n free_cypher_parsestate(cpstate);\n\n return query;\n}\n\nthe free_cypher_parsestate() function calls the free_parsestate() function:\n\nvoid free_cypher_parsestate(cypher_parsestate *cpstate)\n{\n free_parsestate((ParseState *)cpstate);\n}\n\nSo, before that happens the cpstate struct contains the following data:\n\n{pstate = {parentParseState = 0x2b06ab0, p_sourcetext = 0x2b06ef0 \"MATCH\n(n) SET n.i = 3\", p_rtable = 0x2bdb370,\n p_rteperminfos = 0x2bdb320, p_joinexprs = 0x0, p_nullingrels = 0x0,\np_joinlist = 0x2bdb478, p_namespace = 0x2bdb4c8,\n p_lateral_active = false, p_ctenamespace = 0x0, p_future_ctes = 0x0,\np_parent_cte = 0x0, p_target_relation = 0x0,\n p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0,\np_expr_kind = EXPR_KIND_FROM_SUBSELECT, p_next_resno = 2,\n p_multiassign_exprs = 0x0, p_locking_clause = 0x0, p_locked_from_parent\n= false, p_resolve_unknowns = true,\n p_queryEnv = 0x0, p_hasAggs = false, p_hasWindowFuncs = false,\np_hasTargetSRFs = false, p_hasSubLinks = false,\n p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook =\n0x0, p_post_columnref_hook = 0x0,\n p_paramref_hook = 0x0, p_coerce_param_hook = 0x0, p_ref_hook_state =\n0x0}, graph_name = 0x2b06e50 \"cypher_set\",\n graph_oid = 16942, params = 0x0, default_alias_num = 0, entities =\n0x2c6e158, property_constraint_quals = 0x0,\n exprHasAgg = false, p_opt_match = false}\n\nAnd then after that the pstate gets all wiped out:\n\n{pstate = {parentParseState = 0x0, p_sourcetext = 0x2b06ef0 \"MATCH (n) SET\nn.i = 3\", p_rtable = 0x0,\n p_rteperminfos = 0x0, p_joinexprs = 0x0, p_nullingrels = 0x0,\np_joinlist = 0x0, p_namespace = 0x0,\n p_lateral_active = false, p_ctenamespace = 0x0, p_future_ctes = 0x0,\np_parent_cte = 0x0, p_target_relation = 0x0,\n p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0,\np_expr_kind = EXPR_KIND_FROM_SUBSELECT, p_next_resno = 1,\n p_multiassign_exprs = 0x0, p_locking_clause = 0x0, p_locked_from_parent\n= false, p_resolve_unknowns = true,\n p_queryEnv = 0x0, p_hasAggs = false, p_hasWindowFuncs = false,\np_hasTargetSRFs = false, p_hasSubLinks = false,\n p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook =\n0x0, p_post_columnref_hook = 0x0,\n p_paramref_hook = 0x0, p_coerce_param_hook = 0x0, p_ref_hook_state =\n0x0}, graph_name = 0x2b06e50 \"cypher_set\",\n graph_oid = 16942, params = 0x0, default_alias_num = 0, entities =\n0x2c6e228, property_constraint_quals = 0x0,\n exprHasAgg = false, p_opt_match = false}\n\nBut in transform_cypher_clause_as_subquery(), we use the same pstate. 
And\nwhen we assign\n\npnsi = addRangeTableEntryForSubquery(pstate, query, alias, lateral, true);\n\nThe pstate changes, adding a value to p_rtable but nothing in p_rteperminfos\n.\n\nThen after that addNSItemToQuery(pstate, pnsi, true, false, true) is\ncalled, changing the pstate to add values to p_joinlist and p_namespace.\n\nIt ends up going inside other functions and changing it more a bit, but at\nthe end of one of these functions it assigns values to some members of the\nquery:\n\nquery->targetList = lappend(query->targetList, tle);\nquery->rtable = pstate->p_rtable;\nquery->jointree = makeFromExpr(pstate->p_joinlist, NULL);\n\nI assume that here is missing the assignment of query->rteperminfos to be\nthe same as pstate->p_rteperminfos, but the pstate has the following values:\n\n{pstate = {parentParseState = 0x0, p_sourcetext = 0x2b06ef0 \"MATCH (n) SET\nn.i = 3\", p_rtable = 0x2c6e590,\n p_rteperminfos = 0x0, p_joinexprs = 0x0, p_nullingrels = 0x0,\np_joinlist = 0x2c6e678, p_namespace = 0x2c6e6c8,\n p_lateral_active = false, p_ctenamespace = 0x0, p_future_ctes = 0x0,\np_parent_cte = 0x0, p_target_relation = 0x0,\n p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0,\np_expr_kind = EXPR_KIND_NONE, p_next_resno = 3,\n p_multiassign_exprs = 0x0, p_locking_clause = 0x0, p_locked_from_parent\n= false, p_resolve_unknowns = true,\n p_queryEnv = 0x0, p_hasAggs = false, p_hasWindowFuncs = false,\np_hasTargetSRFs = false, p_hasSubLinks = false,\n p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook =\n0x0, p_post_columnref_hook = 0x0,\n p_paramref_hook = 0x0, p_coerce_param_hook = 0x0, p_ref_hook_state =\n0x0}, graph_name = 0x2b06e50 \"cypher_set\",\n graph_oid = 16942, params = 0x0, default_alias_num = 0, entities =\n0x2c6e228, property_constraint_quals = 0x0,\n exprHasAgg = false, p_opt_match = false}\n\nSo changing that won't solve the issue.\n\nEm sex., 14 de jul. de 2023 às 12:16, Tom Lane <[email protected]> escreveu:\n\n> Farias de Oliveira <[email protected]> writes:\n> > 3905 elog(ERROR, \"invalid perminfoindex %u in RTE with\n> relid %u\",\n> > (gdb) bt\n> > #0 getRTEPermissionInfo (rteperminfos=0x138a500, rte=0x138a6b0) at\n> parse_relation.c:3905\n> > #1 0x0000000000676e29 in GetResultRTEPermissionInfo\n> (relinfo=relinfo@entry=0x13b8f50, estate=estate@entry=0x138ce48)\n> > at execUtils.c:1412\n> > #2 0x0000000000677c30 in ExecGetUpdatedCols (relinfo=relinfo@entry=0x13b8f50,\n> estate=estate@entry=0x138ce48)\n> > at execUtils.c:1321\n> > #3 0x0000000000677cd7 in ExecGetAllUpdatedCols (relinfo=relinfo@entry=0x13b8f50,\n> estate=estate@entry=0x138ce48)\n> > at execUtils.c:1362\n> > #4 0x000000000066b9bf in ExecUpdateLockMode (estate=estate@entry=0x138ce48,\n> relinfo=relinfo@entry=0x13b8f50) at execMain.c:2385\n> > #5 0x00007f197fb19a8d in update_entity_tuple (resultRelInfo=<optimized\n> out>, resultRelInfo@entry=0x13b8f50,\n> > elemTupleSlot=elemTupleSlot@entry=0x13b9730, estate=estate@entry=0x138ce48,\n> old_tuple=0x13bae80)\n> > at src/backend/executor/cypher_set.c:120\n> > #6 0x00007f197fb1a2ff in process_update_list (node=node@entry=0x138d0c8)\n> at src/backend/executor/cypher_set.c:595\n> > #7 0x00007f197fb1a348 in process_all_tuples (node=node@entry=0x138d0c8)\n> at src/backend/executor/cypher_set.c:212\n> > #8 0x00007f197fb1a455 in exec_cypher_set (node=0x138d0c8) at\n> src/backend/executor/cypher_set.c:641\n>\n> So apparently, what we have here is a result-relation RTE that has\n> no permissions info associated. 
It's not clear to me whether that\n> is a bug in building the parse tree, or an expectable situation\n> that GetResultRTEPermissionInfo ought to be coping with. I'm\n> inclined to bet on the former though, and to guess that AGE is\n> missing dealing with the new RTEPermissionInfo structs in someplace\n> or other. I'm afraid that allowing GetResultRTEPermissionInfo to\n> let this pass without comment would mask actual bugs, so fixing\n> it at that end doesn't seem attractive.\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 14 Jul 2023 16:43:01 -0300",
"msg_from": "Farias de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Hello,\n\nOn Sat, Jul 15, 2023 at 4:43 AM Farias de Oliveira\n<[email protected]> wrote:\n> I believe I have found something interesting that might be the root of the problem with RTEPermissionInfo. But I do not know how to fix it exactly. In AGE's code, the execution of it goes through a function called analyze_cypher_clause() which does the following:\n>\n> It ends up going inside other functions and changing it more a bit, but at the end of one of these functions it assigns values to some members of the query:\n>\n> query->targetList = lappend(query->targetList, tle);\n> query->rtable = pstate->p_rtable;\n> query->jointree = makeFromExpr(pstate->p_joinlist, NULL);\n>\n> I assume that here is missing the assignment of query->rteperminfos to be the same as pstate->p_rteperminfos, but the pstate has the following values:\n>\n> {pstate = {parentParseState = 0x0, p_sourcetext = 0x2b06ef0 \"MATCH (n) SET n.i = 3\", p_rtable = 0x2c6e590,\n> p_rteperminfos = 0x0, p_joinexprs = 0x0, p_nullingrels = 0x0, p_joinlist = 0x2c6e678, p_namespace = 0x2c6e6c8,\n> p_lateral_active = false, p_ctenamespace = 0x0, p_future_ctes = 0x0, p_parent_cte = 0x0, p_target_relation = 0x0,\n> p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0, p_expr_kind = EXPR_KIND_NONE, p_next_resno = 3,\n> p_multiassign_exprs = 0x0, p_locking_clause = 0x0, p_locked_from_parent = false, p_resolve_unknowns = true,\n> p_queryEnv = 0x0, p_hasAggs = false, p_hasWindowFuncs = false, p_hasTargetSRFs = false, p_hasSubLinks = false,\n> p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook = 0x0, p_post_columnref_hook = 0x0,\n> p_paramref_hook = 0x0, p_coerce_param_hook = 0x0, p_ref_hook_state = 0x0}, graph_name = 0x2b06e50 \"cypher_set\",\n> graph_oid = 16942, params = 0x0, default_alias_num = 0, entities = 0x2c6e228, property_constraint_quals = 0x0,\n> exprHasAgg = false, p_opt_match = false}\n>\n> So changing that won't solve the issue.\n\nDoes p_rtable in this last pstate contain any RTE_RELATION entries?\nIf it does, p_rteperminfos being NIL looks like a bug in your code.\n\nActually, given the back trace of the error that you shared, I am\nsuspecting more of a problem in the code that generates a\nResultRelInfo pointing at the wrong RTE via its ri_RangeTableIndex.\nThat code should perhaps set the ri_RangeTableIndex to 0 if it doesn't\nknow the actual existing RTE corresponding to that result relation.\nIf you set it to some non-0 value, the RTE that it points to should\nsatisfy invariants such as having the corresponding RTEPermissionInfo\npresent in the rteperminfos list if necessary.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jul 2023 18:58:20 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Hello,\n\nThank you for the help guys and I'm so sorry for my late response. Indeed,\nthe error relies on the ResultRelInfo. In GetResultRTEPermissionInfo()\nfunction, it does a checking on the relinfo->ri_RootResultRelInfo variable.\nI believe that it should go inside this scope:\n\n\n if (relinfo->ri_RootResultRelInfo)\n\t{\n\t\t/*\n\t\t * For inheritance child result relations (a partition routing target\n\t\t * of an INSERT or a child UPDATE target), this returns the root\n\t\t * parent's RTE to fetch the RTEPermissionInfo because that's the only\n\t\t * one that has one assigned.\n\t\t */\n\t\trti = relinfo->ri_RootResultRelInfo->ri_RangeTableIndex;\n\t}\n\nThe relinfo contains:\n\n{type = T_ResultRelInfo, ri_RangeTableIndex = 5, ri_RelationDesc =\n0x7f44e3308cc8, ri_NumIndices = 0, ri_IndexRelationDescs = 0x0,\nri_IndexRelationInfo = 0x0, ri_RowIdAttNo = 0,\n ri_extraUpdatedCols = 0x0, ri_projectNew = 0x0, ri_newTupleSlot =\n0x0, ri_oldTupleSlot = 0x0, ri_projectNewInfoValid = false,\nri_TrigDesc = 0x0, ri_TrigFunctions = 0x0,\n ri_TrigWhenExprs = 0x0, ri_TrigInstrument = 0x0, ri_ReturningSlot =\n0x0, ri_TrigOldSlot = 0x0, ri_TrigNewSlot = 0x0, ri_FdwRoutine = 0x0,\nri_FdwState = 0x0,\n ri_usesFdwDirectModify = false, ri_NumSlots = 0,\nri_NumSlotsInitialized = 0, ri_BatchSize = 0, ri_Slots = 0x0,\nri_PlanSlots = 0x0, ri_WithCheckOptions = 0x0,\n ri_WithCheckOptionExprs = 0x0, ri_ConstraintExprs = 0x0,\nri_GeneratedExprsI = 0x0, ri_GeneratedExprsU = 0x0,\nri_NumGeneratedNeededI = 0, ri_NumGeneratedNeededU = 0,\n ri_returningList = 0x0, ri_projectReturning = 0x0,\nri_onConflictArbiterIndexes = 0x0, ri_onConflict = 0x0,\nri_matchedMergeAction = 0x0, ri_notMatchedMergeAction = 0x0,\n ri_PartitionCheckExpr = 0x0, ri_ChildToRootMap = 0x0,\nri_ChildToRootMapValid = false, ri_RootToChildMap = 0x0,\nri_RootToChildMapValid = false, ri_RootResultRelInfo = 0x0,\n ri_PartitionTupleSlot = 0x0, ri_CopyMultiInsertBuffer = 0x0,\nri_ancestorResultRels = 0x0}\n\nSince relinfo->ri_RootResultRelInfo = 0x0, the rti will have no value\nand Postgres will interpret that the ResultRelInfo must've been\ncreated only for filtering triggers and the relation is not being\ninserted into.\nThe relinfo variable is created with the\ncreate_entity_result_rel_info() function:\n\nResultRelInfo *create_entity_result_rel_info(EState *estate, char *graph_name,\n char *label_name)\n{\n RangeVar *rv;\n Relation label_relation;\n ResultRelInfo *resultRelInfo;\n\n ParseState *pstate = make_parsestate(NULL);\n\n resultRelInfo = palloc(sizeof(ResultRelInfo));\n\n if (strlen(label_name) == 0)\n {\n rv = makeRangeVar(graph_name, AG_DEFAULT_LABEL_VERTEX, -1);\n }\n else\n {\n rv = makeRangeVar(graph_name, label_name, -1);\n }\n\n label_relation = parserOpenTable(pstate, rv, RowExclusiveLock);\n\n // initialize the resultRelInfo\n InitResultRelInfo(resultRelInfo, label_relation,\n list_length(estate->es_range_table), NULL,\n estate->es_instrument);\n\n // open the parse state\n ExecOpenIndices(resultRelInfo, false);\n\n free_parsestate(pstate);\n\n return resultRelInfo;\n}\n\nIn this case, how can we get the relinfo->ri_RootResultRelInfo to\nstore the appropriate data?\n\nThank you,\n\nMatheus Farias\n\n\nEm ter., 18 de jul. de 2023 às 06:58, Amit Langote <[email protected]>\nescreveu:\n\n> Hello,\n>\n> On Sat, Jul 15, 2023 at 4:43 AM Farias de Oliveira\n> <[email protected]> wrote:\n> > I believe I have found something interesting that might be the root of\n> the problem with RTEPermissionInfo. 
But I do not know how to fix it\n> exactly. In AGE's code, the execution of it goes through a function called\n> analyze_cypher_clause() which does the following:\n> >\n> > It ends up going inside other functions and changing it more a bit, but\n> at the end of one of these functions it assigns values to some members of\n> the query:\n> >\n> > query->targetList = lappend(query->targetList, tle);\n> > query->rtable = pstate->p_rtable;\n> > query->jointree = makeFromExpr(pstate->p_joinlist, NULL);\n> >\n> > I assume that here is missing the assignment of query->rteperminfos to\n> be the same as pstate->p_rteperminfos, but the pstate has the following\n> values:\n> >\n> > {pstate = {parentParseState = 0x0, p_sourcetext = 0x2b06ef0 \"MATCH (n)\n> SET n.i = 3\", p_rtable = 0x2c6e590,\n> > p_rteperminfos = 0x0, p_joinexprs = 0x0, p_nullingrels = 0x0,\n> p_joinlist = 0x2c6e678, p_namespace = 0x2c6e6c8,\n> > p_lateral_active = false, p_ctenamespace = 0x0, p_future_ctes = 0x0,\n> p_parent_cte = 0x0, p_target_relation = 0x0,\n> > p_target_nsitem = 0x0, p_is_insert = false, p_windowdefs = 0x0,\n> p_expr_kind = EXPR_KIND_NONE, p_next_resno = 3,\n> > p_multiassign_exprs = 0x0, p_locking_clause = 0x0,\n> p_locked_from_parent = false, p_resolve_unknowns = true,\n> > p_queryEnv = 0x0, p_hasAggs = false, p_hasWindowFuncs = false,\n> p_hasTargetSRFs = false, p_hasSubLinks = false,\n> > p_hasModifyingCTE = false, p_last_srf = 0x0, p_pre_columnref_hook =\n> 0x0, p_post_columnref_hook = 0x0,\n> > p_paramref_hook = 0x0, p_coerce_param_hook = 0x0, p_ref_hook_state =\n> 0x0}, graph_name = 0x2b06e50 \"cypher_set\",\n> > graph_oid = 16942, params = 0x0, default_alias_num = 0, entities =\n> 0x2c6e228, property_constraint_quals = 0x0,\n> > exprHasAgg = false, p_opt_match = false}\n> >\n> > So changing that won't solve the issue.\n>\n> Does p_rtable in this last pstate contain any RTE_RELATION entries?\n> If it does, p_rteperminfos being NIL looks like a bug in your code.\n>\n> Actually, given the back trace of the error that you shared, I am\n> suspecting more of a problem in the code that generates a\n> ResultRelInfo pointing at the wrong RTE via its ri_RangeTableIndex.\n> That code should perhaps set the ri_RangeTableIndex to 0 if it doesn't\n> know the actual existing RTE corresponding to that result relation.\n> If you set it to some non-0 value, the RTE that it points to should\n> satisfy invariants such as having the corresponding RTEPermissionInfo\n> present in the rteperminfos list if necessary.\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Thu, 20 Jul 2023 17:05:29 -0300",
"msg_from": "Farias de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Hello,\n\nOn Fri, Jul 21, 2023 at 5:05 AM Farias de Oliveira\n<[email protected]> wrote:\n>\n> Hello,\n>\n> Thank you for the help guys and I'm so sorry for my late response. Indeed, the error relies on the ResultRelInfo. In GetResultRTEPermissionInfo() function, it does a checking on the relinfo->ri_RootResultRelInfo variable. I believe that it should go inside this scope:\n>\n>\n> if (relinfo->ri_RootResultRelInfo)\n> {\n> /*\n> * For inheritance child result relations (a partition routing target\n> * of an INSERT or a child UPDATE target), this returns the root\n> * parent's RTE to fetch the RTEPermissionInfo because that's the only\n> * one that has one assigned.\n> */\n> rti = relinfo->ri_RootResultRelInfo->ri_RangeTableIndex;\n> }\n>\n> The relinfo contains:\n>\n> {type = T_ResultRelInfo, ri_RangeTableIndex = 5, ri_RelationDesc = 0x7f44e3308cc8, ri_NumIndices = 0, ri_IndexRelationDescs = 0x0, ri_IndexRelationInfo = 0x0, ri_RowIdAttNo = 0,\n> ri_extraUpdatedCols = 0x0, ri_projectNew = 0x0, ri_newTupleSlot = 0x0, ri_oldTupleSlot = 0x0, ri_projectNewInfoValid = false, ri_TrigDesc = 0x0, ri_TrigFunctions = 0x0,\n> ri_TrigWhenExprs = 0x0, ri_TrigInstrument = 0x0, ri_ReturningSlot = 0x0, ri_TrigOldSlot = 0x0, ri_TrigNewSlot = 0x0, ri_FdwRoutine = 0x0, ri_FdwState = 0x0,\n> ri_usesFdwDirectModify = false, ri_NumSlots = 0, ri_NumSlotsInitialized = 0, ri_BatchSize = 0, ri_Slots = 0x0, ri_PlanSlots = 0x0, ri_WithCheckOptions = 0x0,\n> ri_WithCheckOptionExprs = 0x0, ri_ConstraintExprs = 0x0, ri_GeneratedExprsI = 0x0, ri_GeneratedExprsU = 0x0, ri_NumGeneratedNeededI = 0, ri_NumGeneratedNeededU = 0,\n> ri_returningList = 0x0, ri_projectReturning = 0x0, ri_onConflictArbiterIndexes = 0x0, ri_onConflict = 0x0, ri_matchedMergeAction = 0x0, ri_notMatchedMergeAction = 0x0,\n> ri_PartitionCheckExpr = 0x0, ri_ChildToRootMap = 0x0, ri_ChildToRootMapValid = false, ri_RootToChildMap = 0x0, ri_RootToChildMapValid = false, ri_RootResultRelInfo = 0x0,\n> ri_PartitionTupleSlot = 0x0, ri_CopyMultiInsertBuffer = 0x0, ri_ancestorResultRels = 0x0}\n>\n> Since relinfo->ri_RootResultRelInfo = 0x0, the rti will have no value and Postgres will interpret that the ResultRelInfo must've been created only for filtering triggers and the relation is not being inserted into.\n> The relinfo variable is created with the create_entity_result_rel_info() function:\n>\n> ResultRelInfo *create_entity_result_rel_info(EState *estate, char *graph_name,\n> char *label_name)\n> {\n> RangeVar *rv;\n> Relation label_relation;\n> ResultRelInfo *resultRelInfo;\n>\n> ParseState *pstate = make_parsestate(NULL);\n>\n> resultRelInfo = palloc(sizeof(ResultRelInfo));\n>\n> if (strlen(label_name) == 0)\n> {\n> rv = makeRangeVar(graph_name, AG_DEFAULT_LABEL_VERTEX, -1);\n> }\n> else\n> {\n> rv = makeRangeVar(graph_name, label_name, -1);\n> }\n>\n> label_relation = parserOpenTable(pstate, rv, RowExclusiveLock);\n>\n> // initialize the resultRelInfo\n> InitResultRelInfo(resultRelInfo, label_relation,\n> list_length(estate->es_range_table), NULL,\n> estate->es_instrument);\n>\n> // open the parse state\n> ExecOpenIndices(resultRelInfo, false);\n>\n> free_parsestate(pstate);\n>\n> return resultRelInfo;\n> }\n>\n> In this case, how can we get the relinfo->ri_RootResultRelInfo to store the appropriate data?\n\nYour function doesn't seem to have access to the ModifyTableState\nnode, so setting ri_RootResultRelInfo to the correct ResultRelInfo\nnode does not seem doable.\n\nAs I suggested in my previous reply, please check if passing 0 
(not\nlist_length(estate->es_range_table)) for the 3rd argument\nInitResultRelInfo() fixes the problem and gives the correct result.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jul 2023 20:30:01 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
},
{
"msg_contents": "Hello,\n\nThank you Amit, changing the 3rd argument to 0 fixes some of the errors\n(there are 6 out of 24 errors still failing) but it throws a new one\n\"ERROR: bad buffer ID: 0\". We will need to take a more in depth look here\non why this is occuring, but thank you so much for the help.\n\nThank you,\nMatheus Farias\n\nEm qua., 26 de jul. de 2023 às 08:30, Amit Langote <[email protected]>\nescreveu:\n\n> Hello,\n>\n> On Fri, Jul 21, 2023 at 5:05 AM Farias de Oliveira\n> <[email protected]> wrote:\n> >\n> > Hello,\n> >\n> > Thank you for the help guys and I'm so sorry for my late response.\n> Indeed, the error relies on the ResultRelInfo. In\n> GetResultRTEPermissionInfo() function, it does a checking on the\n> relinfo->ri_RootResultRelInfo variable. I believe that it should go inside\n> this scope:\n> >\n> >\n> > if (relinfo->ri_RootResultRelInfo)\n> > {\n> > /*\n> > * For inheritance child result relations (a partition routing target\n> > * of an INSERT or a child UPDATE target), this returns the root\n> > * parent's RTE to fetch the RTEPermissionInfo because that's the only\n> > * one that has one assigned.\n> > */\n> > rti = relinfo->ri_RootResultRelInfo->ri_RangeTableIndex;\n> > }\n> >\n> > The relinfo contains:\n> >\n> > {type = T_ResultRelInfo, ri_RangeTableIndex = 5, ri_RelationDesc =\n> 0x7f44e3308cc8, ri_NumIndices = 0, ri_IndexRelationDescs = 0x0,\n> ri_IndexRelationInfo = 0x0, ri_RowIdAttNo = 0,\n> > ri_extraUpdatedCols = 0x0, ri_projectNew = 0x0, ri_newTupleSlot = 0x0,\n> ri_oldTupleSlot = 0x0, ri_projectNewInfoValid = false, ri_TrigDesc = 0x0,\n> ri_TrigFunctions = 0x0,\n> > ri_TrigWhenExprs = 0x0, ri_TrigInstrument = 0x0, ri_ReturningSlot =\n> 0x0, ri_TrigOldSlot = 0x0, ri_TrigNewSlot = 0x0, ri_FdwRoutine = 0x0,\n> ri_FdwState = 0x0,\n> > ri_usesFdwDirectModify = false, ri_NumSlots = 0,\n> ri_NumSlotsInitialized = 0, ri_BatchSize = 0, ri_Slots = 0x0, ri_PlanSlots\n> = 0x0, ri_WithCheckOptions = 0x0,\n> > ri_WithCheckOptionExprs = 0x0, ri_ConstraintExprs = 0x0,\n> ri_GeneratedExprsI = 0x0, ri_GeneratedExprsU = 0x0, ri_NumGeneratedNeededI\n> = 0, ri_NumGeneratedNeededU = 0,\n> > ri_returningList = 0x0, ri_projectReturning = 0x0,\n> ri_onConflictArbiterIndexes = 0x0, ri_onConflict = 0x0,\n> ri_matchedMergeAction = 0x0, ri_notMatchedMergeAction = 0x0,\n> > ri_PartitionCheckExpr = 0x0, ri_ChildToRootMap = 0x0,\n> ri_ChildToRootMapValid = false, ri_RootToChildMap = 0x0,\n> ri_RootToChildMapValid = false, ri_RootResultRelInfo = 0x0,\n> > ri_PartitionTupleSlot = 0x0, ri_CopyMultiInsertBuffer = 0x0,\n> ri_ancestorResultRels = 0x0}\n> >\n> > Since relinfo->ri_RootResultRelInfo = 0x0, the rti will have no value\n> and Postgres will interpret that the ResultRelInfo must've been created\n> only for filtering triggers and the relation is not being inserted into.\n> > The relinfo variable is created with the create_entity_result_rel_info()\n> function:\n> >\n> > ResultRelInfo *create_entity_result_rel_info(EState *estate, char\n> *graph_name,\n> > char *label_name)\n> > {\n> > RangeVar *rv;\n> > Relation label_relation;\n> > ResultRelInfo *resultRelInfo;\n> >\n> > ParseState *pstate = make_parsestate(NULL);\n> >\n> > resultRelInfo = palloc(sizeof(ResultRelInfo));\n> >\n> > if (strlen(label_name) == 0)\n> > {\n> > rv = makeRangeVar(graph_name, AG_DEFAULT_LABEL_VERTEX, -1);\n> > }\n> > else\n> > {\n> > rv = makeRangeVar(graph_name, label_name, -1);\n> > }\n> >\n> > label_relation = parserOpenTable(pstate, rv, RowExclusiveLock);\n> >\n> > // initialize the resultRelInfo\n> 
> InitResultRelInfo(resultRelInfo, label_relation,\n> > list_length(estate->es_range_table), NULL,\n> > estate->es_instrument);\n> >\n> > // open the parse state\n> > ExecOpenIndices(resultRelInfo, false);\n> >\n> > free_parsestate(pstate);\n> >\n> > return resultRelInfo;\n> > }\n> >\n> > In this case, how can we get the relinfo->ri_RootResultRelInfo to store\n> the appropriate data?\n>\n> Your function doesn't seem to have access to the ModifyTableState\n> node, so setting ri_RootResultRelInfo to the correct ResultRelInfo\n> node does not seem doable.\n>\n> As I suggested in my previous reply, please check if passing 0 (not\n> list_length(estate->es_range_table)) for the 3rd argument\n> InitResultRelInfo() fixes the problem and gives the correct result.\n>\n> --\n> Thanks, Amit Langote\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Fri, 28 Jul 2023 15:37:39 -0300",
"msg_from": "Farias de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: In Postgres 16 BETA, should the ParseNamespaceItem have the same\n index as it's RangeTableEntry?"
}
] |
[
{
"msg_contents": "While adapting a Java implementation of the SQL parser, I noticed that \nin structures JsonArrayAgg, JsonArrayConstructor, \nJsonArrayQueryConstructor and JsonObjectConstrutor, the absent_on_null \nfield defaults to TRUE.\nBut in JsonObjectAgg, absent_on_null defaults to FALSE.\nIs that intentionally?\n\nRegards,\nMartin.\n\n\n\n",
"msg_date": "Fri, 14 Jul 2023 07:53:14 +0200",
"msg_from": "Martin Butter <[email protected]>",
"msg_from_op": true,
"msg_subject": "16beta2 SQL parser: different defaults on absent_on_null"
},
{
"msg_contents": "> On 14 Jul 2023, at 07:53, Martin Butter <[email protected]> wrote:\n\n> While adapting a Java implementation of the SQL parser, I noticed that in structures JsonArrayAgg, JsonArrayConstructor, JsonArrayQueryConstructor and JsonObjectConstrutor, the absent_on_null field defaults to TRUE.\n> But in JsonObjectAgg, absent_on_null defaults to FALSE.\n> Is that intentionally?\n\nI would say so, an empty NULL|ABSENT ON NULL clause for arrays is defined as\ntrue, while for objects it's defined as false (which is shared between both\njson_object() and json_objectagg()).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 14 Jul 2023 10:29:53 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 16beta2 SQL parser: different defaults on absent_on_null"
},
{
"msg_contents": "Hello Daniel,\n\nThanks for the explanation, it sounds reasonable. I'm glad it is not a bug.\n\nRegards,\nMartin.\n\nOn 14/07/2023 10:29, Daniel Gustafsson wrote:\n>> On 14 Jul 2023, at 07:53, Martin Butter<[email protected]> wrote:\n>> While adapting a Java implementation of the SQL parser, I noticed that in structures JsonArrayAgg, JsonArrayConstructor, JsonArrayQueryConstructor and JsonObjectConstrutor, the absent_on_null field defaults to TRUE.\n>> But in JsonObjectAgg, absent_on_null defaults to FALSE.\n>> Is that intentionally?\n> I would say so, an empty NULL|ABSENT ON NULL clause for arrays is defined as\n> true, while for objects it's defined as false (which is shared between both\n> json_object() and json_objectagg()).\n>\n> --\n> Daniel Gustafsson\n>\n-- \nMartin Butter\nDeveloper\n\nSplendid Data Nederland B.V.\nBinnenhof 62A\n1412 LC NAARDEN\n\nT: +31 (0)85 773 19 99\nM: +31 (0)6 226 946 62\nE: [email protected]\n\nhttp://www.splendiddata.com/\n\n\n\n\n\nHello Daniel,\nThanks for the explanation,\n it sounds reasonable. I'm glad it is not a bug.\nRegards,\n Martin.\n\nOn 14/07/2023 10:29, Daniel Gustafsson\n wrote:\n\n\n\nOn 14 Jul 2023, at 07:53, Martin Butter <[email protected]> wrote:\n\n\n\n\n\nWhile adapting a Java implementation of the SQL parser, I noticed that in structures JsonArrayAgg, JsonArrayConstructor, JsonArrayQueryConstructor and JsonObjectConstrutor, the absent_on_null field defaults to TRUE.\nBut in JsonObjectAgg, absent_on_null defaults to FALSE.\nIs that intentionally?\n\n\n\nI would say so, an empty NULL|ABSENT ON NULL clause for arrays is defined as\ntrue, while for objects it's defined as false (which is shared between both\njson_object() and json_objectagg()).\n\n--\nDaniel Gustafsson\n\n\n\n-- \n\n Martin Butter\n Developer\n\n Splendid Data Nederland B.V.\n Binnenhof 62A\n 1412 LC NAARDEN\n\n T: +31 (0)85 773 19 99\n M: +31 (0)6 226 946 62\n E: [email protected]\n\nhttp://www.splendiddata.com/",
"msg_date": "Fri, 14 Jul 2023 11:06:33 +0200",
"msg_from": "Martin Butter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 16beta2 SQL parser: different defaults on absent_on_null"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at [1] I started to wonder why it is safe that\nCreateCheckPoint() updates XLogCtl->RedoRecPtr after releasing the WAL\ninsertion lock:\n\n\t/*\n\t * Now we can release the WAL insertion locks, allowing other xacts to\n\t * proceed while we are flushing disk buffers.\n\t */\n\tWALInsertLockRelease();\n\n\t/* Update the info_lck-protected copy of RedoRecPtr as well */\n\tSpinLockAcquire(&XLogCtl->info_lck);\n\tXLogCtl->RedoRecPtr = checkPoint.redo;\n\tSpinLockRelease(&XLogCtl->info_lck);\n\nThe most important user of that is GetRedoRecPtr().\n\nRight now I'm a bit confused why it's ok that\n\n\t/* Update the info_lck-protected copy of RedoRecPtr as well */\n\tSpinLockAcquire(&XLogCtl->info_lck);\n\tXLogCtl->RedoRecPtr = checkPoint.redo;\n\tSpinLockRelease(&XLogCtl->info_lck);\n\nhappens after WALInsertLockRelease().\n\n\nBut then started to wonder, even if that weren't the case, how come\nXLogSaveBufferForHint() and other uses of GetRedoRecPtr(), aren't racy as\nhell?\n\nThe reason XLogInsertRecord() can safely check if an FPW is needed is that it\nholds a WAL insertion lock, the redo pointer cannot change until the insertion\nlock is released.\n\nBut there's *zero* interlock in XLogSaveBufferForHint() from what I can tell?\nA checkpoint could easily start between between the GetRedoRecPtr() and the\ncheck whether this buffer needs to be WAL logged?\n\n\nWhile XLogSaveBufferForHint() makes no note of this, it's sole caller,\nMarkBufferDirtyHint(), tries to deal with some related concerns to some\ndegree:\n\n\t\t\t/*\n\t\t\t * If the block is already dirty because we either made a change\n\t\t\t * or set a hint already, then we don't need to write a full page\n\t\t\t * image. Note that aggressive cleaning of blocks dirtied by hint\n\t\t\t * bit setting would increase the call rate. Bulk setting of hint\n\t\t\t * bits would reduce the call rate...\n\t\t\t *\n\t\t\t * We must issue the WAL record before we mark the buffer dirty.\n\t\t\t * Otherwise we might write the page before we write the WAL. That\n\t\t\t * causes a race condition, since a checkpoint might occur between\n\t\t\t * writing the WAL record and marking the buffer dirty. We solve\n\t\t\t * that with a kluge, but one that is already in use during\n\t\t\t * transaction commit to prevent race conditions. Basically, we\n\t\t\t * simply prevent the checkpoint WAL record from being written\n\t\t\t * until we have marked the buffer dirty. We don't start the\n\t\t\t * checkpoint flush until we have marked dirty, so our checkpoint\n\t\t\t * must flush the change to disk successfully or the checkpoint\n\t\t\t * never gets written, so crash recovery will fix.\n\t\t\t *\n\t\t\t * It's possible we may enter here without an xid, so it is\n\t\t\t * essential that CreateCheckPoint waits for virtual transactions\n\t\t\t * rather than full transactionids.\n\t\t\t */\n\t\t\tAssert((MyProc->delayChkptFlags & DELAY_CHKPT_START) == 0);\n\t\t\tMyProc->delayChkptFlags |= DELAY_CHKPT_START;\n\t\t\tdelayChkptFlags = true;\n\t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n\nbut I don't think that really does all that much, because the\nDELAY_CHKPT_START handling in CreateCheckPoint() happens after we determine\nthe redo pointer. 
This code isn't even reached if we wrongly skipped due to\nthe if (lsn <= RedoRecPtr).\n\n\nI seriously doubt this can correctly be implemented outside of xlog*.c /\nwithout the use of a WALInsertLock?\n\nI feel like I must be missing something here, this isn't a particularly narrow race?\n\n\nIt looks to me like the use of GetRedoRecPtr() in nextval_internal() is also\nwrong. I think the uses in slot.c, snapbuild.c, rewriteheap.c are fine.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20230714151626.rhgae7taigk2xrq7%40awork3.anarazel.de\n\n\n",
"msg_date": "Fri, 14 Jul 2023 08:42:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "XLogSaveBufferForHint() correctness and more"
}
] |
[
{
"msg_contents": "I looked into the performance gripe at [1] about pg_restore not making\neffective use of parallel workers when there are a lot of tables.\nI was able to reproduce that by dumping and parallel restoring 100K\ntables made according to this script:\n\ndo $$\nbegin\nfor i in 1..100000 loop\n execute format('create table t%s (f1 int unique, f2 int unique);', i);\n execute format('insert into t%s select x, x from generate_series(1,1000) x',\n i);\n if i % 100 = 0 then commit; end if;\nend loop;\nend\n$$;\n\nOnce pg_restore reaches the parallelizable part of the restore, what\nI see is that the parent pg_restore process goes to 100% CPU while its\nchildren (and the server) mostly sit idle; that is, the task dispatch\nlogic in pg_backup_archiver.c is unable to dispatch tasks fast enough\nto keep the children busy. A quick perf check showed most of the time\nbeing eaten by pg_qsort and TocEntrySizeCompare.\n\nWhat I believe is happening is that we start the parallel restore phase\nwith 100K TableData items that are ready to go (they are in the\nready_list) and 200K AddConstraint items that are pending, because\nwe make those have dependencies on the corresponding TableData so we\ndon't build an index until after its table is populated. Each time\none of the TableData items is completed by some worker, the two\nAddConstraint items for its table are moved from the pending_list\nto the ready_list --- and that means ready_list_insert marks the\nready_list as no longer sorted. When we go to pop the next task\nfrom the ready_list, we re-sort that entire list first. So\nwe spend something like O(N^2 * log(N)) time just sorting, if\nthere are N tables. Clearly, this code is much less bright\nthan it thinks it is (and that's all my fault, if memory serves).\n\nI'm not sure how big a deal this is in practice: in most situations\nthe individual jobs are larger than they are in this toy example,\nplus the initial non-parallelizable part of the restore is a bigger\nbottleneck anyway with this many tables. Still, we do have one\nreal-world complaint, so maybe we should look into improving it.\n\nI wonder if we could replace the sorted ready-list with a priority heap,\nalthough that might be complicated by the fact that pop_next_work_item\nhas to be capable of popping something that's not necessarily the\nlargest remaining job. Another idea could be to be a little less eager\nto sort the list every time; I think in practice scheduling wouldn't\nget much worse if we only re-sorted every so often.\n\nI don't have time to pursue this right now, but perhaps someone\nelse would like to.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAEzn%3DHSPXi6OS-5KzGMcZeKzWKOOX1me2u2eCiGtMEZDz9Fqdg%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 15 Jul 2023 13:47:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-15 13:47:12 -0400, Tom Lane wrote:\n> I wonder if we could replace the sorted ready-list with a priority heap,\n> although that might be complicated by the fact that pop_next_work_item\n> has to be capable of popping something that's not necessarily the\n> largest remaining job. Another idea could be to be a little less eager\n> to sort the list every time; I think in practice scheduling wouldn't\n> get much worse if we only re-sorted every so often.\n\nPerhaps we could keep track of where the newly inserted items are, and use\ninsertion sort or such when the number of new elements is much smaller than\nthe size of the already sorted elements?\n\nAs you say, a straight priority heap might not be easy. But we could just open\ncode using two sorted arrays, one large, one for recent additions that needs\nto be newly sorted. And occasionally merge the small array into the big array,\nonce it has gotten large enough that sorting becomes expensive. We could go\nfor a heap of N>2 such arrays, but I doubt it would be worth much.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 15 Jul 2023 11:19:16 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On 2023-07-15 Sa 13:47, Tom Lane wrote:\n> I looked into the performance gripe at [1] about pg_restore not making\n> effective use of parallel workers when there are a lot of tables.\n> I was able to reproduce that by dumping and parallel restoring 100K\n> tables made according to this script:\n>\n> do $$\n> begin\n> for i in 1..100000 loop\n> execute format('create table t%s (f1 int unique, f2 int unique);', i);\n> execute format('insert into t%s select x, x from generate_series(1,1000) x',\n> i);\n> if i % 100 = 0 then commit; end if;\n> end loop;\n> end\n> $$;\n>\n> Once pg_restore reaches the parallelizable part of the restore, what\n> I see is that the parent pg_restore process goes to 100% CPU while its\n> children (and the server) mostly sit idle; that is, the task dispatch\n> logic in pg_backup_archiver.c is unable to dispatch tasks fast enough\n> to keep the children busy. A quick perf check showed most of the time\n> being eaten by pg_qsort and TocEntrySizeCompare.\n>\n> What I believe is happening is that we start the parallel restore phase\n> with 100K TableData items that are ready to go (they are in the\n> ready_list) and 200K AddConstraint items that are pending, because\n> we make those have dependencies on the corresponding TableData so we\n> don't build an index until after its table is populated. Each time\n> one of the TableData items is completed by some worker, the two\n> AddConstraint items for its table are moved from the pending_list\n> to the ready_list --- and that means ready_list_insert marks the\n> ready_list as no longer sorted. When we go to pop the next task\n> from the ready_list, we re-sort that entire list first. So\n> we spend something like O(N^2 * log(N)) time just sorting, if\n> there are N tables. Clearly, this code is much less bright\n> than it thinks it is (and that's all my fault, if memory serves).\n>\n> I'm not sure how big a deal this is in practice: in most situations\n> the individual jobs are larger than they are in this toy example,\n> plus the initial non-parallelizable part of the restore is a bigger\n> bottleneck anyway with this many tables. Still, we do have one\n> real-world complaint, so maybe we should look into improving it.\n>\n> I wonder if we could replace the sorted ready-list with a priority heap,\n> although that might be complicated by the fact that pop_next_work_item\n> has to be capable of popping something that's not necessarily the\n> largest remaining job. Another idea could be to be a little less eager\n> to sort the list every time; I think in practice scheduling wouldn't\n> get much worse if we only re-sorted every so often.\n>\n\nYeah, I think that last idea is reasonable. Something like if the number \nadded since the last sort is more than min(50, list_length/4) then sort. 
\nThat shouldn't be too invasive.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Sun, 16 Jul 2023 08:17:49 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-07-15 Sa 13:47, Tom Lane wrote:\n>> I wonder if we could replace the sorted ready-list with a priority heap,\n>> although that might be complicated by the fact that pop_next_work_item\n>> has to be capable of popping something that's not necessarily the\n>> largest remaining job. Another idea could be to be a little less eager\n>> to sort the list every time; I think in practice scheduling wouldn't\n>> get much worse if we only re-sorted every so often.\n\n> Yeah, I think that last idea is reasonable. Something like if the number \n> added since the last sort is more than min(50, list_length/4) then sort. \n> That shouldn't be too invasive.\n\nActually, as long as we're talking about approximately-correct behavior:\nlet's make the ready_list be a priority heap, and then just make\npop_next_work_item scan forward from the array start until it finds an\nitem that's runnable per the lock heuristic. If the heap root is\nblocked, the next things we'll examine will be its two children.\nWe might pick the lower-priority of those two, but it's still known to\nbe higher priority than at least 50% of the remaining heap entries, so\nit shouldn't be too awful as a choice. The argument gets weaker the\nfurther you go into the heap, but we're not expecting that having most\nof the top entries blocked will be a common case. (Besides which, the\npriorities are pretty crude to begin with.) Once selected, pulling out\nan entry that is not the heap root is no problem: you just start the\nsift-down process from there.\n\nThe main advantage of this over the only-sort-sometimes idea is that\nwe can guarantee that the largest ready item will always be dispatched\nas soon as it can be (because it will be the heap root). So cases\ninvolving one big table (with big indexes) and a lot of little ones\nshould get scheduled sanely, which is the main thing we want this\nalgorithm to ensure. With the other approach we can't really promise\nmuch at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jul 2023 09:45:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sun, Jul 16, 2023 at 09:45:54AM -0400, Tom Lane wrote:\n> Actually, as long as we're talking about approximately-correct behavior:\n> let's make the ready_list be a priority heap, and then just make\n> pop_next_work_item scan forward from the array start until it finds an\n> item that's runnable per the lock heuristic. If the heap root is\n> blocked, the next things we'll examine will be its two children.\n> We might pick the lower-priority of those two, but it's still known to\n> be higher priority than at least 50% of the remaining heap entries, so\n> it shouldn't be too awful as a choice. The argument gets weaker the\n> further you go into the heap, but we're not expecting that having most\n> of the top entries blocked will be a common case. (Besides which, the\n> priorities are pretty crude to begin with.) Once selected, pulling out\n> an entry that is not the heap root is no problem: you just start the\n> sift-down process from there.\n> \n> The main advantage of this over the only-sort-sometimes idea is that\n> we can guarantee that the largest ready item will always be dispatched\n> as soon as it can be (because it will be the heap root). So cases\n> involving one big table (with big indexes) and a lot of little ones\n> should get scheduled sanely, which is the main thing we want this\n> algorithm to ensure. With the other approach we can't really promise\n> much at all.\n\nThis seems worth a try. IIUC you are suggesting making binaryheap.c\nfrontend-friendly and expanding its API a bit. If no one has volunteered,\nI could probably hack something together.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 16 Jul 2023 20:54:24 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sun, Jul 16, 2023 at 08:54:24PM -0700, Nathan Bossart wrote:\n> This seems worth a try. IIUC you are suggesting making binaryheap.c\n> frontend-friendly and expanding its API a bit. If no one has volunteered,\n> I could probably hack something together.\n\nI spent some time on the binaryheap changes. I haven't had a chance to\nplug it into the ready_list yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Jul 2023 21:57:01 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On 2023-Jul-17, Nathan Bossart wrote:\n\n> @@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n> \tbinaryheap *heap;\n> \n> \tsz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;\n> +#ifdef FRONTEND\n> +\theap = (binaryheap *) pg_malloc(sz);\n> +#else\n> \theap = (binaryheap *) palloc(sz);\n> +#endif\n\nHmm, as I recall fe_memutils.c provides you with palloc() in the\nfrontend environment, so you don't actually need this one.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n",
"msg_date": "Tue, 18 Jul 2023 18:05:11 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 06:05:11PM +0200, Alvaro Herrera wrote:\n> On 2023-Jul-17, Nathan Bossart wrote:\n> \n>> @@ -35,7 +42,11 @@ binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)\n>> \tbinaryheap *heap;\n>> \n>> \tsz = offsetof(binaryheap, bh_nodes) + sizeof(Datum) * capacity;\n>> +#ifdef FRONTEND\n>> +\theap = (binaryheap *) pg_malloc(sz);\n>> +#else\n>> \theap = (binaryheap *) palloc(sz);\n>> +#endif\n> \n> Hmm, as I recall fe_memutils.c provides you with palloc() in the\n> frontend environment, so you don't actually need this one.\n\nAh, yes it does. Thanks for the pointer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Jul 2023 09:07:13 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Here is a work-in-progress patch set for converting ready_list to a\npriority queue. On my machine, Tom's 100k-table example [0] takes 11.5\nminutes without these patches and 1.5 minutes with them.\n\nOne item that requires more thought is binaryheap's use of Datum. AFAICT\nthe Datum definitions live in postgres.h and aren't available to frontend\ncode. I think we'll either need to move the Datum definitions to c.h or to\nadjust binaryheap to use \"void *\".\n\n[0] https://postgr.es/m/3612876.1689443232%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 20 Jul 2023 12:06:44 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:\n> Here is a work-in-progress patch set for converting ready_list to a\n> priority queue. On my machine, Tom's 100k-table example [0] takes 11.5\n> minutes without these patches and 1.5 minutes with them.\n> \n> One item that requires more thought is binaryheap's use of Datum. AFAICT\n> the Datum definitions live in postgres.h and aren't available to frontend\n> code. I think we'll either need to move the Datum definitions to c.h or to\n> adjust binaryheap to use \"void *\".\n\nIn v3, I moved the Datum definitions to c.h. I first tried modifying\nbinaryheap to use \"int\" or \"void *\" instead, but that ended up requiring\nsome rather invasive changes in backend code, not to mention any extensions\nthat happen to be using it. I also looked into moving the definitions to a\nseparate datumdefs.h header that postgres.h would include, but that felt\nawkward because 1) postgres.h clearly states that it is intended for things\n\"that never escape the backend\" and 2) the definitions seem relatively\ninexpensive. However, I think the latter option is still viable, so I'm\nfine with switching to it if folks think that is a better approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 22 Jul 2023 16:19:41 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 04:19:41PM -0700, Nathan Bossart wrote:\n> In v3, I moved the Datum definitions to c.h. I first tried modifying\n> binaryheap to use \"int\" or \"void *\" instead, but that ended up requiring\n> some rather invasive changes in backend code, not to mention any extensions\n> that happen to be using it. I also looked into moving the definitions to a\n> separate datumdefs.h header that postgres.h would include, but that felt\n> awkward because 1) postgres.h clearly states that it is intended for things\n> \"that never escape the backend\" and 2) the definitions seem relatively\n> inexpensive. However, I think the latter option is still viable, so I'm\n> fine with switching to it if folks think that is a better approach.\n\nBTW we might be able to replace the open-coded heap in pg_dump_sort.c\n(added by 79273cc) with a binaryheap, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 22 Jul 2023 16:28:15 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Thu, Jul 20, 2023 at 12:06:44PM -0700, Nathan Bossart wrote:\n>> One item that requires more thought is binaryheap's use of Datum. AFAICT\n>> the Datum definitions live in postgres.h and aren't available to frontend\n>> code. I think we'll either need to move the Datum definitions to c.h or to\n>> adjust binaryheap to use \"void *\".\n\n> In v3, I moved the Datum definitions to c.h. I first tried modifying\n> binaryheap to use \"int\" or \"void *\" instead, but that ended up requiring\n> some rather invasive changes in backend code, not to mention any extensions\n> that happen to be using it.\n\nI'm quite uncomfortable with putting Datum in c.h. I know that the\ntypedef is merely a uintptr_t, but this solution seems to me to be\nblowing all kinds of holes in the abstraction, because exactly none\nof the infrastructure that goes along with Datum is or is ever likely\nto be in any frontend build. At the very least, frontend code that\nrefers to Datum will be misleading as hell.\n\nI wonder whether we can't provide some alternate definition or \"skin\"\nfor binaryheap that preserves the Datum API for backend code that wants\nthat, while providing a void *-based API for frontend code to use.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 Jul 2023 19:47:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I first tried modifying\n>> binaryheap to use \"int\" or \"void *\" instead, but that ended up requiring\n>> some rather invasive changes in backend code, not to mention any extensions\n>> that happen to be using it.\n\nI followed through with the \"void *\" approach (attached), and it wasn't as\nbad as I expected.\n\n> I wonder whether we can't provide some alternate definition or \"skin\"\n> for binaryheap that preserves the Datum API for backend code that wants\n> that, while providing a void *-based API for frontend code to use.\n\nI can give this a try next, but it might be rather #ifdef-heavy.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 22 Jul 2023 22:57:03 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Saturday, July 15, 2023 7:47:12 PM CEST Tom Lane wrote:\n> I'm not sure how big a deal this is in practice: in most situations\n> the individual jobs are larger than they are in this toy example,\n> plus the initial non-parallelizable part of the restore is a bigger\n> bottleneck anyway with this many tables. Still, we do have one\n> real-world complaint, so maybe we should look into improving it.\n\nHi\n\nFor what it's worth, at my current job it's kind of a big deal. I was going to \nstart looking at the bad performance I got on pg_restore for some databases \nwith over 50k tables (in 200 namespaces) when I found this thread. The dump \nweights in about 2,8GB, the toc.dat file is 230MB, 50 120 tables, 142 069 \nconstraints and 73 669 indexes.\n\nHEAD pg_restore duration: 30 minutes\npg_restore with latest patch from Nathan Bossart: 23 minutes\n\nThis is indeed better, but there is still a lot of room for improvements. With \nsuch usecases, I was able to go much faster using the patched pg_restore with \na script that parallelize on each schema instead of relying on the choices \nmade by pg_restore. It seems the choice of parallelizing only the data loading \nis losing nice speedup opportunities with a huge number of objects.\n\npatched pg_restore + parallel restore of schemas: 10 minutes\n\nAnyway, the patch works really fine as is, and I will certainly keep trying \nfuture iterations.\n\nRegards\n\n Pierre\n\n\n\n\n\n",
"msg_date": "Mon, 24 Jul 2023 19:27:36 +0200",
"msg_from": "Pierre Ducroquet <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 10:57:03PM -0700, Nathan Bossart wrote:\n> On Sat, Jul 22, 2023 at 07:47:50PM -0400, Tom Lane wrote:\n>> I wonder whether we can't provide some alternate definition or \"skin\"\n>> for binaryheap that preserves the Datum API for backend code that wants\n>> that, while providing a void *-based API for frontend code to use.\n> \n> I can give this a try next, but it might be rather #ifdef-heavy.\n\nHere is a sketch of this approach. It required fewer #ifdefs than I was\nexpecting. At the moment, this one seems like the winner to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Jul 2023 12:00:15 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:\n> Here is a sketch of this approach. It required fewer #ifdefs than I was\n> expecting. At the moment, this one seems like the winner to me.\n\nHere is a polished patch set for this approach. I've also added a 0004\nthat replaces the open-coded heap in pg_dump_sort.c with a binaryheap.\nIMHO these patches are in decent shape.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 25 Jul 2023 11:53:36 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 11:53:36AM -0700, Nathan Bossart wrote:\n> Here is a polished patch set for this approach. I've also added a 0004\n> that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.\n> IMHO these patches are in decent shape.\n\nI'm hoping to commit these patches at some point in the current commitfest.\nI don't sense anything tremendously controversial, and they provide a\npretty nice speedup in some cases. Are there any remaining concerns?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:05:46 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I'm hoping to commit these patches at some point in the current commitfest.\n> I don't sense anything tremendously controversial, and they provide a\n> pretty nice speedup in some cases. Are there any remaining concerns?\n\nI've not actually looked at any of these patchsets after the first one.\nI have added myself as a reviewer and will hopefully get to it within\na week or so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Sep 2023 13:41:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 01:41:41PM -0400, Tom Lane wrote:\n> I've not actually looked at any of these patchsets after the first one.\n> I have added myself as a reviewer and will hopefully get to it within\n> a week or so.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:51:29 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 2:53 PM Nathan Bossart <[email protected]> wrote:\n> On Mon, Jul 24, 2023 at 12:00:15PM -0700, Nathan Bossart wrote:\n> > Here is a sketch of this approach. It required fewer #ifdefs than I was\n> > expecting. At the moment, this one seems like the winner to me.\n>\n> Here is a polished patch set for this approach. I've also added a 0004\n> that replaces the open-coded heap in pg_dump_sort.c with a binaryheap.\n> IMHO these patches are in decent shape.\n\n[ drive-by comment that hopefully doesn't cause too much pain ]\n\nIn hindsight, I think that making binaryheap depend on Datum was a bad\nidea. I think that was my idea, and I think it wasn't very smart.\nConsidering that people have coded to that decision up until now, it\nmight not be too easy to change at this point. But in principle I\nguess you'd want to be able to make a heap out of any C data type,\nrather than just Datum, or just Datum in the backend and just void *\nin the frontend.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 16:00:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 04:00:44PM -0400, Robert Haas wrote:\n> In hindsight, I think that making binaryheap depend on Datum was a bad\n> idea. I think that was my idea, and I think it wasn't very smart.\n> Considering that people have coded to that decision up until now, it\n> might not be too easy to change at this point. But in principle I\n> guess you'd want to be able to make a heap out of any C data type,\n> rather than just Datum, or just Datum in the backend and just void *\n> in the frontend.\n\nYeah, something similar to simplehash for binary heaps could be nice. That\nbeing said, I don't know if there's a strong reason to specialize the\nimplementation for a given C data type in most cases. I suspect many\ncallers are just fine with dealing with pointers (e.g., I wouldn't store an\nentire TocEntry in the array), and smaller types like integers are already\nstored directly in the array thanks to the use of Datum. However, it\n_would_ allow us to abandon this frontend/backend void */Datum kludge,\nwhich is something.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 13:52:48 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 01:52:48PM -0700, Nathan Bossart wrote:\n> On Fri, Sep 01, 2023 at 04:00:44PM -0400, Robert Haas wrote:\n>> In hindsight, I think that making binaryheap depend on Datum was a bad\n>> idea. I think that was my idea, and I think it wasn't very smart.\n>> Considering that people have coded to that decision up until now, it\n>> might not be too easy to change at this point. But in principle I\n>> guess you'd want to be able to make a heap out of any C data type,\n>> rather than just Datum, or just Datum in the backend and just void *\n>> in the frontend.\n> \n> Yeah, something similar to simplehash for binary heaps could be nice. That\n> being said, I don't know if there's a strong reason to specialize the\n> implementation for a given C data type in most cases. I suspect many\n> callers are just fine with dealing with pointers (e.g., I wouldn't store an\n> entire TocEntry in the array), and smaller types like integers are already\n> stored directly in the array thanks to the use of Datum. However, it\n> _would_ allow us to abandon this frontend/backend void */Datum kludge,\n> which is something.\n\nI ended up hacking together a (nowhere near committable) patch to see how\nhard it would be to allow using any type with binaryheap. It doesn't seem\ntoo bad.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 2 Sep 2023 11:55:21 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On 2023-Sep-02, Nathan Bossart wrote:\n\n> On Fri, Sep 01, 2023 at 01:52:48PM -0700, Nathan Bossart wrote:\n\n> > Yeah, something similar to simplehash for binary heaps could be nice. That\n> > being said, I don't know if there's a strong reason to specialize the\n> > implementation for a given C data type in most cases.\n> \n> I ended up hacking together a (nowhere near committable) patch to see how\n> hard it would be to allow using any type with binaryheap. It doesn't seem\n> too bad.\n\nYeah, using void * seems to lead to interfaces that are pretty much the\nsame as bsearch() or qsort(). (Why isn't your payload type const,\nthough?)\n\nI do wonder why did you change _remove_first and _first to have a\n'result' output argument instead of a return value. Does this change\nactually buy you anything? simplehash.h doesn't do that either.\n\n> -extern void binaryheap_add(binaryheap *heap, Datum d);\n> -extern Datum binaryheap_first(binaryheap *heap);\n> -extern Datum binaryheap_remove_first(binaryheap *heap);\n> -extern void binaryheap_replace_first(binaryheap *heap, Datum d);\n> +extern void binaryheap_add(binaryheap *heap, void *d);\n> +extern void binaryheap_first(binaryheap *heap, void *result);\n> +extern void binaryheap_remove_first(binaryheap *heap, void *result);\n> +extern void binaryheap_replace_first(binaryheap *heap, void *d);\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 3 Sep 2023 12:04:00 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sun, Sep 03, 2023 at 12:04:00PM +0200, Alvaro Herrera wrote:\n> On 2023-Sep-02, Nathan Bossart wrote:\n>> I ended up hacking together a (nowhere near committable) patch to see how\n>> hard it would be to allow using any type with binaryheap. It doesn't seem\n>> too bad.\n> \n> Yeah, using void * seems to lead to interfaces that are pretty much the\n> same as bsearch() or qsort().\n\nRight. This is what I had in mind.\n\n> (Why isn't your payload type const,\n> though?)\n\nIt probably should be const. This patch was just a proof-of-concept and\nstill requireѕ a bit of work.\n\n> I do wonder why did you change _remove_first and _first to have a\n> 'result' output argument instead of a return value. Does this change\n> actually buy you anything? simplehash.h doesn't do that either.\n> \n>> -extern void binaryheap_add(binaryheap *heap, Datum d);\n>> -extern Datum binaryheap_first(binaryheap *heap);\n>> -extern Datum binaryheap_remove_first(binaryheap *heap);\n>> -extern void binaryheap_replace_first(binaryheap *heap, Datum d);\n>> +extern void binaryheap_add(binaryheap *heap, void *d);\n>> +extern void binaryheap_first(binaryheap *heap, void *result);\n>> +extern void binaryheap_remove_first(binaryheap *heap, void *result);\n>> +extern void binaryheap_replace_first(binaryheap *heap, void *d);\n\n_first could likely just return a pointer to the data in the binary heap's\narray. However, _remove_first has to copy the data somewhere, so I think\nthe alternative would be to return a palloc'd value. Is there another way\nthat I'm not thinking of?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 3 Sep 2023 08:11:16 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sat, Sep 02, 2023 at 11:55:21AM -0700, Nathan Bossart wrote:\n> I ended up hacking together a (nowhere near committable) patch to see how\n> hard it would be to allow using any type with binaryheap. It doesn't seem\n> too bad.\n\nI spent some more time on this patch and made the relevant adjustments to\nthe rest of the set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 4 Sep 2023 16:08:29 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I spent some more time on this patch and made the relevant adjustments to\n> the rest of the set.\n\nHmm ... I do not like v7 very much at all. It requires rather ugly\nchanges to all of the existing callers, and what are we actually\nbuying? If anything, it makes things slower for pass-by-value items\nlike integers. I'd stick with the Datum convention in the backend.\n\nInstead, I took a closer look through the v6 patch set.\nI think that's in pretty good shape and nearly committable,\nbut I have a few thoughts:\n\n* I'm not sure about defining bh_node_type as a macro:\n\n+#ifdef FRONTEND\n+#define bh_node_type void *\n+#else\n+#define bh_node_type Datum\n+#endif\n\nrather than an actual typedef:\n\n+#ifdef FRONTEND\n+typedef void *bh_node_type;\n+#else\n+typedef Datum bh_node_type;\n+#endif\n\nMy concern here is that bh_node_type is effectively acting as a\ntypedef, so that pgindent might misbehave if it's not declared as a\ntypedef. On the other hand, there doesn't seem to be any indentation\nproblem in the patchset as it stands, and we don't expect any code\noutside binaryheap.h/.c to refer to bh_node_type, so maybe it's fine.\n(If you do choose to make it a typedef, remember to add it to\ntypedefs.list.)\n\n* As a matter of style, I'd recommend adding braces in places\nlike this:\n\n \tif (heap->bh_size >= heap->bh_space)\n+\t{\n+#ifdef FRONTEND\n+\t\tpg_fatal(\"out of binary heap slots\");\n+#else\n \t\telog(ERROR, \"out of binary heap slots\");\n+#endif\n+\t}\n \theap->bh_nodes[heap->bh_size] = d;\n\nIt's not wrong as you have it, but I think it's more readable\nand less easy to accidentally break with the extra braces.\n\n* In 0002, isn't the comment for binaryheap_remove_node wrong?\n\n+ * Removes the nth node from the heap. The caller must ensure that there are\n+ * at least (n - 1) nodes in the heap. O(log n) worst case.\n\nShouldn't that be \"(n + 1)\"? Also, I'd specify \"n'th (zero based) node\"\nfor clarity.\n\n* I would say that this bit in 0004:\n\n-\t\tj = removeHeapElement(pendingHeap, heapLength--);\n+\t\tj = (intptr_t) binaryheap_remove_first(pendingHeap);\n\nneeds an explicit cast to int:\n\n+\t\tj = (int) (intptr_t) binaryheap_remove_first(pendingHeap);\n\notherwise some compilers might complain about the result possibly\nnot fitting in \"j\".\n\nOther than those nitpicks, I like v6. I'll mark this RfC.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Sep 2023 12:35:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Sun, Sep 10, 2023 at 12:35:10PM -0400, Tom Lane wrote:\n> Hmm ... I do not like v7 very much at all. It requires rather ugly\n> changes to all of the existing callers, and what are we actually\n> buying? If anything, it makes things slower for pass-by-value items\n> like integers. I'd stick with the Datum convention in the backend.\n> \n> Instead, I took a closer look through the v6 patch set.\n> I think that's in pretty good shape and nearly committable,\n> but I have a few thoughts:\n\nThanks for reviewing. I'm fine with proceeding with the v6 approach. Even\nthough the alternative approach makes the API consistent for the frontend\nand backend, I'm also not a huge fan of the pointer gymnastics required in\nthe comparators. Granted, we still have to do some intptr_t conversions in\npg_dump_sort.c with the v6 approach, but that seems to be an exception.\n\n> * I'm not sure about defining bh_node_type as a macro:\n> \n> +#ifdef FRONTEND\n> +#define bh_node_type void *\n> +#else\n> +#define bh_node_type Datum\n> +#endif\n> \n> rather than an actual typedef:\n> \n> +#ifdef FRONTEND\n> +typedef void *bh_node_type;\n> +#else\n> +typedef Datum bh_node_type;\n> +#endif\n> \n> My concern here is that bh_node_type is effectively acting as a\n> typedef, so that pgindent might misbehave if it's not declared as a\n> typedef. On the other hand, there doesn't seem to be any indentation\n> problem in the patchset as it stands, and we don't expect any code\n> outside binaryheap.h/.c to refer to bh_node_type, so maybe it's fine.\n> (If you do choose to make it a typedef, remember to add it to\n> typedefs.list.)\n\nI think a typedef makes more sense here.\n\n> * As a matter of style, I'd recommend adding braces in places\n> like this:\n> \n> \tif (heap->bh_size >= heap->bh_space)\n> +\t{\n> +#ifdef FRONTEND\n> +\t\tpg_fatal(\"out of binary heap slots\");\n> +#else\n> \t\telog(ERROR, \"out of binary heap slots\");\n> +#endif\n> +\t}\n> \theap->bh_nodes[heap->bh_size] = d;\n> \n> It's not wrong as you have it, but I think it's more readable\n> and less easy to accidentally break with the extra braces.\n\nFair point.\n\n> * In 0002, isn't the comment for binaryheap_remove_node wrong?\n> \n> + * Removes the nth node from the heap. The caller must ensure that there are\n> + * at least (n - 1) nodes in the heap. O(log n) worst case.\n> \n> Shouldn't that be \"(n + 1)\"? Also, I'd specify \"n'th (zero based) node\"\n> for clarity.\n\nYeah, that's a mistake.\n\n> * I would say that this bit in 0004:\n> \n> -\t\tj = removeHeapElement(pendingHeap, heapLength--);\n> +\t\tj = (intptr_t) binaryheap_remove_first(pendingHeap);\n> \n> needs an explicit cast to int:\n> \n> +\t\tj = (int) (intptr_t) binaryheap_remove_first(pendingHeap);\n> \n> otherwise some compilers might complain about the result possibly\n> not fitting in \"j\".\n\nSure. IMO it's a tad more readable, too.\n\n> Other than those nitpicks, I like v6. I'll mark this RfC.\n\nGreat. I've posted a v8 with your comments addressed in order to get one\nmore round of cfbot coverage. Assuming those tests pass and there is no\nadditional feedback, I'll plan on committing this in the next few days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Sep 2023 11:34:50 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 11:34:50AM -0700, Nathan Bossart wrote:\n> On Sun, Sep 10, 2023 at 12:35:10PM -0400, Tom Lane wrote:\n>> Other than those nitpicks, I like v6. I'll mark this RfC.\n> \n> Great. I've posted a v8 with your comments addressed in order to get one\n> more round of cfbot coverage. Assuming those tests pass and there is no\n> additional feedback, I'll plan on committing this in the next few days.\n\nUpon closer inspection, I found a rather nasty problem. The qsort\ncomparator expects a TocEntry **, but the binaryheap comparator expects a\nTocEntry *, and we simply pass the arguments through to the qsort\ncomparator. In v9, I added the requisite ampersands. I'm surprised this\nworked at all. I'm planning to run some additional tests to make sure this\npatch set works as expected.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Sep 2023 15:47:53 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Upon closer inspection, I found a rather nasty problem. The qsort\n> comparator expects a TocEntry **, but the binaryheap comparator expects a\n> TocEntry *, and we simply pass the arguments through to the qsort\n> comparator. In v9, I added the requisite ampersands.\n\nOoops :-(\n\n> I'm surprised this\n> worked at all.\n\nProbably it was not sorting things appropriately. Might be worth adding\nsome test scaffolding to check that bigger tasks are chosen before\nsmaller ones.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Sep 2023 20:01:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 08:01:39PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Upon closer inspection, I found a rather nasty problem. The qsort\n>> comparator expects a TocEntry **, but the binaryheap comparator expects a\n>> TocEntry *, and we simply pass the arguments through to the qsort\n>> comparator. In v9, I added the requisite ampersands.\n> \n> Ooops :-(\n> \n>> I'm surprised this\n>> worked at all.\n> \n> Probably it was not sorting things appropriately. Might be worth adding\n> some test scaffolding to check that bigger tasks are chosen before\n> smaller ones.\n\nFurther testing revealed that the binaryheap comparator function was\nactually generating a min-heap since the qsort comparator sorts by\ndecreasing dataLength. This is fixed in v10. And I am 0 for 2 today...\n\nNow that this appears to be functioning as expected, I see that the larger\nentries are typically picked up earlier, but we do sometimes pick entries\nquite a bit further down the list, as anticipated. The case I was testing\n(10k tables with the number of rows equal to the table number) was much\nfaster with this patch (just over a minute) than without it (over 16\nminutes).\n\nSincerest apologies for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Sep 2023 20:45:39 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "For now, I've committed 0001 and 0002. I intend to commit the others soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 14:22:32 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> For now, I've committed 0001 and 0002. I intend to commit the others soon.\n\nbowerbird is unhappy with this. I suppose you missed out updating\nthe src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Sep 2023 21:23:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> For now, I've committed 0001 and 0002. I intend to commit the others soon.\n> \n> bowerbird is unhappy with this. I suppose you missed out updating\n> the src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)\n\nI saw that and have attempted to fix it with 83223f5. I'm still waiting\nfor an MSVC animal to report back.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 18:28:58 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:\n> bowerbird is unhappy with this. I suppose you missed out updating\n> the src/tools/msvc/ scripts.\n> (Weren't we about ready to nuke those?)\n\nhamerkop seems to be the only buildfarm member that would complain if\nthese were to be gone today, on top of bowerbird, of course.\n--\nMichael",
"msg_date": "Tue, 19 Sep 2023 10:29:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Sep 18, 2023 at 09:23:20PM -0400, Tom Lane wrote:\n>> bowerbird is unhappy with this. I suppose you missed out updating\n>> the src/tools/msvc/ scripts. (Weren't we about ready to nuke those?)\n\n> I saw that and have attempted to fix it with 83223f5.\n\nAh, right, sorry for the noise.\n\nBut in any case, how long are we keeping src/tools/msvc/ ?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Sep 2023 21:36:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 09:36:03PM -0400, Tom Lane wrote:\n> But in any case, how long are we keeping src/tools/msvc/ ?\n\n From a skim of [0], it seems like it could be removed now. I see a couple\nof work-in-progress patches from Andres [1] that would probably serve as a\ngood starting point. I won't have much time for this for the next few\nweeks, so if someone else wants to pick it up, please feel free.\n\n[0] https://postgr.es/m/20230408191007.7lysd42euafwl74f%40awork3.anarazel.de\n[1] https://github.com/anarazel/postgres/commits/drop-homegrown-msvc\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Sep 2023 18:54:28 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 02:22:32PM -0700, Nathan Bossart wrote:\n> For now, I've committed 0001 and 0002. I intend to commit the others soon.\n\nI've committed the rest of the patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Sep 2023 19:30:33 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficiency in parallel pg_restore with many tables"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nWhile working on an extension I encountered a quite tricky question -\nthe extension (with functions in C) creates tables during function calls,\nthese tables must be protected from direct users' queries, at the same\ntime they must remain accessible for all functions of this extension\nfor all users allowed to use this extension.\n\nCould you please advise or give some hint on what is the correct (and\nsecure) way to implement this?\n\nCurrently I use the owner of the extension as owner when creating\nsuch a table inside the function, but maybe there are some pitfalls\nin this kind of solution?\n\nThanks in advance.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi hackers!While working on an extension I encountered a quite tricky question -the extension (with functions in C) creates tables during function calls,these tables must be protected from direct users' queries, at the sametime they must remain accessible for all functions of this extensionfor all users allowed to use this extension.Could you please advise or give some hint on what is the correct (andsecure) way to implement this?Currently I use the owner of the extension as owner when creatingsuch a table inside the function, but maybe there are some pitfallsin this kind of solution?Thanks in advance.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Sat, 15 Jul 2023 23:57:30 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Protect extension' internal tables - how?"
},
{
"msg_contents": "Hi,\n\n> Could you please advise or give some hint on what is the correct (and\n> secure) way to implement this?\n>\n> Currently I use the owner of the extension as owner when creating\n> such a table inside the function, but maybe there are some pitfalls\n> in this kind of solution?\n\nIf the goal is to protect the user from an _accidental_ access to the\ntables, placing them into a separate schema _my_extension_private or\nsomething will be enough.\n\nOtherwise consider using corresponding access control abilities of\nPostgreSQL and creating functions with SECURITY DEFINER [1]. Be\nmindful that your functions will become a target for privilege\nescalation, so you should be extra careful with the implementation.\n\n[1]: https://www.postgresql.org/docs/current/sql-createfunction.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 17 Jul 2023 15:48:58 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protect extension' internal tables - how?"
},
{
"msg_contents": "Hi,\n\nAleksander, thank you very much.\nTables are already placed into special schema, but there are some\ndynamically\ncreated tables and the goal is to protect all these tables from direct\ninsert, update\nand delete operations from users. I've read about the SECURITY DEFINER,\nit will do the trick.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Aleksander, thank you very much.Tables are already placed into special schema, but there are some dynamicallycreated tables and the goal is to protect all these tables from direct insert, updateand delete operations from users. I've read about the SECURITY DEFINER,it will do the trick.-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Tue, 18 Jul 2023 14:19:44 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protect extension' internal tables - how?"
}
] |
[
{
"msg_contents": "Hi,\n\nSeveral loops which are important for query performance, like heapgetpage()'s\nloop over all tuples, have to call functions like\nHeapCheckForSerializableConflictOut() and PredicateLockTID() in every\niteration.\n\nWhen serializable is not in use, all those functions do is to to return. But\nbeing situated in a different translation unit, the compiler can't inline\n(without LTO at least) the check whether serializability is needed. It's not\njust the function call overhead that's noticable, it's also that registers\nhave to be spilled to the stack / reloaded from memory etc.\n\nOn a freshly loaded pgbench scale 100, with turbo mode disabled, postgres\npinned to one core. Parallel workers disabled to reduce noise. All times are\nthe average of 15 executions with pgbench, in a newly started, but prewarmed\npostgres.\n\nSELECT * FROM pgbench_accounts OFFSET 10000000;\nHEAD:\n397.977\n\nremoving the HeapCheckForSerializableConflictOut() from heapgetpage()\n(incorrect!), to establish the baseline of what serializable costs:\n336.695\n\npulling out CheckForSerializableConflictOutNeeded() from\nHeapCheckForSerializableConflictOut() in heapgetpage(), and avoiding calling\nHeapCheckForSerializableConflictOut() in the loop:\n339.742\n\nmoving the loop into a static inline function, marked as pg_always_inline,\ncalled with static arguments for always_visible, check_serializable:\n326.546\n\nmarking the always_visible, !check_serializable case likely():\n322.249\n\nremoving TestForOldSnapshot() calls, which we pretty much already decided on:\n312.987\n\n\nFWIW, there's more we can do, with some hacky changes I got the time down to\n273.261, but the tradeoffs start to be a bit more complicated. And 397->320ms\nfor something as core as this, is imo worth considering on its own.\n\n\n\n\nNow, this just affects the sequential scan case. heap_hot_search_buffer()\nshares many of the same pathologies. I find it a bit harder to improve,\nbecause the compiler's code generation seems to switch between good / bad with\nchanges that seems unrelated...\n\n\nI wonder why we haven't used PageIsAllVisible() in heap_hot_search_buffer() so\nfar?\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 15 Jul 2023 18:56:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nRegards,\nZhang Mingli\nOn Jul 16, 2023 at 09:57 +0800, Andres Freund <[email protected]>, wrote:\n> Hi,\n>\n> Several loops which are important for query performance, like heapgetpage()'s\n> loop over all tuples, have to call functions like\n> HeapCheckForSerializableConflictOut() and PredicateLockTID() in every\n> iteration.\n>\n> When serializable is not in use, all those functions do is to to return. But\n> being situated in a different translation unit, the compiler can't inline\n> (without LTO at least) the check whether serializability is needed. It's not\n> just the function call overhead that's noticable, it's also that registers\n> have to be spilled to the stack / reloaded from memory etc.\n>\n> On a freshly loaded pgbench scale 100, with turbo mode disabled, postgres\n> pinned to one core. Parallel workers disabled to reduce noise. All times are\n> the average of 15 executions with pgbench, in a newly started, but prewarmed\n> postgres.\n>\n> SELECT * FROM pgbench_accounts OFFSET 10000000;\n> HEAD:\n> 397.977\n>\n> removing the HeapCheckForSerializableConflictOut() from heapgetpage()\n> (incorrect!), to establish the baseline of what serializable costs:\n> 336.695\n>\n> pulling out CheckForSerializableConflictOutNeeded() from\n> HeapCheckForSerializableConflictOut() in heapgetpage(), and avoiding calling\n> HeapCheckForSerializableConflictOut() in the loop:\n> 339.742\n>\n> moving the loop into a static inline function, marked as pg_always_inline,\n> called with static arguments for always_visible, check_serializable:\n> 326.546\n>\n> marking the always_visible, !check_serializable case likely():\n> 322.249\n>\n> removing TestForOldSnapshot() calls, which we pretty much already decided on:\n> 312.987\n>\n>\n> FWIW, there's more we can do, with some hacky changes I got the time down to\n> 273.261, but the tradeoffs start to be a bit more complicated. And 397->320ms\n> for something as core as this, is imo worth considering on its own.\n>\n>\n>\n>\n> Now, this just affects the sequential scan case. heap_hot_search_buffer()\n> shares many of the same pathologies. 
I find it a bit harder to improve,\n> because the compiler's code generation seems to switch between good / bad with\n> changes that seems unrelated...\n>\n>\n> I wonder why we haven't used PageIsAllVisible() in heap_hot_search_buffer() so\n> far?\n>\n>\n> Greetings,\n>\n> Andres Freund\n\nLGTM and I have a fool question:\n\n\tif (likely(all_visible))\n\t{\n\t\tif (likely(!check_serializable))\n\t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n\t\t\t\t\t\t\t\t\t\t\t\t block, lines, 1, 0);\n\t\telse\n\t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n\t\t\t\t\t\t\t\t\t\t\t\t block, lines, 1, 1);\n\t}\n\telse\n\t{\n\t\tif (likely(!check_serializable))\n\t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n\t\t\t\t\t\t\t\t\t\t\t\t block, lines, 0, 0);\n\t\telse\n\t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n\t\t\t\t\t\t\t\t\t\t\t\t block, lines, 0, 1);\n\n\nDoes it make sense to combine the if/else conditions and pass them directly as the inline function’s params?\n\nLike:\nscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n\t\t\t\t\t\t\t\t\t\t\t\t block, lines, all_visible, check_serializable);",
"msg_date": "Mon, 17 Jul 2023 09:55:07 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from\n serializable"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-17 09:55:07 +0800, Zhang Mingli wrote:\n> LGTM and I have a fool question:\n>\n> \tif (likely(all_visible))\n> \t{\n> \t\tif (likely(!check_serializable))\n> \t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t block, lines, 1, 0);\n> \t\telse\n> \t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t block, lines, 1, 1);\n> \t}\n> \telse\n> \t{\n> \t\tif (likely(!check_serializable))\n> \t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t block, lines, 0, 0);\n> \t\telse\n> \t\t\tscan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t block, lines, 0, 1);\n>\n>\n> Does it make sense to combine if else condition and put it to the incline function’s param?\n>\n> Like:\n> scan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> \t\t\t\t\t\t\t\t\t\t\t\t block, lines, all_visible, check_serializable);\n\nI think that makes it less likely that the compiler actually generates a\nconstant-folded version for each of the branches. Perhaps worth some\nexperimentation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jul 2023 07:58:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nIs there a plan to merge this patch in PG16?\n\nThanks,\nMuhammad\n\n________________________________\nFrom: Andres Freund <[email protected]>\nSent: Saturday, July 15, 2023 6:56 PM\nTo: [email protected] <[email protected]>\nCc: Thomas Munro <[email protected]>\nSubject: Improve heapgetpage() performance, overhead from serializable\n\nHi,\n\nSeveral loops which are important for query performance, like heapgetpage()'s\nloop over all tuples, have to call functions like\nHeapCheckForSerializableConflictOut() and PredicateLockTID() in every\niteration.\n\nWhen serializable is not in use, all those functions do is to to return. But\nbeing situated in a different translation unit, the compiler can't inline\n(without LTO at least) the check whether serializability is needed. It's not\njust the function call overhead that's noticable, it's also that registers\nhave to be spilled to the stack / reloaded from memory etc.\n\nOn a freshly loaded pgbench scale 100, with turbo mode disabled, postgres\npinned to one core. Parallel workers disabled to reduce noise. All times are\nthe average of 15 executions with pgbench, in a newly started, but prewarmed\npostgres.\n\nSELECT * FROM pgbench_accounts OFFSET 10000000;\nHEAD:\n397.977\n\nremoving the HeapCheckForSerializableConflictOut() from heapgetpage()\n(incorrect!), to establish the baseline of what serializable costs:\n336.695\n\npulling out CheckForSerializableConflictOutNeeded() from\nHeapCheckForSerializableConflictOut() in heapgetpage(), and avoiding calling\nHeapCheckForSerializableConflictOut() in the loop:\n339.742\n\nmoving the loop into a static inline function, marked as pg_always_inline,\ncalled with static arguments for always_visible, check_serializable:\n326.546\n\nmarking the always_visible, !check_serializable case likely():\n322.249\n\nremoving TestForOldSnapshot() calls, which we pretty much already decided on:\n312.987\n\n\nFWIW, there's more we can do, with some hacky changes I got the time down to\n273.261, but the tradeoffs start to be a bit more complicated. And 397->320ms\nfor something as core as this, is imo worth considering on its own.\n\n\n\n\nNow, this just affects the sequential scan case. heap_hot_search_buffer()\nshares many of the same pathologies. I find it a bit harder to improve,\nbecause the compiler's code generation seems to switch between good / bad with\nchanges that seems unrelated...\n\n\nI wonder why we haven't used PageIsAllVisible() in heap_hot_search_buffer() so\nfar?\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 31 Aug 2023 18:12:34 +0000",
"msg_from": "Muhammad Malik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "This thread [1] also Improving the heapgetpage function, and looks like\nthis thread.\n\n[1]\nhttps://www.postgresql.org/message-id/a9f40066-3d25-a240-4229-ec2fbe94e7a5%40yeah.net\n\nMuhammad Malik <[email protected]> 于2023年9月1日周五 04:04写道:\n\n> Hi,\n>\n> Is there a plan to merge this patch in PG16?\n>\n> Thanks,\n> Muhammad\n>\n> ------------------------------\n> *From:* Andres Freund <[email protected]>\n> *Sent:* Saturday, July 15, 2023 6:56 PM\n> *To:* [email protected] <[email protected]>\n> *Cc:* Thomas Munro <[email protected]>\n> *Subject:* Improve heapgetpage() performance, overhead from serializable\n>\n> Hi,\n>\n> Several loops which are important for query performance, like\n> heapgetpage()'s\n> loop over all tuples, have to call functions like\n> HeapCheckForSerializableConflictOut() and PredicateLockTID() in every\n> iteration.\n>\n> When serializable is not in use, all those functions do is to to return.\n> But\n> being situated in a different translation unit, the compiler can't inline\n> (without LTO at least) the check whether serializability is needed. It's\n> not\n> just the function call overhead that's noticable, it's also that registers\n> have to be spilled to the stack / reloaded from memory etc.\n>\n> On a freshly loaded pgbench scale 100, with turbo mode disabled, postgres\n> pinned to one core. Parallel workers disabled to reduce noise. All times\n> are\n> the average of 15 executions with pgbench, in a newly started, but\n> prewarmed\n> postgres.\n>\n> SELECT * FROM pgbench_accounts OFFSET 10000000;\n> HEAD:\n> 397.977\n>\n> removing the HeapCheckForSerializableConflictOut() from heapgetpage()\n> (incorrect!), to establish the baseline of what serializable costs:\n> 336.695\n>\n> pulling out CheckForSerializableConflictOutNeeded() from\n> HeapCheckForSerializableConflictOut() in heapgetpage(), and avoiding\n> calling\n> HeapCheckForSerializableConflictOut() in the loop:\n> 339.742\n>\n> moving the loop into a static inline function, marked as pg_always_inline,\n> called with static arguments for always_visible, check_serializable:\n> 326.546\n>\n> marking the always_visible, !check_serializable case likely():\n> 322.249\n>\n> removing TestForOldSnapshot() calls, which we pretty much already decided\n> on:\n> 312.987\n>\n>\n> FWIW, there's more we can do, with some hacky changes I got the time down\n> to\n> 273.261, but the tradeoffs start to be a bit more complicated. And\n> 397->320ms\n> for something as core as this, is imo worth considering on its own.\n>\n>\n>\n>\n> Now, this just affects the sequential scan case. heap_hot_search_buffer()\n> shares many of the same pathologies. 
I find it a bit harder to improve,\n> because the compiler's code generation seems to switch between good / bad\n> with\n> changes that seems unrelated...\n>\n>\n> I wonder why we haven't used PageIsAllVisible() in\n> heap_hot_search_buffer() so\n> far?\n>\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Fri, 1 Sep 2023 14:07:37 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 9:58 PM Andres Freund <[email protected]> wrote:\n\n> FWIW, there's more we can do, with some hacky changes I got the time down\nto\n> 273.261, but the tradeoffs start to be a bit more complicated. And\n397->320ms\n> for something as core as this, is imo worth considering on its own.\n\nNice!\n\n> On 2023-07-17 09:55:07 +0800, Zhang Mingli wrote:\n\n> > Does it make sense to combine if else condition and put it to the\nincline function’s param?\n> >\n> > Like:\n> > scan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> >\n block, lines, all_visible, check_serializable);\n>\n> I think that makes it less likely that the compiler actually generates a\n> constant-folded version for each of the branches. Perhaps worth some\n> experimentation.\n\nCombining this way doesn't do so for me.\n\nMinor style nit:\n\n+ scan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n+ block, lines, 0, 1);\n\nI believe we prefer true/false rather than numbers.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Sep 2023 14:42:57 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2023-09-05 14:42:57 +0700, John Naylor wrote:\n> On Mon, Jul 17, 2023 at 9:58 PM Andres Freund <[email protected]> wrote:\n>\n> > FWIW, there's more we can do, with some hacky changes I got the time down\n> to\n> > 273.261, but the tradeoffs start to be a bit more complicated. And\n> 397->320ms\n> > for something as core as this, is imo worth considering on its own.\n>\n> Nice!\n>\n> > On 2023-07-17 09:55:07 +0800, Zhang Mingli wrote:\n>\n> > > Does it make sense to combine if else condition and put it to the\n> incline function’s param?\n> > >\n> > > Like:\n> > > scan->rs_ntuples = heapgetpage_collect(scan, snapshot, page, buffer,\n> > >\n> block, lines, all_visible, check_serializable);\n> >\n> > I think that makes it less likely that the compiler actually generates a\n> > constant-folded version for each of the branches. Perhaps worth some\n> > experimentation.\n>\n> Combining this way doesn't do so for me.\n\nAre you saying that the desired constant folding happened after combining the\nbranches, or that it didn't happen?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:38:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 1:38 AM Andres Freund <[email protected]> wrote:\n\n> > > I think that makes it less likely that the compiler actually\ngenerates a\n> > > constant-folded version for each of the branches. Perhaps worth some\n> > > experimentation.\n> >\n> > Combining this way doesn't do so for me.\n>\n> Are you saying that the desired constant folding happened after combining\nthe\n> branches, or that it didn't happen?\n\nConstant folding did not happen.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 6 Sep 2023 10:14:18 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 1:08 PM tender wang <[email protected]> wrote:\n>\n> This thread [1] also Improving the heapgetpage function, and looks like\nthis thread.\n>\n> [1]\nhttps://www.postgresql.org/message-id/a9f40066-3d25-a240-4229-ec2fbe94e7a5%40yeah.net\n\nPlease don't top-post.\n\nFor the archives: That CF entry has been withdrawn, after the author looked\nat this one and did some testing.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Sep 2023 13:14:02 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 9:58 PM Andres Freund <[email protected]> wrote:\n> And 397->320ms\n> for something as core as this, is imo worth considering on its own.\n\nHi Andres, this interesting work seems to have fallen off the radar --\nare you still planning to move forward with this for v17?\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:01:31 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-22 13:01:31 +0700, John Naylor wrote:\n> On Mon, Jul 17, 2023 at 9:58 PM Andres Freund <[email protected]> wrote:\n> > And 397->320ms\n> > for something as core as this, is imo worth considering on its own.\n> \n> Hi Andres, this interesting work seems to have fallen off the radar --\n> are you still planning to move forward with this for v17?\n\nI had completely forgotten about this patch, but some discussion around\nstreaming read reminded me of it. Here's a rebased version, with conflicts\nresolved and very light comment polish and a commit message. Given that\nthere's been no changes otherwise in the last months, I'm inclined to push in\na few hours.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 6 Apr 2024 21:49:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Sun, Apr 7, 2024 at 11:49 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-01-22 13:01:31 +0700, John Naylor wrote:\n> > On Mon, Jul 17, 2023 at 9:58 PM Andres Freund <[email protected]> wrote:\n> > > And 397->320ms\n> > > for something as core as this, is imo worth considering on its own.\n> >\n> > Hi Andres, this interesting work seems to have fallen off the radar --\n> > are you still planning to move forward with this for v17?\n>\n> I had completely forgotten about this patch, but some discussion around\n> streaming read reminded me of it. Here's a rebased version, with conflicts\n> resolved and very light comment polish and a commit message. Given that\n> there's been no changes otherwise in the last months, I'm inclined to push in\n> a few hours.\n\nJust in time ;-) The commit message should also have \"reviewed by\nZhang Mingli\" and \"tested by Quan Zongliang\", who shared results in a\nthread for a withrawn CF entry with a similar idea but covering fewer\ncases:\n\nhttps://www.postgresql.org/message-id/2ef7ff1b-3d18-2283-61b1-bbd25fc6c7ce%40yeah.net\n\n\n",
"msg_date": "Sun, 7 Apr 2024 12:07:22 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-07 12:07:22 +0700, John Naylor wrote:\n> Just in time ;-) The commit message should also have \"reviewed by\n> Zhang Mingli\" and \"tested by Quan Zongliang\", who shared results in a\n> thread for a withrawn CF entry with a similar idea but covering fewer\n> cases:\n\nGood call. Added and pushed.\n\nThanks,\n\nAndres\n\n\n",
"msg_date": "Sun, 7 Apr 2024 00:30:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Sun, 7 Apr 2024 at 19:30, Andres Freund <[email protected]> wrote:\n> Good call. Added and pushed.\n\nI understand you're already aware of the reference in the comment to\nheapgetpage(), which no longer exists as of 44086b097.\n\nMelanie and I had discussed the heap_prepare_pagescan() name while I\nwas reviewing that recent refactor. Aside from fixing the comment, how\nabout also renaming heapgetpage_collect() to\nheap_prepare_pagescan_tuples()?\n\nPatch attached for reference. Not looking for any credit.\n\nI'm also happy to revisit the heap_prepare_pagescan() name and call\nheapgetpage_collect() some appropriate derivative of whatever we'd\nrename that to.\n\nCopied Melanie as she may want to chime in too.\n\nDavid",
"msg_date": "Mon, 8 Apr 2024 14:43:21 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 14:43:21 +1200, David Rowley wrote:\n> On Sun, 7 Apr 2024 at 19:30, Andres Freund <[email protected]> wrote:\n> > Good call. Added and pushed.\n> \n> I understand you're already aware of the reference in the comment to\n> heapgetpage(), which no longer exists as of 44086b097.\n\nYea, https://postgr.es/m/20240407172615.cocrsvboqm3ttqe4%40awork3.anarazel.de\n\n\n> Melanie and I had discussed the heap_prepare_pagescan() name while I\n> was reviewing that recent refactor. Aside from fixing the comment, how\n> about also renaming heapgetpage_collect() to\n> heap_prepare_pagescan_tuples()?\n\n> Patch attached for reference. Not looking for any credit.\n> \n> I'm also happy to revisit the heap_prepare_pagescan() name and call\n> heapgetpage_collect() some appropriate derivative of whatever we'd\n> rename that to.\n\nI kinda don't like heap_prepare_pagescan(), but heapgetpage() is worse. And I\ndon't have a great alternative suggestion.\n\nOff-list Melanie suggested heap_page_collect_visible_tuples(). Given the\nseparate callsites (making long names annoying) and the fact that it's really\nspecific to one caller, I'm somewhat inclined to just go with\ncollect_visible_tuples() or page_collect_visible(), I think I prefer the\nlatter.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 20:13:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 15:13, Andres Freund <[email protected]> wrote:\n> I kinda don't like heap_prepare_pagescan(), but heapgetpage() is worse. And I\n> don't have a great alternative suggestion.\n\nIt came around from having nothing better. I was keen not to have the\nname indicate it was only for checking visibility as we're also\nchecking for serialization conflicts and pruning the page. The word\n\"prepare\" made it there as it seemed generic enough to not falsely\nindicate it was only checking visibility. Also, it seemed good to\nkeep it generic as if we one day end up with something new that needs\nto be done before scanning a page in page mode then that new code is\nmore likely to be put in the function with a generic name rather than\na function that appears to be named for some other unrelated task. It\nwould be nice not to end up with two functions to call before scanning\na page in page mode.\n\n> Off-list Melanie suggested heap_page_collect_visible_tuples(). Given the\n> separate callsites (making long names annoying) and the fact that it's really\n> specific to one caller, I'm somewhat inclined to just go with\n> collect_visible_tuples() or page_collect_visible(), I think I prefer the\n> latter.\n\nI understand wanting to avoid the long name. I'd rather stay clear of\n\"visible\", but don't feel as strongly about this as it's static.\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Apr 2024 15:43:12 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 15:43:12 +1200, David Rowley wrote:\n> On Mon, 8 Apr 2024 at 15:13, Andres Freund <[email protected]> wrote:\n> > Off-list Melanie suggested heap_page_collect_visible_tuples(). Given the\n> > separate callsites (making long names annoying) and the fact that it's really\n> > specific to one caller, I'm somewhat inclined to just go with\n> > collect_visible_tuples() or page_collect_visible(), I think I prefer the\n> > latter.\n> \n> I understand wanting to avoid the long name. I'd rather stay clear of\n> \"visible\", but don't feel as strongly about this as it's static.\n\nI think visible would be ok, the serialization checks are IMO about\nvisibility too. But if you'd prefer I'd also be ok with something like\npage_collect_tuples()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 21:08:29 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "On Mon, 8 Apr 2024 at 16:08, Andres Freund <[email protected]> wrote:\n>\n> On 2024-04-08 15:43:12 +1200, David Rowley wrote:\n> > I understand wanting to avoid the long name. I'd rather stay clear of\n> > \"visible\", but don't feel as strongly about this as it's static.\n>\n> I think visible would be ok, the serialization checks are IMO about\n> visibility too. But if you'd prefer I'd also be ok with something like\n> page_collect_tuples()?\n\nThat's ok for me.\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Apr 2024 16:18:21 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-08 16:18:21 +1200, David Rowley wrote:\n> On Mon, 8 Apr 2024 at 16:08, Andres Freund <[email protected]> wrote:\n> > I think visible would be ok, the serialization checks are IMO about\n> > visibility too. But if you'd prefer I'd also be ok with something like\n> > page_collect_tuples()?\n> \n> That's ok for me.\n\nCool, pushed that way.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2024 22:13:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve heapgetpage() performance, overhead from serializable"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nAs a follow-up for the CVE-2023-2454 fix, I think that it makes sense to\ncompletely remove unsafe functions\nPushOverrideSearchPath()/PopOverrideSearchPath(), which are not used in the\ncore now.\nPlease look at the patch attached.\n\nBeside that, maybe it's worth to rename three functions in \"Override\" in\ntheir names: GetOverrideSearchPath(), CopyOverrideSearchPath(),\nOverrideSearchPathMatchesCurrent(), and then maybe struct OverrideSearchPath.\nNoah Misch proposed name GetSearchPathMatcher() for the former.\n\nWhat do you think?\n\nBest regards,\nAlexander",
"msg_date": "Sun, 16 Jul 2023 13:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting rid of OverrideSearhPath in namespace.c"
},
{
"msg_contents": "Hi,\n\n> As a follow-up for the CVE-2023-2454 fix, I think that it makes sense to\n> completely remove unsafe functions\n> PushOverrideSearchPath()/PopOverrideSearchPath(), which are not used in the\n> core now.\n> Please look at the patch attached.\n>\n> [...]\n>\n> What do you think?\n\n+1 to remove dead code.\n\nThe proposed patch however removes get_collation_oid(), apparently by\nmistake. Other than that the patch looks fine and passes `make\ninstallcheck-world`.\n\nI added an entry to the nearest CF [1].\n\n> Beside that, maybe it's worth to rename three functions in \"Override\" in\n> their names: GetOverrideSearchPath(), CopyOverrideSearchPath(),\n> OverrideSearchPathMatchesCurrent(), and then maybe struct OverrideSearchPath.\n> Noah Misch proposed name GetSearchPathMatcher() for the former.\n\n+1 as well. I added the corresponding 0002 patch.\n\n[1] https://commitfest.postgresql.org/44/4447/\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 17 Jul 2023 17:11:46 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of OverrideSearhPath in namespace.c"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 05:11:46PM +0300, Aleksander Alekseev wrote:\n> > As a follow-up for the CVE-2023-2454 fix, I think that it makes sense to\n> > completely remove unsafe functions\n> > PushOverrideSearchPath()/PopOverrideSearchPath(), which are not used in the\n> > core now.\n> > Please look at the patch attached.\n> >\n> > [...]\n> >\n> > What do you think?\n> \n> +1 to remove dead code.\n> \n> The proposed patch however removes get_collation_oid(), apparently by\n> mistake. Other than that the patch looks fine and passes `make\n> installcheck-world`.\n> \n> I added an entry to the nearest CF [1].\n> \n> > Beside that, maybe it's worth to rename three functions in \"Override\" in\n> > their names: GetOverrideSearchPath(), CopyOverrideSearchPath(),\n> > OverrideSearchPathMatchesCurrent(), and then maybe struct OverrideSearchPath.\n> > Noah Misch proposed name GetSearchPathMatcher() for the former.\n> \n> +1 as well. I added the corresponding 0002 patch.\n\nPushed both. Thanks.\n\n\n",
"msg_date": "Mon, 31 Jul 2023 20:06:28 -0400",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of OverrideSearhPath in namespace.c"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile scanning the code, I have noticed that a couple of code paths\nthat do syscache lookups are passing down directly Oids rather than\nDatums. I think that we'd better be consistent here, even if there is\nno actual bug.\n\nI have noticed 11 callers of SearchSysCache*() that pass down\nan Oid instead of a Datum.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Mon, 17 Jul 2023 20:10:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "Hi,\n\n> I have noticed 11 callers of SearchSysCache*() that pass down\n> an Oid instead of a Datum.\n\nGood catch.\n\n> I think that we'd better be consistent here, even if there is\n> no actual bug.\n>\n\n+1\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 17 Jul 2023 15:36:45 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "Hi\n\nRegards,\nZhang Mingli\nOn Jul 17, 2023 at 19:10 +0800, Michael Paquier <[email protected]>, wrote:\n> Hi all,\n>\n> While scanning the code, I have noticed that a couple of code paths\n> that do syscache lookups are passing down directly Oids rather than\n> Datums. I think that we'd better be consistent here, even if there is\n> no actual bug.\n>\n> I have noticed 11 callers of SearchSysCache*() that pass down\n> an Oid instead of a Datum.\n>\n> Thoughts or comments?\n> --\n> Michael\nLGTM, and there are two functions missed, in sequence_options\n\n pgstuple = SearchSysCache1(SEQRELID, relid);\n\nShall we fix that too?",
"msg_date": "Mon, 17 Jul 2023 21:09:01 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "Hi,\n\nRegards,\nZhang Mingli\nOn Jul 17, 2023 at 21:09 +0800, Zhang Mingli <[email protected]>, wrote:\n> sequence_options\nAnd inside pg_sequence_parameters:\n\tpgstuple = SearchSysCache1(SEQRELID, relid);",
"msg_date": "Mon, 17 Jul 2023 21:11:22 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "Hi,\n\n> And inside pg_sequence_parameters:\n> pgstuple = SearchSysCache1(SEQRELID, relid);\n\nFound another one in partcache.c:\n\n```\n /* Get pg_class.relpartbound */\n tuple = SearchSysCache1(RELOID, RelationGetRelid(rel));\n```\n\nI can't be 100% sure but it looks like that's all of them. PFA the\nupdated patch v2.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 17 Jul 2023 17:33:42 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "Hi,\n\n> > And inside pg_sequence_parameters:\n> > pgstuple = SearchSysCache1(SEQRELID, relid);\n>\n> Found another one in partcache.c:\n>\n> ```\n> /* Get pg_class.relpartbound */\n> tuple = SearchSysCache1(RELOID, RelationGetRelid(rel));\n> ```\n>\n> I can't be 100% sure but it looks like that's all of them. PFA the\n> updated patch v2.\n\nAdded a CF entry, just in case:\nhttps://commitfest.postgresql.org/44/4448/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 17 Jul 2023 17:38:41 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 05:33:42PM +0300, Aleksander Alekseev wrote:\n> I can't be 100% sure but it looks like that's all of them. PFA the\n> updated patch v2.\n\nThanks. Yes, this stuff is easy to miss. I was just grepping for a\nfew patterns and missed these two.\n--\nMichael",
"msg_date": "Tue, 18 Jul 2023 07:27:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 07:27:02AM +0900, Michael Paquier wrote:\n> On Mon, Jul 17, 2023 at 05:33:42PM +0300, Aleksander Alekseev wrote:\n> > I can't be 100% sure but it looks like that's all of them. PFA the\n> > updated patch v2.\n> \n> Thanks. Yes, this stuff is easy to miss. I was just grepping for a\n> few patterns and missed these two.\n\nSpotted a few more of these things after a second lookup.\n\nOne for subscriptions:\nsrc/backend/commands/alter.c:\nif (SearchSysCacheExists2(SUBSCRIPTIONNAME, MyDatabaseId,\n\nAnd two for transforms:\nsrc/backend/utils/cache/lsyscache.c:\ntup = SearchSysCache2(TRFTYPELANG, typid, langid);\nsrc/backend/utils/cache/lsyscache.c:\ntup = SearchSysCache2(TRFTYPELANG, typid, langid);\n\nAnd applied the whole. Thanks for looking and spot more of these\ninconsistencies!\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 15:28:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ObjectIdGetDatum() missing from SearchSysCache*() callers"
}
] |
[
{
"msg_contents": "Hi all,\n\nI’m a security engineer and I’m looking into restricting the set of allowed ciphers on Postgres and configure a concrete set of curves on our postgres instances.\n\nI see in the source code that only TLS 1.2 and bellow cipher lists can be configured:\n\nhttps://github.com/postgres/postgres/blob/master/src/backend/libpq/be-secure-openssl.c#L281\n\nand Postgres relies on the OpenSSL defaults for TLS 1.3 ciphersuites.\n\nMy first question is whether there is a reason not to support setting TLS 1.3 cipher suites through configuration ? Maybe there are Postgres builds with BoringSSL ? (Just speculating ?)\n\nAnother thing I was curious about is why does postgres opts to support setting only a single elliptic group (https://github.com/postgres/postgres/blob/master/src/backend/libpq/be-secure-openssl.c#L1303) instead of calling out to an SSL function like SSL_CTX_set1_curves_list ?\n\nWould the community be interested in seeing patches for setting TLS 1.3 ciphersuites and expanding the configuration option for EC settings to support lists instead of single values ?\n\nThanks,\nSeraphime Kirkovski",
"msg_date": "Mon, 17 Jul 2023 13:16:02 +0000",
"msg_from": "Seraphime Kirkovski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fine-tune TLS 1.3 cipher suites and curves lists"
},
{
"msg_contents": "> On 17 Jul 2023, at 15:16, Seraphime Kirkovski <[email protected]> wrote:\n\n> I see in the source code that only TLS 1.2 and bellow cipher lists can be configured:\n> \n> https://github.com/postgres/postgres/blob/master/src/backend/libpq/be-secure-openssl.c#L281\n> \n> and Postgres relies on the OpenSSL defaults for TLS 1.3 ciphersuites.\n> \n> My first question is whether there is a reason not to support setting TLS 1.3 cipher suites through configuration ? Maybe there are Postgres builds with BoringSSL ? (Just speculating ?)\n\nI think the main raison is that noone has done it, and noone has requested it.\nI have no way if knowing for certain, but I doubt too many postgres users\nchange this setting.\n\n> Another thing I was curious about is why does postgres opts to support setting only a single elliptic group (https://github.com/postgres/postgres/blob/master/src/backend/libpq/be-secure-openssl.c#L1303) instead of calling out to an SSL function like SSL_CTX_set1_curves_list ?\n> \n> Would the community be interested in seeing patches for setting TLS 1.3 ciphersuites and expanding the configuration option for EC settings to support lists instead of single values ? \n\nI would be interested in seeing them, and would offer to review them.\n\nThe main challenge is IMO to properly document these settings such that\npostgres users know what they are, and when they should think about changing\nthem. Postgres also supports very old OpenSSL versions, so any change and\nsetting must in some way make sense for those installations (which may be a\nno-op, a warning at startup for non-applicable settings, or something else).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 17 Jul 2023 22:06:07 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fine-tune TLS 1.3 cipher suites and curves lists"
}
] |
[
{
"msg_contents": "Hey list,\n\nI was working on a project with event triggers and was wondering if there\nwas any context from the developers around why some things make this list\nand others do not. Example: REVOKE/ GRANT are in the event trigger matrix\n[1] but REINDEX is not. Just wondering if there's a mailing list thread or\na commit message that has more info. I can't seem to find anything in the\npostgres list archives. Thanks!\n\n[1] https://www.postgresql.org/docs/15/event-trigger-matrix.html",
"msg_date": "Mon, 17 Jul 2023 08:56:36 -0600",
"msg_from": "Garrett Thornburg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Looking for context around which event triggers are permitted"
},
{
"msg_contents": "Hi,\n\n> I was working on a project with event triggers and was wondering if there was any context from the developers around why some things make this list and others do not. Example: REVOKE/ GRANT are in the event trigger matrix [1] but REINDEX is not. Just wondering if there's a mailing list thread or a commit message that has more info. I can't seem to find anything in the postgres list archives. Thanks!\n>\n> [1] https://www.postgresql.org/docs/15/event-trigger-matrix.html\n\nGood question. My guess would be that no one really needed an event\ntrigger for REINDEX so far.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 17 Jul 2023 18:26:32 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
},
{
"msg_contents": "On Mon, 17 Jul 2023 at 11:26, Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi,\n>\n> > I was working on a project with event triggers and was wondering if\n> there was any context from the developers around why some things make this\n> list and others do not. Example: REVOKE/ GRANT are in the event trigger\n> matrix [1] but REINDEX is not. Just wondering if there's a mailing list\n> thread or a commit message that has more info. I can't seem to find\n> anything in the postgres list archives. Thanks!\n> >\n> > [1] https://www.postgresql.org/docs/15/event-trigger-matrix.html\n>\n> Good question. My guess would be that no one really needed an event\n> trigger for REINDEX so far.\n>\n\nMy answer is not authoritative, but I notice that ANALYZE and VACUUM are\nalso not there. Those, together with REINDEX, are maintenance commands,\nwhich normally should not affect which queries you can run or their\nresults. If we think of the queries we can run and the objects we can run\nthem against as forming an abstraction with maintenance commands breaking\nthe abstraction, then we can think of event triggers as operating against\nthe abstraction layer, not the underlying maintenance layer.\n\nOn the other hand, the event triggers include tags related to indexes,\nwhich themselves (except for enforcement of uniqueness) in some sense sit\nbelow the abstraction: presence of an index can affect the query plan and\nhow efficient it is, but shouldn't change the result of a query or whether\nit is a valid query. So this is not a fully satisfactory explanation.",
"msg_date": "Mon, 17 Jul 2023 11:39:52 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
},
{
"msg_contents": "That's a good point, Isaac. Select into, security label, comment, etc are\nall maintenance style commands but are already added to the matrix. I do\nthink there's a good case to include other maintenance related commands as\nevent triggers. Suppose you want to know the last time a table was vacuumed\nor the last time a table was reindexed. If you can trigger off of these\nmaintenance commands, there's a lot you could build on top of postgres to\nmake the maintenance experience easier. Seems like a positive thing.\n\nThe code exists but they are disabled at the moment. Happy to enable those\nwith a patch if it's as Aleksander said. Meaning, no real reason they were\ndisabled other than someone thought folks wouldn't need them.",
"msg_date": "Mon, 17 Jul 2023 10:04:42 -0600",
"msg_from": "Garrett Thornburg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
},
{
"msg_contents": "Hi,\n\n> Happy to enable those with a patch if it's as Aleksander said. Meaning, no real reason they were disabled other than someone thought folks wouldn't need them.\n\nSure, please feel free submitting the patch and we will see how it\ngoes. I don't foresee a strong push-back from the community, but this\nbeing said you can never be certain.\n\nIdeally the patch should include corresponding tests and changes to\nthe documentation. If you will experience difficulties with those,\nthat's fine, submit the patch as is. Somebody (me, perhaps) will add\nthem if necessary.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 17 Jul 2023 19:31:19 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
},
{
"msg_contents": "On 2023-Jul-17, Garrett Thornburg wrote:\n\n> That's a good point, Isaac. Select into, security label, comment, etc are\n> all maintenance style commands but are already added to the matrix. I do\n> think there's a good case to include other maintenance related commands as\n> event triggers. Suppose you want to know the last time a table was vacuumed\n> or the last time a table was reindexed. If you can trigger off of these\n> maintenance commands, there's a lot you could build on top of postgres to\n> make the maintenance experience easier. Seems like a positive thing.\n> \n> The code exists but they are disabled at the moment. Happy to enable those\n> with a patch if it's as Aleksander said. Meaning, no real reason they were\n> disabled other than someone thought folks wouldn't need them.\n\nYeah, as I recall, initially there were two use cases considered for\nevent triggers:\n\n1. DDL replication. For this, you need to capture commands that somehow\nmodify the set of objects that exist in the database. So creating an\nindex or COMMENT are important, but reindexing one isn't.\n\n2. DDL auditing. Pretty much the same as above. You don't really care\nwhen vacuuming occurs, but if a table changes ownership or a security\nlabel is modified, that needs to be kept track of.\n\n\nLater, a further use case was added to enable people avoid long-running\ntable locking behavior: you only want to let your devs run ALTER TABLE\nin production if it's going to finish really quick. So table_rewriting\nappeared and allowed some further options. (As for SELECT INTO, it may\nbe that it is only there because it's very close in implementation to\nCREATE TABLE AS, which naturally needs to be logged for auditing\npurposes ... but I'm not sure.)\n\n\nI'm wondering why you want REINDEX reported to an event trigger. What's\nyour use case?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 17 Jul 2023 18:31:21 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
},
{
"msg_contents": "That makes sense and is similar to the problem I'm hoping to solve for our\nteam. We had a DB upgrade that corrupted a few indexes. Gitlab went through\nsomething similar as part of their OS/ DB upgrade. We had to concurrently\nreindex everything. This took a few days and just to make sure we completed\nthis, we reindexed again. If we had had a way to log the event to a table\nfor each index, it would have made our lives a lot easier.\n\nAt a more high level though, it really made me wish there was a way to\naudit these things. Sounds like that is what event triggers were designed\nfor and adding a few more operations could prove useful. Example: You can\ntrack Create/Alter/Drop of a table's lifecycle, capturing timestamps in a\ntable, but not indexes without REINDEX.\n\nOn Mon, Jul 17, 2023 at 10:31 AM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2023-Jul-17, Garrett Thornburg wrote:\n>\n> > That's a good point, Isaac. Select into, security label, comment, etc are\n> > all maintenance style commands but are already added to the matrix. I do\n> > think there's a good case to include other maintenance related commands\n> as\n> > event triggers. Suppose you want to know the last time a table was\n> vacuumed\n> > or the last time a table was reindexed. If you can trigger off of these\n> > maintenance commands, there's a lot you could build on top of postgres to\n> > make the maintenance experience easier. Seems like a positive thing.\n> >\n> > The code exists but they are disabled at the moment. Happy to enable\n> those\n> > with a patch if it's as Aleksander said. Meaning, no real reason they\n> were\n> > disabled other than someone thought folks wouldn't need them.\n>\n> Yeah, as I recall, initially there were two use cases considered for\n> event triggers:\n>\n> 1. DDL replication. For this, you need to capture commands that somehow\n> modify the set of objects that exist in the database. So creating an\n> index or COMMENT are important, but reindexing one isn't.\n>\n> 2. DDL auditing. Pretty much the same as above. You don't really care\n> when vacuuming occurs, but if a table changes ownership or a security\n> label is modified, that needs to be kept track of.\n>\n>\n> Later, a further use case was added to enable people avoid long-running\n> table locking behavior: you only want to let your devs run ALTER TABLE\n> in production if it's going to finish really quick. So table_rewriting\n> appeared and allowed some further options. (As for SELECT INTO, it may\n> be that it is only there because it's very close in implementation to\n> CREATE TABLE AS, which naturally needs to be logged for auditing\n> purposes ... but I'm not sure.)\n>\n>\n> I'm wondering why you want REINDEX reported to an event trigger. What's\n> your use case?\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>",
"msg_date": "Mon, 17 Jul 2023 11:58:07 -0600",
"msg_from": "Garrett Thornburg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Looking for context around which event triggers are permitted"
}
] |
[
{
"msg_contents": "Hi,\n\nIn a number of workloads one can see two wait events prominently:\nLWLock:WALWrite and LWLock:WALInsert. Unfortunately for both that is not very\ninformative:\n\nLWLock:WALWrite can be reported because there is genuine contention on the\nLWLock, or, more commonly, because several backends are waiting for another to\nfinish IO. In the latter case we are not actually waiting to acquire the lock,\nwe are waiting for the lock to be released *without* then acquiring it.\n\nLWLock:WALInsert can be reported because there are not enough WALInsert locks\n(c.f. NUM_XLOGINSERT_LOCKS) or because we are waiting for another backend to\nfinish copying a WAL record into wal_buffers. In the latter case we are\ntherefore not waiting to acquire an LWLock.\n\n\nI think both of these cases are relevant to distinguish from an operational\nperspective. Secondarily, I've received many questions about making those\nlocks more scalable / granular, when in most of the cases the issue was not\nactual lock contention.\n\nToday it's surprisingly hard to figure out whether the issue is lock\ncontention or the speed of copying buffers for WAL insert locks / computing\nthe last prc of the CRC checksum.\n\n\nTherefore I'm proposing that LWLockAcquireOrWait() and LWLockWaitForVar() not\nuse the \"generic\" LWLockReportWaitStart(), but use caller provided wait\nevents. The attached patch adds two new wait events for the existing callers.\n\nI waffled a bit about which wait event section to add these to. Ended up with\n\"IPC\", but would be happy to change that.\n\nWAIT_EVENT_WAL_WAIT_INSERT WALWaitInsert \"Waiting for WAL record to be copied into buffers.\"\nWAIT_EVENT_WAL_WAIT_WRITE WALWaitWrite \"Waiting for WAL buffers to be written or flushed to disk.\"\n\n\nPreviously it was e.g. not really possible to distinguish that something like\nthis:\n\n┌────────────────┬─────────────────┬────────────┬───────┐\n│ backend_type │ wait_event_type │ wait_event │ count │\n├────────────────┼─────────────────┼────────────┼───────┤\n│ client backend │ LWLock │ WALInsert │ 32 │\n│ client backend │ (null) │ (null) │ 9 │\n│ walwriter │ IO │ WALWrite │ 1 │\n│ client backend │ Client │ ClientRead │ 1 │\n│ client backend │ LWLock │ WALWrite │ 1 │\n└────────────────┴─────────────────┴────────────┴───────┘\n\nis a workload with a very different bottleneck than this:\n\n┌────────────────┬─────────────────┬───────────────┬───────┐\n│ backend_type │ wait_event_type │ wait_event │ count │\n├────────────────┼─────────────────┼───────────────┼───────┤\n│ client backend │ IPC │ WALWaitInsert │ 22 │\n│ client backend │ LWLock │ WALInsert │ 13 │\n│ client backend │ LWLock │ WALBufMapping │ 5 │\n│ walwriter │ (null) │ (null) │ 1 │\n│ client backend │ Client │ ClientRead │ 1 │\n│ client backend │ (null) │ (null) │ 1 │\n└────────────────┴─────────────────┴───────────────┴───────┘\n\neven though they are very different\n\nFWIW, the former is bottlenecked by the number of WAL insertion locks, the\nsecond is bottlenecked by copying WAL into buffers due to needing to flush\nthem.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 17 Jul 2023 09:55:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Report distinct wait events when waiting for WAL \"operation\""
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 10:26 PM Andres Freund <[email protected]> wrote:\r\n>\r\n> Previously it was e.g. not really possible to distinguish that something like\r\n> this:\r\n>\r\n> ┌────────────────┬─────────────────┬────────────┬───────┐\r\n> │ backend_type │ wait_event_type │ wait_event │ count │\r\n> ├────────────────┼─────────────────┼────────────┼───────┤\r\n> │ client backend │ LWLock │ WALInsert │ 32 │\r\n> │ client backend │ (null) │ (null) │ 9 │\r\n> │ walwriter │ IO │ WALWrite │ 1 │\r\n> │ client backend │ Client │ ClientRead │ 1 │\r\n> │ client backend │ LWLock │ WALWrite │ 1 │\r\n> └────────────────┴─────────────────┴────────────┴───────┘\r\n>\r\n> is a workload with a very different bottleneck than this:\r\n>\r\n> ┌────────────────┬─────────────────┬───────────────┬───────┐\r\n> │ backend_type │ wait_event_type │ wait_event │ count │\r\n> ├────────────────┼─────────────────┼───────────────┼───────┤\r\n> │ client backend │ IPC │ WALWaitInsert │ 22 │\r\n> │ client backend │ LWLock │ WALInsert │ 13 │\r\n> │ client backend │ LWLock │ WALBufMapping │ 5 │\r\n> │ walwriter │ (null) │ (null) │ 1 │\r\n> │ client backend │ Client │ ClientRead │ 1 │\r\n> │ client backend │ (null) │ (null) │ 1 │\r\n> └────────────────┴─────────────────┴───────────────┴───────┘\r\n>\r\n> even though they are very different\r\n>\r\n> FWIW, the former is bottlenecked by the number of WAL insertion locks, the\r\n> second is bottlenecked by copying WAL into buffers due to needing to flush\r\n> them.\r\n>\r\n\r\nThis gives a better idea of what's going on. +1 for separating these waits.\r\n\r\n-- \r\nWith Regards,\r\nAmit Kapila.\r\n",
"msg_date": "Wed, 19 Jul 2023 18:49:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report distinct wait events when waiting for WAL \"operation\""
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 06:49:57PM +0530, Amit Kapila wrote:\n> On Mon, Jul 17, 2023 at 10:26 PM Andres Freund <[email protected]> wrote:\n>> FWIW, the former is bottlenecked by the number of WAL insertion locks, the\n>> second is bottlenecked by copying WAL into buffers due to needing to flush\n>> them.\n> \n> This gives a better idea of what's going on. +1 for separating these waits.\n\n+ * As this is not used to wait for lwlocks themselves, the caller has to\n+ * provide a wait event to be reported.\n */\n bool\n-LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)\n+LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval,\n+ uint32 wait_event_info)\n\nMakes sense to me to do this split, nice! And this gives more\nflexibility for out-of-core callers, while on it.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 14:18:05 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report distinct wait events when waiting for WAL \"operation\""
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 10:25 PM Andres Freund <[email protected]> wrote:\r\n>\r\n> Hi,\r\n>\r\n> Therefore I'm proposing that LWLockAcquireOrWait() and LWLockWaitForVar() not\r\n> use the \"generic\" LWLockReportWaitStart(), but use caller provided wait\r\n> events. The attached patch adds two new wait events for the existing callers.\r\n\r\n+1 for having separate wait events for WAL insert lock acquire and\r\nwait for WAL insertions to finish. However, I don't think we need to\r\npass wait events to LWLockAcquireOrWait and LWLockWaitForVar, we can\r\njust use wait events directly in the functions. Because these two\r\nfunctions are used for acquiring WAL insert lock and waiting for WAL\r\ninsertions to finish, they aren't multipurpose functions.\r\n\r\n> I waffled a bit about which wait event section to add these to. Ended up with\r\n> \"IPC\", but would be happy to change that.\r\n>\r\n> WAIT_EVENT_WAL_WAIT_INSERT WALWaitInsert \"Waiting for WAL record to be copied into buffers.\"\r\n> WAIT_EVENT_WAL_WAIT_WRITE WALWaitWrite \"Waiting for WAL buffers to be written or flushed to disk.\"\r\n\r\nIPC seems okay to me. If not, how about the PG_WAIT_LWLOCK event\r\nclass? Or, have WAIT_EVENT_WAL_WAIT_WRITE under PG_WAIT_IO and the\r\nother under PG_WAIT_IPC?\r\n\r\n> ┌────────────────┬─────────────────┬───────────────┬───────┐\r\n> │ backend_type │ wait_event_type │ wait_event │ count │\r\n> ├────────────────┼─────────────────┼───────────────┼───────┤\r\n> │ client backend │ IPC │ WALWaitInsert │ 22 │\r\n> │ client backend │ LWLock │ WALInsert │ 13 │\r\n> │ client backend │ LWLock │ WALBufMapping │ 5 │\r\n> │ walwriter │ (null) │ (null) │ 1 │\r\n> │ client backend │ Client │ ClientRead │ 1 │\r\n> │ client backend │ (null) │ (null) │ 1 │\r\n> └────────────────┴─────────────────┴───────────────┴───────┘\r\n>\r\n> even though they are very different\r\n>\r\n> FWIW, the former is bottlenecked by the number of WAL insertion locks, the\r\n> second is bottlenecked by copying WAL into buffers due to needing to flush\r\n> them.\r\n\r\nThis separation looks clean and gives much more info.\r\n\r\n--\r\nBharath Rupireddy\r\nPostgreSQL Contributors Team\r\nRDS Open Source Databases\r\nAmazon Web Services: https://aws.amazon.com\r\n",
"msg_date": "Thu, 20 Jul 2023 10:59:46 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Report distinct wait events when waiting for WAL \"operation\""
}
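The other call site the proposal touches is the group write/flush path. Under the same caveat that this is an illustrative sketch rather than the patch itself, the caller-supplied variant discussed above would look roughly like this:

    /*
     * Sketch only: either become the backend that writes/flushes WAL, or
     * wait for the current WALWriteLock holder to finish, reporting that
     * wait as IPC/WALWaitWrite instead of LWLock/WALWrite.
     */
    static bool
    acquire_or_wait_for_wal_write_sketch(void)
    {
        if (!LWLockAcquireOrWait(WALWriteLock, LW_EXCLUSIVE,
                                 WAIT_EVENT_WAL_WAIT_WRITE))  /* new argument */
        {
            /* Somebody else did the flush; caller rechecks LogwrtResult. */
            return false;
        }

        /* Lock acquired: the caller must perform the write/flush itself. */
        return true;
    }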
] |
[
{
"msg_contents": "Hi, hackers!\n\nI've tried sending this patch to community before, let me try it second\ntime. Patch is revised and improved compared to previous version.\n\nThis patch adds TOAST support for system tables pg_class,\npg_attribute and pg_largeobject_metadata, as they include ACL columns,\nwhich may be potentially large in size. Patch fixes possible pg_upgrade\nbug (problem with seeing a non-empty new cluster).\n\nDuring code developing it turned out that heap_inplace_update function\nis not suitable for use with TOAST, so its work could lead to wrong\nstatistics update (for example, during VACUUM). This problem is fixed\nby adding new heap_inplace_update_prepare_tuple function -- we assume\nTOASTed attributes are never changed by in-place update, and just\nreplace them with old values.\n\nI also added pg_catalog_toast1 test that does check for \"invalid tupple\nlength\" error when creating index with toasted pg_class. Test grants and\ndrops roles on certain table many times to make ACL column long and then\ncreates index on this table.\n\nI wonder what other bugs can happen there, but if anyone can give me a\nhint, I'll try to fix them. Anyway, in PostgresPro we didn't encounter\nany problems with this feature.\n\nFirst attempt here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nThis time I'll do it better\n\n--\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 18 Jul 2023 01:13:25 +0300",
"msg_from": "Sofia Kopikova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add TOAST support for more system tables"
},
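For readers unfamiliar with how catalog TOAST tables are wired up, the declarative part of such a patch is typically a few lines in src/include/catalog/toasting.h of roughly the following shape. The OIDs below are placeholders, not the ones used in the patch, and the genuinely tricky part is the in-place-update handling described above, which is not shown here.

    /* Sketch only: placeholder OIDs for the new catalog TOAST tables. */
    DECLARE_TOAST(pg_attribute, 8800, 8801);
    DECLARE_TOAST(pg_class, 8802, 8803);
    DECLARE_TOAST(pg_largeobject_metadata, 8804, 8805);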
{
"msg_contents": "Sofia Kopikova <[email protected]> writes:\n> This patch adds TOAST support for system tables pg_class,\n> pg_attribute and pg_largeobject_metadata, as they include ACL columns,\n> which may be potentially large in size.\n\nWe have been around on this topic before, cf discussion leading up to\ncommit 96cdeae07. Allowing toasted data in pg_class or pg_attribute\nseems quite scary to me because of the potential for recursive access,\nparticularly during cache-flush scenarios. (That is, you need to be\nable to read those catalogs on the way to fetching a toasted value,\nso how can you be sure that doesn't devolve into an infinite loop?)\n\nI wonder whether we'd be better off shoving the ACL data out of\nthese catalogs and putting it somewhere else (compare pg_attrdef).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jul 2023 18:31:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TOAST support for more system tables"
},
{
"msg_contents": "On Tue, 18 Jul 2023 at 10:31, Tom Lane <[email protected]> wrote:\n> I wonder whether we'd be better off shoving the ACL data out of\n> these catalogs and putting it somewhere else (compare pg_attrdef).\n\nrelpartbound is another column that could cause a pg_class row to grow\ntoo large. I did have a patch [1] to move that column into\npg_partition. I imagine it's very bit rotted now.\n\nDavid\n\n[1] https://postgr.es/m/CAKJS1f9QjUwQrio20Pi%3DyCHmnouf4z3SfN8sqXaAcwREG6k0zQ%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 18 Jul 2023 11:40:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TOAST support for more system tables"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 06:31:04PM -0400, Tom Lane wrote:\n> Sofia Kopikova <[email protected]> writes:\n> > This patch adds TOAST support for system tables pg_class,\n> > pg_attribute and pg_largeobject_metadata, as they include ACL columns,\n> > which may be potentially large in size.\n> \n> We have been around on this topic before, cf discussion leading up to\n> commit 96cdeae07. Allowing toasted data in pg_class or pg_attribute\n> seems quite scary to me because of the potential for recursive access,\n> particularly during cache-flush scenarios. (That is, you need to be\n> able to read those catalogs on the way to fetching a toasted value,\n> so how can you be sure that doesn't devolve into an infinite loop?)\n\nYep. I have something to add here. The last time I poked at that, I\nwas wondering about two code paths that have specific comments on this\nmatter. Based on my notes:\n1) finish_heap_swap() in cluster.c:\n * pg_class doesn't have a toast relation, so we don't need to update the\n * corresponding toast relation. Not that there's little point moving all \n * relfrozenxid updates here since swap_relation_files() needs to write to\n * pg_class for non-mapped relations anyway.\n2) extract_autovac_opts() in autovacuum.c:\n * we acquired the pg_class row. If pg_class had a TOAST table, this would\n * be a risk; fortunately, it doesn't. \n\nWhat has been posted makes zero adjustments in these areas.\n--\nMichael",
"msg_date": "Tue, 18 Jul 2023 14:32:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add TOAST support for more system tables"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 06:31:04PM -0400, Tom Lane wrote:\n\n> Sofia Kopikova <[email protected]> writes:\n>> This patch adds TOAST support for system tables pg_class,\n>> pg_attribute and pg_largeobject_metadata, as they include ACL columns,\n>> which may be potentially large in size.\n> We have been around on this topic before, cf discussion leading up to\n> commit 96cdeae07. Allowing toasted data in pg_class or pg_attribute\n> seems quite scary to me because of the potential for recursive access,\n> particularly during cache-flush scenarios. (That is, you need to be\n> able to read those catalogs on the way to fetching a toasted value,\n> so how can you be sure that doesn't devolve into an infinite loop?)\nMany thanks for your reviews. I'm gonna do research and revise this\nfeature thoroughly.\n\nI'll set status of the patch to \"Waiting on author\" for now.\n\n--\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 14:19:46 +0300",
"msg_from": "Sofia Kopikova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add TOAST support for more system tables"
}
] |
[
{
"msg_contents": "Hi,\n\nContinuing a topic from earlier threads[1][2], I've been wondering\nabout how to de-klugify wal_sync_method=fsync_writethrough (a setting\nthat actually affects much more than just WAL), and how to do the\nright thing for our users on macOS and Windows by default. Commit\nd0c28601 was a very small cleanup in this area. Here are some bigger\nideas I'd like to try out.\n\nShort version:\n\n * Make wal_sync_method=fdatasync the default everywhere\n * Drop wal_sync_method=fsync_writethrough\n * Add new macOS-only level for the fsync GUC: fsync=full\n * Make fsync=full redirect both pg_fsync() and pg_fdatasync()\n * Make fsync=full the default on macOS\n\nMotivation:\n\nI think expectations might have changed quite a bit since ~2000. Back\nthen, fsync() didn't flush write caches on any OS (you were supposed\nto use battery-backed controllers and SCSI as found on expensive\nproprietary Unix systems if you were serious, IDE/ATA protocols didn't\noriginally have flush commands, and some consumer drives famously\nignored them or lied, so users of cheap drives were advised to turn\nwrite caches off). Around 2005, Linux decided to start sending the\nflush command in fsync(). Windows' FlushFileBuffers() does the same,\nand I gathered from Raymond Chen's blog that by the Windows 8\ntimeframe all consumer drive vendors supported and respected the flush\ncommand. macOS *still* doesn't send it for fsync(), but has had\nfcntl(F_FULLFSYNC) since 2003. In Apple's defence, they seem to have\nbeen ahead of the curve on this problem[3]... I suppose they didn't\nanticipate that everyone else was going to do it in their main\nfsync()/fdatasync() call, they blazed their own trail, and now it all\nlooks a bit weird.\n\nIn other words, back then all systems running PostgreSQL risked data\nloss unless you had fancy hardware or turned off unsafe caching. But\nnow, due to the changing landscape and our policy choices, that is\ntrue only for rarer systems by default while most in our community are\non Linux where this is all just a historical footnote. People's\nbaseline expectations have moved, and although we try to document the\nsituation, they are occasionally very surprised: \"Loaded footgun\nopen_datasync on Windows\" was Laurenz Albe's reaction[4] to those\nparagraphs. Surely we should be able to recover after power loss by\ndefault even on a lowly desktop PC or basic server loaded with SATA\ndrives, out of the box?\n\nProposal for Windows:\n\nThe existing default use of FILE_FLAG_WRITE_THROUGH is probably a\nbetter choice on hardware where it works reliably (cache disabled,\nnon-volatile cache, or working FUA support), since it skips a system\ncall and doesn't wait for incidental other stuff in the cache to\nflush, but it's well documented that Windows' SATA drivers neither\npass the \"FUA\" flag down to the device nor fall back to sending a full\ncache flush command. It's also easy to see in the pg_test_fsync\nnumbers, which are too good to be true on consumer gear. Therefore\nwal_sync_method=fdatasync is a better default level. We map that to\nNtFlushBuffersFileEx(FLUSH_FLAGS_FILE_DATA_SYNC_ONLY). (The \"SYNC\" in\nthat flag name means flush the drive cache; the \"DATA...ONLY\" in that\nflag name means skip non-essential stuff like file modification time\netc just like fdatasync() in POSIX, and goes visibly faster thanks to\nnot journaling metadata.)\n\nProposal for macOS:\n\nOur current default isn't nice to users who run a database on\nmains-powered Macs. 
I don't have one myself to try it, but \"man\nfsync\" clearly states that you can lose data and it is easily\ndemonstrated with a traditional cord-yanking test[5]. You could\ncertainly lose some recent commits; you could probably also get more\nsubtle corruption or a total recovery failure like [6] too, if for\nexample the control file can make it to durable storage and while\npointing to a checkpoint that did not (maybe a ZFS-like atomic\nroot-switch prevents that sort of disorder in APFS, I dunno, but I\nread some semi-informed speculation that it doesn't work that way\n*shrug*).\n\nWe do currently offer a non-default setting\nwal_sync_method=fsync_writethough to address all this already.\nDespite its name, it affects every caller of pg_fsync() (control file,\ndata files, etc). It's certainly essential to flush all those files\nfully too as part of our recovery protocol, but they're not \"WAL\".\nThe new idea here is to provide a separate way of controlling that\nglobal behaviour, and I propose fsync=full. Furthermore, I think that\nsetting should also affect pg_fdatasync(), given that Apple doesn't\neven really have fdatasync() (perhaps if they carry out their threat\nto implement it, they'll also invent F_FULLFDATASYNC; for now it\n*seems* to be basically just another name for fsync() albeit\nundeclared by <unistd.h>).\n\nIt's possible that fcntl(F_FULLFSYNC) might fail with ENOSUPP or other\nerrors in obscure cases (eg unusual file systems). In that case, you\ncould manually lower fsync to just \"on\" and do your own research on\nwhether power loss can toast your database, but that doesn't seem like\na reason for us not to ship good solid defaults for typical users.\n\nRationale for changing wal_sync_method globally (for now):\n\nWith wal_sync_method=fdatasync as default for Linux, FreeBSD, OpenBSD,\nDragonflyBSD already, if we added macOS and Windows, that'd leave only\nNetBSD, AIX, Solaris/illumos. I don't like having different and more\nmagical defaults on rare target OSes with no expert users left in our\ncommunity (as [6] reminded me), so I figure we'd be better off with\nthe same less magical setting everywhere, as a baseline.\n\nLater we might want a per-platform default again. For example, Linux\n(like Windows) has policies on whether to believe FUA works reliably\nfor the purposes of O_DSYNC, but (unlike Windows) falls back to\nsending cache flushes instead of doing nothing, so in theory\nopen_datasync might be a safe and sometimes better performing default\nthere. If we decided to do that, we'd just restore the\nPLATFORM_DEFAULT_SYNC_METHOD mechanism.\n\nThe only other OS where I have detailed enough knowledge to comment is\nFreeBSD. Its ZFS flushes caches for all levels just fine, so it\ndoesn't much matter, while its UFS never got that memo (so it's like a\nMac and probably other old Unixes; maybe I'll get that fixed, see\nFreeBSD proposal D36371 if interested). The reasons for using\nfdatasync on both FreeBSD and Linux wasn't cache control policies, but\nrather some obscure logic of ours that would turn on O_DIRECT in some\ncases (and I think in the past when wal_level was lower by default, it\nwould have been common), which might have complications or fail. 
The\nlast trace of that is gone since d4e71df6, so if we were to put Linux\non a 'known-good-for-open_datasync' list I'd probably also consider\nputting FreeBSD on the list too.\n\nNote that while this'll slow down some real world databases by being\nmore careful, 'meson test' time shouldn't be affected on any OS due to\nuse of fsync=off in tests.\n\nDraft patches attached.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJZJVO%3DiX%2Beb-PXi2_XS9ZRqnn_4URh0NUQOwt6-_51xQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20221123014224.xisi44byq3cf5psi%40awork3.anarazel.de\n[3] https://lists.apple.com/archives/darwin-dev/2005/Feb/msg00087.html\n[4] https://www.postgresql.org/message-id/flat/1527846213.2475.31.camel%40cybertec.at\n[5] https://news.ycombinator.com/item?id=30372194\n[6] https://www.postgresql.org/message-id/flat/18009-40a42f84af3fbda1%40postgresql.org",
"msg_date": "Tue, 18 Jul 2023 15:28:52 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Volatile write caches on macOS and Windows, redux"
},
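As a rough illustration of the macOS piece of this proposal, the routing could look something like the sketch below (assuming <fcntl.h> and <unistd.h>). The FSYNC_OFF/FSYNC_ON/FSYNC_FULL enum and the fsync_level variable are invented here for illustration; only fcntl(F_FULLFSYNC) and fsync() come from the platform, and the idea is that the full level would cover pg_fdatasync() the same way.

    /* Sketch only: hypothetical three-level fsync GUC (off / on / full). */
    typedef enum { FSYNC_OFF, FSYNC_ON, FSYNC_FULL } FsyncLevel;
    extern FsyncLevel fsync_level;

    int
    pg_fsync_sketch(int fd)
    {
        if (fsync_level == FSYNC_OFF)
            return 0;

    #ifdef F_FULLFSYNC
        if (fsync_level == FSYNC_FULL)
        {
            /* Also ask the drive to flush its volatile write cache. */
            return (fcntl(fd, F_FULLFSYNC) < 0) ? -1 : 0;
        }
    #endif

        /* Plain fsync(): enough where the kernel already flushes caches. */
        return fsync(fd);
    }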
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\" [1], but there has\nbeen no activity on this thread for 6+ months.\n\nIs anything else planned, or can you post something to elicit more\ninterest in the patch? Otherwise, if nothing happens then the CF entry\nwill be closed (\"Returned with feedback\") at the end of this CF.\n\n======\n[1] https://commitfest.postgresql.org/46/4453/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:16:17 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "On Mon, 22 Jan 2024 at 07:46, Peter Smith <[email protected]> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> been no activity on this thread for 6+ months.\n>\n> Is anything else planned, or can you post something to elicit more\n> interest in the patch? Otherwise, if nothing happens then the CF entry\n> will be closed (\"Returned with feedback\") at the end of this CF.\n\nWith no update to the thread and the patch not applying I'm marking\nthis as returned with feedback. Please feel free to resubmit to the\nnext CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:15:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "Rebased over 8d140c58.",
"msg_date": "Sat, 2 Mar 2024 00:04:32 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "Short sales pitch for these patches:\n\n* the default settings eat data on Macs and Windows\n* nobody understands what wal_sync_method=fsync_writethrough means anyway\n* it's a weird kludge that it affects not only WAL, let's clean that up\n\n\n",
"msg_date": "Thu, 14 Mar 2024 13:12:05 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "On Thu, Mar 14, 2024 at 01:12:05PM +1300, Thomas Munro wrote:\n> Short sales pitch for these patches:\n> \n> * the default settings eat data on Macs and Windows\n> * nobody understands what wal_sync_method=fsync_writethrough means anyway\n> * it's a weird kludge that it affects not only WAL, let's clean that up\n\nI recently started using macOS for hacking on Postgres and noticed this\nproblem, so I was delighted to find this thread. I intend to review\nfurther soon, but +1 for improving the default settings. I think we might\nalso need some additional fcntl(F_FULLFSYNC) calls in sync_pgdata(),\nsync_dir_recurse(), etc., which are used by initdb, pg_basebackup, and\nmore.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 May 2024 23:41:18 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "On Tue, 18 Jul 2023 at 05:29, Thomas Munro <[email protected]> wrote:\n> It's possible that fcntl(F_FULLFSYNC) might fail with ENOSUPP or other\n> errors in obscure cases (eg unusual file systems). In that case, you\n> could manually lower fsync to just \"on\" and do your own research on\n> whether power loss can toast your database, but that doesn't seem like\n> a reason for us not to ship good solid defaults for typical users.\n\nIs this the only reason why you're suggesting adding fsync=full,\ninstead of simply always setting F_FULLFSYNC when fsync=true on MacOS.\nIf so, I'm not sure we really gain anything by this tri-state. I think\npeople either care about data loss on power loss, or they don't. I\ndoubt many people want his third intermediate option, which afaict\nbasically means lose data on powerloss less often than fsync=false but\nstill lose data most of the time.\n\nIf you're going to keep this tri-state for MacOS, then it still seems\nnicer to me to \"fix\" fsync=true on MacOS and introduce a fsync=partial\nor something. Then defaults are the same across platforms and anyone\nsetting fsync=yes currently in their postgresql.conf would get the\nfixed behaviour on upgrade.\n\n\n",
"msg_date": "Sat, 25 May 2024 13:01:44 +0200",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "On 25.05.24 04:01, Jelte Fennema-Nio wrote:\n> Is this the only reason why you're suggesting adding fsync=full,\n> instead of simply always setting F_FULLFSYNC when fsync=true on MacOS.\n> If so, I'm not sure we really gain anything by this tri-state. I think\n> people either care about data loss on power loss, or they don't. I\n> doubt many people want his third intermediate option, which afaict\n> basically means lose data on powerloss less often than fsync=false but\n> still lose data most of the time.\n\nI agree, two states should be enough. It could basically just be\n\npg_fsync(int fd)\n{\n#if macos\n fcntl(fd, F_FULLFSYNC);\n#else\n fsync(fd);\n#endif\n}\n\n\n\n",
"msg_date": "Wed, 29 May 2024 06:49:57 -0700",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
{
"msg_contents": "On Wed, May 29, 2024 at 06:49:57AM -0700, Peter Eisentraut wrote:\n> On 25.05.24 04:01, Jelte Fennema-Nio wrote:\n>> Is this the only reason why you're suggesting adding fsync=full,\n>> instead of simply always setting F_FULLFSYNC when fsync=true on MacOS.\n>> If so, I'm not sure we really gain anything by this tri-state. I think\n>> people either care about data loss on power loss, or they don't. I\n>> doubt many people want his third intermediate option, which afaict\n>> basically means lose data on powerloss less often than fsync=false but\n>> still lose data most of the time.\n> \n> I agree, two states should be enough. It could basically just be\n> \n> pg_fsync(int fd)\n> {\n> #if macos\n> fcntl(fd, F_FULLFSYNC);\n> #else\n> fsync(fd);\n> #endif\n> }\n\nIIUC with this approach, anyone who is using a file system that fails\nfcntl(F_FULLSYNC) with ENOSUPP would have to turn fsync off. That might be\nthe right thing to do since having a third option that sends the data to\nthe disk cache but doesn't provide any real guarantees if you lose power\nmay not be worth much. However, if such a file system _did_ provide such\nguarantees with just fsync(), then it would be unfortunate to force people\nto turn fsync off. But this could very well all be hypothetical, for all I\nknow... In any case, I agree that we should probably use F_FULLFSYNC by\ndefault on macOS.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 3 Jun 2024 10:28:15 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
},
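If a silent fallback were judged acceptable, the check being discussed here could look roughly like the sketch below (assuming <errno.h>, <fcntl.h> and <unistd.h>); whether degrading quietly to the weaker guarantee is a good idea is exactly the open question in this subthread. Note that an unsupported fcntl(F_FULLFSYNC) is typically reported as ENOTSUP or EINVAL rather than the ENOSUPP spelling used above.

    /* Sketch only: try the full flush, fall back to plain fsync(). */
    static int
    fsync_full_or_fallback(int fd)
    {
    #ifdef F_FULLFSYNC
        if (fcntl(fd, F_FULLFSYNC) == 0)
            return 0;
        if (errno != ENOTSUP && errno != EINVAL)
            return -1;          /* a real I/O error: report it to the caller */
        /* File system does not support the full flush; degrade below. */
    #endif
        return fsync(fd);       /* weaker guarantee, but widely supported */
    }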
{
"msg_contents": "On 03.06.24 17:28, Nathan Bossart wrote:\n>> I agree, two states should be enough. It could basically just be\n>>\n>> pg_fsync(int fd)\n>> {\n>> #if macos\n>> fcntl(fd, F_FULLFSYNC);\n>> #else\n>> fsync(fd);\n>> #endif\n>> }\n> IIUC with this approach, anyone who is using a file system that fails\n> fcntl(F_FULLSYNC) with ENOSUPP would have to turn fsync off. That might be\n> the right thing to do since having a third option that sends the data to\n> the disk cache but doesn't provide any real guarantees if you lose power\n> may not be worth much. However, if such a file system_did_ provide such\n> guarantees with just fsync(), then it would be unfortunate to force people\n> to turn fsync off. But this could very well all be hypothetical, for all I\n> know... In any case, I agree that we should probably use F_FULLFSYNC by\n> default on macOS.\n\nYeah, my example code above says \"#if macos\", not \"#ifdef F_FULLSYNC\". \nThe latter might be a problem along the lines you describe if other \nsystems use that symbol in a slightly different manner.\n\n\n\n",
"msg_date": "Wed, 5 Jun 2024 08:32:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Volatile write caches on macOS and Windows, redux"
}
] |
[
{
"msg_contents": "---------- Forwarded message ---------\nFrom: Sahil Sojitra <[email protected]>\nDate: Tue, 18 Jul, 2023, 8:43 am\nSubject: Regarding Installation of PostgreSQL\nTo: <[email protected]>\n\n\nHello Sir,\n I got stuck into an error repeatedly while installing\nPostgreSQL v15.3 and I don't know what to do, while opening pgAdmin 4 just\ngetting *a blank page* written *Loading pgAdmin 4 v7.4.... * plz provide me\nthe steps to resolve this issue. i am attaching the screenshot of the error\nbelow",
"msg_date": "Tue, 18 Jul 2023 11:18:49 +0530",
"msg_from": "Sahil Sojitra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Regarding Installation of PostgreSQL"
},
{
"msg_contents": "You are still in the wrong place - this is a developers list, which is only\nslightly less bad than sending it to a security list.\n\nWe have a \"general\" list if you really cannot find a better place to send\nstuff. But in this case your complaint has to do with the pgAdmin program\nso its support list would be most appropriate.\n\nhttps://www.postgresql.org/list/pgadmin-support/\n\nThat all said, you probably should try downgrading pgAdmin, the 7.4 release\nseems to be buggy from other reports.\n\nDavid J.\n\n\nOn Tue, Jul 18, 2023 at 9:45 AM Sahil Sojitra <[email protected]>\nwrote:\n\n>\n> ---------- Forwarded message ---------\n> From: Sahil Sojitra <[email protected]>\n> Date: Tue, 18 Jul, 2023, 8:43 am\n> Subject: Regarding Installation of PostgreSQL\n> To: <[email protected]>\n>\n>\n> Hello Sir,\n> I got stuck into an error repeatedly while installing\n> PostgreSQL v15.3 and I don't know what to do, while opening pgAdmin 4 just\n> getting *a blank page* written *Loading pgAdmin 4 v7.4.... * plz provide\n> me the steps to resolve this issue. i am attaching the screenshot of the\n> error below\n>\n>\n\nYou are still in the wrong place - this is a developers list, which is only slightly less bad than sending it to a security list.We have a \"general\" list if you really cannot find a better place to send stuff. But in this case your complaint has to do with the pgAdmin program so its support list would be most appropriate.https://www.postgresql.org/list/pgadmin-support/That all said, you probably should try downgrading pgAdmin, the 7.4 release seems to be buggy from other reports.David J.On Tue, Jul 18, 2023 at 9:45 AM Sahil Sojitra <[email protected]> wrote:---------- Forwarded message ---------From: Sahil Sojitra <[email protected]>Date: Tue, 18 Jul, 2023, 8:43 amSubject: Regarding Installation of PostgreSQLTo: <[email protected]>Hello Sir, I got stuck into an error repeatedly while installing PostgreSQL v15.3 and I don't know what to do, while opening pgAdmin 4 just getting a blank page written Loading pgAdmin 4 v7.4.... plz provide me the steps to resolve this issue. i am attaching the screenshot of the error below",
"msg_date": "Tue, 18 Jul 2023 09:57:44 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding Installation of PostgreSQL"
}
] |
[
{
"msg_contents": "Since posgres 13 there's the option to do a FORCE when dropping a database\n(so it disconnects current users) Documentation here:\nhttps://www.postgresql.org/docs/current/sql-dropdatabase.html\n\nI am currently using dir format for the output\n pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dir\n\nAnd restoring the database with\n pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dir\n\nHaving an option to add the FORCE option to either the generated dump by\npg_dump, or in the pg_restore would be very useful when restoring the\ndatabases to another servers so it would avoid having to do scripting.\n\nIn my specific case I am using this to refresh periodically a development\nenvironment with data from production servers for a small database (~200M).\n\nThanks,\n\nJoan\n\nSince posgres 13 there's the option to do a FORCE when dropping a \ndatabase (so it disconnects current users) Documentation here: https://www.postgresql.org/docs/current/sql-dropdatabase.htmlI am currently using dir format for the output pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dirAnd restoring the database with pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dirHaving\n an option to add the FORCE option to either the generated dump by \npg_dump, or in the pg_restore would be very useful when restoring the \ndatabases to another servers so it would avoid having to do scripting.In my specific case I am using this to refresh periodically a development environment with data from production servers for a small database (~200M).Thanks, Joan",
"msg_date": "Tue, 18 Jul 2023 09:53:25 +0200",
"msg_from": "Joan <[email protected]>",
"msg_from_op": true,
"msg_subject": "There should be a way to use the force flag when restoring databases"
},
{
"msg_contents": "On Tue, Jul 18, 2023 at 12:53 AM Joan <[email protected]> wrote:\n>\n> Since posgres 13 there's the option to do a FORCE when dropping a database (so it disconnects current users) Documentation here: https://www.postgresql.org/docs/current/sql-dropdatabase.html\n>\n> I am currently using dir format for the output\n> pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dir\n>\n> And restoring the database with\n> pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dir\n>\n> Having an option to add the FORCE option to either the generated dump by pg_dump, or in the pg_restore would be very useful when restoring the databases to another servers so it would avoid having to do scripting.\n>\n> In my specific case I am using this to refresh periodically a development environment with data from production servers for a small database (~200M).\n\nMaking force-drop a part of pg_dump output may be dangerous, and not\nprovide much flexibility at restore time.\n\nAdding a force option to pg_restore feels like providing the right tradeoff.\n\nThe docs for 'pg_restore --create` say \"Create the database before\nrestoring into it. If --clean is also specified, drop and recreate the\ntarget database before connecting to it.\"\n\nIf we provided a force option, it may then additionally say: \"If the\n--clean and --force options are specified, DROP DATABASE ... WITH\nFORCE command will be used to drop the database.\"\n\nUsing WITH FORCE is not a guarantee, as the DROP DATABASE docs clarify\nthe conditions under which it may fail.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:28:31 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "HI Gurjeet, that woulld be great, all the cases where a FORCE won't apply\nmake totally sense (either complex scenarios or permission issues)\n\n> It doesn't terminate if prepared transactions, active logical replication\n> slots or subscriptions are present in the target database.\n>\nThis will fail if the current user has no permissions to terminate other\n> connections\n>\n\nRegards\n\nMissatge de Gurjeet Singh <[email protected]> del dia dc., 19 de jul. 2023 a\nles 19:28:\n\n> On Tue, Jul 18, 2023 at 12:53 AM Joan <[email protected]> wrote:\n> >\n> > Since posgres 13 there's the option to do a FORCE when dropping a\n> database (so it disconnects current users) Documentation here:\n> https://www.postgresql.org/docs/current/sql-dropdatabase.html\n> >\n> > I am currently using dir format for the output\n> > pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dir\n> >\n> > And restoring the database with\n> > pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dir\n> >\n> > Having an option to add the FORCE option to either the generated dump by\n> pg_dump, or in the pg_restore would be very useful when restoring the\n> databases to another servers so it would avoid having to do scripting.\n> >\n> > In my specific case I am using this to refresh periodically a\n> development environment with data from production servers for a small\n> database (~200M).\n>\n> Making force-drop a part of pg_dump output may be dangerous, and not\n> provide much flexibility at restore time.\n>\n> Adding a force option to pg_restore feels like providing the right\n> tradeoff.\n>\n> The docs for 'pg_restore --create` say \"Create the database before\n> restoring into it. If --clean is also specified, drop and recreate the\n> target database before connecting to it.\"\n>\n> If we provided a force option, it may then additionally say: \"If the\n> --clean and --force options are specified, DROP DATABASE ... WITH\n> FORCE command will be used to drop the database.\"\n>\n> Using WITH FORCE is not a guarantee, as the DROP DATABASE docs clarify\n> the conditions under which it may fail.\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>\n\nHI Gurjeet, that woulld be great, all the cases where a FORCE won't apply make totally sense (either complex scenarios or permission issues)It doesn't terminate if prepared transactions, active logical \nreplication slots or subscriptions are present in the target database.This will fail if the current user has no permissions to terminate other connectionsRegards Missatge de Gurjeet Singh <[email protected]> del dia dc., 19 de jul. 
2023 a les 19:28:On Tue, Jul 18, 2023 at 12:53 AM Joan <[email protected]> wrote:\n>\n> Since posgres 13 there's the option to do a FORCE when dropping a database (so it disconnects current users) Documentation here: https://www.postgresql.org/docs/current/sql-dropdatabase.html\n>\n> I am currently using dir format for the output\n> pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dir\n>\n> And restoring the database with\n> pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dir\n>\n> Having an option to add the FORCE option to either the generated dump by pg_dump, or in the pg_restore would be very useful when restoring the databases to another servers so it would avoid having to do scripting.\n>\n> In my specific case I am using this to refresh periodically a development environment with data from production servers for a small database (~200M).\n\nMaking force-drop a part of pg_dump output may be dangerous, and not\nprovide much flexibility at restore time.\n\nAdding a force option to pg_restore feels like providing the right tradeoff.\n\nThe docs for 'pg_restore --create` say \"Create the database before\nrestoring into it. If --clean is also specified, drop and recreate the\ntarget database before connecting to it.\"\n\nIf we provided a force option, it may then additionally say: \"If the\n--clean and --force options are specified, DROP DATABASE ... WITH\nFORCE command will be used to drop the database.\"\n\nUsing WITH FORCE is not a guarantee, as the DROP DATABASE docs clarify\nthe conditions under which it may fail.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Thu, 20 Jul 2023 08:44:44 +0200",
"msg_from": "Joan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "> On 19 Jul 2023, at 19:28, Gurjeet Singh <[email protected]> wrote:\n> \n> On Tue, Jul 18, 2023 at 12:53 AM Joan <[email protected]> wrote:\n>> \n>> Since posgres 13 there's the option to do a FORCE when dropping a database (so it disconnects current users) Documentation here: https://www.postgresql.org/docs/current/sql-dropdatabase.html\n>> \n>> I am currently using dir format for the output\n>> pg_dump -d \"bdname\" -F d -j 4 -v -f /tmp/dir\n>> \n>> And restoring the database with\n>> pg_restore -d postgres -C -c --exit-on-error -F d -j 3 -v /tmp/dir\n>> \n>> Having an option to add the FORCE option to either the generated dump by pg_dump, or in the pg_restore would be very useful when restoring the databases to another servers so it would avoid having to do scripting.\n>> \n>> In my specific case I am using this to refresh periodically a development environment with data from production servers for a small database (~200M).\n> \n> Making force-drop a part of pg_dump output may be dangerous, and not\n> provide much flexibility at restore time.\n> \n> Adding a force option to pg_restore feels like providing the right tradeoff.\n> \n> The docs for 'pg_restore --create` say \"Create the database before\n> restoring into it. If --clean is also specified, drop and recreate the\n> target database before connecting to it.\"\n> \n> If we provided a force option, it may then additionally say: \"If the\n> --clean and --force options are specified, DROP DATABASE ... WITH\n> FORCE command will be used to drop the database.\"\n\npg_restore --clean refers to dropping any pre-existing database objects and not\njust databases, but --force would only apply to databases.\n\nI wonder if it's worth complicating pg_restore with that when running dropdb\n--force before pg_restore is an option for those wanting to use WITH FORCE.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 20 Jul 2023 11:09:53 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 2:10 AM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 19 Jul 2023, at 19:28, Gurjeet Singh <[email protected]> wrote:\n> >\n> > The docs for 'pg_restore --create` say \"Create the database before\n> > restoring into it. If --clean is also specified, drop and recreate the\n> > target database before connecting to it.\"\n> >\n> > If we provided a force option, it may then additionally say: \"If the\n> > --clean and --force options are specified, DROP DATABASE ... WITH\n> > FORCE command will be used to drop the database.\"\n>\n> pg_restore --clean refers to dropping any pre-existing database objects and not\n> just databases, but --force would only apply to databases.\n>\n> I wonder if it's worth complicating pg_restore with that when running dropdb\n> --force before pg_restore is an option for those wanting to use WITH FORCE.\n\nFair point. But the same argument could've been applied to --clean\noption, as well; why overload the meaning of --clean and make it drop\ndatabase, when a dropdb before pg_restore was an option.\n\nIMHO, if pg_restore offers to drop database, providing an option to\nthe user to do it forcibly is not that much of a stretch, and within\nreason for the user to expect it to be there, like Joan did.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 20 Jul 2023 11:36:08 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "Hi everyone,\n\nI have been working on this. This is a proposed patch for it so we have a\nforce option for DROPping the database.\n\nI'd appreciate it if anyone can review.\n\n\n\nOn Thu, Jul 20, 2023 at 9:36 PM Gurjeet Singh <[email protected]> wrote:\n\n> On Thu, Jul 20, 2023 at 2:10 AM Daniel Gustafsson <[email protected]> wrote:\n> >\n> > > On 19 Jul 2023, at 19:28, Gurjeet Singh <[email protected]> wrote:\n> > >\n> > > The docs for 'pg_restore --create` say \"Create the database before\n> > > restoring into it. If --clean is also specified, drop and recreate the\n> > > target database before connecting to it.\"\n> > >\n> > > If we provided a force option, it may then additionally say: \"If the\n> > > --clean and --force options are specified, DROP DATABASE ... WITH\n> > > FORCE command will be used to drop the database.\"\n> >\n> > pg_restore --clean refers to dropping any pre-existing database objects\n> and not\n> > just databases, but --force would only apply to databases.\n> >\n> > I wonder if it's worth complicating pg_restore with that when running\n> dropdb\n> > --force before pg_restore is an option for those wanting to use WITH\n> FORCE.\n>\n> Fair point. But the same argument could've been applied to --clean\n> option, as well; why overload the meaning of --clean and make it drop\n> database, when a dropdb before pg_restore was an option.\n>\n> IMHO, if pg_restore offers to drop database, providing an option to\n> the user to do it forcibly is not that much of a stretch, and within\n> reason for the user to expect it to be there, like Joan did.\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>\n>\n>",
"msg_date": "Sun, 23 Jul 2023 16:08:53 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "On Sun, Jul 23, 2023 at 6:09 AM Ahmed Ibrahim\n<[email protected]> wrote:\n>\n> Hi everyone,\n>\n> I have been working on this. This is a proposed patch for it so we have a force option for DROPping the database.\n>\n> I'd appreciate it if anyone can review.\n\nHi Ahmed,\n\n Thanks for working on this patch!\n\n+\n+ int force;\n\n That extra blank line is unnecessary.\n\n Using the bool data type, instead of int, for this option would've\nmore natural.\n\n+ if (ropt->force){\n\n Postgres coding style is to place the curly braces on a new line,\nby themselves.\n\n+ char *dropStmt = pg_strdup(te->dropStmt);\n\nSee if you can use pnstrdup(). Using that may obviate the need for\ndoing the null-placement acrobatics below.\n\n+ PQExpBuffer ftStmt = createPQExpBuffer();\n\n What does the 'ft' stand for in this variable's name?\n\n+ dropStmt[strlen(dropStmt) - 2] = ' ';\n+ dropStmt[strlen(dropStmt) - 1] = '\\0';\n\n Try to evaluate the strlen() once and reuse it.\n\n+ appendPQExpBufferStr(ftStmt, dropStmt);\n+ appendPQExpBufferStr(ftStmt, \"WITH(FORCE);\");\n+ te->dropStmt = ftStmt->data;\n+ }\n+\n\n Remove the extra trailing whitespace on that last blank line.\n\n I think this whole code block needs to be protected by an 'if\n(ropt->createDB)' check, like it's done about 20 lines above this\nhunk. Otherwise, we may be appending 'WITH (FORCE)' for the DROP\ncommand of a different (not a database) object type.\n\n Also, you may want to check that the target database version is\none that supports WITH force option. This command will fail for\nanything before v13.\n\n The patch needs doc changes (pg_restore.sgml). And it needs to\nmention --force option in the help output, as well (usage() function).\n\n Can you please see if you can add appropriate test case for this.\nThe committers may insist on it, when reviewing.\n\n Here are a couple of helpful links on how to prepare and submit\npatches to the community. You may not need to strictly adhere to\nthese, but try to pick up a few recommendations that would make the\nreviewer's job a bit easier.\n\n[1]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches\n[2]: https://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 26 Jul 2023 15:36:25 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
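Pulling the review comments together, the drop-statement rewrite under discussion could end up looking roughly like the sketch below, written against the existing drop/create handling in pg_backup_archiver.c. The force field on RestoreOptions is the new flag from the proposed patch, and the exact placement and version check are assumptions rather than committed code; te->dropStmt is expected to look like "DROP DATABASE name;\n".

    /* Sketch only: turn the emitted DROP DATABASE into a forced drop. */
    if (ropt->createDB && ropt->force &&
        AH->public.remoteVersion >= 130000)   /* WITH (FORCE) needs v13+ */
    {
        size_t      len = strlen(te->dropStmt);
        PQExpBuffer buf = createPQExpBuffer();

        /* Copy everything except the trailing ";\n", then add the clause. */
        appendBinaryPQExpBuffer(buf, te->dropStmt, len - 2);
        appendPQExpBufferStr(buf, " WITH (FORCE);\n");
        te->dropStmt = buf->data;
    }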
{
"msg_contents": "Hi Gurjeet,\n\nI have addressed all your comments except for the tests.\n\nI have tried adding test cases but I wasn't able to do it as it's in my\nmind. I am not able to do things like having connections to the database\nand trying to force the restore, then it will complete successfully\notherwise it shows errors.\n\nIn the meantime I will continue trying to do the test cases. If anyone can\nhelp on that, I will appreciate it.\n\nThanks\n\nOn Thu, Jul 27, 2023 at 1:36 AM Gurjeet Singh <[email protected]> wrote:\n\n> On Sun, Jul 23, 2023 at 6:09 AM Ahmed Ibrahim\n> <[email protected]> wrote:\n> >\n> > Hi everyone,\n> >\n> > I have been working on this. This is a proposed patch for it so we have\n> a force option for DROPping the database.\n> >\n> > I'd appreciate it if anyone can review.\n>\n> Hi Ahmed,\n>\n> Thanks for working on this patch!\n>\n> +\n> + int force;\n>\n> That extra blank line is unnecessary.\n>\n> Using the bool data type, instead of int, for this option would've\n> more natural.\n>\n> + if (ropt->force){\n>\n> Postgres coding style is to place the curly braces on a new line,\n> by themselves.\n>\n> + char *dropStmt = pg_strdup(te->dropStmt);\n>\n> See if you can use pnstrdup(). Using that may obviate the need for\n> doing the null-placement acrobatics below.\n>\n> + PQExpBuffer ftStmt = createPQExpBuffer();\n>\n> What does the 'ft' stand for in this variable's name?\n>\n> + dropStmt[strlen(dropStmt) - 2] = ' ';\n> + dropStmt[strlen(dropStmt) - 1] = '\\0';\n>\n> Try to evaluate the strlen() once and reuse it.\n>\n> + appendPQExpBufferStr(ftStmt, dropStmt);\n> + appendPQExpBufferStr(ftStmt, \"WITH(FORCE);\");\n> + te->dropStmt = ftStmt->data;\n> + }\n> +\n>\n> Remove the extra trailing whitespace on that last blank line.\n>\n> I think this whole code block needs to be protected by an 'if\n> (ropt->createDB)' check, like it's done about 20 lines above this\n> hunk. Otherwise, we may be appending 'WITH (FORCE)' for the DROP\n> command of a different (not a database) object type.\n>\n> Also, you may want to check that the target database version is\n> one that supports WITH force option. This command will fail for\n> anything before v13.\n>\n> The patch needs doc changes (pg_restore.sgml). And it needs to\n> mention --force option in the help output, as well (usage() function).\n>\n> Can you please see if you can add appropriate test case for this.\n> The committers may insist on it, when reviewing.\n>\n> Here are a couple of helpful links on how to prepare and submit\n> patches to the community. You may not need to strictly adhere to\n> these, but try to pick up a few recommendations that would make the\n> reviewer's job a bit easier.\n>\n> [1]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches\n> [2]: https://wiki.postgresql.org/wiki/Submitting_a_Patch\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>",
"msg_date": "Tue, 1 Aug 2023 18:19:15 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "Hi all,\n\nI have addressed the pg version compatibility with the FORCE option in\ndrop. Here is the last version of the patch\n\nOn Tue, Aug 1, 2023 at 6:19 PM Ahmed Ibrahim <[email protected]>\nwrote:\n\n> Hi Gurjeet,\n>\n> I have addressed all your comments except for the tests.\n>\n> I have tried adding test cases but I wasn't able to do it as it's in my\n> mind. I am not able to do things like having connections to the database\n> and trying to force the restore, then it will complete successfully\n> otherwise it shows errors.\n>\n> In the meantime I will continue trying to do the test cases. If anyone can\n> help on that, I will appreciate it.\n>\n> Thanks\n>\n> On Thu, Jul 27, 2023 at 1:36 AM Gurjeet Singh <[email protected]> wrote:\n>\n>> On Sun, Jul 23, 2023 at 6:09 AM Ahmed Ibrahim\n>> <[email protected]> wrote:\n>> >\n>> > Hi everyone,\n>> >\n>> > I have been working on this. This is a proposed patch for it so we have\n>> a force option for DROPping the database.\n>> >\n>> > I'd appreciate it if anyone can review.\n>>\n>> Hi Ahmed,\n>>\n>> Thanks for working on this patch!\n>>\n>> +\n>> + int force;\n>>\n>> That extra blank line is unnecessary.\n>>\n>> Using the bool data type, instead of int, for this option would've\n>> more natural.\n>>\n>> + if (ropt->force){\n>>\n>> Postgres coding style is to place the curly braces on a new line,\n>> by themselves.\n>>\n>> + char *dropStmt = pg_strdup(te->dropStmt);\n>>\n>> See if you can use pnstrdup(). Using that may obviate the need for\n>> doing the null-placement acrobatics below.\n>>\n>> + PQExpBuffer ftStmt = createPQExpBuffer();\n>>\n>> What does the 'ft' stand for in this variable's name?\n>>\n>> + dropStmt[strlen(dropStmt) - 2] = ' ';\n>> + dropStmt[strlen(dropStmt) - 1] = '\\0';\n>>\n>> Try to evaluate the strlen() once and reuse it.\n>>\n>> + appendPQExpBufferStr(ftStmt, dropStmt);\n>> + appendPQExpBufferStr(ftStmt, \"WITH(FORCE);\");\n>> + te->dropStmt = ftStmt->data;\n>> + }\n>> +\n>>\n>> Remove the extra trailing whitespace on that last blank line.\n>>\n>> I think this whole code block needs to be protected by an 'if\n>> (ropt->createDB)' check, like it's done about 20 lines above this\n>> hunk. Otherwise, we may be appending 'WITH (FORCE)' for the DROP\n>> command of a different (not a database) object type.\n>>\n>> Also, you may want to check that the target database version is\n>> one that supports WITH force option. This command will fail for\n>> anything before v13.\n>>\n>> The patch needs doc changes (pg_restore.sgml). And it needs to\n>> mention --force option in the help output, as well (usage() function).\n>>\n>> Can you please see if you can add appropriate test case for this.\n>> The committers may insist on it, when reviewing.\n>>\n>> Here are a couple of helpful links on how to prepare and submit\n>> patches to the community. You may not need to strictly adhere to\n>> these, but try to pick up a few recommendations that would make the\n>> reviewer's job a bit easier.\n>>\n>> [1]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches\n>> [2]: https://wiki.postgresql.org/wiki/Submitting_a_Patch\n>>\n>> Best regards,\n>> Gurjeet\n>> http://Gurje.et\n>>\n>",
"msg_date": "Sun, 6 Aug 2023 22:39:04 +0300",
"msg_from": "Ahmed Ibrahim <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "On 06.08.23 21:39, Ahmed Ibrahim wrote:\n> I have addressed the pg version compatibility with the FORCE option in \n> drop. Here is the last version of the patch\n\nThe patch is pretty small, but I think there is some disagreement \nwhether we want this option at all? Maybe some more people can make \ntheir opinions more explicit?\n\n\n\n",
"msg_date": "Wed, 20 Sep 2023 11:24:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "> On 20 Sep 2023, at 11:24, Peter Eisentraut <[email protected]> wrote:\n> \n> On 06.08.23 21:39, Ahmed Ibrahim wrote:\n>> I have addressed the pg version compatibility with the FORCE option in drop. Here is the last version of the patch\n> \n> The patch is pretty small, but I think there is some disagreement whether we want this option at all? Maybe some more people can make their opinions more explicit?\n\nMy my concern is that a --force parameter conveys to the user that it's a big\nhammer to override things and get them done, when in reality this doesn't do\nthat. Taking the example from the pg_restore documentation which currently has\na dropdb step:\n\n====\n:~ $ ./bin/createdb foo\n:~ $ ./bin/psql -c \"create table t(a int);\" foo\nCREATE TABLE\n:~ $ ./bin/pg_dump --format=custom -f foo.dump foo\n:~ $ ./bin/pg_restore -d foo -C -c --force foo.dump\npg_restore: error: could not execute query: ERROR: cannot drop the currently open database\nCommand was: DROP DATABASE foo WITH(FORCE);\npg_restore: error: could not execute query: ERROR: database \"foo\" already exists\nCommand was: CREATE DATABASE foo WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = libc LOCALE = 'en_US.UTF-8';\n\n\npg_restore: error: could not execute query: ERROR: relation \"t\" already exists\nCommand was: CREATE TABLE public.t (\n a integer\n);\n\n\npg_restore: warning: errors ignored on restore: 3\n====\n\nWithout knowing internals, I would expect an option named --force to make that\njust work, especially given the documentation provided in this patch. I think\nthe risk for user confusion outweighs the benefits, or maybe I'm just not smart\nenough to see all the benefits? If so, I would argue that more documentation\nis required.\n\nSkimming the patch itself, it updates the --help output with --force for\npg_dump and not for pg_restore. Additionally it produces a compilerwarning:\n\npg_restore.c:127:26: warning: incompatible pointer types initializing 'int *' with an expression of type 'bool *' [-Wincompatible-pointer-types]\n {\"force\", no_argument, &force, 1},\n ^~~~~~\n1 warning generated.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 20 Sep 2023 13:57:28 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 17:27, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 20 Sep 2023, at 11:24, Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 06.08.23 21:39, Ahmed Ibrahim wrote:\n> >> I have addressed the pg version compatibility with the FORCE option in drop. Here is the last version of the patch\n> >\n> > The patch is pretty small, but I think there is some disagreement whether we want this option at all? Maybe some more people can make their opinions more explicit?\n>\n> My my concern is that a --force parameter conveys to the user that it's a big\n> hammer to override things and get them done, when in reality this doesn't do\n> that. Taking the example from the pg_restore documentation which currently has\n> a dropdb step:\n>\n> ====\n> :~ $ ./bin/createdb foo\n> :~ $ ./bin/psql -c \"create table t(a int);\" foo\n> CREATE TABLE\n> :~ $ ./bin/pg_dump --format=custom -f foo.dump foo\n> :~ $ ./bin/pg_restore -d foo -C -c --force foo.dump\n> pg_restore: error: could not execute query: ERROR: cannot drop the currently open database\n> Command was: DROP DATABASE foo WITH(FORCE);\n> pg_restore: error: could not execute query: ERROR: database \"foo\" already exists\n> Command was: CREATE DATABASE foo WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = libc LOCALE = 'en_US.UTF-8';\n>\n>\n> pg_restore: error: could not execute query: ERROR: relation \"t\" already exists\n> Command was: CREATE TABLE public.t (\n> a integer\n> );\n>\n>\n> pg_restore: warning: errors ignored on restore: 3\n> ====\n>\n> Without knowing internals, I would expect an option named --force to make that\n> just work, especially given the documentation provided in this patch. I think\n> the risk for user confusion outweighs the benefits, or maybe I'm just not smart\n> enough to see all the benefits? If so, I would argue that more documentation\n> is required.\n>\n> Skimming the patch itself, it updates the --help output with --force for\n> pg_dump and not for pg_restore. Additionally it produces a compilerwarning:\n>\n> pg_restore.c:127:26: warning: incompatible pointer types initializing 'int *' with an expression of type 'bool *' [-Wincompatible-pointer-types]\n> {\"force\", no_argument, &force, 1},\n> ^~~~~~\n> 1 warning generated.\n\nI have changed the status of the patch to \"Returned with Feedback\" as\nthe comments have not been addressed for some time. Please feel free\nto address these issues and update commitfest accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 16:25:07 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: There should be a way to use the force flag when restoring\n databases"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWe have encountered an issue (invalid message length) when the\npassword length is > 1000 in pg 11,12,13 versions. This is due to the\nlimit(1000) on the max length of the password. In this case the\npassword is an access token(JWT) which can have varied lengths >\n1000. I see that this is already handled for GSS and SSPI\nauthentication tokens where the maximum accepted size is 65535.\n\nThis is not the case with pg versions >=14 as the limit on max length\nis 65535(this change was added as part of sanity checks[1]).\n\nSo we have two options:\n1. Backport patch[1] to 11,12,13\n2. Change ONLY the limit on the max length of the password(my patch attached).\n\nPlease let me know your thoughts.\n\nThanks,\nMahendrakar.\n\n[1]: https://www.postgresql.org/message-id/flat/2003757.1619373089%40sss.pgh.pa.us",
"msg_date": "Tue, 18 Jul 2023 15:00:25 +0530",
"msg_from": "mahendrakar s <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increase limit on max length of the password( pg versions < 14)"
},
{
"msg_contents": "> On 18 Jul 2023, at 11:30, mahendrakar s <[email protected]> wrote:\n\n> So we have two options:\n> 1. Backport patch[1] to 11,12,13\n> 2. Change ONLY the limit on the max length of the password(my patch attached).\n\nWe typically only backpatch bugfixes and not functional changes, and this seems\nto fall in the latter category.\n\nAs the size of the JWT depends on the number of claims in it, are you able to\nreduce the number of claims to stay under the limit as a workaround?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 11:40:31 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increase limit on max length of the password( pg versions < 14)"
},
{
"msg_contents": "Access token length with bare minimal claims is more than 1000 in this case.\nWorkarounds are not possible in production.\n\nOn Tue, 18 Jul 2023 at 15:10, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 18 Jul 2023, at 11:30, mahendrakar s <[email protected]> wrote:\n>\n> > So we have two options:\n> > 1. Backport patch[1] to 11,12,13\n> > 2. Change ONLY the limit on the max length of the password(my patch attached).\n>\n> We typically only backpatch bugfixes and not functional changes, and this seems\n> to fall in the latter category.\n>\n> As the size of the JWT depends on the number of claims in it, are you able to\n> reduce the number of claims to stay under the limit as a workaround?\n>\n> --\n> Daniel Gustafsson\n>\n\n\n",
"msg_date": "Tue, 18 Jul 2023 16:53:00 +0530",
"msg_from": "mahendrakar s <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Increase limit on max length of the password( pg versions < 14)"
},
{
"msg_contents": "On 7/18/23 11:30, mahendrakar s wrote:\n> Hi hackers,\n> \n> We have encountered an issue (invalid message length) when the\n> password length is > 1000 in pg 11,12,13 versions. This is due to the\n> limit(1000) on the max length of the password. In this case the\n> password is an access token(JWT) which can have varied lengths >\n> 1000. I see that this is already handled for GSS and SSPI\n> authentication tokens where the maximum accepted size is 65535.\n> \n> This is not the case with pg versions >=14 as the limit on max length\n> is 65535(this change was added as part of sanity checks[1]).\n> \n> So we have two options:\n> 1. Backport patch[1] to 11,12,13\n> 2. Change ONLY the limit on the max length of the password(my patch attached).\n> \n> Please let me know your thoughts.\n\nThe third option is to upgrade.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 18 Jul 2023 14:12:45 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increase limit on max length of the password( pg versions < 14)"
},
{
"msg_contents": "Vik Fearing <[email protected]> writes:\n> On 7/18/23 11:30, mahendrakar s wrote:\n>> We have encountered an issue (invalid message length) when the\n>> password length is > 1000 in pg 11,12,13 versions.\n\n> The third option is to upgrade.\n\nYeah. I don't see any good reason to consider this behavior change as\nsomething other than a new feature. Also, the proposed patch is\neffectively cherry-picking one single line of the combined effect of\ntwo rather large patches (67a472d71 and 9626325da). I'm unconvinced\nthat it does very much of use without the rest of those patches; but\nwe are most certainly not back-patching 67a472d71.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jul 2023 10:50:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increase limit on max length of the password( pg versions < 14)"
}
] |
[
{
"msg_contents": "Looking at the upgrade question in [0] made me realize that we discard\npotentially useful information for troubleshooting. When we check if the\ncluster is properly shut down we might as well include the status from\npg_controldata in the errormessage as per the trivial (but yet untested)\nproposed diff.\n\nIs there a reason not to be verbose here as users might copy/paste this output\nwhen asking for help?\n\n--\nDaniel Gustafsson\n\n[0] CACoPQdbQTysF=EKckyFNGTdpOdXXMEsf_2ACno+bcNqQCB5raA@mail.gmail.com",
"msg_date": "Tue, 18 Jul 2023 16:59:32 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Giving more detail in pg_upgrade errormessage"
},
{
"msg_contents": "Hi,\n\n> Is there a reason not to be verbose here as users might copy/paste this output\n> when asking for help?\n\nSeems better than nothing.\n\n> [0] CACoPQdbQTysF=EKckyFNGTdpOdXXMEsf_2ACno+bcNqQCB5raA@mail.gmail.com\n\n\nFull link for convenience.\n\n[0]https://www.postgresql.org/message-id/CACoPQdbQTysF=EKckyFNGTdpOdXXMEsf_2ACno+bcNqQCB5raA@mail.gmail.com\n\nZhang Mingli\nhttps://www.hashdata.xyz\n\n\nHi,\nIs there a reason not to be verbose here as users might copy/paste this outputwhen asking for help?Seems better than nothing.[0] CACoPQdbQTysF=EKckyFNGTdpOdXXMEsf_2ACno+bcNqQCB5raA@mail.gmail.comFull link for convenience.[0]https://www.postgresql.org/message-id/CACoPQdbQTysF=EKckyFNGTdpOdXXMEsf_2ACno+bcNqQCB5raA@mail.gmail.comZhang Minglihttps://www.hashdata.xyz",
"msg_date": "Tue, 18 Jul 2023 23:17:24 +0800",
"msg_from": "Mingli Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Giving more detail in pg_upgrade errormessage"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> Looking at the upgrade question in [0] made me realize that we discard\n> potentially useful information for troubleshooting. When we check if the\n> cluster is properly shut down we might as well include the status from\n> pg_controldata in the errormessage as per the trivial (but yet untested)\n> proposed diff.\n\n> Is there a reason not to be verbose here as users might copy/paste this output\n> when asking for help?\n\nAgreed, but I think you need to chomp the string's trailing newline,\nor it'll look ugly. You might as well do that further up and remove\nthe newlines from the comparison strings, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jul 2023 12:04:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Giving more detail in pg_upgrade errormessage"
},
{
"msg_contents": "> On 18 Jul 2023, at 18:04, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> Looking at the upgrade question in [0] made me realize that we discard\n>> potentially useful information for troubleshooting. When we check if the\n>> cluster is properly shut down we might as well include the status from\n>> pg_controldata in the errormessage as per the trivial (but yet untested)\n>> proposed diff.\n> \n>> Is there a reason not to be verbose here as users might copy/paste this output\n>> when asking for help?\n> \n> Agreed, but I think you need to chomp the string's trailing newline,\n> or it'll look ugly. You might as well do that further up and remove\n> the newlines from the comparison strings, too.\n\nYeah, the previous diff was mostly a sketch. The attached strips newline and\nmakes the comparisons a bit neater in the process due to that. Will apply this\ntrivial but seemingly useful change unless objected to.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 19 Jul 2023 22:26:02 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Giving more detail in pg_upgrade errormessage"
}
] |
[
{
"msg_contents": "Dear pgsql:\n When we adding a custom system table and defining an index for it at the\nsame time, the code compilation is possible, but the following errors may\noccur when starting the database:\n\n ----------------------------------------------------------------------------------------------------------\n fixing permissions on existing directory /usr/local/pgsql/data ... ok\n creating subdirectories ... ok\n selecting dynamic shared memory implementation ... posix\n selecting default max_connections ... 100\n selecting default shared_buffers ... 128MB\n selecting default time zone ... Asia/Shanghai\n creating configuration files ... ok\n running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392] FATAL:\n operator class \"key_ops\" does not exist for access method \"btree\"\n 2023-07-19 09:40:47.083 CST [2808392] PANIC: cannot abort transaction\n1, it was already committed\n Aborted (core dumped)\n child process exited with exit code 134\n initdb: removing contents of data directory \"/usr/local/pgsql/data\"\n---------------------------------------------------------------------------------------------------------------\n There are my steps as follows:\n\n 1. add a new header file (pg_bm_client_global_keys_args.h) for the\n custom system table, the file path is :*\n src/include/catalog/pg_bm_client_global_keys_args.h;*\n 2. the modified Makefile is: *src/backend/catalog/Makefile, *add the new\n header file at the label *CATALOG_HEADERS *of the file;\n 3. make; make install\n 4. when run the cmd :* /usr/local/pgsql/bin/initdb -D\n /usr/local/pgsql/data, *the error will occur*.*\n\nThis problem has been bothering me for a long time. Can you help me solve\nit?\n\n\nBest Wishes.\n\nDear pgsql: When we adding a custom system table and defining an index for it at the same time, the code compilation is possible, but the following errors may occur when starting the database: ---------------------------------------------------------------------------------------------------------- fixing permissions on existing directory /usr/local/pgsql/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 100 selecting default shared_buffers ... 128MB selecting default time zone ... Asia/Shanghai creating configuration files ... ok running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392] FATAL: operator class \"key_ops\" does not exist for access method \"btree\" 2023-07-19 09:40:47.083 CST [2808392] PANIC: cannot abort transaction 1, it was already committed Aborted (core dumped) child process exited with exit code 134 initdb: removing contents of data directory \"/usr/local/pgsql/data\"--------------------------------------------------------------------------------------------------------------- There are my steps as follows:add a new header file\n\n(pg_bm_client_global_keys_args.h) for the custom system table, the file path is : src/include/catalog/pg_bm_client_global_keys_args.h;the modified Makefile is: src/backend/catalog/Makefile, add the new header file at the label CATALOG_HEADERS of the file;make; make installwhen run the cmd : /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data, the error will occur. This problem has been bothering me for a long time. Can you help me solve it?Best Wishes.",
"msg_date": "Wed, 19 Jul 2023 10:39:44 +0800",
"msg_from": "mao zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "FATAL: operator class \"xxxx\" does not exist for access method \"btree\""
}
] |
[
{
"msg_contents": "Dear pgsql:\n When we adding a custom system table and defining an index for it at the\nsame time, the code compilation is possible, but the following errors may\noccur when starting the database:\n\n ----------------------------------------------------------------------------------------------------------\n fixing permissions on existing directory /usr/local/pgsql/data ... ok\n creating subdirectories ... ok\n selecting dynamic shared memory implementation ... posix\n selecting default max_connections ... 100\n selecting default shared_buffers ... 128MB\n selecting default time zone ... Asia/Shanghai\n creating configuration files ... ok\n running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392] FATAL:\n operator class \"key_ops\" does not exist for access method \"btree\"\n 2023-07-19 09:40:47.083 CST [2808392] PANIC: cannot abort transaction\n1, it was already committed\n Aborted (core dumped)\n child process exited with exit code 134\n initdb: removing contents of data directory \"/usr/local/pgsql/data\"\n---------------------------------------------------------------------------------------------------------------\n There are my steps as follows:\n\n 1. add a new header file (pg_bm_client_global_keys_args.h) for the\n custom system table, the file path is :\n * src/include/catalog/pg_bm_client_global_keys_args.h;*\n 2. the modified Makefile is: *src/backend/catalog/Makefile, *add the new\n header file at the label *CATALOG_HEADERS *of the file;\n 3. make; make install\n 4. when run the cmd :* /usr/local/pgsql/bin/initdb -D\n /usr/local/pgsql/data, *the error will occur*.*\n\nThis problem has been bothering me for a long time. Can you help me solve\nit?\n\n\nBest Wishes.",
"msg_date": "Wed, 19 Jul 2023 11:01:19 +0800",
"msg_from": "mao zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "FATAL: operator class \"xxxx\" does not exist for access method \"btree\""
},
{
"msg_contents": "mao zhang <[email protected]> writes:\n> running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392] FATAL:\n> operator class \"key_ops\" does not exist for access method \"btree\"\n\nI'm not sure what you find so mysterious about that error message.\n\n>\tOid\t\t\t\t\tglobal_key_id;\n> ...\n> DECLARE_UNIQUE_INDEX(pg_bm_client_global_keys_args_oid_index,8063,BmClientGlobalKeysArgsOidIndexId,on pg_bm_client_global_keys_args using btree(global_key_id key_ops));\n\nIf global_key_id is an OID, why aren't you declaring its index\nwith opclass oid_ops, rather than the quite nonexistent \"key_ops\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jul 2023 23:10:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FATAL: operator class \"xxxx\" does not exist for access method\n \"btree\""
},
{
"msg_contents": "Fixed!\n\nTom Lane <[email protected]> 于2023年7月19日周三 11:10写道:\n\n> mao zhang <[email protected]> writes:\n> > running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392]\n> FATAL:\n> > operator class \"key_ops\" does not exist for access method \"btree\"\n>\n> I'm not sure what you find so mysterious about that error message.\n>\n> > Oid global_key_id;\n> > ...\n> >\n> DECLARE_UNIQUE_INDEX(pg_bm_client_global_keys_args_oid_index,8063,BmClientGlobalKeysArgsOidIndexId,on\n> pg_bm_client_global_keys_args using btree(global_key_id key_ops));\n>\n> If global_key_id is an OID, why aren't you declaring its index\n> with opclass oid_ops, rather than the quite nonexistent \"key_ops\"?\n>\n> regards, tom lane\n>\n\nFixed! Tom Lane <[email protected]> 于2023年7月19日周三 11:10写道:mao zhang <[email protected]> writes:\n> running bootstrap script ... 2023-07-19 09:40:47.083 CST [2808392] FATAL:\n> operator class \"key_ops\" does not exist for access method \"btree\"\n\nI'm not sure what you find so mysterious about that error message.\n\n> Oid global_key_id;\n> ...\n> DECLARE_UNIQUE_INDEX(pg_bm_client_global_keys_args_oid_index,8063,BmClientGlobalKeysArgsOidIndexId,on pg_bm_client_global_keys_args using btree(global_key_id key_ops));\n\nIf global_key_id is an OID, why aren't you declaring its index\nwith opclass oid_ops, rather than the quite nonexistent \"key_ops\"?\n\n regards, tom lane",
"msg_date": "Thu, 20 Jul 2023 12:25:47 +0800",
"msg_from": "mao zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FATAL: operator class \"xxxx\" does not exist for access method\n \"btree\""
}
] |
[
{
"msg_contents": "Hello.\n\nThere's an issue brought up in the -bugs list [1]. Since triggers are\ndeactivated on a subscriber by default, foreign key constraints don't\nfire for replicated changes. The docs state this is done to prevent\nrepetitive data propagation between tables on subscribers. But foreign\nkey triggers don't contribute to this issue.\n\nMy understanding is that constraint triggers, including ones created\nusing the \"CREATE CONSTRAINT TRIGGER\" command, aren't spposed to alter\ndata. If this holds true, I propose that we modify the function\nCreateTrigger() to make constraint triggers enabled on subscribers as\nattached. The function CreateTrigger() can choose the value for the\nparameter \"trigger_fires_when\" of CreateTriggerFireingOn() based on\nwhether constraintOid is valid or not.\n\nWhat do you think about this change?\n\n\nA reproducer follows. The last UPDATE successfully propagates to the\nsubscriber, removing a row that couldn't be locally removed on the\nsubscriber due to the referencial constraint.\n\nPublisher:\nCREATE TABLE t (a int not null, b bool not null);\nALTER TABLE t REPLICA IDENTITY FULL;\nINSERT INTO t VALUES (0, true), (1, true), (2, true);\nCREATE PUBLICATION p1 FOR TABLE t WHERE (b IS true);\n\nSubscriber:\nCREATE TABLE t (a int primary key, b bool);\nCREATE TABLE t1 (a int references t(a) ON UPDATE CASCADE);\nCREATE SUBSCRIPTION s1 CONNECTION 'host=/tmp port=5432' PUBLICATION p1;\nSELECT pg_sleep(0.5);\nINSERT INTO t1 VALUES (2);\n\n== trigger correctly fires\nSubscriber:\nDELETE FROM t WHERE a = 2;\n> ERROR: update or delete on table \"t\" violates foreign key constraint \"t1_a_fkey\" on table \"t1\"\n> DETAIL: Key (a)=(2) is still referenced from table \"t1\".\n\n== trigger doesn't fire\nPublisher:\nUPDATE t SET b = false WHERE a = 2;\n\nSubscriber:\nSELECT * FROM t; -- (2 doesn't exist)\n\n\nregards.\n\n[1]: https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Wed, 19 Jul 2023 15:49:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do we want to enable foreign key constraints on subscriber?"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:21 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> There's an issue brought up in the -bugs list [1]. Since triggers are\n> deactivated on a subscriber by default, foreign key constraints don't\n> fire for replicated changes. The docs state this is done to prevent\n> repetitive data propagation between tables on subscribers. But foreign\n> key triggers don't contribute to this issue.\n>\n\nRight and recent reports indicate that this does cause inconvenience for users.\n\n> My understanding is that constraint triggers, including ones created\n> using the \"CREATE CONSTRAINT TRIGGER\" command, aren't spposed to alter\n> data.\n>\n\nI also think so. You need to update the docs for this.\n\nPeter E., do you remember if there is any specific problem in enabling\nsuch triggers by default for apply side? The only thing that I can\nthink of is that the current behavior keeps the trigger-firing rules\nthe same for all kinds of triggers which has merits but OTOH it causes\ninconvenience to users, especially for foreign-key checks.\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Jul 2023 10:47:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want to enable foreign key constraints on subscriber?"
}
] |
[
{
"msg_contents": "When looking at a patch in the CFBot I realized that the SSL tests generate\nbackend warnings under ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS due to roles\nand databases not following the regression test naming convention. While not\nimpacting the tested functionality, it's pretty silly to have warnings in the\ntest logs which can be avoided since those can throw off users debugging a test\nfailure.\n\nThe attached renames all roles with a regress_ prefix and databases with a\nregression_ prefix to match the convention, and regenerates the certificates to\nmatch. With this I get a clean warning-free testrun. There are no functional\nchanges included, just changed names (and comments to match).\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 19 Jul 2023 13:21:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove backend warnings from SSL tests"
},
{
"msg_contents": "Hi,\n\n> When looking at a patch in the CFBot I realized that the SSL tests generate\n> backend warnings under ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\nGood catch. I can confirm that the patch corrects the named WARNINGs\nappearing with:\n\nCPPFLAGS=\"-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\"\n\nThere are plenty of similar warnings left however.\n\nBefore:\n\n```\n$ grep -r WARNING ./build/ 2>/dev/null | grep 'regression test cases\nshould have names' | wc -l\n463\n```\n\nAfter:\n\n```\n$ grep -r WARNING ./build/ 2>/dev/null | grep 'regression test cases\nshould have names' | wc -l\n403\n```\n\nMaybe we should address them too. In order to prevent this from\nhappening in the future perhaps we should start throwing ERRORs\ninstead of a WARNINGs and make sure this is tested by cfbot.\n\nAlternatively we could get rid of\nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS entirely since its practical\nvalue seems to be debatable.\n\nThe patch was added to the nearest commitfest [1].\n\nThoughts?\n\n[1]: https://commitfest.postgresql.org/44/4451/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 19 Jul 2023 15:53:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove backend warnings from SSL tests"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> When looking at a patch in the CFBot I realized that the SSL tests generate\n>> backend warnings under ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n\n> Good catch. I can confirm that the patch corrects the named WARNINGs\n> appearing with:\n> CPPFLAGS=\"-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\"\n> There are plenty of similar warnings left however.\n\nYeah. We have not worried about making TAP tests clean under this\nrestriction, because the point of it is to limit the hazards of\nrunning \"make installcheck\" against a cluster containing useful data.\nTAP tests always use one-off test clusters, so there is no hazard\nto guard against.\n\nIf we wanted to extend the rule to TAP tests as well, I think we'd\nhave to upgrade the WARNING to an ERROR, because otherwise we'll\nnever find all the violations. Not clear to me that it's worth\nthe trouble, though. And it's definitely not worth the trouble to\nfix only one TAP suite.\n\n> Alternatively we could get rid of\n> ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS entirely since its practical\n> value seems to be debatable.\n\nStrong -1 on that, for the reason given above.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jul 2023 09:44:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove backend warnings from SSL tests"
},
{
"msg_contents": "I wrote:\n> Aleksander Alekseev <[email protected]> writes:\n>> Alternatively we could get rid of\n>> ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS entirely since its practical\n>> value seems to be debatable.\n\n> Strong -1 on that, for the reason given above.\n\nPerhaps an alternative could be to expend some more sweat on the\nmechanism, so that it's actually enforced (with ERROR) during\n\"make installcheck\", but not in \"make check\" or TAP tests?\nI'm not sure how to make that work exactly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jul 2023 10:22:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove backend warnings from SSL tests"
}
] |
[
{
"msg_contents": "Hello postgres hackers,\n Recently I encountered an issue: pg_rewind fails when dealing with in-place tablespace. The problem seems to be that pg_rewind is treating in-place tablespace as symbolic link, while in fact it should be treated as directory.\n Here is the output of pg_rewind:\npg_rewind: error: file \"pg_tblspc/16385\" is of different type in source and target\n To help reproduce the failure, I have attached a tap test. And I am pleased to say that I have also identified a solution for this problem, which I have included in the patch.\n Thank you for your attention to this matter.\nBest regards,\nRui Zhao",
"msg_date": "Wed, 19 Jul 2023 21:31:35 +0800",
"msg_from": "\"=?UTF-8?B?6LW16ZSQKOaDnOWFgyk=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?cGdfcmV3aW5kIGZhaWxzIHdpdGggaW4tcGxhY2UgdGFibGVzcGFjZQ==?="
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 09:31:35PM +0800, 赵锐(惜元) wrote:\n> Recently I encountered an issue: pg_rewind fails when dealing with\n> in-place tablespace. The problem seems to be that pg_rewind is\n> treating in-place tablespace as symbolic link, while in fact it\n> should be treated as directory. \n> Here is the output of pg_rewind:\n> pg_rewind: error: file \"pg_tblspc/16385\" is of different type in\n> source and target \n> To help reproduce the failure, I have attached a tap test. And I am\n> pleased to say that I have also identified a solution for this\n> problem, which I have included in the patch. \n> Thank you for your attention to this matter.\n\nIssue reproduced here, and agreed that we'd better do something about\nthat. I am not sure if your patch is right for the job though, but\nI'll try to study that a bit more.\n--\nMichael",
"msg_date": "Tue, 25 Jul 2023 16:36:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails with in-place tablespace"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 04:36:42PM +0900, Michael Paquier wrote:\n> On Wed, Jul 19, 2023 at 09:31:35PM +0800, 赵锐(惜元) wrote:\n>> To help reproduce the failure, I have attached a tap test. And I am\n>> pleased to say that I have also identified a solution for this\n>> problem, which I have included in the patch. \n>> Thank you for your attention to this matter.\n> \n> Issue reproduced here, and agreed that we'd better do something about\n> that. I am not sure if your patch is right for the job though, but\n> I'll try to study that a bit more.\n\nIt took me some time to remember that for the case of a local source\nwe'd finish by using recurse_dir() and consider the in-place\ntablespace as a regular directory, so a fix located in\nlibpq_traverse_files() sounds good to me.\n\n+ if (strncmp(link_target, \"pg_tblspc/\", strlen(\"pg_tblspc/\")) == 0)\n+ type = FILE_TYPE_DIRECTORY;\n+ else\n+ type = FILE_TYPE_SYMLINK;\n\nHowever this is not consistent with the other places where we detect\nif an in-place tablespace is used, like pg_basebackup.c, where we rely\non the fact that the tablespace path is a relative path, using\nis_absolute_path() to make the difference between a normal and\nin-place tablespace. I would choose consistency and do the same here,\nchecking if we have an absolute or relative path, depending on the\nresult of pg_tablespace_location().\n\nTesting only for the creation of the tablespace is fine for the sake\nof the report, but I would slightly more here and create a table on\nthis tablespace with some data, and a check_query() once pg_rewind is\ndone.\n\nI am finishing with the attached. Thoughts?\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 16:54:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails with in-place tablespace"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 04:54:56PM +0900, Michael Paquier wrote:\n> I am finishing with the attached. Thoughts?\n\nApplied this one as bf22792 on HEAD, without a backpatch as in-place\ntablespaces are around for developers. If there are opinions in favor\nof a backpatch, feel free of course.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 07:48:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails with in-place tablespace"
},
{
"msg_contents": "Sorry for the delay in responding to this matter as I have been waiting for another similar subject to approved by a moderator.\nUpon review, I am satisfied with the proposed solution and believe that checking absolute path is better than hard coding with \"pg_tblspc/\". I think we have successfully resolved this issue in the pg_rewind case.\nHowever, I would like to bring your attention to another issue: pg_upgrade fails with in-place tablespace. Another issue is still waiting for approved. I have tested all the tools in src/bin with in-place tablespace, and I believe this is the final issue.\nThank you for your understanding and assistance.\nBest regard,\nRui Zhao\n------------------------------------------------------------------\n发件人:Michael Paquier <[email protected]>\n发送时间:2023年7月31日(星期一) 06:49\n收件人:赵锐(惜元) <[email protected]>\n抄 送:pgsql-hackers <[email protected]>; Thomas Munro <[email protected]>\n主 题:Re: pg_rewind fails with in-place tablespace\nOn Fri, Jul 28, 2023 at 04:54:56PM +0900, Michael Paquier wrote:\n> I am finishing with the attached. Thoughts?\nApplied this one as bf22792 on HEAD, without a backpatch as in-place\ntablespaces are around for developers. If there are opinions in favor\nof a backpatch, feel free of course.\n--\nMichael\n\nSorry for the delay in responding to this matter as I have been waiting for another similar subject to approved by a moderator.Upon review, I am satisfied with the proposed solution and believe that checking absolute path is better than hard coding with \"pg_tblspc/\". I think we have successfully resolved this issue in the pg_rewind case.However, I would like to bring your attention to another issue: pg_upgrade fails with in-place tablespace. Another issue is still waiting for approved. I have tested all the tools in src/bin with in-place tablespace, and I believe this is the final issue.Thank you for your understanding and assistance.Best regard,Rui Zhao------------------------------------------------------------------发件人:Michael Paquier <[email protected]>发送时间:2023年7月31日(星期一) 06:49收件人:赵锐(惜元) <[email protected]>抄 送:pgsql-hackers <[email protected]>; Thomas Munro <[email protected]>主 题:Re: pg_rewind fails with in-place tablespaceOn Fri, Jul 28, 2023 at 04:54:56PM +0900, Michael Paquier wrote:> I am finishing with the attached. Thoughts?Applied this one as bf22792 on HEAD, without a backpatch as in-placetablespaces are around for developers. If there are opinions in favorof a backpatch, feel free of course.--Michael",
"msg_date": "Mon, 31 Jul 2023 10:07:44 +0800",
"msg_from": "\"Rui Zhao\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yacGdfcmV3aW5kIGZhaWxzIHdpdGggaW4tcGxhY2UgdGFibGVzcGFjZQ==?="
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 10:07:44AM +0800, Rui Zhao wrote:\n> However, I would like to bring your attention to another issue:\n> pg_upgrade fails with in-place tablespace. Another issue is still\n> waiting for approved. I have tested all the tools in src/bin with\n> in-place tablespace, and I believe this is the final issue. \n\nNo problem. Please feel free to start a new thread about that, I'm\nokay to look at what you would like to propose. Adding a test in\n002_pg_upgrade.pl where the pg_upgrade runs happen would be a good\nthing to have, I guess.\n--\nMichael",
"msg_date": "Mon, 31 Jul 2023 11:14:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yacGdfcmV3aW4=?= =?utf-8?Q?d?= fails with\n in-place tablespace"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile I'm working on the thread[1], I found that the function of\nworker_spi module fails if 'shared_preload_libraries' doesn't have\nworker_spi.\n\nThe reason is that the database name is NULL because the database name\nis initialized only when process_shared_preload_libraries_in_progress\nis true.\n\n```\npsql=# SELECT worker_spi_launch(1) ;\n2023-07-20 11:00:56.491 JST [1179891] LOG: worker_spi worker 1 \ninitialized with schema1.counted\n2023-07-20 11:00:56.491 JST [1179891] FATAL: cannot read pg_class \nwithout having selected a database at character 22\n2023-07-20 11:00:56.491 JST [1179891] QUERY: select count(*) from \npg_namespace where nspname = 'schema1'\n2023-07-20 11:00:56.491 JST [1179891] STATEMENT: select count(*) from \npg_namespace where nspname = 'schema1'\n2023-07-20 11:00:56.492 JST [1179095] LOG: background worker \n\"worker_spi\" (PID 1179891) exited with exit code 1\n```\n\nIn my understanding, the restriction is not required. So, I think it's\nbetter to change the behavior.\n(v1-0001-Support-worker_spi-to-execute-the-function-dynamical.patch)\n\nWhat do you think?\n\n[1] Support to define custom wait events for extensions\nhttps://www.postgresql.org/message-id/flat/b9f5411acda0cf15c8fbb767702ff43e%40oss.nttdata.com\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 20 Jul 2023 11:15:51 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 11:15:51AM +0900, Masahiro Ikeda wrote:\n> While I'm working on the thread[1], I found that the function of\n> worker_spi module fails if 'shared_preload_libraries' doesn't have\n> worker_spi.\n\nI guess that you were patching worker_spi to register dynamically a\nwait event and embed that in a TAP test or similar without loading it\nin shared_preload_libraries? FWIW, you could use a trick like what I\nam attaching here to load a wait event dynamically with the custom\nwait event API. You would need to make worker_spi_init_shmem() a bit\nmore aggressive with an extra hook to reserve a shmem area size, but\nthat's enough to show the custom wait event in the same backend as the\none that launches a worker_spi dynamically, while demonstrating how\nthe API can be used in this case.\n\n> In my understanding, the restriction is not required. So, I think it's\n> better to change the behavior.\n> (v1-0001-Support-worker_spi-to-execute-the-function-dynamical.patch)\n> \n> What do you think?\n\n+1. I'm OK to lift this restriction with a SIGHUP GUC for the\ndatabase name and that's not a pattern to encourage in a template\nmodule. Will do so, if there are no objections.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 12:55:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 9:25 AM Michael Paquier <[email protected]> wrote:\n>\n> > In my understanding, the restriction is not required. So, I think it's\n> > better to change the behavior.\n> > (v1-0001-Support-worker_spi-to-execute-the-function-dynamical.patch)\n> >\n> > What do you think?\n>\n> +1. I'm OK to lift this restriction with a SIGHUP GUC for the\n> database name and that's not a pattern to encourage in a template\n> module. Will do so, if there are no objections.\n\n+1. However, a comment above helps one to understand why some GUCs are\ndefined before if (!process_shared_preload_libraries_in_progress). As\nthis is an example extension, it will help understand the reasoning\nbetter. I know we will it in the commit message, but a direct comment\nhelps:\n\n /*\n * Note that this GUC is defined irrespective of worker_spi shared library\n * presence in shared_preload_libraries. It's possible to create the\n * worker_spi extension and use functions without it being specified in\n * shared_preload_libraries. If we return from here without defining this\n * GUC, the dynamic workers launched by worker_spi_launch() will keep\n * crashing and restarting.\n */\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 09:43:37 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 09:43:37AM +0530, Bharath Rupireddy wrote:\n> +1. However, a comment above helps one to understand why some GUCs are\n> defined before if (!process_shared_preload_libraries_in_progress). As\n> this is an example extension, it will help understand the reasoning\n> better. I know we will it in the commit message, but a direct comment\n> helps:\n> \n> /*\n> * Note that this GUC is defined irrespective of worker_spi shared library\n> * presence in shared_preload_libraries. It's possible to create the\n> * worker_spi extension and use functions without it being specified in\n> * shared_preload_libraries. If we return from here without defining this\n> * GUC, the dynamic workers launched by worker_spi_launch() will keep\n> * crashing and restarting.\n> */\n\nWFM to be more talkative here and document things, but I don't think\nthat's it. How about a simple \"These GUCs are defined even if this\nlibrary is not loaded with shared_preload_libraries, for\nworker_spi_launch().\"\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 13:39:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 10:09 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jul 20, 2023 at 09:43:37AM +0530, Bharath Rupireddy wrote:\n> > +1. However, a comment above helps one to understand why some GUCs are\n> > defined before if (!process_shared_preload_libraries_in_progress). As\n> > this is an example extension, it will help understand the reasoning\n> > better. I know we will it in the commit message, but a direct comment\n> > helps:\n> >\n> > /*\n> > * Note that this GUC is defined irrespective of worker_spi shared library\n> > * presence in shared_preload_libraries. It's possible to create the\n> > * worker_spi extension and use functions without it being specified in\n> > * shared_preload_libraries. If we return from here without defining this\n> > * GUC, the dynamic workers launched by worker_spi_launch() will keep\n> > * crashing and restarting.\n> > */\n>\n> WFM to be more talkative here and document things, but I don't think\n> that's it. How about a simple \"These GUCs are defined even if this\n> library is not loaded with shared_preload_libraries, for\n> worker_spi_launch().\"\n\nLGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 10:20:43 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On 2023-07-20 12:55, Michael Paquier wrote:\n> On Thu, Jul 20, 2023 at 11:15:51AM +0900, Masahiro Ikeda wrote:\n>> While I'm working on the thread[1], I found that the function of\n>> worker_spi module fails if 'shared_preload_libraries' doesn't have\n>> worker_spi.\n> \n> I guess that you were patching worker_spi to register dynamically a\n> wait event and embed that in a TAP test or similar without loading it\n> in shared_preload_libraries? FWIW, you could use a trick like what I\n> am attaching here to load a wait event dynamically with the custom\n> wait event API. You would need to make worker_spi_init_shmem() a bit\n> more aggressive with an extra hook to reserve a shmem area size, but\n> that's enough to show the custom wait event in the same backend as the\n> one that launches a worker_spi dynamically, while demonstrating how\n> the API can be used in this case.\n\nYes, you're right. When I tried using worker_spi to test wait event,\nI found the behavior. And thanks a lot for your patch. I wasn't aware\nof the way. I'll merge your patch to the tests for wait events.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 20 Jul 2023 17:54:55 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-20 13:50, Bharath Rupireddy wrote:\n> On Thu, Jul 20, 2023 at 10:09 AM Michael Paquier <[email protected]> \n> wrote:\n>> \n>> On Thu, Jul 20, 2023 at 09:43:37AM +0530, Bharath Rupireddy wrote:\n>> > +1. However, a comment above helps one to understand why some GUCs are\n>> > defined before if (!process_shared_preload_libraries_in_progress). As\n>> > this is an example extension, it will help understand the reasoning\n>> > better. I know we will it in the commit message, but a direct comment\n>> > helps:\n>> >\n>> > /*\n>> > * Note that this GUC is defined irrespective of worker_spi shared library\n>> > * presence in shared_preload_libraries. It's possible to create the\n>> > * worker_spi extension and use functions without it being specified in\n>> > * shared_preload_libraries. If we return from here without defining this\n>> > * GUC, the dynamic workers launched by worker_spi_launch() will keep\n>> > * crashing and restarting.\n>> > */\n>> \n>> WFM to be more talkative here and document things, but I don't think\n>> that's it. How about a simple \"These GUCs are defined even if this\n>> library is not loaded with shared_preload_libraries, for\n>> worker_spi_launch().\"\n> \n> LGTM.\n\nThanks for discussing about the patch. I updated the patch from your \ncomments\n* v2-0001-Support-worker_spi-to-execute-the-function-dynamical.patch\n\nI found another thing to be changed better. Though the tests was assumed\n\"shared_preload_libraries = worker_spi\", the background workers failed \nto\nbe launched in initialized phase because the database is not created \nyet.\n\n```\n# make check # in src/test/modules/worker_spi\n# cat log/postmaster.log # in src/test/modules/worker_spi/\n2023-07-20 17:58:47.958 JST worker_spi[853620] FATAL: database \n\"contrib_regression\" does not exist\n2023-07-20 17:58:47.958 JST worker_spi[853621] FATAL: database \n\"contrib_regression\" does not exist\n2023-07-20 17:58:47.959 JST postmaster[853612] LOG: background worker \n\"worker_spi\" (PID 853620) exited with exit code 1\n2023-07-20 17:58:47.959 JST postmaster[853612] LOG: background worker \n\"worker_spi\" (PID 853621) exited with exit code 1\n```\n\nIt's better to remove \"shared_preload_libraries = worker_spi\" from the\ntest configuration. I misunderstood that two background workers would\nbe launched and waiting at the start of the test.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Thu, 20 Jul 2023 18:08:04 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 05:54:55PM +0900, Masahiro Ikeda wrote:\n> Yes, you're right. When I tried using worker_spi to test wait event,\n> I found the behavior. And thanks a lot for your patch. I wasn't aware\n> of the way. I'll merge your patch to the tests for wait events.\n\nBe careful when using that. I have not spent more than a few minutes\nto show my point, but what I sent lacks a shmem_request_hook in\n_PG_init(), for example, to request an amount of shared memory equal\nto the size of the state structure.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 18:29:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 2:59 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jul 20, 2023 at 05:54:55PM +0900, Masahiro Ikeda wrote:\n> > Yes, you're right. When I tried using worker_spi to test wait event,\n> > I found the behavior. And thanks a lot for your patch. I wasn't aware\n> > of the way. I'll merge your patch to the tests for wait events.\n>\n> Be careful when using that. I have not spent more than a few minutes\n> to show my point, but what I sent lacks a shmem_request_hook in\n> _PG_init(), for example, to request an amount of shared memory equal\n> to the size of the state structure.\n\nI think the preferred way to grab a chunk of shared memory for an\nexternal module is by using shmem_request_hook and shmem_startup_hook.\nWait events shared memory too can use them.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 15:09:04 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 2:38 PM Masahiro Ikeda <[email protected]> wrote:\n>\n> Thanks for discussing about the patch. I updated the patch from your\n> comments\n> * v2-0001-Support-worker_spi-to-execute-the-function-dynamical.patch\n>\n> I found another thing to be changed better. Though the tests was assumed\n> \"shared_preload_libraries = worker_spi\", the background workers failed\n> to\n> be launched in initialized phase because the database is not created\n> yet.\n>\n> ```\n> # make check # in src/test/modules/worker_spi\n> # cat log/postmaster.log # in src/test/modules/worker_spi/\n> 2023-07-20 17:58:47.958 JST worker_spi[853620] FATAL: database\n> \"contrib_regression\" does not exist\n> 2023-07-20 17:58:47.958 JST worker_spi[853621] FATAL: database\n> \"contrib_regression\" does not exist\n> 2023-07-20 17:58:47.959 JST postmaster[853612] LOG: background worker\n> \"worker_spi\" (PID 853620) exited with exit code 1\n> 2023-07-20 17:58:47.959 JST postmaster[853612] LOG: background worker\n> \"worker_spi\" (PID 853621) exited with exit code 1\n> ```\n>\n> It's better to remove \"shared_preload_libraries = worker_spi\" from the\n> test configuration. I misunderstood that two background workers would\n> be launched and waiting at the start of the test.\n\nI don't think that change is correct. The worker_spi essentially shows\nhow to start bg workers with RegisterBackgroundWorker and dynamic bg\nworkers with RegisterDynamicBackgroundWorker. If\nshared_preload_libraries = worker_spi not specified in there, you will\nmiss to start RegisterBackgroundWorkers. Is giving an initidb time\ndatabase name to worker_spi.database work there? If the database for\nbg workers doesn't exist, changing bgw_restart_time from\nBGW_NEVER_RESTART to say 1 will help to see bg workers coming up\neventually.\n\nI think it's worth adding test cases for the expected number of bg\nworkers (after creating worker_spi extension) and dynamic bg workers\n(after calling worker_spi_launch()). Also, to distinguish bg workers\nand dynamic bg workers, you can change\nbgw_type in worker_spi_launch to \"worker_spi dynamic worker\".\n\n- /* get the configuration */\n+ /* Get the configuration */\n\n- /* set up common data for all our workers */\n+ /* Set up common data for all our workers */\n\nThese unrelated changes better be there as-is. Because, the postgres\ncode has both commenting styles /* Get .... */ or /* get ....*/, IOW,\nsingle line comments starting with both uppercase and lowercase.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 15:44:12 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 03:44:12PM +0530, Bharath Rupireddy wrote:\n> I don't think that change is correct. The worker_spi essentially shows\n> how to start bg workers with RegisterBackgroundWorker and dynamic bg\n> workers with RegisterDynamicBackgroundWorker. If\n> shared_preload_libraries = worker_spi not specified in there, you will\n> miss to start RegisterBackgroundWorkers. Is giving an initidb time\n> database name to worker_spi.database work there? If the database for\n> bg workers doesn't exist, changing bgw_restart_time from\n> BGW_NEVER_RESTART to say 1 will help to see bg workers coming up\n> eventually.\n\nYeah, it does not move the needle by much. I think that we are\nlooking at switching this module to use a TAP test in the long term,\ninstead, where it would be possible to test the scenarios we want to\nlook at *with* and *without* shared_preload_libraries especially with\nthe custom wait events for extensions in mind if we add our tests in\nthis module.\n\nIt does not change the fact that Ikeda-san is right about the launch\nof dynamic workers with this module being broken, so I have applied v1\nwith the comment I have suggested. This will ease a bit the\nimplementation of any follow-up test scenarios, while avoiding an\nincorrect pattern in this template module. \n--\nMichael",
"msg_date": "Fri, 21 Jul 2023 12:08:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 8:38 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jul 20, 2023 at 03:44:12PM +0530, Bharath Rupireddy wrote:\n> > I don't think that change is correct. The worker_spi essentially shows\n> > how to start bg workers with RegisterBackgroundWorker and dynamic bg\n> > workers with RegisterDynamicBackgroundWorker. If\n> > shared_preload_libraries = worker_spi not specified in there, you will\n> > miss to start RegisterBackgroundWorkers. Is giving an initidb time\n> > database name to worker_spi.database work there? If the database for\n> > bg workers doesn't exist, changing bgw_restart_time from\n> > BGW_NEVER_RESTART to say 1 will help to see bg workers coming up\n> > eventually.\n>\n> Yeah, it does not move the needle by much. I think that we are\n> looking at switching this module to use a TAP test in the long term,\n> instead, where it would be possible to test the scenarios we want to\n> look at *with* and *without* shared_preload_libraries especially with\n> the custom wait events for extensions in mind if we add our tests in\n> this module.\n\nOkay. Here's a quick patch for adding TAP tests to the worker_spi\nmodule. We can change it to taste.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Jul 2023 11:24:08 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 11:24:08AM +0530, Bharath Rupireddy wrote:\n> Okay. Here's a quick patch for adding TAP tests to the worker_spi\n> module. We can change it to taste.\n\nWhat do you think if we removed completely the sql/ test, moving it to\nTAP so as we have only one cluster set up when running a make check?\nworker_spi.sql only does two waits (one for the initialization and one\nto check that the tuple has been processed), so these could be\nreplaced by some poll_query_until()?\n\nAs we have a dynamic.conf, installcheck is not supported so we don't\nuse anything with this switch. Besides, updating\nshared_preload_libraries and restarting the node in TAP is cheaper\nthan a second initdb.\n\n- snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi worker %d\", i);\n- snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi\");\n+ snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi static worker %d\", i);\n+ snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi static worker\");\n[..]\n- snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi worker %d\", i);\n- snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi\");\n+ snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi dynamic worker %d\", i);\n+ snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi dynamic worker\");\n\nGood idea to split that.\n--\nMichael",
"msg_date": "Fri, 21 Jul 2023 15:24:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 11:54 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jul 21, 2023 at 11:24:08AM +0530, Bharath Rupireddy wrote:\n> > Okay. Here's a quick patch for adding TAP tests to the worker_spi\n> > module. We can change it to taste.\n>\n> What do you think if we removed completely the sql/ test, moving it to\n> TAP so as we have only one cluster set up when running a make check?\n> worker_spi.sql only does two waits (one for the initialization and one\n> to check that the tuple has been processed), so these could be\n> replaced by some poll_query_until()?\n\nI think we can keep SQL tests around as it will help demonstrate\nsomeone quickly write their own SQL tests.\n\n> As we have a dynamic.conf, installcheck is not supported so we don't\n> use anything with this switch. Besides, updating\n> shared_preload_libraries and restarting the node in TAP is cheaper\n> than a second initdb.\n\nIn SQL tests, I ensured worker_spi doesn't start static bg workers by\nsetting worker_spi.total_workers = 0. Again, all of this is not\nnecessary, but it will be a very good example for someone writing\nextensions and play around with custom config files, SQL and TAP tests\netc.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Jul 2023 12:21:57 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-20 18:39, Bharath Rupireddy wrote:\n> On Thu, Jul 20, 2023 at 2:59 PM Michael Paquier <[email protected]> \n> wrote:\n>> \n>> On Thu, Jul 20, 2023 at 05:54:55PM +0900, Masahiro Ikeda wrote:\n>> > Yes, you're right. When I tried using worker_spi to test wait event,\n>> > I found the behavior. And thanks a lot for your patch. I wasn't aware\n>> > of the way. I'll merge your patch to the tests for wait events.\n>> \n>> Be careful when using that. I have not spent more than a few minutes\n>> to show my point, but what I sent lacks a shmem_request_hook in\n>> _PG_init(), for example, to request an amount of shared memory equal\n>> to the size of the state structure.\n> \n> I think the preferred way to grab a chunk of shared memory for an\n> external module is by using shmem_request_hook and shmem_startup_hook.\n> Wait events shared memory too can use them.\n\nOK, I'll add the hooks in worker_spi for the test of wait events.\n\n\nOn 2023-07-21 12:08, Michael Paquier wrote:\n> On Thu, Jul 20, 2023 at 03:44:12PM +0530, Bharath Rupireddy wrote:\n>> I don't think that change is correct. The worker_spi essentially shows\n>> how to start bg workers with RegisterBackgroundWorker and dynamic bg\n>> workers with RegisterDynamicBackgroundWorker. If\n>> shared_preload_libraries = worker_spi not specified in there, you will\n>> miss to start RegisterBackgroundWorkers. Is giving an initidb time\n>> database name to worker_spi.database work there? If the database for\n>> bg workers doesn't exist, changing bgw_restart_time from\n>> BGW_NEVER_RESTART to say 1 will help to see bg workers coming up\n>> eventually.\n> \n> Yeah, it does not move the needle by much. I think that we are\n> looking at switching this module to use a TAP test in the long term,\n> instead, where it would be possible to test the scenarios we want to\n> look at *with* and *without* shared_preload_libraries especially with\n> the custom wait events for extensions in mind if we add our tests in\n> this module.\n> \n> It does not change the fact that Ikeda-san is right about the launch\n> of dynamic workers with this module being broken, so I have applied v1\n> with the comment I have suggested. This will ease a bit the\n> implementation of any follow-up test scenarios, while avoiding an\n> incorrect pattern in this template module.\n\nThanks for the commits. As Bharath-san said, I forgot that worker_spi\nhas an aspect of demonstration and I agree to introduce two types of\ntests with and without \"shared_preload_libraries = worker_spi\".\n\n\n\nOn 2023-07-21 15:51, Bharath Rupireddy wrote:\n> On Fri, Jul 21, 2023 at 11:54 AM Michael Paquier <[email protected]> \n> wrote:\n>> \n>> On Fri, Jul 21, 2023 at 11:24:08AM +0530, Bharath Rupireddy wrote:\n>> As we have a dynamic.conf, installcheck is not supported so we don't\n>> use anything with this switch. Besides, updating\n>> shared_preload_libraries and restarting the node in TAP is cheaper\n>> than a second initdb.\n> \n> In SQL tests, I ensured worker_spi doesn't start static bg workers by\n> setting worker_spi.total_workers = 0. Again, all of this is not\n> necessary, but it will be a very good example for someone writing\n> extensions and play around with custom config files, SQL and TAP tests\n> etc.\n\nThanks for making the patch. 
I confirmed it works in my environments.\n\n> - snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi worker %d\", \n> i);\n> - snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi\");\n> + snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi static worker \n> %d\", i);\n> + snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi static \n> worker\");\n> [..]\n> - snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi worker %d\", i);\n> - snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi\");\n> + snprintf(worker.bgw_name, BGW_MAXLEN, \"worker_spi dynamic worker \n> %d\", i);\n> + snprintf(worker.bgw_type, BGW_MAXLEN, \"worker_spi dynamic worker\");\n> \n> Good idea to split that.\n\nI agree. It very useful. I'll refer to its implementation for the wait \nevent tests.\n\nI have some questions about the patch. I'm ok to ignore the following \ncomment since\nyour patch is for PoC.\n\n(1)\n\nDo we need to change the minValue from 1 to 0 to support\nworker_spi.total_workers = 0?\n\n\tDefineCustomIntVariable(\"worker_spi.total_workers\",\n\t\t\t\t\t\t\t\"Number of workers.\",\n\t\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t\t&worker_spi_total_workers,\n\t\t\t\t\t\t\t2,\n\t\t\t\t\t\t\t1,\n\t\t\t\t\t\t\t100,\n\t\t\t\t\t\t\tPGC_POSTMASTER,\n\t\t\t\t\t\t\t0,\n\t\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t\tNULL,\n\t\t\t\t\t\t\tNULL);\n\n(2)\n\nDo we need \"worker_spi.total_workers = 0\" and\n\"shared_preload_libraries = worker_spi\" in dynamic.conf.\n\nCurrently, the static bg workers will not be launched because\n\"shared_preload_libraries = worker_spi\" is removed. So\n\"worker_spi.total_workers = 0\" is meaningless.\n\n(3)\n\nWe need change and remove them.\n\n> # Copyright (c) 2021-2023, PostgreSQL Global Development Group\n> \n> # Test replication statistics data in pg_stat_replication_slots is sane \n> after\n> # drop replication slot and restart.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 21 Jul 2023 19:35:36 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 4:05 PM Masahiro Ikeda <[email protected]> wrote:\n>\n> > In SQL tests, I ensured worker_spi doesn't start static bg workers by\n> > setting worker_spi.total_workers = 0. Again, all of this is not\n> > necessary, but it will be a very good example for someone writing\n> > extensions and play around with custom config files, SQL and TAP tests\n> > etc.\n>\n> Thanks for making the patch. I confirmed it works in my environments.\n\nThanks for verifying.\n\n> I have some questions about the patch.\n>\n> (1)\n>\n> Do we need to change the minValue from 1 to 0 to support\n> worker_spi.total_workers = 0?\n>\n> DefineCustomIntVariable(\"worker_spi.total_workers\",\n> \"Number of workers.\",\n> NULL,\n> &worker_spi_total_workers,\n> 2,\n> 1,\n> 100,\n> PGC_POSTMASTER,\n> 0,\n> NULL,\n> NULL,\n> NULL);\n\nNo, let's keep it that way.\n\n> (2)\n>\n> Do we need \"worker_spi.total_workers = 0\" and\n> \"shared_preload_libraries = worker_spi\" in dynamic.conf.\n>\n> Currently, the static bg workers will not be launched because\n> \"shared_preload_libraries = worker_spi\" is removed. So\n> \"worker_spi.total_workers = 0\" is meaningless.\n\nYou're right. worker_spi.total_workers = 0 in custom.conf has no\neffect. without shared_preload_libraries = worker_spi. Removed that.\n\n> (3)\n>\n> We need change and remove them.\n>\n> > # Copyright (c) 2021-2023, PostgreSQL Global Development Group\n> >\n> > # Test replication statistics data in pg_stat_replication_slots is sane\n> > after\n> > # drop replication slot and restart.\n\nModified.\n\nI'm attaching the v2 patch. Thoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Jul 2023 21:35:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On 2023-07-22 01:05, Bharath Rupireddy wrote:\n> On Fri, Jul 21, 2023 at 4:05 PM Masahiro Ikeda \n> <[email protected]> wrote:\n>> (2)\n>> \n>> Do we need \"worker_spi.total_workers = 0\" and\n>> \"shared_preload_libraries = worker_spi\" in dynamic.conf.\n>> \n>> Currently, the static bg workers will not be launched because\n>> \"shared_preload_libraries = worker_spi\" is removed. So\n>> \"worker_spi.total_workers = 0\" is meaningless.\n> \n> You're right. worker_spi.total_workers = 0 in custom.conf has no\n> effect. without shared_preload_libraries = worker_spi. Removed that.\n\nOK. If so, we need to remove the following comment in Makefile.\n\n> # enable our module in shared_preload_libraries for dynamic bgworkers\n\nI also confirmed that the tap tests work with meson and make.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 24 Jul 2023 10:04:39 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 6:34 AM Masahiro Ikeda <[email protected]> wrote:\n>\n> OK. If so, we need to remove the following comment in Makefile.\n>\n> > # enable our module in shared_preload_libraries for dynamic bgworkers\n\nDone.\n\n> I also confirmed that the tap tests work with meson and make.\n\nThanks for verifying.\n\nI also added a note atop worker_spi.c that the module also\ndemonstrates how to write core (SQL) tests and extended (TAP) tests.\n\nI'm attaching the v3 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Jul 2023 08:31:01 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On 2023-07-24 12:01, Bharath Rupireddy wrote:\n> I'm attaching the v3 patch.\n\nI verified it works and it looks good to me.\nThanks to your work, I will be able to implement tests for\ncustom wait events.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 24 Jul 2023 16:26:45 +0900",
"msg_from": "Masahiro Ikeda <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 08:31:01AM +0530, Bharath Rupireddy wrote:\n> I also added a note atop worker_spi.c that the module also\n> demonstrates how to write core (SQL) tests and extended (TAP) tests.\n\nThe value of the SQL tests comes down to the DO blocks that emulate\nwhat the TAP tests could equally be able to do. While we already have\nsome places that do something similar (slot.sql or postgres_fdw.sql),\nthe SQL tests of worker_spi count for a total of five queries, which\nis not much with one cluster initialized:\n- One pg_reload_conf() to work a loop to happen in the worker.\n- Two sanity checks.\n- Two wait emulations.\n\nAnyway, most people that do serious hacking on this list care about\nthe runtime of the tests all the time, and I am not on board in making\nthings slower for the sake of showing a test example here\nparticularly if there are ways to make them faster (long-term, we\nshould be able to do the init step only once for most cases), and\nbecause we *have to* switch to TAP to have more advanced scenarios for\nthe custom wait events or just dynamic work launches based on what we\nset on shared_preload_libraries. On top of that, we have other\nexamples in the tree that emulate waits for plain SQL tests to satisfy\nassumptions with some follow-up query.\n\nSo, I don't really agree with the value gained here compared to the\nexecution cost of initializing two clusters for this module. I have\ntaken the time to check how the runtime changes when switching to TAP\nfor all the scenarios discussed here, and from my laptop, I can see\nthat:\n- HEAD takes 4.4s, for only the sql/ test.\n- Your latest patch is at 5.6s.\n- My version attached to this message is at 3.7s.\n\nIn terms of runtime the benefits are here for me. Note that with the\nfirst part of the test (previously in sql/), we don't lose coverage\nwith the loop of the workers so I agree that only checking that these\nare launched is OK once worker_spi is in shared_preload_libraries.\nHowever, I think that we should make sure that they are connected to\nthe correct database 'mydb'. I have updated the test to do that.\n\nSo, what do you think about the attached?\n--\nMichael",
"msg_date": "Mon, 24 Jul 2023 16:40:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 1:10 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Jul 24, 2023 at 08:31:01AM +0530, Bharath Rupireddy wrote:\n> > I also added a note atop worker_spi.c that the module also\n> > demonstrates how to write core (SQL) tests and extended (TAP) tests.\n>\n> In terms of runtime the benefits are here for me. Note that with the\n> first part of the test (previously in sql/), we don't lose coverage\n> with the loop of the workers so I agree that only checking that these\n> are launched is OK once worker_spi is in shared_preload_libraries.\n> However, I think that we should make sure that they are connected to\n> the correct database 'mydb'. I have updated the test to do that.\n>\n> So, what do you think about the attached?\n\nI disagree with removing SQL tests from the worker_spi module. As said\nupthread, it makes the worker_spi a fully demonstrable\nextension/module - one can just take it, start adding required\nfunctionality and test-cases (both SQL and TAP) for a new module. I\nagree that moving to TAP tests will reduce test run time by 1.9\nseconds, but to me personally this is not an optimization we must be\ndoing at the expense of demonstrability.\n\nHaving said that, others might have a different opinion here.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 24 Jul 2023 13:50:45 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 01:50:45PM +0530, Bharath Rupireddy wrote:\n> I disagree with removing SQL tests from the worker_spi module. As said\n> upthread, it makes the worker_spi a fully demonstrable\n> extension/module - one can just take it, start adding required\n> functionality and test-cases (both SQL and TAP) for a new module.\n\nWhich is basically the same thing with TAP except that these are\ngrouped now? The value of a few raw SQL queries with a\nNO_INSTALLCHECK does not strike me as enough on top of having to\nmaintain two different sets of tests. I'd still choose the cheap and\nextensible path here.\n\n> I agree that moving to TAP tests will reduce test run time by 1.9\n> seconds, but to me personally this is not an optimization we must be\n> doing at the expense of demonstrability.\n\nIn a large parallel run, the difference can be felt.\n--\nMichael",
"msg_date": "Mon, 24 Jul 2023 17:38:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 05:38:45PM +0900, Michael Paquier wrote:\n> Which is basically the same thing with TAP except that these are\n> grouped now? The value of a few raw SQL queries with a\n> NO_INSTALLCHECK does not strike me as enough on top of having to\n> maintain two different sets of tests. I'd still choose the cheap and\n> extensible path here.\n\nI've been sleeping on that a bit more, and I'd still go with the\nrefactoring where we initialize one cluster and have all the tests\ndone by TAP, for the sake of being much cheaper without changing the\ncoverage, while being more extensible when it comes to introduce tests\nfor the follow-up patch on custom wait events.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 09:02:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 09:02:54AM +0900, Michael Paquier wrote:\n> I've been sleeping on that a bit more, and I'd still go with the\n> refactoring where we initialize one cluster and have all the tests\n> done by TAP, for the sake of being much cheaper without changing the\n> coverage, while being more extensible when it comes to introduce tests\n> for the follow-up patch on custom wait events.\n\nFor now, please note that I have applied your idea to add \"dynamic\" to\nthe names of the bgworkers registered on a worker_spi_launch() as this\nis useful on its own. I have given up on the \"static\" part, because\nthat felt unconsistent with the API names, and we don't use this term\nin the docs for bgworkers, additionally.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 12:52:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "Hi,\n\nThe new test fails with my AIO branch occasionally. But I'm fairly certain\nthat's just due to timing differences.\n\nExcerpt from the log:\n\n2023-07-27 21:43:00.385 UTC [42339] LOG: worker_spi worker 3 initialized with schema3.counted\n2023-07-27 21:43:00.399 UTC [42344] 001_worker_spi.pl LOG: statement: SELECT datname, count(datname) FROM pg_stat_activity\n\t WHERE backend_type = 'worker_spi' GROUP BY datname;\n2023-07-27 21:43:00.403 UTC [42340] LOG: worker_spi worker 2 initialized with schema2.counted\n2023-07-27 21:43:00.407 UTC [42341] LOG: worker_spi worker 1 initialized with schema1.counted\n2023-07-27 21:43:00.420 UTC [42346] 001_worker_spi.pl LOG: statement: SELECT worker_spi_launch(1);\n2023-07-27 21:43:00.423 UTC [42347] LOG: worker_spi dynamic worker 1 initialized with schema1.counted\n2023-07-27 21:43:00.432 UTC [42349] 001_worker_spi.pl LOG: statement: SELECT worker_spi_launch(2);\n2023-07-27 21:43:00.437 UTC [42350] LOG: worker_spi dynamic worker 2 initialized with schema2.counted\n2023-07-27 21:43:00.443 UTC [42347] ERROR: duplicate key value violates unique constraint \"pg_namespace_nspname_index\"\n2023-07-27 21:43:00.443 UTC [42347] DETAIL: Key (nspname)=(schema1) already exists.\n2023-07-27 21:43:00.443 UTC [42347] CONTEXT: SQL statement \"CREATE SCHEMA \"schema1\" CREATE TABLE \"counted\" (\t\ttype text CHECK (type IN ('total', 'delta')), \t\tvalue\tinteger)CREATE UNIQUE INDEX \"counted_unique_total\" ON \"counted\" (type) WHERE type = 'total'\"\n\n\nAs written, dynamic and static workers race each other. It doesn't make a lot\nof sense to me to use the same ids for either?\n\nThe attached patch reproduces the problem on master.\n\nNote that without the sleep(3) in the test the workers don't actually finish\nstarting, the test shuts down the cluster before that happens...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 27 Jul 2023 19:23:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Thu, Jul 27, 2023 at 07:23:32PM -0700, Andres Freund wrote:\n> As written, dynamic and static workers race each other. It doesn't make a lot\n> of sense to me to use the same ids for either?\n> \n> The attached patch reproduces the problem on master.\n> \n> Note that without the sleep(3) in the test the workers don't actually finish\n> starting, the test shuts down the cluster before that happens...\n\nSo you have faced a race condition where the commit of the transaction\ndoing the schema creation for the static workers is delayed long\nenough that the dynamic workers don't see it, and bumped on a catalog\nconflict when they try to create the same schemas.\n\nHaving each bgworker on its own schema would be enough to prevent\nconflicts, but I'd like to add a second thing: a check on\npg_stat_activity.wait_event after starting the workers. I have added\nsomething like that in the patch I have posted today for the custom\nwait events at [1] and it enforces the startup sequences of the\nworkers in a stricter way.\n\nDoes the attached take care of your issue?\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 13:45:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 10:15 AM Michael Paquier <[email protected]> wrote:\n>\n> Having each bgworker on its own schema would be enough to prevent\n> conflicts, but I'd like to add a second thing: a check on\n> pg_stat_activity.wait_event after starting the workers. I have added\n> something like that in the patch I have posted today for the custom\n> wait events at [1] and it enforces the startup sequences of the\n> workers in a stricter way.\n>\n> Does the attached take care of your issue?\n\n+# check their existence. Use IDs that do not overlap with the schemas created\n+# by the previous workers.\n\nWhile using different IDs in tests is a simple fix, -1 for it. I'd\nprefer if worker_spi uses different schema prefixes for static and\ndynamic bg workers to avoid conflicts. We can either look at\nMyBgworkerEntry->bgw_type in worker_spi_main and have schema name as\n'{static, dyamic}_worker_schema_%d', id or pass schema name in\nbgw_extra.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 10:47:39 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 10:47:39AM +0530, Bharath Rupireddy wrote:\n> +# check their existence. Use IDs that do not overlap with the schemas created\n> +# by the previous workers.\n> \n> While using different IDs in tests is a simple fix, -1 for it. I'd\n> prefer if worker_spi uses different schema prefixes for static and\n> dynamic bg workers to avoid conflicts. We can either look at\n> MyBgworkerEntry->bgw_type in worker_spi_main and have schema name as\n> '{static, dyamic}_worker_schema_%d', id or pass schema name in\n> bgw_extra.\n\nFor the sake of a test module, I am not really convinced that there is\nany need to go down to such complexity with the names of the schemas\ncreated.\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 16:56:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 1:26 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jul 28, 2023 at 10:47:39AM +0530, Bharath Rupireddy wrote:\n> > +# check their existence. Use IDs that do not overlap with the schemas created\n> > +# by the previous workers.\n> >\n> > While using different IDs in tests is a simple fix, -1 for it. I'd\n> > prefer if worker_spi uses different schema prefixes for static and\n> > dynamic bg workers to avoid conflicts. We can either look at\n> > MyBgworkerEntry->bgw_type in worker_spi_main and have schema name as\n> > '{static, dyamic}_worker_schema_%d', id or pass schema name in\n> > bgw_extra.\n>\n> For the sake of a test module, I am not really convinced that there is\n> any need to go down to such complexity with the names of the schemas\n> created.\n\nI don't think something like [1] is complex. It makes worker_spi\nfoolproof. Rather, the other approach proposed, that is to provide\nnon-conflicting worker IDs to worker_spi_launch in the TAP test file,\nlooks complicated to me. And it's easy for someone to come, add a test\ncase with conflicting IDs input to worker_spi_launch and end up in the\nsame state that we're in now.\n\n[1]\ndiff --git a/src/test/modules/worker_spi/t/001_worker_spi.pl\nb/src/test/modules/worker_spi/t/001_worker_spi.pl\nindex c293871313..700530afc7 100644\n--- a/src/test/modules/worker_spi/t/001_worker_spi.pl\n+++ b/src/test/modules/worker_spi/t/001_worker_spi.pl\n@@ -27,16 +27,16 @@ is($result, 't', \"dynamic bgworker launched\");\n $node->poll_query_until(\n 'postgres',\n qq[SELECT count(*) > 0 FROM information_schema.tables\n- WHERE table_schema = 'schema4' AND table_name = 'counted';]);\n+ WHERE table_schema = 'dynamic_worker_schema4' AND\ntable_name = 'counted';]);\n $node->safe_psql('postgres',\n- \"INSERT INTO schema4.counted VALUES ('total', 0), ('delta', 1);\");\n+ \"INSERT INTO dynamic_worker_schema4.counted VALUES ('total',\n0), ('delta', 1);\");\n # Issue a SIGHUP on the node to force the worker to loop once, accelerating\n # this test.\n $node->reload;\n # Wait until the worker has processed the tuple that has just been inserted.\n $node->poll_query_until('postgres',\n- qq[SELECT count(*) FROM schema4.counted WHERE type = 'delta';], '0');\n-$result = $node->safe_psql('postgres', 'SELECT * FROM schema4.counted;');\n+ qq[SELECT count(*) FROM dynamic_worker_schema4.counted WHERE\ntype = 'delta';], '0');\n+$result = $node->safe_psql('postgres', 'SELECT * FROM\ndynamic_worker_schema4.counted;');\n is($result, qq(total|1), 'dynamic bgworker correctly consumed tuple data');\n\n note \"testing bgworkers loaded with shared_preload_libraries\";\ndiff --git a/src/test/modules/worker_spi/worker_spi.c\nb/src/test/modules/worker_spi/worker_spi.c\nindex 903dcddef9..02b4204aa2 100644\n--- a/src/test/modules/worker_spi/worker_spi.c\n+++ b/src/test/modules/worker_spi/worker_spi.c\n@@ -135,10 +135,19 @@ worker_spi_main(Datum main_arg)\n int index = DatumGetInt32(main_arg);\n worktable *table;\n StringInfoData buf;\n- char name[20];\n+ char name[NAMEDATALEN];\n\n table = palloc(sizeof(worktable));\n- sprintf(name, \"schema%d\", index);\n+\n+ /*\n+ * Use different schema names for static and dynamic bg workers to avoid\n+ * name conflicts.\n+ */\n+ if (strcmp(MyBgworkerEntry->bgw_type, \"worker_spi\") == 0)\n+ sprintf(name, \"worker_schema%d\", index);\n+ else if (strcmp(MyBgworkerEntry->bgw_type, \"worker_spi dynamic\") == 0)\n+ sprintf(name, \"dynamic_worker_schema%d\", index);\n+\n table->schema = pstrdup(name);\n 
table->name = pstrdup(\"counted\");\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 14:11:48 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 02:11:48PM +0530, Bharath Rupireddy wrote:\n> I don't think something like [1] is complex. It makes worker_spi\n> foolproof. Rather, the other approach proposed, that is to provide\n> non-conflicting worker IDs to worker_spi_launch in the TAP test file,\n> looks complicated to me. And it's easy for someone to come, add a test\n> case with conflicting IDs input to worker_spi_launch and end up in the\n> same state that we're in now.\n\nSure, but that's not really something that worries me for a template\nsuch as this one, for the sake of these tests. So I'd leave things to\nbe as they are, slightly simpler. That's a minor point, for sure :)\n--\nMichael",
"msg_date": "Fri, 28 Jul 2023 18:19:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On 2023-Jul-28, Michael Paquier wrote:\n\n> So you have faced a race condition where the commit of the transaction\n> doing the schema creation for the static workers is delayed long\n> enough that the dynamic workers don't see it, and bumped on a catalog\n> conflict when they try to create the same schemas.\n>\n> Having each bgworker on its own schema would be enough to prevent\n> conflicts, but I'd like to add a second thing: a check on\n> pg_stat_activity.wait_event after starting the workers. I have added\n> something like that in the patch I have posted today for the custom\n> wait events at [1] and it enforces the startup sequences of the\n> workers in a stricter way.\n\nHmm, I think having all the workers doing their in the same table is\nbetter -- if nothing else, because it gives us the opportunity to show\nhow to use some other coding technique (but also because we are forced\nto write the SQL code in a way that's correct for potentially multiple\nconcurrent workers, which sounds useful to demonstrate). Can't we\ninstead solve the race condition by having some shared resource that\nblocks the other workers from proceeding until the schema has been\ncreated? Perhaps an LWLock, or a condition variable, or an advisory\nlock.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 28 Jul 2023 12:06:33 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "Hi,\n\nOn 2023-07-28 13:45:29 +0900, Michael Paquier wrote:\n> Having each bgworker on its own schema would be enough to prevent\n> conflicts, but I'd like to add a second thing: a check on\n> pg_stat_activity.wait_event after starting the workers. I have added\n> something like that in the patch I have posted today for the custom\n> wait events at [1] and it enforces the startup sequences of the\n> workers in a stricter way.\n\nIs that very meaningful? ISTM the interesting thing to check for would be that\nthe state is idle?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 28 Jul 2023 13:34:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 01:34:15PM -0700, Andres Freund wrote:\n> On 2023-07-28 13:45:29 +0900, Michael Paquier wrote:\n>> Having each bgworker on its own schema would be enough to prevent\n>> conflicts, but I'd like to add a second thing: a check on\n>> pg_stat_activity.wait_event after starting the workers. I have added\n>> something like that in the patch I have posted today for the custom\n>> wait events at [1] and it enforces the startup sequences of the\n>> workers in a stricter way.\n> \n> Is that very meaningful? ISTM the interesting thing to check for would be that\n> the state is idle?\n\nThat's interesting for the sake of the other patch to check that the\ncustom events are reported. Anyway, I am a bit short in time, so I\nhave applied the simplest fix where the dynamic workers just use a\ndifferent base ID to get out of your way.\n--\nMichael",
"msg_date": "Sat, 29 Jul 2023 11:38:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 12:06:33PM +0200, Alvaro Herrera wrote:\n> Hmm, I think having all the workers doing their in the same table is\n> better -- if nothing else, because it gives us the opportunity to show\n> how to use some other coding technique (but also because we are forced\n> to write the SQL code in a way that's correct for potentially multiple\n> concurrent workers, which sounds useful to demonstrate). Can't we\n> instead solve the race condition by having some shared resource that\n> blocks the other workers from proceeding until the schema has been\n> created? Perhaps an LWLock, or a condition variable, or an advisory\n> lock.\n\nThat's an idea interesting idea that you have here. So basically, you\nwould have all the workers use the same schema do their counting work\nfor the same base table? Or should each worker use the same schema,\nperhaps defined by a GUC, but different tables? One thing that has\nbeen itching me a bit with this module was to be able to pass down to\nthe main worker routine more arguments than just an int ID, but I\ncould not find myself do that for just for the wait event patch, like:\n- The database to connect to.\n- The table to create.\n- The schema to use.\nIf any of these are NULL, just use as default what we have now, with\nperhaps the bgworker PID as ID instead of a user-specified one.\n\nHaving a shared memory state is second thing I was planning to add,\nand that can be useful as point of reference in a template. The other\npatch about custom wait events introduces that, FWIW, to track the\ncustom wait events added.\n--\nMichael",
"msg_date": "Sat, 29 Jul 2023 11:48:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support worker_spi to execute the function dynamically."
}
] |
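
The race fixed at the end of the thread above could also have been closed the way Alvaro suggested, by making the workers block on a shared resource until the schema exists. The SQL below is only a rough sketch of that idea, not what worker_spi actually does: the advisory lock key 1234 is an arbitrary placeholder, and the schema1/counted names simply follow the naming used in the thread.

    -- Serialize concurrent schema creation so only one worker runs the DDL;
    -- the others wait on the advisory lock and then find the objects present.
    BEGIN;
    SELECT pg_advisory_xact_lock(1234);
    CREATE SCHEMA IF NOT EXISTS schema1;
    CREATE TABLE IF NOT EXISTS schema1.counted (
        type  text CHECK (type IN ('total', 'delta')),
        value integer
    );
    CREATE UNIQUE INDEX IF NOT EXISTS counted_unique_total
        ON schema1.counted (type) WHERE type = 'total';
    COMMIT;  -- releases the transaction-level advisory lock

Run through SPI from each worker's initialization path, a pattern like this would avoid the duplicate-key failure on pg_namespace that Andres reported, because a second worker only reaches the CREATE statements after the first one has committed and the IF NOT EXISTS checks then see the schema.
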
[
{
"msg_contents": "Hi,\ni had a issue here, When executing a SELECT statement using an index-only scan, i got a wrong row number because there may be an inconsistency between the VM page visibility status and the visibility status of the page,the VM bit is set and page-level is clear\n\n\ni read the code and note that there has a chance to happen like this,but how it happens?\nthe code do clear the page-level visibility and vm bit at the same time, i don not understand how it happens",
"msg_date": "Thu, 20 Jul 2023 14:44:49 +0800",
"msg_from": "\"=?utf-8?B?eWFuaHVpLnhpb25n?=\"<[email protected]>",
"msg_from_op": true,
"msg_subject": "inconsistency between the VM page visibility status and the\n visibility status of the page"
},
{
"msg_contents": "On 7/20/23 08:44, yanhui.xiong wrote:\n> Hi,\n> \n> i had a issue here, When executing a SELECT statement using an\n> index-only scan, i got a wrong row number because there may be an\n> inconsistency between the VM page visibility status and the visibility\n> status of the page,the VM bit is set and page-level is clear\n> \n> \n> i read the code and note that there has a chance to happen like this,but\n> how it happens?\n> \n> the code do clear the page-level visibility and vm bit at the same time,\n> i don not understand how it happens\n> \n\nWell, by only looking at the code you're assuming two things:\n\n1) the code is correct\n\n2) the environment is perfect\n\nEither (or both) of these assumptions may be wrong. There certainly\ncould be some subtle bug in the visibility map code, who knows. Or maybe\nthis is due to some sort of data corruption outside postgres.\n\nIt's impossible to answer without you digging much deeper and providing\nmuch more information. What's the environment? What hardware? What PG\nversion? How long is it running? Any crashes? Any other cases of similar\nissues? What does the page look like in pageinspect?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 Jul 2023 12:16:26 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inconsistency between the VM page visibility status and the\n visibility status of the page"
}
] |
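
Tomas's question in the thread above about what the page looks like in pageinspect can be followed up directly from SQL. The queries below are a sketch of how one might cross-check the visibility map against the page-level flag for a suspect table; 'mytab' is a placeholder name, and the pg_visibility and pageinspect extensions are assumed to be available.

    CREATE EXTENSION IF NOT EXISTS pg_visibility;
    CREATE EXTENSION IF NOT EXISTS pageinspect;

    -- Per-block VM bits next to the PD_ALL_VISIBLE page flag; a row with
    -- all_visible = true and pd_all_visible = false is the reported state.
    SELECT blkno, all_visible, all_frozen, pd_all_visible
    FROM pg_visibility('mytab'::regclass);

    -- TIDs of tuples that are not actually all-visible on pages whose VM
    -- bit is set; an intact table returns zero rows here.
    SELECT * FROM pg_check_visible('mytab'::regclass);

    -- Raw page header flags for one block, for deeper inspection.
    SELECT flags FROM page_header(get_raw_page('mytab', 0));

If these queries report a mismatch, the heap_page_items() output for the affected block, together with the environment details Tomas asked for (version, hardware, crash history), would be the natural next thing to share on the list.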